PROBLEM LINK:

Author: Vasia Antoniuk

DIFFICULTY: Easy-Medium

PREREQUISITES: Range Maximum Queries

PROBLEM: You are given an array of integers and are asked a large number of queries (at most 10^8) of the form: given two indices, find the maximum of all integers in the array whose index lies between them.

EXPLANATION: The (l, r) pairs for the queries are generated by a pseudo-random procedure, so it is a very hard task to find any pattern in them. Once we have (l, r) for each query, finding the maximum is a standard range-query task. There are many ways to answer range queries; these methods are standard and their explanations can be found in many places. You can find a very useful explanation of all of them, including the sparse table method, in this Topcoder Tutorial.

AUTHOR'S AND TESTER'S SOLUTIONS:
asked 14 Apr '15, 15:05

I expect to see a lot of comments like "I wrote a correct solution but got TLE, why?!" below :D What was the reason for giving such constraints? There is no idea at all in the given task; the only difficulty comes from the constraints. If you want to prevent log N solutions from passing, much lower constraints are enough; if you want to make the task challenging and teach people to write well-optimized code, you may as well set the TL even stricter (0.75s is still not hard to reach)... This problem is already awful even with 1 second, so why stop there? answered 14 Apr '15, 17:04

The reason arr[logN][N] gets accepted is that, since x and y are increased only by a small amount, the row number tends to remain constant and the column number increases only slightly. So subsequent accesses tend to happen within the same memory block, which increases the number of cache hits. answered 14 Apr '15, 18:11

To me the Codechef long challenge is very important, and seeing such a poor "code optimization" problem makes me sad. Codechef, I have high hopes for you; please take more care before serving the problems. The efforts being made are great, but this is one scar on your reputation. answered 15 Apr '15, 20:20

I think M<=10⁷ or N,M<=10⁶ would have been perfectly enough to make the M*logN solutions time out; there was no need at all for M<=10⁸ (which would be TLE on other tasks, btw). Why is nowadays every problem so much about crazy optimization? I mean, why do we have to spend hours trying to find out whether, if we (don't) inline a function / use the post-increment operator outside of while() / swap the dimensions of an array / take the modulo one time less / apply other useless optimizations, the running time will decrease by 0.01s and fit inside the limit? It just doesn't make sense in my opinion, since you can't know what will decrease and what will actually increase the running time. E.g. I changed N post-increments of X to X += N; and the program actually ran 0.10s slower on one single test case. answered 15 Apr '15, 13:57

I have no idea about sparse tables; could someone please point me to a tutorial? answered 15 Apr '15, 22:27

I did just the same thing but was only getting 70 points. answered 14 Apr '15, 18:01

https://www.youtube.com/watch?v=0rCFkuQS968 answered 18 Apr '15, 11:39
