Author: Vasia Antoniuk
Tester: Sergey Kulik
Editorialist: Adury Surya Kiran
Range Maximum Queries
You are given an array of integers and asked a large number of queries (up to 10^8) of the form: given two indices, find the maximum of all elements of the array whose index lies between them.
The queries are generated as given in the problem statement:
x = (x + 7) mod (N - 1)
y = (y + 11) mod N
l = min(x, y)
r = max(x, y)
You need to find maximum of a[i] such that l <= i <= r.
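The query loop can be sketched as follows (a hypothetical driver: the seeds of x and y and the query count M come from the input in the real problem, and the naive inner scan is only illustrative):

```cpp
#include <algorithm>

// Sketch of the query loop; answerQueries and its naive inner scan are
// illustrative, not the setter's code.
long long answerQueries(const int a[], int N, long long M, int x, int y) {
    long long total = 0;                  // e.g. sum of all query answers
    for (long long q = 0; q < M; ++q) {
        x = (x + 7) % (N - 1);            // generator from the statement
        y = (y + 11) % N;
        int l = std::min(x, y);
        int r = std::max(x, y);
        int best = a[l];                  // naive O(r - l) scan; replace with
        for (int i = l + 1; i <= r; ++i)  //   an O(1) sparse-table lookup to
            best = std::max(best, a[i]);  //   make 10^8 queries feasible
        total += best;
    }
    return total;
}
```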
The generation of the (l, r) pairs behaves like a pseudo-random procedure, so it is very hard to find a pattern in it. Once we have (l, r) for each query, finding the maximum is a fairly standard range-query task.
There are several standard approaches to range queries. Some of them are
- O(1) per update and O(N) per query: using a plain array, we can update any element in O(1); to answer a query we iterate through all elements from l to r.
- O(log N) per update and O(log N) per query: there are many ways to do this, e.g. segment trees or binary indexed trees.
- O(N log N) preprocessing and O(1) per query: a sparse table handles this case (it supports no updates, but the array here is static).
Methods 2 and 3 are very standard, and explanations of them can be found in many places.
Some useful links for solving Range Queries using Method 2 are
- Utkarsh Lath’s Blog using Segment trees
- Geeks for Geeks using Segment trees
- Using BIT
You can find a very useful explanation of all the methods, including the sparse table method, in this Topcoder Tutorial.
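As a rough illustration of the sparse-table approach from the tutorials linked above (the struct name and layout are my own; this is a sketch, not the setter's code):

```cpp
#include <algorithm>
#include <vector>

// Minimal sparse-table sketch for static range-maximum queries:
// O(N log N) preprocessing, O(1) per query.
struct SparseTable {
    std::vector<std::vector<int>> table;  // table[j][i] = max of a[i .. i+2^j-1]
    std::vector<int> lg;                  // lg[k] = floor(log2(k))

    explicit SparseTable(const std::vector<int>& a) {
        int n = a.size();
        lg.assign(n + 1, 0);
        for (int k = 2; k <= n; ++k) lg[k] = lg[k / 2] + 1;
        int K = lg[n] + 1;
        table.assign(K, std::vector<int>(n));
        table[0] = a;                     // level 0: blocks of length 1
        for (int j = 1; j < K; ++j)       // level j merges two level-(j-1) blocks
            for (int i = 0; i + (1 << j) <= n; ++i)
                table[j][i] = std::max(table[j - 1][i],
                                       table[j - 1][i + (1 << (j - 1))]);
    }

    // Maximum of a[l..r] (inclusive) via two overlapping power-of-two blocks.
    int query(int l, int r) const {
        int j = lg[r - l + 1];
        return std::max(table[j][l], table[j][r - (1 << j) + 1]);
    }
};
```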
AUTHOR’S AND TESTER’S SOLUTIONS:
I expect to see a lot of comments like “I wrote a correct solution but got TL, why?!” below
What was the reason for giving such constraints? There is no idea in this task at all; the only difficulty comes from the constraints. If you want to prevent O(log N) solutions from passing, much lower constraints are enough; if you want to make the task challenging and teach people to write well-optimized code, you could set the TL even stricter (0.75 is still not hard to reach)… This problem is already awful even with 1 second, so why stop there?
Will 10^8 queries run in one second? I used to assume around 10^7 calculations per second. What should I assume as the upper limit for computations in one second?
Can someone explain “Method 3”? How to use splay trees with O(N) update and O(1) query?
I did just the same thing and was getting 70 points.
I used an array for the sparse table as arr[N][logN], but as soon as I changed my array to arr[logN][N], it gave 100 points…
What's the reason for this?
Why did the running time decrease?
Can anyone give me a useful link for splay trees?
The reason arr[logN][N] gets accepted is that, since x and y increase only by a small amount each query, the row number tends to remain constant and the column number increases only slightly. So subsequent accesses tend to happen within the same memory block, which increases the number of cache hits.
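A minimal sketch of the two layouts this comment describes (names and sizes are illustrative, not from any contestant's code):

```cpp
#include <algorithm>

// A sparse-table query touches two entries on the same level j.  With
// slowly-drifting l and r, j rarely changes, so the [level][index] layout
// keeps consecutive accesses inside one contiguous row (same cache lines),
// while [index][level] strides over a whole row per access.
const int LOG = 4, N = 16;
int byLevel[LOG][N];  // arr[logN][N]: one contiguous row per level
int byIndex[N][LOG];  // arr[N][logN]: entries of one level are strided

// Same query against the two layouts -- identical result, different locality.
int queryByLevel(int j, int l, int r) {
    return std::max(byLevel[j][l], byLevel[j][r]);
}
int queryByIndex(int j, int l, int r) {
    return std::max(byIndex[l][j], byIndex[r][j]);
}
```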
This problem was kind of a disaster. It's very sad that all those people who didn't use a kind of worthless optimisation (at least with respect to the goals of CodeChef) didn't get the points, and hence the ranking, they deserved. Very sad.
I wrote the solution with a sparse table but it got only 70 points. I managed to get 100 with these two further optimizations:
- When you have something like (a + b) % c and b is very small compared to c, in C++ the code
(a + b) % c is far slower than
a += b; if (a >= c) a -= c;
Don't ask me why, because I don't know, but I noticed that this alone was enough to get 100 points (from more than 1 s down to 0.8 s).
- N is O(10^5), so x can take at most 10^5 - 1 different values and y at most 10^5. We can find the lengths of the two cycles: the x cycle has length (N - 1) / gcd(7, N - 1) and the y cycle has length N / gcd(11, N). Then take L = lcm of these two lengths; if L < M, you only need to run L queries, and if sum[i] is the sum of the answers to the first i queries, the final answer is
sum[L] * (M / L) + sum[M % L]
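Both tricks from this comment can be sketched together (a sketch, assuming 7 < N - 1 and 11 < N as under the real constraints; answerOne is a hypothetical stand-in for whatever O(1) query routine is used, e.g. a sparse-table lookup):

```cpp
#include <algorithm>
#include <numeric>  // std::gcd, std::lcm (C++17)
#include <vector>

// answerOne(l, r) stands in for the O(1) range-maximum routine.
long long solveWithCycles(int N, long long M, int x, int y,
                          long long (*answerOne)(int, int)) {
    long long px = (N - 1) / std::gcd(7, N - 1);  // period of x
    long long py = N / std::gcd(11, N);           // period of y
    long long L = std::lcm(px, py);               // period of the pair (x, y)

    // sum[i] = total of the answers to the first i queries (i <= L)
    std::vector<long long> sum(std::min(M, L) + 1, 0);
    for (long long i = 1; i < (long long)sum.size(); ++i) {
        x += 7;  if (x >= N - 1) x -= N - 1;  // conditional subtract instead
        y += 11; if (y >= N) y -= N;          //   of the slower % operator
        sum[i] = sum[i - 1] + answerOne(std::min(x, y), std::max(x, y));
    }
    if (M <= L) return sum[M];
    return sum[L] * (M / L) + sum[M % L];     // whole cycles plus remainder
}
```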
Hi, I used Method 1 to solve this problem. This is my code; with it I got only 20 points. Can anyone please tell me why I got only 20 points?
I think M <= 10^7 or N, M <= 10^6 would have been perfectly enough to make the M log N solutions time out; there was no need at all for M <= 10^8 (which is supposed to be TLE on other tasks, btw).
Why is nowadays every problem so much about crazy optimization?
I mean, why do we have to spend hours trying to find out whether, if we (don't) inline a function / use the post-increment operator outside of while() / swap the dimensions of an array / take the modulo one time less / other useless optimizations, the running time will decrease by 0.01 s and fit inside the limit? It just doesn't make sense in my opinion, since you can't know what will decrease and what will actually increase the running time. E.g. I changed N post-increments of x into x += N, and the program actually ran 0.10 s slower on one single test case.
@prasadram126 the O(M*N) solution is really supposed to get only 20 points; you should implement the third method to get 100 points.
Can anyone post a link to a resource where I can learn this approach of using sparse tables?
And all the time I was thinking that the intended solution would be sleeker and awesome… what a disgrace.
Such problems must not be allowed to appear in Long Contests… worst problem ever on CC… waste of time!!
To me Codechef long challenge is very important and seeing such a poor “code optimization” problem makes me sad. Codechef, I have high hopes from you, please take care before serving the problems. The efforts being made are great but this is one scar on the repute.
I implemented my solution during the contest based on the TopCoder tutorial's sparse table algorithm, but I still got only 70 points! Please have a look at it!
I have no idea about sparse tables; could someone please give me a tutorial?
arr[N][logN] worked for me (after lots of optimizations of course)
Segment trees are easiest to understand if you already know the divide-and-conquer method. I finally got it.
Can I get a better explanation of sparse tables? It's difficult to understand. Please explain, or give a link that makes it easier to understand.