The method described above to find the maximum value over all subarrays of size K is also called Sliding Window Maximum. Though this method passes almost all the subtasks within the time limit, I am not able to understand why the combination of Sparse Table + Sliding Window Maximum fails to pass Subtask #3.
I precomputed a Sparse Table for each row, which takes O(nm log m), where n is the number of rows and m is the size of each row. So now I can get the maximum of any subarray (of any size, say a) of each row in O(1), and then I apply a sliding window over those per-row maxima to get the maximum of the submatrix, in O(b), where b is the number of rows in the target submatrix.
So the total time complexity becomes O(nm log m + 4QMN), whereas if I use two sliding windows for each query it becomes O(5QMN). How does the double sliding window pass while sparse table + sliding window does not?
2MN is required to calculate the sum matrix, MN is required for finding the answer, and 2MN for the two sliding windows for each submatrix.
This is only possible when logM > Q, but since Q can be 50 while logM is at most 10, why isn't the Sparse Table solution passing? Please correct me if I am wrong somewhere.
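For concreteness, the sparse table + sliding window combination being discussed can be sketched roughly like this (my own hedged illustration, not taken from any submission; the function name `allWindowMax` is assumed). Each row gets an O(1) range-max via its sparse table, and a monotone deque then slides a window of height a down each column of those row maxima:

```cpp
#include <bits/stdc++.h>
using namespace std;

int floorLog2(int x) { return 31 - __builtin_clz(x); }

// Returns res where res[i][c] = max of the a x b submatrix with top-left (i, c).
vector<vector<int>> allWindowMax(const vector<vector<int>>& A, int a, int b) {
    int n = A.size(), m = A[0].size();
    int K = floorLog2(m) + 1;
    // rowMax[i][c] = max of A[i][c..c+b-1], computed via a sparse table per row
    vector<vector<int>> rowMax(n, vector<int>(m - b + 1));
    for (int i = 0; i < n; i++) {
        vector<vector<int>> st(K, vector<int>(m));
        st[0] = A[i];
        for (int k = 1; k < K; k++)
            for (int j = 0; j + (1 << k) <= m; j++)
                st[k][j] = max(st[k-1][j], st[k-1][j + (1 << (k-1))]);
        int k = floorLog2(b);
        for (int c = 0; c + b <= m; c++)
            rowMax[i][c] = max(st[k][c], st[k][c + b - (1 << k)]);
    }
    // Sliding window maximum of height a down each column of rowMax
    vector<vector<int>> res(n - a + 1, vector<int>(m - b + 1));
    for (int c = 0; c + b <= m; c++) {
        deque<int> dq;  // row indices with decreasing rowMax values
        for (int i = 0; i < n; i++) {
            while (!dq.empty() && rowMax[dq.back()][c] <= rowMax[i][c]) dq.pop_back();
            dq.push_back(i);
            if (dq.front() <= i - a) dq.pop_front();  // drop rows that left the window
            if (i >= a - 1) res[i - a + 1][c] = rowMax[dq.front()][c];
        }
    }
    return res;
}
```

On a 3x3 matrix 1..9 with a = b = 2, the four window maxima come out as 5, 6, 8, 9.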
You can also use a 2-D version of the sparse table to compute the maximum. With O(MN logM logN) precomputation you can answer a max query on any given sub-matrix in O(1).
Basically, apply RMQ on every row of the matrix, then along the columns of the output of the first RMQ.
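A hedged sketch of that construction (my own illustration with assumed names, not anyone's submission): build the row RMQ into level (0, kj), then extend along columns into level (ki, kj). Note the log-level dimensions are declared first, which matters for cache behaviour as pointed out elsewhere in this thread.

```cpp
#include <bits/stdc++.h>
using namespace std;

// 2-D sparse table: O(n m log n log m) build, O(1) max query on any sub-matrix.
struct Sparse2D {
    int n, m;
    // st[ki][kj][i][j] = max of the 2^ki x 2^kj block with top-left (i, j)
    vector<vector<vector<vector<int>>>> st;
    int lg(int x) const { return 31 - __builtin_clz(x); }
    Sparse2D(const vector<vector<int>>& A) : n(A.size()), m(A[0].size()) {
        int Ki = lg(n) + 1, Kj = lg(m) + 1;
        st.assign(Ki, vector<vector<vector<int>>>(Kj,
                  vector<vector<int>>(n, vector<int>(m))));
        for (int i = 0; i < n; i++) st[0][0][i] = A[i];
        for (int kj = 1; kj < Kj; kj++)                 // RMQ along each row
            for (int i = 0; i < n; i++)
                for (int j = 0; j + (1 << kj) <= m; j++)
                    st[0][kj][i][j] = max(st[0][kj-1][i][j],
                                          st[0][kj-1][i][j + (1 << (kj-1))]);
        for (int ki = 1; ki < Ki; ki++)                 // then along the columns
            for (int kj = 0; kj < Kj; kj++)
                for (int i = 0; i + (1 << ki) <= n; i++)
                    for (int j = 0; j + (1 << kj) <= m; j++)
                        st[ki][kj][i][j] = max(st[ki-1][kj][i][j],
                                               st[ki-1][kj][i + (1 << (ki-1))][j]);
    }
    // Max over rows r1..r2 and columns c1..c2 (inclusive), four overlapping blocks
    int query(int r1, int c1, int r2, int c2) const {
        int ki = lg(r2 - r1 + 1), kj = lg(c2 - c1 + 1);
        int i2 = r2 - (1 << ki) + 1, j2 = c2 - (1 << kj) + 1;
        return max({st[ki][kj][r1][c1], st[ki][kj][r1][j2],
                    st[ki][kj][i2][c1], st[ki][kj][i2][j2]});
    }
};
```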
I have used the 2-D version of the sparse table for the maximum element in a sub-matrix, and I pre-compute the sum in O(2*n), which is O(n) time. But I can't pass the third subtask. Can anyone tell me where I go wrong, or how I can do better than this? https://www.codechef.com/viewsolution/10499476
Thanks in advance!
I used multiple sliding windows to get the maximum element in a sub-matrix.
I precomputed the maximum element of all sub-matrices of sizes 1x1, 2x2, 4x4, 16x16, … up to 512x512, and for every query I used the appropriate window to find the maximum element in that sub-matrix.
Time complexity for pre-computation: 10*n*m. Overall time complexity = q*n*m*log n.
Here is a link to my Solution using 2-D sparse tables.
For those who got TLE in some test cases/full subtask-3 using this, here are a few optimisations:
Multi-dimensional arrays take a lot of time per array access. To improve caching and access speed, it is advised to order the indices from smaller to larger, i.e. declaring the rmq array like rmq[11][11][1002][1002]. (Try this problem for more details)
You should pre-compute the log values: although the built-in log implementation is O(1), its constant factor is large, so calling log repeatedly inside queries slows your program down.
Also, a less essential optimisation is to pre-compute the powers of 2 as well, and to declare array sizes no larger than required.
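The log pre-computation tip can be sketched like this (my own hedged illustration; the bound MAXN = 1000 and the array name LOG are assumptions, not from any submission). A table lookup then replaces every log2() call inside a query:

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 1000;   // assumed maximum dimension for this sketch
int LOG[MAXN + 1];       // LOG[x] = floor(log2(x))

void precomputeLogs() {
    LOG[1] = 0;
    for (int i = 2; i <= MAXN; i++)
        LOG[i] = LOG[i / 2] + 1;  // halving drops exactly one bit
}
```

A query then uses `int k = LOG[r - l + 1];` instead of calling log2() each time.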
I created a 2-D table to store the maximum element inside each submatrix. Using DP, I am filling this sparse table in O(n*m). I am not getting TLE, but I got WA in many cases. Please help me figure out what I am doing wrong here. Link to my solution: CodeChef: Practical coding for everyone
A lot of people implemented the 2-D sparse table like this: int sparse[m][n][logm][logn], and they got a TLE in subtask #3.
The catch is that if you implement it like this instead: int sparse[logm][logn][m][n], it passes all cases with flying colours.
Link to my submission by using 2d sparse table: CodeChef: Practical coding for everyone
Unable to view the setter's or tester's solution. Also, in the Explanation it should be: "the minimum number of operations required will be x∗a∗b−S." instead of: "the minimum number of operations required will be x∗n∗m−S."
I tried a 2-D sparse table during the contest and it got TLE in subtask 3; now that I have changed the declaration of the rmq array as suggested by likecs, it passed easily.
However, I managed to get 100 points during the contest with the approach mentioned in the editorial, but I do think the time limit should be set so that either both solutions pass or neither of them does.
Btw, it was a nice question and I really enjoyed cracking it!
“Note that for calculating max[i][j+1] from max[i][j], we have to add a new element maxCol[i][j+1] and remove the element maxCol[i−a+1][j].” Could anybody please help me understand this statement? Why would we remove maxCol[i−a+1][j]?
I used summed area tables to find the sum of any submatrix in O(1) with O(n^2) preprocessing time. Wikipedia explains them well: Summed Area Table.
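A hedged sketch of a summed area table (my own illustration; the names `buildSAT` and `rectSum` are assumptions). S is 1-based with a zero border so no boundary checks are needed:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Build the summed area table: S[i][j] = sum of A[0..i-1][0..j-1]. O(nm).
vector<vector<long long>> buildSAT(const vector<vector<int>>& A) {
    int n = A.size(), m = A[0].size();
    vector<vector<long long>> S(n + 1, vector<long long>(m + 1, 0));
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++)
            S[i][j] = A[i-1][j-1] + S[i-1][j] + S[i][j-1] - S[i-1][j-1];
    return S;
}

// Sum over rows r1..r2, columns c1..c2 (0-based, inclusive), via inclusion-exclusion. O(1).
long long rectSum(const vector<vector<long long>>& S,
                  int r1, int c1, int r2, int c2) {
    return S[r2+1][c2+1] - S[r1][c2+1] - S[r2+1][c1] + S[r1][c1];
}
```

For A = {{1,2},{3,4}}, the full-matrix sum comes out as 10 and the top row as 3.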
For finding the maximum element, I precalculated the maximum elements for every (2^a rows, 2^b columns) combination, i.e. for every sub-matrix of size 1,2 1,4 1,8 1,16… 2,1 2,2 2,4 2,8 2,16… 4,1 4,2 4,4 4,8… and so on. This helps to find the maximum element of any submatrix in O(1) with O(k*n^2) preprocessing time.