PROBLEM LINK:

Author: Misha Chorniy

DIFFICULTY: Hard

PREREQUISITES: Range trees, segment trees, functional data structures, persistent segment trees

PROBLEM: You are given an array $A[1\ldots N]$ of positive integers. You have to answer $Q$ queries on it of the following type: given $l$, $r$ and $k$, find the $k$-th smallest value among the distinct elements of $A[l\ldots r]$.
Additionally, each query partly depends on the answer to the previous query, so you're forced to answer the queries in the given order (except in subtask 3).

QUICK EXPLANATION: First, normalize and compress the values of $A[1\ldots N]$, so they are in $\{0, 1, \ldots, N-1\}$. The answer for a query is the smallest $V$ such that $A[l\ldots r]$ has at least $k$ distinct values $\le V$, so we can find it using binary search. We just need to answer the following question: Given $l$, $r$, $V$, how many distinct elements in $A[l\ldots r]$ are $\le V$? This can be turned into a geometry query. First, we define $N$ points $\{(x_i,y_i,z_i) : 1 \le i \le N\}$, where $(x_i,y_i,z_i)$ is defined as follows: Suppose $A[i] = v$. Also, let $j$ be the index of the previous occurrence of $v$, i.e. $A[j] = v$ and $j < i$ is the largest such value. If no such $j$ exists, set $j = 0$. Then $(x_i,y_i,z_i) = (v,j,i)$. The answer to the question above is now the number of points $(x_i,y_i,z_i)$ such that $x_i \le V$, $y_i < l$ and $l \le z_i \le r$. Thus, we now have standard geometric range queries! Using 3D range trees, we can preprocess the points in $O(N \log^3 N)$ time and answer each query in $O(\log^3 N)$ time, but with the binary search this becomes $O(\log^4 N)$, which might be too slow. Instead, we can use fractional cascading or functional/persistent segment trees to reduce these to $O(N \log^2 N)$ and $O(\log^3 N)$, respectively, and with some clever tree traversals we can even reduce the query time to $O(\log^2 N)$.

EXPLANATION:

Subtask 1

In the first subtask, we have $Q\times N \le 10^7$, so solutions that answer each query in $O(N)$ time, or even $O(N \log N)$ time, are acceptable. In this case, the straightforward approach works:
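A minimal sketch of that straightforward approach (0-based Python; queries use the statement's 1-based inclusive $l$, $r$; returning -1 for an out-of-range $k$ is an assumption made here, not something the statement specifies):

```python
def answer_query(A, l, r, k):
    """Collect A[l..r] (1-based, inclusive), sort its distinct values,
    and return the k-th smallest among them."""
    distinct = sorted(set(A[l - 1:r]))   # set() keeps one copy per value
    return distinct[k - 1] if k <= len(distinct) else -1
```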
This takes $O(N \log N)$ time in the worst case, so doing this $Q$ times takes $O(QN\log N)$ time. With a reasonable implementation, this will pass the time limit of the first subtask.

Subtask 2

In the second subtask, we no longer have $Q\times N \le 10^7$, but instead we have the additional restriction $k = 1$. This means we are finding the first element in the sorted list of distinct elements of $A[l\ldots r]$. Now, regardless of whether there are duplicates in $A[l\ldots r]$ or not, this is clearly the same as the minimum element of $A[l\ldots r]$. In other words, the query reduces to a standard range minimum query, which has lots of possible solutions! Two fast approaches are the following:

Approach 1: Construct a segment tree on top of the array, so that each node represents some subarray of $A$ and also contains the minimum among all elements in this subarray. Constructing this tree takes $O(N)$ time, and afterwards each range minimum query can be answered in $O(\log N)$ time.

Approach 2: Construct a table of minimums, $T[1\ldots N][0 \ldots \log_2 N]$, where $T[i][k]$ is the minimum element in the subarray $A[i\ldots i+2^k-1]$. This table can be constructed in $O(N \log N)$ time using dynamic programming. Afterwards, each range minimum query can be answered in $O(1)$ time: the minimum value in $A[l\ldots r]$ is $\min(T[l][k], T[r-2^k+1][k])$, where $2^k$ is the largest power of two $\le r - l + 1$.

Asymptotically faster solutions exist, but they are much more complicated, and in any case they don't apply to the other subtasks, so there isn't much use for them here. You can read about faster solutions here.

Subtasks 3 & 4

Now, let's move on to the hardest subtask, which is the original problem without any additional restrictions. Note that subtask 3 is just an offline version of the problem, which may permit some simpler solutions, but we won't discuss them here.
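Before moving on, approach 2 above (the sparse table for subtask 2) can be sketched as follows, using 0-based indices:

```python
def build_sparse_table(A):
    """T[k][i] holds the minimum of A[i .. i + 2^k - 1]."""
    n = len(A)
    T = [A[:]]
    k = 1
    while (1 << k) <= n:
        prev = T[k - 1]
        half = 1 << (k - 1)
        # min over a 2^k window = min of its two 2^(k-1) halves
        T.append([min(prev[i], prev[i + half]) for i in range(n - (1 << k) + 1)])
        k += 1
    return T

def range_min(T, l, r):
    """Minimum of A[l..r] (0-based, inclusive) in O(1)."""
    k = (r - l + 1).bit_length() - 1      # largest 2^k <= r - l + 1
    return min(T[k][l], T[k][r - (1 << k) + 1])
```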
Binary search

The first step is realizing that a query can be converted into a form that is suitable for binary search. Each query is equivalent to the following: Find the lowest value $V$ such that the number of distinct elements of $A[l\ldots r]$ that are $\le V$ is at least $k$. This is certainly solvable with binary search for the value $V$, as long as we can answer the following question efficiently: Given $l$, $r$ and $V$, how many distinct elements of $A[l\ldots r]$ are $\le V$? So let's focus on this query, which now looks more like a geometric range query. Unfortunately, the word "distinct" prevents us from converting this problem directly into a geometric query. Without "distinct", one fast solution for this problem would be:
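One way to sketch such a non-distinct solution is a merge sort tree: each segment-tree node stores the sorted contents of its subarray, and "how many elements of $A[l\ldots r]$ are $\le V$" (duplicates counted) is answered by binary searching inside the $O(\log N)$ nodes covering $[l,r]$:

```python
from bisect import bisect_right

def build_merge_sort_tree(A):
    """tree[node] holds the sorted contents of that node's subarray."""
    n = len(A)
    tree = [[] for _ in range(4 * n)]
    def build(node, lo, hi):
        if lo == hi:
            tree[node] = [A[lo]]
            return
        mid = (lo + hi) // 2
        build(2 * node, lo, mid)
        build(2 * node + 1, mid + 1, hi)
        # combine children; sorted() keeps the sketch short
        tree[node] = sorted(tree[2 * node] + tree[2 * node + 1])
    build(1, 0, n - 1)
    return tree

def count_leq(tree, n, l, r, V):
    """Number of i in [l, r] (0-based) with A[i] <= V, duplicates included."""
    def go(node, lo, hi):
        if r < lo or hi < l:
            return 0
        if l <= lo and hi <= r:
            return bisect_right(tree[node], V)
        mid = (lo + hi) // 2
        return go(2 * node, lo, mid) + go(2 * node + 1, mid + 1, hi)
    return go(1, 0, n - 1)
```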
Thus we have reduced each query to 2D geometric queries, for which there are some well-known solutions, for example using 2D range trees. But since we only need to count distinct elements, we need to be more creative with our query. The solution above counts duplicate entries multiple times, so we must find a solution that doesn't do that. One way is to only count the first occurrence of each distinct value in the range $A[l\ldots r]$. How do we accomplish this? Well, for a particular value $v$, suppose $A[i] = v$ for some $l \le i \le r$. For $A[i]$ to be the first occurrence of $v$ in $A[l\ldots r]$, the following additional condition must hold: if $j$ is the largest index $< i$ such that $A[j] = v$, then $j < l$. (If $A[i]$ is the first occurrence of $v$ in the whole array, we can set $j = 0$.) Thus, we can now refine our geometric query above to the following:
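In other words, we count only positions $i \in [l, r]$ with $A[i] \le V$ whose previous occurrence falls before $l$. A brute-force checker (0-based, with prev $= -1$ playing the role of $j = 0$) makes the refined query concrete:

```python
def prev_occurrence(A):
    """prev[i] = largest j < i with A[j] == A[i], or -1 if none
    (the editorial uses 1-based indices and j = 0; here all 0-based)."""
    last = {}
    prev = []
    for i, v in enumerate(A):
        prev.append(last.get(v, -1))
        last[v] = i
    return prev

def count_distinct_leq(A, prev, l, r, V):
    """Distinct values <= V in A[l..r]: count only first occurrences,
    i.e. positions i in [l, r] with A[i] <= V and prev[i] < l."""
    return sum(1 for i in range(l, r + 1) if A[i] <= V and prev[i] < l)
```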
Now we have properly converted the query into a 3D geometric range query! Another thing: due to the binary search, whatever complexity we get in solving this problem gets multiplied by a factor of $O(\log A_{\max})$. We can reduce this to just $O(\log N)$ by compressing the values of $A$ to the range $[0\ldots N-1]$. This preprocessing step can be done in $O(N \log N)$ time, and when printing the answer we just need to remember to revert back to the original list of values.

Geometric queries

Our goal is to answer 3D range queries. Specifically, given $N$ points $\{(x_i,y_i,z_i) : 1 \le i \le N \}$, answer the following kind of query: how many points are in the range $[a,b]\times [c,d]\times [e,f]$? The usual way of solving $k$-dimensional range queries is with $k$-dimensional range trees. You can think of a 1D range tree as a simple segment tree, and a $k$-dimensional range tree as a balanced tree of $(k-1)$-dimensional range trees. Such a structure on $N$ points can be constructed in $O(N \log^k N)$ time, and each $k$-dimensional "rectangle" query can be answered in $O(\log^k N)$ time, by using the usual segment tree recursion at each dimension of the tree. The following is pseudocode of a 2D range tree:
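The original pseudocode isn't reproduced here, so the following is a speculative Python rendering of the two classes the next paragraph refers to; as a simplification, the inner Tree1D uses a sorted array with binary search, which answers the same counting queries a 1D segment tree would:

```python
from bisect import bisect_left, bisect_right

class Tree1D:
    """Counts stored values in [c, d]; a sorted array stands in for
    the 1D segment tree over the y-values."""
    def __init__(self, ys):
        self.ys = sorted(ys)
    def count(self, c, d):
        return bisect_right(self.ys, d) - bisect_left(self.ys, c)

class Tree2D:
    """Balanced tree over x; each node holds a Tree1D of the y-values
    of all points in its x-range."""
    def __init__(self, points):
        self.points = sorted(points)              # sorted by x
        self.inner = Tree1D([y for _, y in self.points])
        if len(self.points) > 1:
            mid = len(self.points) // 2
            self.left = Tree2D(self.points[:mid])
            self.right = Tree2D(self.points[mid:])
        else:
            self.left = self.right = None
    def count(self, a, b, c, d):
        """Number of points in [a, b] x [c, d]."""
        if not self.points or a > self.points[-1][0] or b < self.points[0][0]:
            return 0                               # x-ranges disjoint
        if a <= self.points[0][0] and self.points[-1][0] <= b:
            return self.inner.count(c, d)          # x-range fully covered
        return self.left.count(a, b, c, d) + self.right.count(a, b, c, d)
```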
Notice that both classes have very similar implementations, but each node of Tree2D contains a Tree1D which handles the "y" values. You can probably figure out how to construct 3D (or higher-dimensional) range trees from this. Also, this construction takes $O(N \log^2 N)$ time because of the sorting. Afterwards, each 2D range query takes $O(\log^2 N)$ time. Unfortunately, in our case, we will need a 3D range tree, where each query takes $O(\log^3 N)$ time. And because we're doing a binary search on top of that, each query for the original problem takes $O(\log^4 N)$ time, which is probably too slow to pass the time limit. Instead, we will describe two techniques to improve the running time of each 3D range query.

Technique 1: Fractional cascading

We can use fractional cascading to reduce a 3D range query from $O(\log^3 N)$ to $O(\log^2 N)$. We can illustrate this idea using 2D range trees. We can think of a 2D range tree as a segment tree on the $x$ coordinate, where each node also contains a segment tree, this time on the $y$ coordinate. The first step is to replace the segment trees on $y$ with simple sorted arrays, and replace range queries with simple binary searches. This doesn't reduce the asymptotic running time, but it helps reduce the complexity of our structure. For example, we can now construct a 2D range tree in $O(N \log N)$ time instead of $O(N \log^2 N)$ by using a merge-sort-like algorithm. Suppose we want to construct a 2D range tree over the points $\{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, and suppose this is sorted by $x$ and then by $y$. Then:
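The elided construction can be sketched as follows: split the (x-sorted) points in half, build each child recursively, and obtain a node's y-array by merging its children's y-arrays, exactly as in merge sort (the dict-based node layout is just for illustration):

```python
def build(points):
    """points sorted by x; returns a node whose 'ys' array is obtained
    by merging the children's 'ys' arrays (O(n) work per tree level)."""
    n = len(points)
    if n == 1:
        return {'ys': [points[0][1]], 'left': None, 'right': None}
    left = build(points[:n // 2])
    right = build(points[n // 2:])
    # standard two-pointer merge of the children's sorted y-arrays
    a, b, merged = left['ys'], right['ys'], []
    i = j = 0
    while i < len(a) or j < len(b):
        if j == len(b) or (i < len(a) and a[i] <= b[j]):
            merged.append(a[i]); i += 1
        else:
            merged.append(b[j]); j += 1
    return {'ys': merged, 'left': left, 'right': right}
```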
Notice that instead of sorting the $y$-values at each node, we merge the two children's already-sorted arrays, just like in merge sort. Now, here comes the "fractional cascading" part. The reason each range query runs in $O(\log^2 N)$ time is that we perform a binary search over every node we visit. But we can speed this up by noticing that we are binary searching for the same value over and over again, and that we are only visiting nodes along a single root-to-leaf path! The idea is to use this fact so that after a single binary search at the root, we don't have to perform another binary search again. We do this by setting up lots of pointers: for each node, and for each position $i$ in its sorted array, we store a pointer to the first position in each child's sorted array whose value is at least the value at position $i$.
Using these pointers, once we binary search for a value at the root, we don't have to binary search for it at the child: we just follow the pointer! This allows us to reduce the range query complexity to $O(\log N)$. Now, I admit that I skimmed over some details, but I hope you at least get the idea behind it. By performing this technique for every 2D range tree inside our 3D range tree, our range query now reduces to $O(\log^2 N)$.

Technique 2: Functional data structures

To improve the running time, we will try a different approach, via functional data structures. Functional data structures are data structures implemented under the functional programming paradigm. One of the key characteristics of functional programming is that functions don't have "side effects", and variables don't change once assigned. (In fact, the term "variable" becomes a misnomer.) Data structures built under this paradigm are consequently persistent, i.e., older versions of the data structure remain fully available. To write our structures under this paradigm, we need only ensure that our functions have no side effects, like changing some global variables or attributes. For example, to implement a functional 1D segment tree, instead of updating the attributes in place when we perform an update, we simply copy the nodes that would have been updated. An update therefore copies $O(\log N)$ nodes, but the upside is that the old segment tree remains fully available. Note that the running time of an update and a query is still $O(\log N)$!

Once again, we'll illustrate with 2D range trees. Suppose we want to make a 2D range tree over $\{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, and assume this is sorted by $x$, and then by $y$. Each 2D range query counts the number of points in some rectangle $[x_1,x_2]\times [y_1,y_2]$.
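A path-copying sketch of such a functional/persistent segment tree over values in $[lo, hi]$, where each insert returns a new root and every old version stays fully queryable (the names are illustrative, not the author's):

```python
class Node:
    __slots__ = ('left', 'right', 'count')
    def __init__(self, left=None, right=None, count=0):
        self.left, self.right, self.count = left, right, count

def insert(node, lo, hi, pos):
    """Return a NEW root with value `pos` inserted; the version rooted
    at `node` is untouched. Only O(log N) nodes get copied."""
    if node is None:
        node = Node()
    if lo == hi:
        return Node(count=node.count + 1)
    mid = (lo + hi) // 2
    if pos <= mid:
        return Node(insert(node.left, lo, mid, pos), node.right, node.count + 1)
    return Node(node.left, insert(node.right, mid + 1, hi, pos), node.count + 1)

def query_leq(node, lo, hi, v):
    """Number of inserted values <= v in this version."""
    if node is None or v < lo:
        return 0
    if hi <= v:
        return node.count
    mid = (lo + hi) // 2
    return query_leq(node.left, lo, mid, v) + query_leq(node.right, mid + 1, hi, v)
```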
If all coordinates are nonnegative, this is equal to the number of points in $[0,x_2]\times [y_1,y_2]$ minus the number of points in $[0,x_1-1]\times[y_1,y_2]$, so we can just assume that $x_1 = 0$. If the queries were offline, i.e. we knew all queries beforehand, then we could answer all query rectangles in increasing order of $x_2$. The way we do this is to maintain a segment tree on the $y$s, initially empty. First, we sort all queries and points according to $x$, and then process them in that order:
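One way to sketch those steps (with a Fenwick tree standing in for the segment tree on the $y$s; `offline_count` is an illustrative name):

```python
def offline_count(points, queries):
    """points: (x, y) pairs; queries: (x2, y1, y2) triples asking how many
    points lie in [0, x2] x [y1, y2]. Sweep in increasing x, inserting each
    point's y into a Fenwick tree, and answer each query when the sweep
    passes its x2. Returns answers in the original query order."""
    max_y = max(y for _, y in points) + 1
    bit = [0] * (max_y + 1)

    def add(i):                     # insert y-value i
        i += 1
        while i <= max_y:
            bit[i] += 1
            i += i & (-i)

    def prefix(i):                  # number of inserted y's <= i
        i += 1
        s = 0
        while i > 0:
            s += bit[i]
            i -= i & (-i)
        return s

    pts = sorted(points)
    ans = [0] * len(queries)
    j = 0
    for qi in sorted(range(len(queries)), key=lambda q: queries[q][0]):
        x2, y1, y2 = queries[qi]
        while j < len(pts) and pts[j][0] <= x2:
            add(pts[j][1])
            j += 1
        ans[qi] = prefix(y2) - prefix(y1 - 1)
    return ans
```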
In other words, we're effectively turning the $x$ coordinate into a "time" dimension. It's not hard to see why this works. Unfortunately, our queries are not offline, because some queries depend on the results of previous queries (note that we're performing binary search). The solution is to use a functional/persistent segment tree to construct all possible versions of the segment tree, by inserting the $(x_i,y_i)$ points in order. Then, for a query rectangle $[0,x_2]\times [y_1,y_2]$, we simply find the version of the segment tree corresponding to $x_2$, and query that segment tree! Using this approach for our 2D range tree makes each query $O(\log N)$, and for a 3D range tree, each range query takes $O(\log^2 N)$.

Refined "binary search"

The techniques above reduce the running time of a 3D range query from $O(\log^3 N)$ to $O(\log^2 N)$, but with binary search, our overall running time per query is still $O(\log^3 N)$, which might still be too slow. Our goal is to reduce this further. In fact, we can do this by performing a more clever version of our binary search! The key insight is that we are binary searching over the first coordinate, $x$: remember that we are finding the lowest value $V$ such that $[0,V]\times [0,l-1]\times [l,r]$ has at least $k$ points. Our original approach is the following:
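That original approach, written as a plain binary search over the compressed values (`count_in_box` is a stand-in for the 3D range-count structure, not the author's API):

```python
def kth_distinct(count_in_box, l, r, k, n):
    """Smallest V in [0, n-1] such that [0, V] x [0, l-1] x [l, r]
    contains at least k points; one range query per probe."""
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if count_in_box(mid, l, r) >= k:
            hi = mid        # enough distinct values <= mid: go lower
        else:
            lo = mid + 1    # not enough yet: go higher
    return lo
```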
which is just a normal binary search. However, we can use the structure itself as our guide for binary search! Here's an illustration:
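In code form, the guided walk might look like this; `left_count(lo, mid)` is a hypothetical callback that asks a node's inner 2D structure how many qualifying points have first coordinate in $[lo, mid]$:

```python
def kth_by_descent(left_count, k, lo, hi):
    """Walk down the value tree: if the left half contains >= k qualifying
    values, descend left; otherwise subtract its count and descend right.
    Each step costs one 2D query instead of restarting a binary search."""
    while lo < hi:
        mid = (lo + hi) // 2
        c = left_count(lo, mid)
        if c >= k:
            hi = mid
        else:
            k -= c
            lo = mid + 1
    return lo
```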
To help you understand this better, this is the same sort of technique that allows us to find the $k$-th smallest value in a normal binary search tree, as long as we keep track of the size of each subtree at each node. Since each 2D range query runs in $O(\log N)$ time, and we perform $O(\log N)$ of those queries, the overall running time reduces to $O(\log^2 N)$, which is good enough!

Time Complexity: $O((N + Q)\log^2 N)$

AUTHOR'S AND TESTER'S SOLUTIONS:
asked 16 May '16, 03:18

It can be solved using persistent segment trees. More precisely, a persistent segment tree whose nodes are themselves segment trees which make up another persistent segment tree. AC solution: https://www.codechef.com/viewsolution/10078463 EDIT: This question is basically a combination of k-th order statistic (SPOJ MKTHNUM) and distinct count (SPOJ DQUERY). Here is a very good explanation of MKTHNUM. DQUERY has a similar approach, but instead of an array that represents the count of elements you use an array that represents the count of the prev[i] values of elements, where, given an array A, prev[i] is the maximum index j < i such that A[i] = A[j]. Try and solve these beforehand if you haven't yet, or at least make yourself familiar with the solutions. For this problem we implement a persistent segtree like the one in MKTHNUM, but instead of storing a count it stores a segtree like the one we used in DQUERY. So when it has to decide which direction to go, it doesn't just compare the count of all elements but the count of all distinct elements. The key idea is that the segtree (which tracks prev[i]) of the first N values over a given interval differs in only O(log n) nodes from the segtree of the first N - 1 values over the same interval. Hope this clears it up. :) answered 16 May '16, 16:01


Hi, can someone explain to me the offline approach required to answer subtask 3? I know Mo's algorithm, so I can sort queries in each sqrt(N) bucket and process them in order. But how exactly can I maintain a structure that lets me add/remove elements to the set, and give me the k-th element in the set, each in no more than O(log N)? Can someone help me out? answered 17 May '16, 15:46
I am familiar with Mo's algorithm. You can use a C++ map to store both the presence and the count of an integer. So every time you increase or decrease the range, you either increase or decrease the count, and when the count is 0, you remove the entry. The number of distinct integers is then the size of the map. The problem is to get the k-th integer in the map; unfortunately, a C++ map doesn't support random access. One solution is to use an auxiliary vector that works together with the map to allow random access. Let me know if you want me to explain in more detail.
(17 May '16, 16:05)
@mightymercado Yes please, can you explain or give me a link that explains how to use an auxiliary vector that works together with the map to allow random access? Thanks a lot!
(17 May '16, 18:53)
@mightymercado, a map with log n insertion and log n order statistics doesn't work, unfortunately. You could use a policy-based data structure for this, as in: https://www.codechef.com/viewsolution/10097559 However, this TLEs for 3 of the tasks. I am wondering whether my Mo's algorithm is set up correctly or something is wrong with it.
(17 May '16, 19:42)
Hmm sorry, I was wrong about the auxiliary array. I think you need to use an order statistic tree: https://en.wikipedia.org/wiki/Order_statistic_tree
(17 May '16, 19:45)

About the offline approach to solve subtask 3: First use coordinate compression on the array elements so that all array elements are mapped to numbers 1 to N. Then create arrays of previous and next occurrences for all indices in the array. Then use Mo's algorithm. For this, create a segtree with N leaf nodes (i.e. 2*N-1 total nodes) where the node representing range (i,j) contains the count of the distinct array elements in range (i,j) present for the current positions of moleft and moright. Now suppose we want to increase moright; then we check whether the previous occurrence of the array element at moright+1 is less than moleft. If yes, then add 1 to all the ranges in the segtree including this array element. This takes O(log n) time. Once moleft = l and moright = r (l and r represent the query), a query on the segtree takes O(log n) time. answered 17 May '16, 17:29
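A runnable sketch of this offline idea (simplified: instead of prev/next arrays, a per-value occurrence counter decides when a value enters or leaves the window, which is equivalent; a Fenwick tree over compressed values gives both the O(log n) update and the k-th query by bit descent; assumes every query's k is valid):

```python
def mo_kth_distinct(A, queries):
    """Offline answers for queries (l, r, k), 0-based inclusive:
    the k-th smallest distinct value of A[l..r], in input order."""
    vals = sorted(set(A))
    comp = {v: i for i, v in enumerate(vals)}
    a = [comp[v] for v in A]
    m = len(vals)
    bit = [0] * (m + 1)          # Fenwick tree of 0/1 presence flags
    LOG = m.bit_length()

    def update(i, d):
        i += 1
        while i <= m:
            bit[i] += d
            i += i & (-i)

    def kth(k):
        """Smallest compressed value whose prefix count reaches k."""
        pos = 0
        for j in range(LOG, -1, -1):
            nxt = pos + (1 << j)
            if nxt <= m and bit[nxt] < k:
                pos = nxt
                k -= bit[nxt]
        return pos

    occ = [0] * m                # occurrences of each value in the window
    def add(i):
        occ[a[i]] += 1
        if occ[a[i]] == 1:
            update(a[i], 1)
    def remove(i):
        occ[a[i]] -= 1
        if occ[a[i]] == 0:
            update(a[i], -1)

    block = max(1, int(len(A) ** 0.5))
    order = sorted(range(len(queries)),
                   key=lambda q: (queries[q][0] // block, queries[q][1]))
    ans = [None] * len(queries)
    cl, cr = 0, -1               # current window [cl, cr]
    for qi in order:
        l, r, k = queries[qi]
        while cr < r:
            cr += 1
            add(cr)
        while cl > l:
            cl -= 1
            add(cl)
        while cr > r:
            remove(cr)
            cr -= 1
        while cl < l:
            remove(cl)
            cl += 1
        ans[qi] = vals[kth(k)]
    return ans
```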

You understood it right that for certain moleft and moright I have a segtree, each of whose nodes stores the number of distinct elements in its corresponding range [i,j]. Clearly, finding the k-th smallest element means the k-th smallest element in range [1,N] (remember all array elements are mapped to numbers 1 to N). This is a basic query. Now if I query giving k to the root node, it will check if its left child has at least k elements. If yes, it will go to the left child and repeat the procedure. If the left child has fewer than k elements, it will go to the right child with parameter k minus the count of the left child, and then repeat the procedure. This process continues until a leaf node is reached, hence taking O(height) = O(log n) time. answered 18 May '16, 03:05

@mightymercado I don't think that this problem can be solved using persistent segment trees. The naive way is to build a segment tree for all n*n ranges. In order to reduce space and time complexity, we use a trick for persistent segment trees: in the segment tree for range (i,j), each node is calculated as follows: node for range (i,j) = node for range (1,j) - node for range (1,i-1). But to use this trick the elements should be distinct, so I think persistent segment trees can't be used in this problem. I don't know if there's any other trick. answered 16 May '16, 15:40
@mightymercado Persistent Segment trees can be used. Build a merge sort tree with every node containing a vector of required positions and their previous occurrences, corresponding to the interval. Also maintain a PST at every node that gives you the number of points inside a rectangle. Using this, you can decide whether you have to move left or right. Link for code
(16 May '16, 15:59)

@grebnesieh Yes, can you explain it like how @himanshujaju explained his approach? Thanks! answered 16 May '16, 18:28

There is another approach using bitset (you have to code your own bitset though) and sqrt decomposition, which is pretty easy to code!