Run a DFS on it to get the order and also maintain the subtree size, so now we can easily find the range for a vertex, say v: L = pos[v], R = pos[v] + child[v] - 1. Now it is just: find the Y in [L, R] whose XOR with the query value is maximum, and then query segtree[Y] to get the minimum vertex. Although 1 << 20 is quite large, it can be compressed down to 2e5, since there are at most 2e5 distinct numbers possible in the worst case.
I have solved it; I guess you are misreading my point. My sole point is that the editorialist should have mentioned these things, keeping in mind that coders of various levels will be looking at the editorials. 1 << 20 works fine, it is not that big.
I think a TL of 2.5 would have been helpful, since execution time varies from person to person.
Can somebody tell me the difference between online queries and offline queries?
Offline: you know all the queries in advance.
Online: you don't; you get the next query only after you have answered the current one.
Thanks that was helpful …
As far as the editorial is concerned, it's a very bad one for those who don't already know the approach. Please write it in a simpler way so that it is easy to understand.
There are two parts to the solution for each query: find the value that minimises the XOR, and then find the smallest node number with that particular value. The first part is a pretty standard problem. People seem to have used a segment tree for each distinct value for the second part, but I was wondering if something like this would work (I did not submit, so please let me know in case there is a mistake in this): build a single segment tree over a sorted array of pairs (value, index), where index is the index in the flattened tree. In this segment tree we maintain the minimum of the node numbers (in the original tree) over contiguous segments of pairs from this array. Once we decide that the value that minimises the XOR is y, and the allowed range of indexes is [l, r] (the subtree's range in the flattened tree), we can query this segment tree. Implementation becomes easy: rather than explicitly specifying the query range in this segment tree, we can specify it using the two pairs <y, l> and <y, r> (this forms a contiguous subarray because of the sort order). Comparisons that are usually done in a segment tree, like l >= low && r <= high, can be done exactly that way by comparing pairs instead, and this should work as expected because of how pairs are compared.
It's correct!
Can someone suggest some easier problems related to persistence which I can solve before solving XORMIN for better understanding.
https://www.spoj.com/problems/MKTHNUM/
https://www.spoj.com/problems/COT/
These are the most common starting problems for understanding persistence. In case you need help, here is an editorial for the above problems:
https://blog.anudeep2011.com/persistent-segment-trees-explained-with-spoj-problems/
I used a compressed-trie-like data structure.
I created a trie at each node of the given tree, where the trie contains all the nodes in that node's subtree.
So the query time complexity is O(log(MAX)) ~ constant time.
The average time complexity to build the tree of tries is ~ O(n * log(MAX)).
Solution
Here the merge function merges the tries of the child nodes into that of the parent node.
Wouldn’t the merge over all tries be O(n^2) on a tree of the following form?
1--3--5--7 ...
| | |
2 4 6 ...
if(a == NULL)      // left trie is empty: reuse the right one as-is, O(1)
    return b;
if(b == NULL)      // right trie is empty: reuse the left one as-is, O(1)
    return a;
This snippet of the merge function tends to give a complexity of order O(n) for the example you gave.
I think the worst-case time complexity may be of order O(n * n^(1/2)), in the case where the root node has n^(1/2) children and each child's subtree has n^(1/2) nodes…
Use of a compressed trie might have balanced this.
Can someone explain how the author's solution uses O(N * log N * log C) space for the persistent data structure? Also, for a node, why are we excluding its child with the largest subtree size when calling update? I'm guessing both are linked… it would be really helpful if someone could explain this. Thanks
The author's solution applies persistence to each node in the tree, and you can do this while traversing the tree bottom-up. Having computed the tries for the children, you can build the trie for the parent. This requires merging, and the merging is done as explained here (the small-to-large trick).
Thus the merging part takes N * log N insertions, thereby making the overall space complexity just
O(N * log N * log C).
Thanks for the reply… I had already gone through this post on Codeforces. For the proof of complexity they just gave a reference to HLD/DSU, without details… I tried deriving it on my own, but I couldn't… Is there a proof that clearly explains how merging takes N * log N time (in this case, space)?
I am not sure what you mean by compressed trie, but my solution is very similar to yours and doesn't use a compressed trie. I think the only difference is in the merge function (union of two tries) we have used. CodeChef: Practical coding for everyone
As far as I know, O(N * log^2(N)) appears only when we use some other data structure like map/set with this technique (DSU on trees). But typically, for our case, it must be O(N * log N * log C), where the additional log C is because of insertion into a trie.
Yep. That’s true. I deliberately avoided writing C or N here. Just mentioned there are 2 log factors.