Problem Link:

Difficulty: Hard

Prerequisites: Heavy-light decomposition, persistent data structures

Problem: You are given a tree of N vertices. Each vertex is initialized to value 0. Further, you are given 3 kinds of operations. The input is given in a manner that requires an online solution to the problem (for exact details, refer to the original problem statement).

Explanation:

Note: The solution uses segment trees, heavy-light decomposition, and persistence over a tree. To avoid confusion in terminology, "node" refers to a component of a data structure (segment tree or persistence), whereas "vertex" refers to a vertex of the tree given in the problem. We develop the solution by analyzing the problem under various special cases, as follows:
Tree is a chain, there are no rollbacks

Here, since the tree is a chain, let us use a segment tree over it. Now, we wish to update the values of vertices from X to Y. Suppose we have two such operations on the same vertices: X Y A B followed by X Y C D. Then the values (going along the path from X to Y) combine term by term, giving exactly what the single operation X Y (A+C) (B+D) would produce. Thus, let us store in our segment tree the pair (A, B) associated with the operation. Now, when we get to a segment tree node which is completely contained in our X-Y path, we just update the (A, B) value of that node. Further, while querying, we would like to return an answer as soon as our query node is completely contained in our required segment. Thus, we also need to store a field "sum" in our segment tree, which stores the sum of the left subtree + right subtree. Pseudocode for this is as follows:
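The original pseudocode is not reproduced in this archive. As an illustrative stand-in (not the setter's actual code), here is a minimal self-contained C++ sketch of the idea: each node keeps an additive (A, B) tag describing an arithmetic progression over its range plus a sum field, and both update and query stop as soon as a node is fully contained.

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Segment tree over a chain of n vertices (0-indexed), sketching the
// (A, B)-pair technique described above. The tag (A[node], B[node]) means
// "add A + (i - l)*B to every position i of this node's range [l, r]".
struct ChainTree {
    int n;
    vector<long long> A, B, sum;          // per-node AP tag and subtree sum
    ChainTree(int n) : n(n), A(4*n), B(4*n), sum(4*n) {}

    // add A0 + (i - x)*B0 to every position i in [x, y]
    void update(int node, int l, int r, int x, int y, long long A0, long long B0) {
        if (y < l || r < x) return;
        long long lo = max(l, x), hi = min(r, y), cnt = hi - lo + 1;
        long long first = A0 + (lo - x) * B0;         // first value landing here
        sum[node] += cnt * first + B0 * cnt * (cnt - 1) / 2;
        if (x <= l && r <= y) {                       // fully covered: keep tag
            A[node] += A0 + (l - x) * B0;             // re-base tag to l
            B[node] += B0;
            return;
        }
        int m = (l + r) / 2;
        update(2*node, l, m, x, y, A0, B0);
        update(2*node+1, m+1, r, x, y, A0, B0);
    }

    // sum of positions [x, y]
    long long query(int node, int l, int r, int x, int y) {
        if (y < l || r < x) return 0;
        if (x <= l && r <= y) return sum[node];       // return when contained
        long long lo = max(l, x), hi = min(r, y), cnt = hi - lo + 1;
        // contribution of this node's own tag to the overlapped part
        long long res = cnt * (A[node] + (lo - l) * B[node])
                      + B[node] * cnt * (cnt - 1) / 2;
        int m = (l + r) / 2;
        return res + query(2*node, l, m, x, y) + query(2*node+1, m+1, r, x, y);
    }

    void update(int x, int y, long long a, long long b) { update(1, 0, n-1, x, y, a, b); }
    long long query(int x, int y) { return query(1, 0, n-1, x, y); }
};
```

Note that no tag is ever pushed down: a query simply accumulates the tag contributions of every node it passes through, which is what makes this structure easy to make persistent later.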
This can be found in the Setter's Solution (lines 24-54), if you ignore the persistence that is introduced in the modify() function.

You have a chain, but there are rollback queries

Introducing Persistence

Here, we need to keep past information as well. The term used for this is persistence. If we were to generalize the segment tree approach, we need to implement a segment tree that is capable of going back in time as well as forward in time. To illustrate persistence, let us first take the example of a singly linked list. Imagine we have the list
And we wish to insert the element D between C and E. Also, we need to keep track of both "versions" of the list. That is, we require our list to look like
while keeping track of the original. We could make an entirely new copy of the list, but this would take O(N) memory and time. The thing to notice is that, from E till the end, the list remains completely the same. Hence, we "clone" the nodes A, B, and C, insert D after the clones, and finally associate the cloned root A with the new version. Thus, our data structure now looks somewhat like this:
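A minimal C++ sketch of this path-copying idea (the names, such as insertAt, are our own illustration, not from any referenced solution):

```cpp
#include <cassert>
#include <memory>
#include <string>
using namespace std;

// Persistent singly linked list via path copying: inserting after the first
// k nodes clones only those k nodes; the rest of the list is shared between
// the old and new versions.
struct Node {
    char val;
    shared_ptr<Node> next;
    Node(char v, shared_ptr<Node> n) : val(v), next(n) {}
};
using List = shared_ptr<Node>;

List push(char v, List tail) { return make_shared<Node>(v, tail); }

// Return a NEW version with v inserted after the first k nodes;
// the old version remains valid and unchanged.
List insertAt(List head, int k, char v) {
    if (k == 0) return make_shared<Node>(v, head);
    // clone the current node, recurse into the (shared) tail
    return make_shared<Node>(head->val, insertAt(head->next, k - 1, v));
}

string toString(List head) {
    string s;
    for (auto p = head; p; p = p->next) s += p->val;
    return s;
}
```

The last assertion in the test below checks the key property: the E node is physically the same object in both versions, so memory is shared rather than copied.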
Note that if we had a doubly linked list, then we would need to clone the entire list in order to implement persistence.

Applying persistence

Now, how does this apply to our generalization? We can imagine our segment tree as a binary tree with links to left and right children. In this setting, what would a modification to the tree look like? It would just be a path starting from the root down to a particular node! What we need to do here is clone just the path from the root to that node, and associate the new root with the new version. Here, when the tree has N nodes, a path of length at most log(N) is being cloned. Compare this with the linked list example, where in the worst case we could clone a path of size O(N). Thus, there is a huge improvement in the case of the segment tree. So, if we keep a global array of version roots, and perform our updates of the tree with cloning, we get exactly what we desire. Refer to the Setter's code, lines 42-62, for the details.

You have a tree, and no rollback queries

Heavy-light Decomposition of Tree (HLDoT)

The heavy-light decomposition is used to break up a tree into a set of disjoint paths, with certain useful properties. First, root the tree. Then, for each vertex x, find the child of x having the largest subtree. Let's call that vertex y. Then the edge x-y is a heavy edge, and all other x-child edges are light edges. The most important property is that, from any vertex x, the path from x to the root passes through at most log(N) light edges: whenever you go up through a light edge, you at least double the size of the subtree rooted at the current vertex, and this "doubling" cannot happen more than log(N) times. A number of problems that require you to find some property of the path between two arbitrary vertices of a tree use heavy-light decomposition.
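To make the version-root idea concrete, here is a small self-contained sketch of a persistent segment tree with point updates and range-sum queries (a deliberate simplification: the actual problem needs the (A, B) range updates described earlier, but the cloning mechanics are identical):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Persistent segment tree sketch: each update clones only the O(log N) nodes
// on one root-to-leaf path and appends a new root to the global version array.
struct PST {
    struct Node { long long sum; int l, r; };
    int n;
    vector<Node> t;        // node pool; index 0 is a shared empty subtree
    vector<int> roots;     // roots[v] = root node of version v

    PST(int n) : n(n), t(1, {0, 0, 0}) { roots.push_back(0); }

    int clone(long long sum, int l, int r) { t.push_back({sum, l, r}); return t.size() - 1; }

    int update(int node, int lo, int hi, int pos, long long delta) {
        if (lo == hi) return clone(t[node].sum + delta, 0, 0);
        int mid = (lo + hi) / 2;
        if (pos <= mid)
            return clone(t[node].sum + delta, update(t[node].l, lo, mid, pos, delta), t[node].r);
        return clone(t[node].sum + delta, t[node].l, update(t[node].r, mid + 1, hi, pos, delta));
    }

    long long query(int node, int lo, int hi, int x, int y) {
        if (node == 0 || y < lo || hi < x) return 0;
        if (x <= lo && hi <= y) return t[node].sum;
        int mid = (lo + hi) / 2;
        return query(t[node].l, lo, mid, x, y) + query(t[node].r, mid + 1, hi, x, y);
    }

    // apply a[pos] += delta on top of `fromVersion`; returns the new version id
    int update(int fromVersion, int pos, long long delta) {
        roots.push_back(update(roots[fromVersion], 0, n - 1, pos, delta));
        return roots.size() - 1;
    }
    long long query(int version, int x, int y) { return query(roots[version], 0, n - 1, x, y); }
};
```

A rollback is then just a matter of passing an older version id as `fromVersion`: the branch in the test below updates on top of version 1 even though version 2 already exists.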
For example, DGCD is one such problem, which asks you to find the GCD of the numbers along the path from X to Y. If you can solve a problem for a chain using a segment tree, then there is a very good chance that you can solve it for a tree using HLDoT. Indeed, if you build segment trees over the heavy chains, then the answer for your path X-Y can be broken up into the two paths from X to LCA(X, Y) and from Y to LCA(X, Y). Then, using the fact that you make only log(N) shifts from one heavy chain to another, you are actually making only log(N) segment tree queries.

Applying HLDoT here

Let us perform heavy-light decomposition of the tree here. We make chains consisting only of heavy edges. We also need to find the LCA efficiently. This can be done in O(N log N) time by storing the information up[x][i] = the ancestor of x which is at a height of 2^i above x. Clearly, up[x][i] = up[up[x][i-1]][i-1] (take a 2^(i-1) upward jump from the 2^(i-1)'th ancestor of x). Then,
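The pseudocode that followed here is missing from this archive. A standard C++ sketch of the up[x][i] table and the resulting LCA query might look like this (it assumes, purely for brevity, that vertex 0 is the root and parent[v] < v for every other vertex, so depths can be filled in one pass):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

// Binary-lifting LCA following the up[x][i] recurrence above.
// Assumes vertex 0 is the root (parent[0] == 0) and parent[v] < v.
struct LCA {
    int LOG;
    vector<vector<int>> up;   // up[i][x] = the 2^i-th ancestor of x
    vector<int> depth;

    LCA(const vector<int>& parent) {
        int n = parent.size();
        LOG = 1; while ((1 << LOG) <= n) LOG++;
        up.assign(LOG, vector<int>(n));
        depth.assign(n, 0);
        for (int v = 0; v < n; v++) {
            up[0][v] = parent[v];
            if (v) depth[v] = depth[parent[v]] + 1;
        }
        for (int i = 1; i < LOG; i++)
            for (int v = 0; v < n; v++)
                up[i][v] = up[i-1][up[i-1][v]];   // two 2^(i-1) jumps
    }

    int query(int x, int y) {
        if (depth[x] < depth[y]) swap(x, y);
        int diff = depth[x] - depth[y];
        for (int i = 0; i < LOG; i++)             // lift x to y's depth
            if (diff >> i & 1) x = up[i][x];
        if (x == y) return x;
        for (int i = LOG - 1; i >= 0; i--)        // lift both just below LCA
            if (up[i][x] != up[i][y]) { x = up[i][x]; y = up[i][y]; }
        return up[0][x];
    }
};
```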
Now, given an update query from X to Y, first find L = LCA(X, Y). Then, update the path from X to L and from L to Y. This is accomplished in the Setter's code, lines 195-220. Pseudocode follows. Let chain[vertex] = an identifier for the particular segment tree's root that we require.
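The pseudocode is likewise missing from this archive. As a hedged illustration, the chain-walking loop can be sketched as below, where head[v], pos[v], and parent[v] are the usual HLD arrays (our own names, not the setter's), l is an ancestor of x, and apply receives one contiguous chain segment of the path from x up to l:

```cpp
#include <array>
#include <cassert>
#include <vector>
using namespace std;

// Decompose the path from x up to its ancestor l into O(log N) chain segments.
// apply(chainHead, lo, hi) is called once per segment, with lo <= hi being
// positions inside that chain's segment tree.
template <class F>
void forSegmentsUpward(int x, int l, const vector<int>& head,
                       const vector<int>& pos, const vector<int>& parent, F apply) {
    while (head[x] != head[l]) {
        apply(head[x], pos[head[x]], pos[x]);  // whole chain prefix ending at x
        x = parent[head[x]];                   // hop over one light edge
    }
    apply(head[l], pos[l], pos[x]);            // final partial segment in l's chain
}
```

An update from X to Y would run this twice, once for each half X..L and L..Y, feeding each reported segment into the corresponding chain's (persistent) segment tree.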
Overall Solution

The overall solution merges persistence with heavy-light decomposition. At a high level, it goes as follows:
Complexity Analysis:

Memory: Each update operation on a segment tree takes at most O(log N) memory. Each update operation on the tree affects at most O(log N) chains. Hence, the memory complexity is O(N log^2 N).

Time: Updates on each segment tree take O(log N) time. There are at most O(log N) segment trees to update per update operation. Hence, O(log^2 N) per update operation. Queries behave similarly: O(log^2 N) for queries as well. Rollback operation: O(1) time to update the global version number.

Appendix: Solution for offline problem

The requirement of lastans in the input format ensured that offline solutions would not work here. Still, there is an offline solution to the problem which does not use persistence. Imagine versions as nodes of a graph. An update operation amounts to creating a new version-child of the current node. If there were no rollbacks, this graph would be a chain. Since there are rollbacks, you may branch out from an earlier version, in which case you have a tree. Note that, if you went from a version to its child using the operation X Y A B, then you can go back from the child to its parent by applying the inverse operation X Y -A -B. Thus, first you build this version-tree, and then you traverse it using a DFS. When you are at a version-node n, solve all queries that pertain to that node before moving on.

Setter's Solution: Can be found here

Tester's Solution: Can be found here
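To make the version-tree DFS concrete, here is a toy C++ sketch that applies the idea directly to a plain array rather than the HLD structure (so it is illustrative only, not an efficient solution): each edge of the version tree carries an operation, which is applied on the way down and undone with its negation on the way up.

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Operation "x y a b": add a + (i - x)*b to every position i in [x, y].
struct Op { int x, y; long long a, b; };

// Offline version-tree traversal: while DFS is at version v, the array holds
// exactly version v's state, so v's queries can be answered on the spot.
struct VersionTree {
    vector<vector<pair<int,Op>>> children;  // (child version, op on that edge)
    vector<long long> arr;                  // current array state
    vector<vector<pair<int,int>>> queries;  // per version: (queryId, position)
    vector<long long> answers;

    void apply(const Op& o) {
        for (int i = o.x; i <= o.y; i++)
            arr[i] += o.a + (long long)(i - o.x) * o.b;
    }

    void dfs(int v) {
        for (auto& [id, pos] : queries[v]) answers[id] = arr[pos];
        for (auto& [child, op] : children[v]) {
            apply(op);                          // enter child version
            dfs(child);
            apply({op.x, op.y, -op.a, -op.b});  // inverse op restores the state
        }
    }
};
```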
This question is marked "community wiki".
asked 16 Feb '13, 17:37

Complexity analysis is bad. But the method of making the collection of segment trees persistent seems wrong.
The outputs when the root of the tree is 1 versus 3 are different for the query. I scratched my head for a while over how to handle this properly, and my idea was to construct a balanced binary search tree where each node corresponds to some segment tree, with key equal to the top vertices of the heavy-light paths. Then we can indeed have an array of roots of this mega-tree for different versions. Now I indeed have O(log^2 N) time and memory, since the union of paths to the chosen nodes can be >= C log^2 N in the worst case (>= C log N segment trees to update, and they all lie at leaves of the mega-tree). It seems that something similar is used in the tester's solution. answered 16 Feb '13, 21:26
"The reason is that for at most 3 (or 4 in case of bad choice) heavylight paths among O(log N) paths we do nontrivial query or update. " I think it`s wrong. Make tree with following recursive procedure. Make(N)  make tree with N vertices. Make(N) :
Now, if we start from the leftmost vertex and go to the root, we do O(log N) non-trivial queries.
(17 Feb '13, 00:40)
Cool! Then it only means that test data are weak :)
(17 Feb '13, 00:52)
thanks @anton_lunyov
(15 Sep '14, 23:44)

It's really a nice editorial! Just out of curiosity, is there any other decomposition of a tree having the same efficiency as HLDoT? answered 02 Mar '13, 12:52
