An alternative trick to solve many HLD problems using basic dfs

In this post I will try to explain an alternative to HLD that I have been using for the last couple of months.

As many of you may know, HLD is tough to implement: it is time-consuming and has to be coded carefully.

Also, if an HLD problem shows up in a short contest, it is going to be difficult to implement under time pressure. Hence I came up with this technique.

I will try to explain the idea using the July Long Contest problem PSHTTR.

In this problem the number of nodes can be up to 10^5, which means the maximum depth of the tree is 10^5 in the worst case.
What I do is perform a DFS on the tree, and for every node whose depth % 1000 = 1 I store the values of all its ancestors.

In the worst case the memory needed is 1000 + 2000 + … + 10^5 = 1000*(1 + 2 + … + 100) = 1000*5050 ≈ 5*10^6.
After this I sort each stored list and take its prefix XOR.

Now for each query I travel up at most 1000 ancestors until I reach an ancestor whose depth % 1000 = 1, and from its sorted list I can find the XOR of all values less than k with a binary search over the prefix XOR.
We do this for both endpoints U and V of the query (source and destination).
Because XOR is its own inverse, the values on the common root-to-LCA part get XOR-ed in twice and cancel out.

Hence each query takes about 1000 steps in the worst case, i.e. O(1000 * Q) over all Q queries.
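
To make this concrete, here is a rough C++ sketch of the whole pipeline for a PSHTTR-style query (XOR of the edge values on the U-V path that are at most k; flip the comparison if the exact statement differs). The names (B, vals, pref, xorToRoot) and the layout are only illustrative, not my submitted code:

```cpp
#include <bits/stdc++.h>
using namespace std;

const int B = 1000;                    // checkpoint every B levels (depth % B == 1)
int n, root = 1;
vector<vector<pair<int,int>>> g;       // adjacency list: {neighbour, edge value}
vector<int> par, dep;
vector<long long> cost;                // cost[v] = value of the edge (parent(v), v)
vector<vector<long long>> vals, pref;  // per checkpoint: sorted ancestor edge values + prefix XOR

void dfs(int u, int p, int d, vector<long long>& path) {
    par[u] = p; dep[u] = d;
    if (d % B == 1) {                  // checkpoint: remember every edge value above u
        vals[u] = path;
        sort(vals[u].begin(), vals[u].end());
        pref[u].assign(vals[u].size() + 1, 0);
        for (size_t i = 0; i < vals[u].size(); i++)
            pref[u][i + 1] = pref[u][i] ^ vals[u][i];
    }
    for (auto [v, w] : g[u]) if (v != p) {
        cost[v] = w;
        path.push_back(w);
        dfs(v, u, d + 1, path);        // recursion depth can reach n; use an explicit stack if needed
        path.pop_back();
    }
}

// XOR of all edge values <= k on the root -> u path.
long long xorToRoot(int u, long long k) {
    long long res = 0;
    while (u != root && dep[u] % B != 1) {   // climb at most ~B edges
        if (cost[u] <= k) res ^= cost[u];
        u = par[u];
    }
    if (u != root) {                         // u is now a checkpoint; its table covers the rest
        int cnt = upper_bound(vals[u].begin(), vals[u].end(), k) - vals[u].begin();
        res ^= pref[u][cnt];
    }
    return res;
}

// The root -> LCA part is XOR-ed in from both endpoints and cancels itself out.
long long query(int u, int v, long long k) {
    return xorToRoot(u, k) ^ xorToRoot(v, k);
}

int main() {
    // tiny usage example: the path 1 - 2 - 3 - 4 with edge values 5, 3, 7
    n = 4;
    g.assign(n + 1, {});
    par.assign(n + 1, 0); dep.assign(n + 1, 0); cost.assign(n + 1, 0);
    vals.assign(n + 1, {}); pref.assign(n + 1, {});
    auto addEdge = [&](int u, int v, int w) {
        g[u].push_back({v, w});
        g[v].push_back({u, w});
    };
    addEdge(1, 2, 5); addEdge(2, 3, 3); addEdge(3, 4, 7);
    vector<long long> path;
    dfs(root, 0, 0, path);
    cout << query(2, 4, 6) << "\n";    // edges on the path 2-4 are {3, 7}; only 3 <= 6, so prints 3
}
```

The climbing loop touches at most about 1000 edges per endpoint, and the binary search on the checkpoint's sorted list handles everything above it in O(log n).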

Though this pushes close to the limits, we can actually change the interval at which we store the ancestors (1000 in this case) dynamically. However, this has not been required so far for me.

This is because such situations (which test the upper limits) rarely occur, and even when they do, we can change the depth value depending on the tree.

Another problem I solved using this technique is GPD on CodeChef.
GPD can also be solved using a persistent trie, but this method is far easier.

My solutions: PSHTTR, GPD

In case we need the sum of values on the path between 2 nodes, we can store prefix sums from the root and find the answer as:

sum up to U + sum up to V - 2 * (sum up to LCA(U,V)) + value(LCA(U,V))

(for values stored on nodes; if the values sit on edges, drop the last term).
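
A small sketch of that formula, assuming the values sit on the nodes and that par[], dep[], val[] and sumToRoot[] were filled by one DFS beforehand (all names are illustrative; the LCA is found by plain climbing, in the same spirit as the trick):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Illustrative names: par[x] = parent, dep[x] = depth, val[x] = value stored at x,
// sumToRoot[x] = sum of val[] on the root -> x path. All filled by one DFS beforehand.
vector<int> par, dep;
vector<long long> val, sumToRoot;

int lca(int u, int v) {                        // plain O(depth) climb, no binary lifting
    while (dep[u] > dep[v]) u = par[u];
    while (dep[v] > dep[u]) v = par[v];
    while (u != v) { u = par[u]; v = par[v]; }
    return u;
}

// The root -> LCA prefix is contained in both sums, so subtract it twice and
// add the LCA's own value back once. For edge values stored at the child
// endpoint, drop the "+ val[w]" term.
long long pathSum(int u, int v) {
    int w = lca(u, v);
    return sumToRoot[u] + sumToRoot[v] - 2 * sumToRoot[w] + val[w];
}

int main() {
    // tiny check on the path 1 - 2 - 3 with node values 4, 6, 5 (node 1 is the root)
    par       = {0, 0, 1, 2};
    dep       = {0, 0, 1, 2};
    val       = {0, 4, 6, 5};
    sumToRoot = {0, 4, 10, 15};
    cout << pathSum(1, 3) << "\n";             // 4 + 6 + 5 = 15
}
```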

Upd1: If the maximum depth of the tree is less than 1000, you can directly climb up the tree and do the calculations.

There can be cases where 10^5 - 1000 of the nodes have depth % 1000 = 1. To overcome this we can run an initial DFS that counts the number of nodes with depth % 1000 = 1, 2, 3, 4, …, 999 and choose a remainder whose count keeps the memory within the limit (this is the dynamic change of value I mentioned). I also believe we can stretch the modulus up to 2000.
Note: I have not tried implementing this dynamic choice of the depth at which values are stored.
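
A sketch of how that counting pass could look (the input format and the names freq and best are assumptions for the illustration); it picks the residue class with the fewest nodes as the checkpoint depth:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    // assumed input format for the sketch: n, then the n-1 edges of the tree
    int n;
    cin >> n;
    vector<vector<int>> g(n + 1);
    for (int i = 0; i < n - 1; i++) {
        int u, v; cin >> u >> v;
        g[u].push_back(v);
        g[v].push_back(u);
    }
    const int B = 1000;
    vector<int> dep(n + 1, -1), freq(B, 0);
    queue<int> q;                      // BFS instead of DFS to avoid deep recursion
    dep[1] = 0; q.push(1);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        freq[dep[u] % B]++;            // count how many nodes fall in each residue class
        for (int v : g[u]) if (dep[v] == -1) {
            dep[v] = dep[u] + 1;
            q.push(v);
        }
    }
    // by pigeonhole the smallest class has at most n / B nodes, so storing full
    // ancestor lists only at those nodes keeps the memory bounded
    int best = min_element(freq.begin(), freq.end()) - freq.begin();
    cout << "store ancestors at nodes with depth % " << B << " == " << best << "\n";
}
```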

Upd2: A possible proof that this technique will not exceed the memory limit if the depth is chosen according to Upd1, suggested by abx_2109 (thank you very much for the proof):

Proof: Let freq[i] store the number of nodes with depth % 1000 = i. Note that i takes values from 0 to 999 and that these 1000 classes together cover all the nodes of the tree, so the freq[i] sum to at most 10^5. By pigeonhole, the minimum of these 1000 values is therefore at most 10^5 / 1000 = 100.


That's surely a great trick. Officially it is (almost) the square-root decomposition of the maximum number of nodes.
Just like you did with the value 1000, you can choose sqrt(n) for any n from 1 to 10^5.


What happens if your tree looks like a path from the root to node 1000 and node 1000 has 99000 children? It looks like the memory complexity (and thereby the time complexity) grows to O(n^2) in this case.


You can solve such questions using the DFS order property. The TAQTREE problem on CodeChef will help you understand it. Here is the link to that problem's editorial, where both HLD and the DFS order property are explained in detail:
TAQTREE Editorial

Nice trick. But how would you decide the depth at which the ancestor storing will be applied? What would be the criterion for choosing the appropriate depth?

What are vec[] and cost[] in your PSHTTR implementation? Please explain.

Brilliant decomposition trick! At first glance the best choice is mod \sqrt{N} \approx 316, giving O(Q\sqrt{N}), but interestingly, when I ran the algorithm, using mod 1000 was surprisingly faster in practice!

I’m interested in a method for analysing the average-case performance of such a trick on graphs. The best modulus, as some sort of expected value depending on the graph size, would be ideal. I hope someone finds the time to come up with such a method, or better yet publishes a paper; that would be elegant! :slight_smile:


Nice trick! But how do you handle updates as well?

Amazing Trick!
Thanks for sharing

Yeah, I realized it is similar to square-root decomposition, but I wanted to mention it here so that others could benefit from it, as it is a viable alternative to HLD in short contests.


For this, we can just run an initial DFS and find an appropriate depth value for which the memory stays within the limits. Maybe for your case 999 works just fine. Similarly, it can be proved that such a value always exists.


That's nice. With this addition you seem to get a widely applicable O(n sqrt(n)) algorithm for trees.

I had mentioned this fact in Upd1 but I couldn’t prove it; it was only intuition. Could you give me a formal proof so that I can add it?

Proof: Let freq[i] store the number of nodes with depth % 1000 = i. Note that i takes values from 0 to 999 and that these 1000 classes together cover all the nodes of the tree, so the freq[i] sum to at most 10^5. Hence, by pigeonhole, the minimum of these 1000 values is at most 10^5 / 1000 = 100. That choice of i will be just perfect. Please correct me if I am wrong.


A simple modification would be to store the ancestor values only at nodes whose depth % sqrt(n) == 0 and whose height is > sqrt(n). Now you’ll need to go up at most 2*sqrt(n) nodes for each query, but this guarantees space efficiency, even in the case you mentioned.
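
In code, the selection could look roughly like this (a sketch with made-up names; height[u] is the number of edges on the longest downward path from u):

```cpp
#include <bits/stdc++.h>
using namespace std;

int n, SQ;
vector<vector<int>> g;
vector<int> dep, height;           // height[u] = edges on the longest downward path from u
vector<bool> isCheckpoint;         // true where the ancestor table would be kept

void dfs(int u, int p, int d) {
    dep[u] = d; height[u] = 0;
    for (int v : g[u]) if (v != p) {
        dfs(v, u, d + 1);
        height[u] = max(height[u], height[v] + 1);
    }
    // keep the ancestor values only at "deep enough" block boundaries
    isCheckpoint[u] = (dep[u] % SQ == 0 && height[u] > SQ);
}

int main() {
    // tiny example: a path 1 - 2 - ... - 8, so SQ = 2
    n = 8; SQ = max(1, (int)sqrt((double)n));
    g.assign(n + 1, {});
    for (int v = 2; v <= n; v++) { g[v - 1].push_back(v); g[v].push_back(v - 1); }
    dep.assign(n + 1, 0); height.assign(n + 1, 0); isCheckpoint.assign(n + 1, false);
    dfs(1, 0, 0);
    for (int u = 1; u <= n; u++)
        if (isCheckpoint[u]) cout << "node " << u << " keeps an ancestor table\n";
}
```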

@drajingo What about the space complexity if there are 10^5 - 10*sqrt(10^5) nodes at depth 10*sqrt(10^5)? It increases to about 3*10^8.

@abx_2109 I think I get your proof. Thank you very much. I will put it in the post.

You can do this by performing an initial DFS, counting the number of nodes with depth % 1000 = 1, 2, 3, …, and taking the remainder with the least count as the depth at which the ancestor storing is done.

@abx_2109 I think you misunderstood me. By the height of a node I meant the maximum number of edges on a path from that node down to a leaf. For example, if the tree is a path of n nodes, the leaf has height 0 and the root has height n - 1.

cost[]:
Assume you have 2 nodes u and v with an edge of cost C between them. If u is the parent of v, then I assign cost[v] = C. This turns the edge-cost tree into a sort of node-cost tree, which I found easier to process. Since every node except the root has exactly one parent, this mapping is well defined.
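
Roughly like this (a tiny sketch, not my exact submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Push each edge's cost down onto its child endpoint, so every non-root node
// "owns" exactly one cost and the root keeps cost 0.
void pushDown(int u, int p, const vector<vector<pair<int,int>>>& g, vector<long long>& cost) {
    for (auto [v, w] : g[u]) if (v != p) {
        cost[v] = w;                 // the edge (u, v) of cost w is stored at the child v
        pushDown(v, u, g, cost);
    }
}

int main() {
    // edge-cost tree: 1 -(5)- 2 and 2 -(3)- 3, rooted at 1
    vector<vector<pair<int,int>>> g(4);
    auto add = [&](int u, int v, int w) { g[u].push_back({v, w}); g[v].push_back({u, w}); };
    add(1, 2, 5); add(2, 3, 3);
    vector<long long> cost(4, 0);
    pushDown(1, 0, g, cost);
    cout << cost[2] << " " << cost[3] << "\n";   // prints: 5 3
}
```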

vec[]:
In my implementation, for every node whose depth % 1000 = 1, I store the costs of all its ancestors. This information goes into the vec[] vector.

Let me know if you have any other doubts.