I have absolutely no idea how you passed the last three cases but failed the first two.

I have 2 versions: version 1 and version 2. In version 1 I got 80 points; there I compressed the path while finding the parent. In version 2 I got 30 points; there I compressed the path while assigning the parent. And I witnessed a huge time difference. Any reason why?

P.S. I don't know why I am getting WA on the first test case.

Nah. Since the time multiplier for Java is 2, I divided it by 2 !!

How can you not take constants into consideration?

If a log(n) algorithm takes 1 sec to run, then obviously an algorithm that takes even twice that time won't pass.

2nd one is not using path compression.

Path compression means you compress the path every time you traverse it.

Agreed. What I meant to say is: how do you know that one algo's constant is larger??

Just by looking at the time taken by the submitted solution?? Or is it mentioned somewhere??

PS: I believe what @taran_1407 said

And I have experience of PBDS taking more time compared to a few other algos and data structures.

Shouldn't the second one be considered better than the first? In the first, you are allowing the chain to grow large (compressing only when one needs to find a node's parent), while in the second, right from the start, I am not allowing the chain to grow and am always attaching everything to one parent. ??

Height should be minimized in order to reduce time complexity. The 1st one is doing that.

If I keep on adding elements n−1 times, the length grows to n, and then I query once, leading to O(n), I guess?? Whereas in the second, the height doesn't grow to n?? Any wrong logic here??

Height can't grow to n. While merging you will find the root, and while finding the root you should compress.

Agreed! But have a look at version 1 again: while merging, I am not finding the root.

I doubt that's DSU.

In DSU I learnt we merge two connected components.

You are making a graph with given edges instead.

Yeah, that's what I am confused about. While adding nodes, in the first version I am doing the same and making the height really large; still it gives better running time than the second version (which I claim should be more time efficient, as the depth isn't allowed to grow)…

Do you agree the results are weird??

Ooh. I misread the bonus problem as "update Ap to some X" (not P). So in the original bonus problem we would just remove the element P from the DS if present and proceed normally. Could you think of any algorithm if the query were "update Ap to X"?

Send me some nice persistence tutorials… please sir _/\_ (akele akele maje mat karo aap — don't have all the fun by yourself)

Lol . . . . .

This should be my comment in reply to yours…

Hey, you are not clearing the data stored at each node between test cases. Still, why isn't that affecting the solution for more than one test case? For example, here, shouldn't the `cont` at node=2 be affected, since something is left over from the previous test case that should have been cleared?

TC2 represents the expected test case 2, but actually the tree would look different. Is there something I am missing?

I am clearing the data: the last loop, which iterates from 1 to ctr, sets everything to 0.

The problem is you used ordered sets, which the STL implements as red-black trees (O(log n) per operation), while I used unordered_map, which the STL implements as a hash table (O(1) on average), and that reduces the running time by a huge amount. Though you did great, buddy.

Suppose the queries were like below (let us assume the queries are not online, they are just simple queries, i.e. x, l and r are directly given for the respective query types):

```
1
1000000000000000000 500000
1 1000000000000000000
2 1 1000000000000000000
1 999999999999999999
2 1 1000000000000000000
1 999999999999999998
2 1 1000000000000000000
1 999999999999999997
2 1 1000000000000000000
…
```

and so on.

Then, will this algo work?