updated code
See the change I made on line 95: it's possible that r <= mid, in which case the end should be r, not mid.
Awesome!! Thanks

“The sum of Q over all test cases does not exceed 1,000,000.” What is the meaning of this? Is it useful? We already get Q up to 5*10^5 per test case, so why is the bound on the sum necessary?
Yup, it is very important.
The question says Q \leq 5*10^5 for each test case, and here we have t = 10, which means we could have to answer up to Q*t \leq 5*10^6 queries within the given time limit.
That might not be possible to do easily; even just scanning 5*10^6 queries takes a lot of time.
Hence they decided that the sum of Q over all test cases should be \leq 10^6,
which reduces the worst-case execution time by a factor of 5 (2 secs instead of 10).
Suppose the queries were like the ones below (let us assume the queries are not online, they are just simple queries, i.e. x, l and r are given directly for the respective query types):
1
1000000000000000000 500000
1 1000000000000000000
2 1 1000000000000000000
1 999999999999999999
2 1 1000000000000000000
1 999999999999999998
2 1 1000000000000000000
1 999999999999999997
2 1 1000000000000000000
…
and so on.
Then, will this algo work?
Yup, it should work for all cases.
Trace the algorithm on pen and paper and you will see why.
In fact, I think every operation will take just one step.
Suppose a query comes after many (let's assume more than 100000) queries of the same pattern I mentioned above:
1 999999999999899997
2 1 1000000000000000000
then this algo will work like this:
mp[1000000000000000000]=mp[mp[1000000000000000000]]=999999999999999998
r=999999999999999998
then
mp[999999999999999998]=mp[mp[999999999999999998]]=999999999999999996
r=999999999999999996
and so on, i.e. it only skips past two values per iteration, which means many iterations for a single query.
Please correct me if I am wrong anywhere
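A minimal Python sketch of the map trick being traced above (the names `find` and `erase` are my own; `mp` and the two-step update `mp[x] = mp[mp[x]]` come from the trace in the posts):

```python
def find(mp, x):
    """Return the largest value <= x that has not been deleted.

    mp maps each deleted value to the next candidate value below it.
    """
    while x in mp:
        if mp[x] in mp:
            # Path halving: exactly the mp[x] = mp[mp[x]] step traced above.
            mp[x] = mp[mp[x]]
        x = mp[x]
    return x


def erase(mp, x):
    """Delete value x by pointing it at the value just below it."""
    mp[x] = x - 1
```

For example, after erase(mp, 10) and erase(mp, 9), a call find(mp, 10) rewrites mp[10] to 8 in a single halving step and returns 8, which is the same shape as the 10^18 trace above.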
For each type 2 query, your height (the distance from 10^18 to the answer) is halved.
Meaning, if we have values in the range [10^18-1024, 10^18], then
the answer will come in 1024 iterations the first time.
The next time it will take 512 iterations, and so on.
So 1024+512+256+… < 2*1024, which is O(n) in total.
So, in short, it averages out to O(\log(n)) per query.
The first time it skips one element (the first query of type two), the next time it skips 4, the time after that 8, and so on.
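One way to see this halving empirically is to instrument the same map trick and count iterations per repeated query (a hypothetical sketch; `find_counting` is my own name, and the deleted block of 1024 values mirrors the [10^18-1024, 10^18] example above):

```python
def find_counting(mp, x):
    """Path-halving find as discussed above, also counting loop iterations."""
    steps = 0
    while x in mp:
        if mp[x] in mp:
            mp[x] = mp[mp[x]]  # path halving
        x = mp[x]
        steps += 1
    return x, steps


# Delete a block of 1024 consecutive values ending at N, then query N twice.
mp = {}
N = 10**6
for v in range(N - 1023, N + 1):
    mp[v] = v - 1

_, s1 = find_counting(mp, N)  # first query walks (and halves) the full chain
_, s2 = find_counting(mp, N)  # second query sees a chain half as long
print(s1, s2)  # prints: 512 256
```

Each query leaves the chain it walked twice as compressed, so the iteration count keeps halving on repeated queries, which is the effect described above.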
Now I get it. I don't know why it took me this long to digest it. Thanks for the help!
I was calculating the complexity like N, N-1, N-2, N-3, … and so on, which is N*N, for the Lunchtime problem.
Okay, glad that you were able to digest it now.
I have some doubts, please help me clear them up.
1. How does using only path compression, as in the tester's solution, give a time complexity of O(\log n), as observed in the discussion above?
2. What is \alpha in O(Q\alpha(Q))?
3. According to me the time complexity should be O(Q), as in the worst case we can perform Q updates on consecutive elements and then answer range queries.
@l_returns Can you explain how your solution accounts for the fact that in the union-by-rank method the root with lower rank is made a child of the root with higher rank? In that method the time complexity is O(\log n) because it forms balanced trees, but if we use only path compression, how is the time complexity still O(\log n), as in the tester's solution?
This is also solvable using a balanced BST [set] where we insert pairs of elements, each representing a range of active elements [\ne 0]. Initially the set will have only one pair, [1, n], and when we delete an element we find the pair it lies in, delete that pair, and insert up to two new pairs. E.g. to delete x, find [l, r] with l \le x \le r, delete this pair, and insert the two pairs [l, x-1] and [x+1, r] (skipping any empty one).
The total number of set operations is no more than 3Q, so the complexity is O(Q \log Q).
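A minimal Python sketch of this interval idea (using a sorted list with `bisect` in place of a C++ `std::set`; the class and method names are my own, and the query method is my guess at how the range-max side would use it):

```python
import bisect


class ActiveRanges:
    """Disjoint [l, r] ranges of active (non-zero) elements, kept sorted."""

    def __init__(self, n):
        self.starts = [1]    # sorted left endpoints of the ranges
        self.end = {1: n}    # left endpoint -> right endpoint

    def erase(self, x):
        """Delete active x: split its range [l, r] into [l, x-1] and
        [x+1, r], dropping empty pieces, as described above."""
        i = bisect.bisect_right(self.starts, x) - 1
        l = self.starts[i]
        r = self.end.pop(l)
        del self.starts[i]
        for a, b in ((l, x - 1), (x + 1, r)):
            if a <= b:
                bisect.insort(self.starts, a)
                self.end[a] = b

    def max_active_leq(self, x):
        """Largest active value <= x, or None if there is none."""
        i = bisect.bisect_right(self.starts, x) - 1
        if i < 0:
            return None
        return min(self.end[self.starts[i]], x)
```

Each erase is at most three container operations (one delete, two inserts), matching the 3Q bound above; with an actual balanced BST each is O(\log Q), whereas Python list insertion/deletion is O(n), so this sketch only mirrors the logic, not the exact complexity.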
Using path compression the complexity is O(Q\alpha(Q)) for processing all queries,
where \alpha is a function that returns a number less than 4 for any Q within CP constraints, so yeah, effectively linear complexity for path compression.
Hey, you can't call it constant.
It's O(Q\alpha(Q)).
That's like saying the complexity of binary search over q queries is O(q\log_2(n)), and here n=10^5, hence \log_2(n)=17,
so the time complexity is O(q*17),
which would mean the time complexity is O(q).

Though for our convenience we can think of it as equivalent to O(Q), since \alpha(Q) \le 4 for all practical values of Q.
IDK the proof of it, but you can Google “time complexity of disjoint set union using path compression” or something similar. You will surely find an explanation of why it's O(Q\alpha(Q)), which people often loosely treat as ~ O(Q\log_2(Q)).
https://www.codechef.com/viewsolution/25036895
I am getting 80 points. My solution works for the original constraints but, surprisingly, not for the smaller subtasks. Can anybody check my solution?
I know it's Q\alpha(Q); I left out one character by mistake. I have edited it now, and I had already written “linear” at the end before as well. IDK what the confusion is about.
Lazy propagation would help if there were range updates. However, here we are only updating one index at a time.
I have maintained a map which contains the zeroed-out elements. How can we apply binary search over the range to get the max element?