Thanks a lot for this wonderful answer using a persistent segment tree. Can you share some more of your solutions that use the same technique?
But I also changed the data structure to use a map instead of a set, and it still gets TLE.
https://www.codechef.com/viewsolution/25021777
I switched to this array implementation recently; however, you can have a look at this persistent trie: GPD - persistentTrie
Thanks a lot for this (though it will take some time to understand it). I tried to code up the persistent segment tree in my own way but ended up getting WA. To make it easier to understand, I changed your code a little to work similarly to mine, but I don't understand why it gives WA.
The tester's solution is the best one: small, neat, and easy to understand.
Consider the example where n is equal to 6, you have already inserted 4 and 5 into the tree, and your search query is [1, 5]. Do a dry run of this example on your code as well as on mine. You will find the error.
I did the same with map but it gave TLE. When I switched it to unordered_map it gave AC.
TLE solution: CodeChef: Practical coding for everyone
Solution without TLE: CodeChef: Practical coding for everyone
I just checked your code. You didn't use the path compression technique properly, and that's why your solution didn't pass. As for unordered_map, it gives a slight speedup because its average time complexity is lower than map's, but I think it passed only because the test cases are not that strong.
Suppose this case: let the array after some updates be 1 1 0 0 0 0.
Here, if the query is for [1, 6]:
a2 = 1; a3 = 6;
Then the solution works as follows:
check mp.find(6) -> true
mp.find(mp[6]) -> true
assign mp[6] = mp[mp[6]], so mp[6] = 4
a3 = 4
mp.find(4) -> true
mp.find(mp[4]) -> true
assign mp[4] = mp[mp[4]], so mp[4] = 2
a3 = 2
So, if you observe carefully, mp[4] becomes 2, but mp[6] and mp[5] remain equal to 4.
So if this query [1, 6] comes again, you will repeat the whole process, because mp[6] still points to 4 when ideally it should give 2 straight away.
But if you had used path compression, mp[6] would directly return 2.
I hope this long explanation makes it clear.
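For anyone following along, here is a minimal sketch of the idea described above (my own illustration, not anyone's actual submission). It assumes the problem semantics are: a type-1 query marks a value x as used, and a type-2 query asks for the largest free value in [l, r]; mp[x] stores the next candidate below a used x, and finds are sped up by path halving (mp[v] = mp[mp[v]]):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch only: mp[x] = "x is used; next candidate to try below x".
unordered_map<long long, long long> mp;

// assumed type-1 query: mark x as used
void markUsed(long long x) {
    if (!mp.count(x)) mp[x] = x - 1;
}

// assumed type-2 query: largest free value in [l, r], or -1 if none
long long largestFree(long long l, long long r) {
    long long v = r;
    while (v >= l && mp.count(v)) {
        if (mp.count(mp[v])) mp[v] = mp[mp[v]];  // path halving / compression step
        v = mp[v];
    }
    return v >= l ? v : -1;
}
```

On the trace above (used values 3, 4, 5, 6, query [1, 6]) this returns 2, and the compressed pointers are reused by later queries.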
I see no difference between the updated code and your original code on the given test cases.
Even the tree that gets built is the same. The only difference is that you allow 'r' to be outside the range of the node (the leaves that the current node covers), while I always make sure to keep it inside. But since you take min() at the return statement, doesn't that make the output the same?
I think there might be a problem in your find_set function.
Look at my Python solution for reference:
https://www.codechef.com/viewsolution/25025506
updated code
See the change I made on line 95: it's possible that r <= mid, and in that case the end should be r, not mid.
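To illustrate only that boundary detail (this is a generic range-minimum query sketch, not the actual code at line 95), when the whole query range falls inside the left child, i.e. r <= mid, the recursion must keep r as the end rather than mid:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Generic sketch: node covers [lo, hi], we ask for the min over [l, r],
// maintaining lo <= l <= r <= hi at every call (names are illustrative).
long long query(const vector<long long>& tree, int node,
                long long lo, long long hi, long long l, long long r) {
    if (l == lo && r == hi) return tree[node];
    long long mid = lo + (hi - lo) / 2;
    if (r <= mid)                                        // whole range in left child:
        return query(tree, 2 * node, lo, mid, l, r);     // end stays r, not mid
    if (l > mid)                                         // whole range in right child
        return query(tree, 2 * node + 1, mid + 1, hi, l, r);
    return min(query(tree, 2 * node,     lo,      mid, l,       mid),
               query(tree, 2 * node + 1, mid + 1, hi,  mid + 1, r));
}
```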
Awesome !! Thanks
"The sum of Q over all test cases does not exceed 1,000,000" — what is the meaning of this? We already have Q up to 5*10^5 per test case, so why is the constraint on the sum necessary?
Yup, it is very important.
The question says Q \leq 5*10^5 for each test case, and here we have t = 10. That would mean answering up to Q*t \leq 5*10^6 queries within the time limit,
which might not be feasible. Even just scanning 5*10^6 queries takes a lot of time.
Hence they decided that the sum of Q over all test cases should be \leq 10^6,
which cuts the worst-case work by a factor of 5 (roughly 2 seconds instead of 10).
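As a side note on the "scanning takes a lot of time" point, the usual trick is to make input reading as cheap as possible. A rough sketch (the query format below is only assumed from the example later in this thread):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    ios_base::sync_with_stdio(false);  // detach C++ streams from C stdio
    cin.tie(nullptr);                  // don't flush cout before every read
    int t;
    cin >> t;
    while (t--) {
        long long n; int q;
        cin >> n >> q;                 // assumed: n and number of queries
        while (q--) {
            int type;
            cin >> type;
            if (type == 1) { long long x;    cin >> x;      /* handle update */ }
            else           { long long l, r; cin >> l >> r; /* handle query  */ }
        }
    }
    return 0;
}
```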
Suppose the queries were as below (let us assume the queries are not online, i.e. they are just plain queries where x, l, and r are given directly for the respective query types):
1
1000000000000000000 500000
1 1000000000000000000
2 1 1000000000000000000
1 999999999999999999
2 1 1000000000000000000
1 999999999999999998
2 1 1000000000000000000
1 999999999999999997
2 1 1000000000000000000
…
and so on.
Then, will this algorithm work?
Yup, it should work for all cases.
Trace the algorithm on pen and paper and you will see why.
In fact, I think every operation will take just one step.
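For instance, here is a quick self-contained simulation of the first few queries from the pattern above (assuming the same map-with-path-halving idea sketched earlier, and that a type-2 query asks for the largest free value in [l, r]):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    unordered_map<long long, long long> mp;      // used value -> next candidate below it
    const long long TOP = 1000000000000000000LL; // 10^18 from the example
    for (int i = 0; i < 5; ++i) {
        long long x = TOP - i;
        if (!mp.count(x)) mp[x] = x - 1;         // query "1 x": mark x used
        long long v = TOP;                       // query "2 1 10^18": start from r
        while (v >= 1 && mp.count(v)) {
            if (mp.count(mp[v])) mp[v] = mp[mp[v]];  // path halving
            v = mp[v];
        }
        cout << v << "\n";                       // prints 10^18-1, 10^18-2, ...
    }
    return 0;
}
```

Each query here needs only a hop or two, because the pointer from 10^18 keeps getting compressed to the current free value.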
Suppose a query comes after many (let's assume more than 100,000) queries of the same pattern as I mentioned above:
1 999999999999899997
2 1 1000000000000000000
Then this algorithm will proceed like this:
mp[1000000000000000000]=mp[mp[1000000000000000000]]=999999999999999998
r=999999999999999998
then
mp[999999999999999998]=mp[mp[999999999999999998]]=999999999999999996
r=999999999999999996
and so on; that is, it only moves down by two values in each iteration, i.e. many iterations for a single query.
Please correct me if I am wrong anywhere.
For each query of type 2, your "height" (the distance from 10^18 down to the answer) gets cut in half.
Meaning, if we have used values in the range [10^18 - 1024, 10^18], then
the answer will come in about 1024 iterations.
The next time it will take 512 iterations, and so on.
So 1024 + 512 + 256 + … < 2*1024, which is O(n) in total.
So in short, the average cost will be about O(log(n)) per query.
The first time it skips one value (the first query of type two). The next time it will skip 4, the time after that 8, and so on.
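A small experiment (again just a sketch of the same map idea, not anyone's submission) makes the halving visible: mark the 1024 values just below 10^18 as used, then repeat the same type-2 query and count the map hops.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    unordered_map<long long, long long> mp;
    const long long TOP = 1000000000000000000LL;     // 10^18
    for (long long x = TOP; x > TOP - 1024; --x)     // mark [10^18-1023, 10^18] as used
        mp[x] = x - 1;

    for (int q = 1; q <= 5; ++q) {                   // repeat the same query
        long long v = TOP, hops = 0;
        while (mp.count(v)) {
            ++hops;
            if (mp.count(mp[v])) mp[v] = mp[mp[v]];  // path halving
            v = mp[v];
        }
        cout << "query " << q << ": answer " << v << ", hops " << hops << "\n";
    }
    return 0;
}
```

The hop count roughly halves on every repetition, so the total work over repeated queries stays linear in the chain length, matching the 1024 + 512 + 256 + … < 2*1024 bound above.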
Now I got it. I don't know why it took me so long to digest it. Thanks for the help!
I was calculating the complexity like N, N-1, N-2, N-3, … and so on, which is N*N, from Lunchtime.