I think your link is broken. Can you please fix it?
I’ve fixed it
During the contest, this was the first solution I thought of too, but I assumed the time complexity would be too high. Your complexity is O(T·Q·sqrt(N)·log N); take T = 2 and N = Q = 1e5, so that summation is less than 2e5. Then 2 × 1e5 × sqrt(1e5) × log(1e5) comes out to roughly 1e9, and I thought 1e9 would not run in 4 seconds. Nice to see it worked. During the contest I coded a segment tree instead, to replace the sqrt factor with a log N factor, so my solution is O(Q·log N·log N). My segment tree solution: CodeChef: Practical coding for everyone
Can you explain how the frequency map is constructed?
Replied on your post. Data types int vs long.
That multiset idea was really cool. I had done the same but used sqrt decomposition (with vectors) for handling that, but this was really awesome! Thanks for sharing this.
@ramini For each block, we use a map maintaining the frequency of all (distinct) elements in that block.
We don’t have a multiset in Java. Any suggestion for this case?
@ayush201 You could use a Map and store the count of each element. It should have the same complexity.
Hey misha, I edited your answer a little to improve the formatting (decent formatting + decent explanation = profit). Please have a look in case anything concerns you.
@prince367 The A[i]'s are the keys of the maps. The way data is stored in a map is different: if I insert 10^9 into the map, it goes to the first position instead of the 10^9-th position, as it would in an array.
You’re welcome
Btw, if you use an X-fast trie (X-fast trie - Wikipedia) you can theoretically improve your solution to O(Q·sqrt(N·log log N)). I don’t know how it goes practically; next and previous elements can be found in O(log log N) time, but insert/erase take O(log N).
Done. Please have a look and verify
@bazsi700 - CodeChef: Practical coding for everyone. I have implemented a similar approach here, but it is timing out. Can you have a look at why?
@divyansh_gaba7 The problem is that you use std::lower_bound in query, which is O(N) on sets. You should use std::set::lower_bound, which runs in O(log N); thus replace lower_bound(t[l].begin(),t[l].end(),m) with t[l].lower_bound(m).
I faced the same problem once, and posted a blog about it on CF, where I got an explanation for it:
The problem is probably double precision. However, you don’t need to precalculate the division as a double, because you can perform standard integer division each time, which performs the floor operation anyway.