Help with question "Powerful Arrays"

Hey guys, I need help in understanding the solution of this question - Problem - D - Codeforces

We are solving it using Mo’s algorithm. My doubt here is that most of the solutions have this:

    // shrink from the left: remove a[l] from the window, then advance l
    while (l + 1 <= q[i].l) ret -= a[l] * (2 * (key[a[l++]]--) - 1);
    // grow to the left: retreat l, then add a[l] to the window
    while (l - 1 >= q[i].l) ret += a[--l] * (2 * (key[a[l]]++) + 1);
    // shrink from the right: remove a[r], then retreat r
    while (r - 1 >= q[i].r) ret -= a[r] * (2 * (key[a[r--]]--) - 1);
    // grow to the right: advance r, then add a[r]
    while (r + 1 <= q[i].r) ret += a[++r] * (2 * (key[a[r]]++) + 1);
    // store the answer at the query's original position
    ans[q[i].p] = ret;

While I understand the concept of sorting the queries and moving the pointers optimally, what is the significance of dividing the array into sqrt(N) blocks if we are traversing the array element by element instead of bucket by bucket? Or am I missing something?

The editorial just confused me even more because of its format. I think there is some issue with the English used, because:

 Clearly, there doesn't exist an order in which we can go through the queries in o(n sqrt(n)) time as the transition from any interval to the other one is done in no less than sqrt(n) steps.
Nevertheless, there exists a traversal order that guarantees O(n sqrt(n)) steps in the worst case.

I'm even more lost after reading the editorial. I would appreciate it if anyone can explain:

  1. Solution (comments/explanation appreciated :slight_smile: )
  2. Time Complexity
  3. Why it does not get TLE if we are moving the pointers by 1 instead of by sqrt(N), i.e. element by element instead of bucket by bucket.

The problem is a very standard application of Mo’s algorithm. Assume you have the answer for [l, r] and can get the answer for [l, r + 1] (adding one element to the segment) fast, and likewise the answers for [l + 1, r] and [l - 1, r]; then you can use this algorithm. In summary, you should have a data structure D that maintains some quantity for a range and on which you can easily add or remove an element at one of the end points.
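For this problem the maintained quantity is the "power" sum over distinct x of cnt(x)^2 * x. A minimal sketch of such a D (struct and variable names are my own, not from any particular submission):

```cpp
#include <cassert>
#include <vector>

// Running "power" of the current window: sum over distinct x of cnt[x]^2 * x.
struct PowerWindow {
    std::vector<long long> cnt;  // cnt[x] = occurrences of value x in the window
    long long power = 0;

    explicit PowerWindow(long long maxValue) : cnt(maxValue + 1, 0) {}

    // (c+1)^2*x - c^2*x = (2c+1)*x, so adding one occurrence is O(1)
    void add(long long x)    { power += (2 * cnt[x]++ + 1) * x; }

    // c^2*x - (c-1)^2*x = (2c-1)*x, so removing one occurrence is O(1) too
    void remove(long long x) { power -= (2 * cnt[x]-- - 1) * x; }
};
```

Note that the (2c+1)*x and (2c-1)*x terms here are exactly the `(2*(key[...]++)+1)` and `(2*(key[...]--)-1)` expressions in the loops quoted above.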

If you read about Mo’s algorithm, you will see how add and rollback operation can help you in solving this problem. I will be happy to provide details if needed.


@vijju123 The point here is that you first sort the queries into buckets of size sqrt(n) depending on l, and within each bucket you sort them in increasing order of r. So for each bucket the right pointer moves at most n steps in total, since the r values are sorted within each bucket.

And if you are concerned about the movement of the left pointer: for each query of a bucket it moves at most sqrt(n), and one bucket contains at most sqrt(n) queries. Hence the worst case is sqrt(n)*sqrt(n) = n per bucket.

Worst case: the left and right pointers each move O(n) in total per bucket.

So the time complexity for one bucket is O(n), and there are sqrt(n) buckets.

Hence the algorithm complexity comes out to O(n*sqrt(n)).
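A sketch of that ordering as a comparator (the block size and names are my own choices):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Query { int l, r, p; };  // p = original index, for writing answers back

// Mo's ordering: sort by bucket of l first, then by r inside each bucket.
void mosOrder(std::vector<Query>& qs, int n) {
    const int block = std::max(1, (int)std::sqrt((double)n));
    std::sort(qs.begin(), qs.end(), [block](const Query& a, const Query& b) {
        int ba = a.l / block, bb = b.l / block;
        if (ba != bb) return ba < bb;
        return a.r < b.r;  // r is monotone within a bucket
    });
}
```

With this order, the right pointer only sweeps forward inside a bucket, and the left pointer stays confined to one block of size ~sqrt(n) per bucket.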


Thanks for your answer @dpraveen :slight_smile:

My main doubt is: we are sorting queries by increasing value of r (if I get the editorial correctly). Shouldn't the approach time out for cases where the queries are of the type {1,r1},{r1,r1+1}, i.e. making the left pointer juggle between 1 and r-1?

E.g.:

1,5
5,6
1,7
7,8
1,9

Making the pointer move back and forth.

And I didn't get the purpose of decomposing the array into buckets, because how is it relevant/time-saving if we are moving a pointer by 1 instead of by the bucket size?

I get it now. Thanks! I missed the part about first sorting the queries into buckets of size sqrt(n) depending on l (and so I was all wondering and clueless that “Isn't it O(NQ)? :/”).

That’s the awesome thing about Mo’s algorithm :smiley:
