CHEFLKJ - Editorial

Was O((log n)^3) supposed to pass with normal maps? I tried this by maintaining a sorted structure in every segment tree node and then counting the k-th element in a given subsegment, which runs in O((log n)^2 * log(10^9)) per query, but it gave TLE.

It can be solved in O((N + Q) log N) as well.

Suppose we have an array with N elements. If you pick any 2 elements with unequal values, remove them from the array, and keep doing this until only 1 distinct value is left, that value is the only possible candidate for the answer.

To make this faster, we store this information in a segment tree: each node records, for its segment [l, r], which value would be left after doing this on [l, r] and with what count.
Then we only have to check the actual frequency of that candidate in [l, r]; we can either do that online with a BST, or offline with a simple BIT in O(N log N) time.
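
Here is a minimal sketch of that segment-tree merge (structure and naming are mine, not necessarily what the linked solution does): each node stores the value that survives the cancelling on its segment and how many copies survive; merging two nodes either adds the counts (same value) or cancels pairs (different values). The query returns the only possible candidate, which still has to be verified with a real frequency count as described above.

```cpp
#include <bits/stdc++.h>
using namespace std;

// (value, count) = "after cancelling unequal pairs inside this segment,
// `count` copies of `value` remain"; count == 0 means nothing survived.
struct Cand {
    int value;
    int count;
};

Cand combine(const Cand& a, const Cand& b) {
    if (a.value == b.value) return {a.value, a.count + b.count};   // same value: counts add
    if (a.count >= b.count) return {a.value, a.count - b.count};   // cancel pairs of unequal values
    return {b.value, b.count - a.count};
}

struct SegTree {
    int n;
    vector<Cand> t;
    SegTree(const vector<int>& a) : n((int)a.size()), t(2 * a.size()) {
        for (int i = 0; i < n; ++i) t[n + i] = {a[i], 1};
        for (int i = n - 1; i >= 1; --i) t[i] = combine(t[2 * i], t[2 * i + 1]);
    }
    void update(int pos, int val) {                    // point assignment a[pos] = val
        t[pos += n] = {val, 1};
        for (pos >>= 1; pos >= 1; pos >>= 1) t[pos] = combine(t[2 * pos], t[2 * pos + 1]);
    }
    Cand query(int l, int r) {                         // candidate for [l, r], 0-based inclusive
        Cand res = {0, 0};
        for (l += n, r += n + 1; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = combine(res, t[l++]);
            if (r & 1) res = combine(res, t[--r]);
        }
        return res;                                    // only possible dominator; verify its frequency!
    }
};
```

The merge is just the pair-cancelling process done segment by segment, so if a dominator exists in [l, r] it cannot be cancelled away and must end up as query(l, r).value; the converse does not hold, hence the separate frequency check with a BST/BIT.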


[1]


  [1]: https://www.codechef.com/viewsolution/10897065
11 Likes

My solution uses square root decomposition. For segments of length < 2000, just loop over the range, calculate the frequency of each element, and report the answer.

For segments of length > 2000, the dominant element must have frequency > 1000. There will be at most 200 such elements, so I maintain one BIT (binary indexed tree, or segment tree) for each of them, loop over all 200, and find the number of occurrences in the range l…r using the BIT.

The update is two-fold: update the base array, and also update the BIT.

Update is O(log N): you update at most one BIT.
Query is O(2000) or O(200 * log N).
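
A hedged sketch of this idea (my own code, constants only illustrative, not the poster's actual solution): the set of values that can ever be "heavy" is collected offline from the initial array plus all update values, and each heavy value gets a Fenwick tree of 0/1 position indicators.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Fenwick {                                  // 0-indexed prefix sums
    vector<int> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, int d) { for (++i; i < (int)t.size(); i += i & -i) t[i] += d; }
    int sum(int i) { int s = 0; for (++i; i > 0; i -= i & -i) s += t[i]; return s; }
    int range(int l, int r) { return sum(r) - (l ? sum(l - 1) : 0); }
};

int main() {
    // Toy data; in the real problem a[] and the updates come from input,
    // and the heavy candidates are collected from a[] plus all update values.
    vector<int> a = {2, 2, 1, 2, 3, 2, 2};
    int n = (int)a.size();

    map<int, Fenwick> bit;                        // heavy value -> indicator Fenwick tree
    bit.emplace(2, Fenwick(n));                   // pretend 2 is the only heavy candidate here
    for (int i = 0; i < n; ++i)
        if (bit.count(a[i])) bit.at(a[i]).add(i, +1);

    auto dominator = [&](int l, int r) -> int {   // dominator of a[l..r], or -1 if none
        int len = r - l + 1;
        if (len < 2000) {                         // short range: brute-force frequencies
            map<int, int> f;
            for (int i = l; i <= r; ++i)
                if (++f[a[i]] * 2 > len) return a[i];
            return -1;
        }
        for (auto& [v, fw] : bit)                 // long range: only heavy values can dominate
            if (fw.range(l, r) * 2 > len) return v;
        return -1;
    };

    auto assign = [&](int pos, int val) {         // point assignment a[pos] = val
        if (bit.count(a[pos])) bit.at(a[pos]).add(pos, -1);
        a[pos] = val;
        if (bit.count(val)) bit.at(val).add(pos, +1);
    };

    cout << dominator(0, n - 1) << "\n";          // 2 (appears 5 times out of 7)
    assign(3, 5);
    assign(1, 5);
    cout << dominator(0, n - 1) << "\n";          // -1 (2 now appears only 3 times out of 7)
    // Note: with this toy n only the brute-force branch runs; the Fenwick
    // branch is what handles ranges longer than the 2000 threshold.
}
```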

7 Likes

Even a solution using maps (complexity O((log n)^3) per dominant query) passes within the time limit. Here it is: [1].


  [1]: https://www.codechef.com/viewsolution/10901084
2 Likes

I have also passed with (somehow) “not good” complexity.

Worst case (H denotes the cost of one hash-map operation):

Update query: O(H^2 * log(10^9))

Get query: O(sqrt(N) * H + sqrt(N) * H * log(10^9) * log(N))


[1] (well, just so you can see it; it is barely readable :/ )

The idea: for each distinct number I had a Fenwick tree (BIT) in which I held its positions. I also held a "priority queue" in which I stored the numbers sorted by total frequency.

The get query then chose between two options:

A) If the range was shorter than sqrt(N), I brute-forced it.

B) If the range was longer than sqrt(N), I looked at all distinct numbers whose total frequency was greater than RANGE/2 and checked (via the Fenwick tree) how many of their occurrences fall inside my range. (PS: there can be at most about sqrt(N) such numbers.)

Anyway, I'm wondering how it managed to avoid TLE :)

Have a nice day ^_^

  [1]: https://www.codechef.com/viewsolution/10897137
4 Likes

I feel the explanation for dominant sub-arrays in the editorial is insufficient, since it deals only with sub-arrays of size N/2. This is not always the case, especially for queries whose range can span multiple segments (sub-arrays). I think this would be a better way to put it:

Over a range (i, j) of length N, let there be k sub-arrays {1, 2, 3, …, k} of lengths {n1, n2, …, nk} respectively. If a majority element e exists in range (i, j), then one of the k sub-arrays must have e as its majority element.
The proof is by contradiction (along the same lines as in the editorial):

n1 + n2 + … + nk = N

Assume e exists, which means freq(e) in (i, j) > N/2.

Now let's say none of the k sub-arrays has e as its majority element. Then,

freq(e) in sub-array {1} <= n1/2

freq(e) in sub-array {2} <= n2/2

…

freq(e) in sub-array {k} <= nk/2

Therefore, freq(e) in (i, j) <= n1/2 + n2/2 + … + nk/2 = N/2, which contradicts our assumption that e is a majority element.
So at least one of the k sub-arrays must have e as its majority element.
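
Written as a single chain of inequalities (just a restatement of the argument above in LaTeX):

```latex
\operatorname{freq}_e(i,j) \;=\; \sum_{t=1}^{k} \operatorname{freq}_e(\text{sub-array } t)
\;\le\; \sum_{t=1}^{k} \frac{n_t}{2} \;=\; \frac{N}{2}
\;<\; \operatorname{freq}_e(i,j)
```

which is a contradiction, so the assumption that no sub-array keeps e as its majority element must be false.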

Thanks!

1 Like

Could anyone tell me where I could be going wrong in this solution?
It is very similar to the editorial and though the running time is O(Q * (log N)^3), I am getting a WA verdict and not TLE. So could anyone tell me if there’s something wrong with the logic?

Can someone please explain the meaning of the phrase ‘elegant lazy propagation’ in the line ‘Clearly, there isn’t a notion of an elegant lazy propagation here since the updates we perform are to single elements’?

1 Like

I have a doubt about the complexity of each query. Can anyone please explain how the complexity is O((log n)^3) with a map?

Getting WA for this link text code. Any help?

How can a segment be dominating if either of its subarrays is dominating? Consider the case [1,2,3,4][2,2,2,5]: here the 2nd subarray is dominating, but when we merge them, the resulting array is not dominating!

Editorialist’s solution is very helpful!

1 Like

No, it was not supposed to pass.

Seems like you can use square root decomposition on each and every question :stuck_out_tongue: :stuck_out_tongue:

3 Likes

Won’t there be a total of 500 such elements with frequency > 1000?

Initially we have 10^5 elements. Even if you consider the 10^5 update operations as “append” operations instead of replacements, at the end you would have 2 * 10^5 elements => at most 200 with frequency >= 1000.

@anudeep2011, yes, you are right.

@vsp4, I think your solution gives TLE because of the order-statistics set it uses. The complexity of the order-statistics set is as desired, but the constants are too large. It happened to me too: the order-statistics set gave TLE, but a BIT with binary search passed easily.

1 Like

@vsp4 Can you tell me what is the idea of your solution?

@rishivikram Find the middle element of the sorted segment [l, r]. If there is a dominator, this middle element has to be that dominator. Finding the k-th element can be done in O((log n)^3) time in a subarray with updates. Then count how many times this element appears in [l, r]; this can be done in O((log n)^2).
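
As a hedged illustration of that counting step only (my own sketch, not vsp4's actual code): a segment tree in which every node keeps a frequency map of the values in its range answers "how many times does x occur in [l, r]?" in O((log n)^2), and supports point assignment in O((log n)^2) as well.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Segment tree where every node keeps a frequency map of its range, so that
// count(x, l, r) touches O(log n) nodes at O(log n) per map lookup.
struct CountTree {
    int n;
    vector<int> a;
    vector<map<int, int>> freq;   // freq[node]: value -> occurrences inside the node's range

    CountTree(const vector<int>& init)
        : n((int)init.size()), a(init), freq(4 * init.size()) { build(1, 0, n - 1); }

    void build(int node, int l, int r) {
        for (int i = l; i <= r; ++i) freq[node][a[i]]++;
        if (l == r) return;
        int m = (l + r) / 2;
        build(2 * node, l, m);
        build(2 * node + 1, m + 1, r);
    }

    // Point assignment a[pos] = val: fix the maps on the root-to-leaf path.
    void assign(int pos, int val) { assign(1, 0, n - 1, pos, val); a[pos] = val; }
    void assign(int node, int l, int r, int pos, int val) {
        if (--freq[node][a[pos]] == 0) freq[node].erase(a[pos]);
        freq[node][val]++;
        if (l == r) return;
        int m = (l + r) / 2;
        if (pos <= m) assign(2 * node, l, m, pos, val);
        else assign(2 * node + 1, m + 1, r, pos, val);
    }

    // Number of occurrences of x in [ql, qr] (0-based, inclusive).
    int count(int x, int ql, int qr) { return count(1, 0, n - 1, x, ql, qr); }
    int count(int node, int l, int r, int x, int ql, int qr) {
        if (qr < l || r < ql) return 0;
        if (ql <= l && r <= qr) {
            auto it = freq[node].find(x);
            return it == freq[node].end() ? 0 : it->second;
        }
        int m = (l + r) / 2;
        return count(2 * node, l, m, x, ql, qr) + count(2 * node + 1, m + 1, r, x, ql, qr);
    }
};
```

Once the O((log n)^3) k-th element step has produced the middle element mid of the sorted segment, checking 2 * tree.count(mid, l, r) > r - l + 1 decides whether it really dominates [l, r].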