OPME - Editorial


Contest: Division 1
Contest: Division 2
Contest: Division 3
Contest: Division 4

Author: pols_agyi_pols
Tester: pols_agyi_pols
Editorialist: iceknight1093




Binary search


You’re given an array A. You can perform the following operation on it:

  • Choose any subarray of A with length \gt 1, and replace it with its median.

For each i = 1, 2, 3, \ldots, N, find out whether there exists a sequence of operations such that A_i is the last remaining element.


Since we’re allowed to choose subarrays of length \geq 2, let’s first look specifically at what operating on a subarray of length 2 does.
If we choose a subarray containing the two elements x and y, their median is just \min(x, y).
So, we replace the two elements x and y with \min(x, y).
Another way to see this is that we essentially delete the single element \max(x, y) from the array.

This is a useful operation to have, so keep it in mind.
Intuitively, it allows us to delete “large” numbers if necessary.
Further, note that deleting “large” numbers reduces the median of the array.
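For concreteness, here’s a tiny sketch of a single operation (the lower median is assumed for even lengths, which matches the length-2 behaviour just described; `apply_op` is a name invented for illustration):

```python
def apply_op(a, i, j):
    """Replace the subarray a[i:j] (length >= 2) with its lower median."""
    seg = sorted(a[i:j])
    med = seg[(len(seg) - 1) // 2]  # lower median
    return a[:i] + [med] + a[j:]

# A length-2 operation keeps the minimum, i.e. deletes the maximum:
print(apply_op([5, 3, 8, 1], 1, 3))  # [3, 8] -> 3, so 8 is deleted: [5, 3, 1]
```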

Now, let’s fix an index i, and try to decide when A_i can be the final element.

Suppose c_1 elements of A are \lt A_i, and c_2 of them are \geq A_i.
Note that c_1 + c_2 = N.

  • If c_1 \lt c_2, the median will be \geq A_i.
    This is useful, since we’re able to delete large numbers.
    So, if the median is \gt A_i, we can simply repeatedly delete the maximum element.
    This will eventually make the median exactly equal to A_i, at which point we can replace the entire array with its median and be done.
  • If c_1 \geq c_2, the median of A will be \lt A_i, which is not ideal.

Our goal is thus to reach a state where c_1 \lt c_2 by performing some sequence of operations; if we can never reach this, it’s impossible to make A_i the final element.

So, ideally, we want to make c_1 as small as possible.
One simple way to do that is to choose any subarray of length \gt 1 containing only elements that are \lt A_i, and replace them with their median.
This will delete all but one of them (while not changing c_2 at all), so if we chose a subarray of length x, c_1 will reduce by x-1.

Suppose we perform this operation till we no longer can - that is, any two elements that are \lt A_i are separated by at least one element that’s \geq A_i.
In this array, if c_1 \lt c_2, we’ve achieved our goal.

If c_1 \geq c_2 even now, it’s impossible to make c_1 \lt c_2 no matter what we do.
This is because the small elements are now pairwise separated by large ones: deleting any number of “small” elements forces us to delete at least as many “large” elements too, so the difference between the two counts cannot improve.

We now have a rather simple way to check whether A_i can be the final value or not, in linear time:

  • Compress all blocks of elements \lt A_i into a single element.
  • Let c_1 be the number of elements \lt A_i, and c_2 be the number that are \geq A_i.
    Simply check whether c_1 \lt c_2.
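This check can be sketched directly (a throwaway helper, not taken from the official code):

```python
def can_be_final(a, x):
    """Return True if value x could be the last remaining element of a."""
    large = sum(v >= x for v in a)                     # c2
    # c1: each maximal block of consecutive elements < x counts once
    small = sum(1 for i, v in enumerate(a)
                if v < x and (i == 0 or a[i - 1] >= x))
    return large > small

print(can_be_final([1, 2, 3], 2))  # True
print(can_be_final([1, 2, 3], 3))  # False: the block {1, 2} compresses, c1 = c2 = 1
```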

In particular, note that this check depends only on the value of A_i, and not on its position in A at all.
So, rather than determining which indices can be the final element, we can instead try to determine which values can be the final element.

It’s now not hard to see that if x can be the final element, so can any integer \leq x that’s in the array.


Suppose x can be the final element, meaning c_1 \lt c_2 after shrinking blocks of small elements into single ones.

Consider what happens when looking at x-1 instead.

  • c_2 will increase by the number of occurrences of x-1 in the array.
  • Each occurrence of x-1 in the array will, at best, break one block of small elements into two.
    So, each occurrence of x-1 in the array will increase c_1 by at most 1.

Since c_1 \lt c_2 initially, and c_1 doesn’t increase more than c_2 does, the inequality still holds.
This proves the correctness of binary search.
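The monotonicity is also easy to confirm empirically with a quick randomized sanity check (throwaway code, re-implementing the check described earlier):

```python
import random

def can_be_final(a, x):
    # Check from the editorial: (#elements >= x) must exceed
    # (#compressed blocks of elements < x).
    large = sum(v >= x for v in a)
    small = sum(1 for i, v in enumerate(a)
                if v < x and (i == 0 or a[i - 1] >= x))
    return large > small

# If the check passes for x, it passes for every smaller value:
# going from x to x-1 raises c2 by cnt(x-1) and c1 by at most cnt(x-1).
random.seed(1)
for _ in range(500):
    a = [random.randint(1, 6) for _ in range(random.randint(1, 8))]
    results = [can_be_final(a, x) for x in range(1, 8)]
    # never False for some x followed by True for a larger x
    assert all(results[i] or not results[i + 1] for i in range(len(results) - 1))
print("monotonicity holds on all random samples")
```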

This allows us to binary search for the largest x that can possibly be the final element; call it M.
Since the check for a single x takes \mathcal{O}(N) time, finding M can be done in \mathcal{O}(N\log \max(A)) time (or \mathcal{O}(N\log N) if you compress elements).

Once M is known, simply output 1 for every index i such that A_i \leq M, and 0 for the others.


\mathcal{O}(N\log \max(A)) per testcase.


Author's code (C++)
#include <bits/stdc++.h>
using namespace std;
#define ll long long

int main(){
    ll kitne_cases_hain;
    cin >> kitne_cases_hain;
    while(kitne_cases_hain--){
        ll n;
        cin >> n;
        vector<ll> a(n);
        for(auto &x : a) cin >> x;
        // (The original snippet was truncated; reconstructed here following
        //  the editorial's binary search on the largest achievable value.)
        auto check = [&](ll m){
            // Can value m be the final element? Compress each maximal block of
            // elements < m into one, then compare the two counts.
            ll small = 0, large = 0;
            for(ll i = 0; i < n; i++){
                if(a[i] >= m) large++;
                else if(i == 0 || a[i-1] >= m) small++;
            }
            return large > small;
        };
        ll lo = 1, hi = 1000000000;
        while(lo < hi){
            ll mid = (lo + hi + 1) / 2;
            if(check(mid)) lo = mid;
            else hi = mid - 1;
        }
        for(ll i = 0; i < n; i++) cout << (a[i] <= lo ? 1 : 0);
        cout << "\n";
    }
    return 0;
}
Editorialist's code (Python)
for _ in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    lo, hi = 1, 10**9
    while lo < hi:
        mid = (lo + hi + 1)//2
        more, less = 0, 0
        for i in range(n):
            if a[i] >= mid: more += 1
            elif i == 0 or (a[i-1] >= mid): less += 1
        if more > less: lo = mid
        else: hi = mid-1
    ans = [0]*n
    for i in range(n):
        if a[i] <= lo: ans[i] = 1
    print(''.join(str(x) for x in ans))

Amazing binary search application. Nice one :smiley:.


Thanks. Liked the explanation as always


Alternatively, once you know the c_1 < c_2 criterion, you could use a disjoint set union to find the number of mergeable groups for each element. Add the elements in sorted order and merge them with their neighbors. Here’s a link to my implementation.
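The DSU idea might look roughly like this (an illustrative sketch under my own reading of the comment, not the linked implementation; `solve` is a name invented here):

```python
def solve(a):
    """For each index, can a[i] be the last remaining element? (1/0 per index)"""
    n = len(a)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    active = [False] * n    # positions already known to hold a smaller value
    blocks = 0              # c1: number of maximal blocks of active positions
    activated = 0
    order = sorted(range(n), key=lambda i: a[i])
    ans = [0] * n
    k = 0
    for idx in order:                       # process values in ascending order
        while k < n and a[order[k]] < a[idx]:
            p = order[k]; k += 1
            active[p] = True; activated += 1; blocks += 1
            for q in (p - 1, p + 1):        # merge with active neighbours
                if 0 <= q < n and active[q] and find(q) != find(p):
                    parent[find(q)] = find(p); blocks -= 1
        ans[idx] = 1 if blocks < n - activated else 0   # c1 < c2
    return ans

print(solve([2, 1, 2]))  # [1, 1, 1]
```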


Also, with a bit of casework, one can arrive at a simple linear solution (link).


Can you please explain your solution?

Something like this (as in process elements in ascending order while maintaining c_1 and c_2) was in fact the author’s intended solution - though he doesn’t use a DSU to do it, and instead uses a bit of casework, as can be seen in the implementation linked above.

Binary search is shorter to explain though so it was less work to write :stuck_out_tongue:



To avoid doing casework as much as possible, let me first prove the following lemma:

Suppose you are trying to obtain a_i as the only element left; that’s possible iff you can obtain 1 as the only element left in the array where you put 1 if an element is \geq a_i and 0 otherwise.

If you can obtain a 1, then you can obtain an element \geq a_i, and by the lemma in the editorial you can obtain a_i. In the other direction: if you cannot obtain a 1, you certainly cannot obtain any element \geq a_i, so you cannot obtain a_i.

So we have reduced our problem to just 0s and 1s. Call such an array good if we can obtain 1 as the only remaining element.

Note that in a good array we can substitute a 0 with 010 (and also in the other direction) and nothing changes. I won’t prove it, but working through a few examples should make it clear why.

Also note that if the array obtained after inserting 0s or deleting 1s is good, then so is the original one.

Now we can start with casework:

  1. We have 1s at both ends: then any array of the form 1, 101 or 10101 is good; the rest of the good arrays can be reduced to this form with the moves mentioned above.

  2. We have 1s at one end: then we only have to consider 110 or 10110.

  3. We have no 1s at either end: then we only have to consider 01110 and 0110110.

Now, in our implementation we have to consider all arrays that become one of the strings above after the moves mentioned, and then update the threshold with the minimum element.

In practice you only need three loops to process the string.
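The casework above can be sanity-checked with a small brute force over 0/1 arrays (my own throwaway sketch; `is_good` is a hypothetical name, and even-length medians are taken as the lower median, matching the length-2 behaviour in the editorial):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_good(arr):
    """Can the 0/1 tuple arr be reduced to the single element 1?"""
    if arr == (1,):
        return True
    n = len(arr)
    for i in range(n):
        for j in range(i + 2, n + 1):                  # subarray arr[i:j], length >= 2
            seg = arr[i:j]
            med = 1 if 2 * sum(seg) > len(seg) else 0  # lower median of 0/1 values
            if is_good(arr[:i] + (med,) + arr[j:]):
                return True
    return False

# Matches the cases: 101 is good, 010 is not, 110 is good, 0110 is not.
print(is_good((1, 0, 1)), is_good((0, 1, 0)), is_good((1, 1, 0)), is_good((0, 1, 1, 0)))
```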

For more details please look at the submission (where I have commented where each case is taken care of).