ANKINTER - Editorial

editorial
snck151c

#1

PROBLEM LINK:

Contest
Practice

Author: Ankit Srivastava
Testers: Kevin Atienza and Sergey Kulik
Editorialist: Kevin Atienza

PREREQUISITES:

Divide and conquer, two-pointer algorithm

PROBLEM:

Let’s call an array of length n good if it can be rearranged to form an interval, where we define an interval to be an increasing sequence where each element is greater than the previous by exactly one. Given a permutation a of [1, 2, 3, \ldots, n] and a number w, how many contiguous subarrays of a of length \ge w are there which are good?

QUICK EXPLANATION:

We can solve this in O(n \log n) using a divide and conquer algorithm. First, split the array into two (roughly) equal arrays and count the good subarrays from both recursively. Then we only need to compute the subarrays that start from one and end in the other.

Suppose you split a into a[1..k] and a[k+1..n]. Let’s define M[i] as follows:

M[i] = \begin{cases} \max(a_i, \ldots, a_k) & \text{ if $i \le k$} \\ \max(a_{k+1}, \ldots, a_i) & \text{ if $i > k$} \end{cases}

Define m[i] similarly but with \min. Then the subarray starting at i \le k and ending at j > k is good if and only if \max(M[i], M[j]) - \min(m[i], m[j]) = j - i.

To compute the answer, we walk over all values of j from n down to k+1. For a fixed j, compute the subinterval where M[i] > M[j], say [1,i_1], and also the subinterval where m[i] < m[j], say [1,i_2]. Both i_1 and i_2 can be computed efficiently for all j using two-pointer algorithms. After that, one can compute the number of $i$s such that \max(M[i], M[j]) - \min(m[i], m[j]) = j - i and j - i + 1 \ge w using an array that holds frequencies.

EXPLANATION:

Characterization of good subarrays

Before we get to the algorithms, let’s first try to characterize what it takes for a subarray to be good. Clearly, sorting it is not a very efficient way to check whether a subarray is good.

Without sorting, how can we tell immediately whether a subarray [a[i],a[i+1],\ldots,a[j]] is good or not? The first observation is that if \max(a[i..j]) - \min(a[i..j]) > j - i, then if we sort that subarray, there will be a gap as we go from \min(a[i..j]) to \max(a[i..j]). Therefore, we can restrict ourselves to the cases where \max(a[i..j]) - \min(a[i..j]) \le j - i.

But in this case, we can use the fact that the elements of a are distinct to conclude that the case \max(a[i..j]) - \min(a[i..j]) < j - i is impossible, and that if \max(a[i..j]) - \min(a[i..j]) = j - i, then each integer from \min(a[i..j]) to \max(a[i..j]) appears exactly once. Therefore, we have proven that the subarray a[i..j] is good if and only if \max(a[i..j]) - \min(a[i..j]) = j - i.
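To make this characterization concrete, here is a tiny checker (a hypothetical helper, not taken from the reference solutions; 0-based, inclusive indices):

```python
def is_good(a, i, j):
    # a[i..j] is good iff max - min == j - i; this works because a is a
    # permutation, so all elements are distinct and no value can repeat.
    return max(a[i:j + 1]) - min(a[i:j + 1]) == j - i
```

For example, with a = [3, 4, 1, 2, 5], the subarray a[0..3] = [3, 4, 1, 2] is good, while a[0..2] = [3, 4, 1] is not.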

Slow solution

The naïve solution runs in O(n^3), but with the above observation about good subarrays, we can reduce it to O(n^2), by simply keeping track of the running minimum and maximum. However, O(n^2) is too slow for the bound n \le 10^6.
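As a sketch, the O(n^2) count with running minimum and maximum might look like this (hypothetical helper name, 0-based indices):

```python
def count_good_quadratic(a, w):
    """Count good subarrays of length >= w in O(n^2), extending the
    right endpoint while maintaining the running max and min."""
    n, total = len(a), 0
    for i in range(n):
        mx = mn = a[i]
        for j in range(i, n):
            mx = max(mx, a[j])
            mn = min(mn, a[j])
            # good iff max - min equals the index span (elements are distinct)
            if mx - mn == j - i and j - i + 1 >= w:
                total += 1
    return total
```

Each new right endpoint updates the running max/min in O(1), so the double loop is O(n^2) overall.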

Instead, let’s try to find other approaches that could possibly get us closer to a fast solution. Let’s fix a j, and try to find all i \le j such that \max(a[i..j]) - \min(a[i..j]) = j - i, in other words \max(a[i..j]) - \min(a[i..j]) + i = j. This means that the quantity S[i] := \max(a[i..j]) - \min(a[i..j]) + i for each i is important, so let’s try to keep track of that value for all i \le j. To compute the good subarrays that end in j, we can just go through the array S[1..j-w+1] and count those that are equal to j. But as we go to the next higher j, we need to update this array.

The key to this is to notice that \max(a[i-1..j]) = \max(a[i-1], \max(a[i..j])) and \max(a[i..j+1]) = \max(a[j+1], \max(a[i..j])) (similar equations hold for \min(a[i..j])). Therefore, if we also keep track of the values M[i] := \max(a[i..j]) and m[i] := \min(a[i..j]) for each i, then we can easily update the $M[i]$s, $m[i]$s and $S[i]$s as we increase j.

Since there are n steps, and in each step we need an O(n) update and query, the algorithm runs in O(n^2), which is also slow.
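The same O(n^2) bound, phrased via the arrays M, m and S described above, might be sketched as (hypothetical helper name, 0-based):

```python
def count_good_with_S(a, w):
    """O(n^2): maintain M[i] = max(a[i..j]), m[i] = min(a[i..j]) and
    S[i] = M[i] - m[i] + i, then count entries with S[i] == j per j."""
    n, total = len(a), 0
    M, m, S = [], [], []
    for j in range(n):
        M.append(a[j]); m.append(a[j]); S.append(j)  # the i == j entry
        for i in range(j):  # O(n) update as j advances by one
            if a[j] > M[i]: M[i] = a[j]
            if a[j] < m[i]: m[i] = a[j]
            S[i] = M[i] - m[i] + i
        # count starting indices i with S[i] == j and length j - i + 1 >= w
        total += sum(1 for i in range(0, j - w + 2) if S[i] == j)
    return total
```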

A faster solution

The second O(n^2) algorithm can actually be improved. First, as we increase j to j+1, we need to do the following updates:

  • M[i] := \max(M[i], a[j+1]) for each i \le j
  • m[i] := \min(m[i], a[j+1]) for each i \le j

But notice that the sequence M is (weakly) decreasing and m is (weakly) increasing. Therefore, the only indices of M and m that are updated are M[i_1..j] and m[i_2..j], where i_1 and i_2 are the first indices where M[i_1] < a[j+1] and m[i_2] > a[j+1]. Furthermore, these values will all be set to a[j+1], which means there will probably be long ranges where M and m are constant.

If we keep track of these constant ranges, then we don’t have to check each one of the indices during update, because either they will all be updated or they will all remain the same. Furthermore, after the update they will remain a constant range, or possibly get replaced with a larger such range.

We can keep track of these constant ranges in M with a stack, where each element in the stack denotes the length of the range it represents and the constant value. During update, we can just pop the elements in the stack with a lower constant value than a[j+1], perform range updates on these ranges (and the corresponding updates in the array S), and push a new element representing the new constant range (with value a[j+1]). It can be seen that there are O(n) range updates required (because each element is pushed and popped at most once in the stack). A similar stack can be set up for m.

Note that we need to perform O(n) updates and n queries to the array S, where an update is of the form "increase a given subrange by some value v" and a query is of the form “how many times does v appear in a given subrange?” We can set up a data structure over S to do these operations efficiently for us.

First, we decompose the array S into blocks of size u (if n is not divisible by u, then the last block can be smaller). Note that there are \lceil n/u \rceil such blocks. Also, in each block, we set up a hash map whose keys are the elements of that block and values are their frequencies in that block. Finally, each block also holds an integer “\text{add}” which denotes a lazy value to be added to all the elements of the block.

To implement an update in the interval [i,j], we simply increment the \text{add} variable of all blocks completely contained in [i,j], which leaves us with at most two blocks at the ends that partially overlap with [i,j]. In those cases, just update the elements one by one. To implement a query, i.e. to find out how many times v appears in [i,j], we simply check the hash maps of all blocks completely contained in [i,j], and just check the remaining elements at the ends one by one. However, during the query we must honor the \text{add} variable of each block. We do this by checking for the value (v - \text{add}) instead of just v in each block.
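A minimal sketch of this block decomposition, assuming Python and hypothetical class and method names (the reference solutions may be organized differently):

```python
import math
from collections import Counter

class BlockArray:
    """Sqrt decomposition over S: range-add updates and
    count-equal-to-v queries, both in O(n/u + u)."""
    def __init__(self, s):
        self.s = list(s)
        self.u = max(1, math.isqrt(len(s)))          # block size ~ sqrt(n)
        nb = (len(s) + self.u - 1) // self.u
        self.add = [0] * nb                          # lazy "add" per block
        self.cnt = [Counter(self.s[b*self.u:(b+1)*self.u]) for b in range(nb)]

    def _rebuild(self, b):
        lo, hi = b * self.u, min((b + 1) * self.u, len(self.s))
        self.cnt[b] = Counter(self.s[lo:hi])

    def update(self, i, j, v):
        """Add v to every element of s[i..j] (inclusive)."""
        b1, b2 = i // self.u, j // self.u
        if b1 == b2:
            for k in range(i, j + 1):
                self.s[k] += v
            self._rebuild(b1)
            return
        for k in range(i, (b1 + 1) * self.u):        # left partial block
            self.s[k] += v
        self._rebuild(b1)
        for b in range(b1 + 1, b2):                  # full blocks: lazy add only
            self.add[b] += v
        for k in range(b2 * self.u, j + 1):          # right partial block
            self.s[k] += v
        self._rebuild(b2)

    def query(self, i, j, v):
        """How many times does v appear in s[i..j] (inclusive)?"""
        b1, b2 = i // self.u, j // self.u
        if b1 == b2:
            return sum(self.s[k] + self.add[b1] == v for k in range(i, j + 1))
        res = sum(self.s[k] + self.add[b1] == v for k in range(i, (b1 + 1) * self.u))
        for b in range(b1 + 1, b2):
            res += self.cnt[b][v - self.add[b]]      # honor the lazy add
        res += sum(self.s[k] + self.add[b2] == v for k in range(b2 * self.u, j + 1))
        return res
```

Partial blocks mutate the stored values and rebuild their hash map, while full blocks only touch their lazy add; a query on a full block therefore looks up v - add in that block’s map.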

Both the update and query run in O(n/u + u) time, so the optimal choice for u would be O(\sqrt{n}). Therefore, each query runs in O(\sqrt{n}) time and the whole algorithm runs in O(n\sqrt{n}) time. This is faster than O(n^2), but this will probably fail because the bound n \le 10^6 is still too high (unless your implementation is incredibly efficient! Has anyone been able to get accepted using an O(n\sqrt{n}) method?).

Divide and conquer

To go even faster than O(n\sqrt{n}), we will use a divide and conquer algorithm. First, we split the array into two (roughly) equal arrays a[1..k] and a[k+1..n], where k is, say, \lfloor n/2 \rfloor. Next, we count the good subarrays that belong completely to one of these subarrays, by calling our function on both halves recursively. Then we only need to compute the subarrays that belong to both, i.e. those that start in a[1..k] and end in a[k+1..n].

Let’s define M[i] and m[i] as follows:

M[i] = \begin{cases} \max(a_i, \ldots, a_k) & \text{ if $i \le k$} \\ \max(a_{k+1}, \ldots, a_i) & \text{ if $i > k$} \end{cases}
m[i] = \begin{cases} \min(a_i, \ldots, a_k) & \text{ if $i \le k$} \\ \min(a_{k+1}, \ldots, a_i) & \text{ if $i > k$} \end{cases}

For any subarray a[i..j] with i \le k < j, it’s easy to see that \max(a[i..j]) = \max(M[i],M[j]) and \min(a[i..j]) = \min(m[i],m[j]). Therefore, a[i..j] is good if and only if \max(M[i], M[j]) - \min(m[i], m[j]) = j - i. Notice also that m[1..k] and M[k+1..n] are (weakly) increasing, and M[1..k] and m[k+1..n] are (weakly) decreasing.

Next, let’s try to fix a j. We want to count the number of indices i \le \min(k, j + 1 - w) such that \max(M[i], M[j]) - \min(m[i], m[j]) = j - i. Since j is fixed, we can find two indices i_1 and i_2 where:

  • i_1 is the index i where M[i] first becomes smaller than M[j], and
  • i_2 is the index i where m[i] first becomes larger than m[j].

And since M[1..k] and m[1..k] are decreasing and increasing, respectively, we have M[i] < M[j] for all i \ge i_1, and m[i] > m[j] for all i \ge i_2. Therefore, we have three ranges in consideration:

  • For i < \min(i_1, i_2), we have \max(M[i], M[j]) - \min(m[i], m[j]) = M[i] - m[i].
  • For \min(i_1, i_2) \le i < \max(i_1, i_2), we have that \max(M[i], M[j]) - \min(m[i], m[j]) is either M[j] - m[i] or M[i] - m[j] (depending on whether i_1 < i_2 or not).
  • For \max(i_1, i_2) \le i, we have \max(M[i], M[j]) - \min(m[i], m[j]) = M[j] - m[j].

Therefore, for a fixed j, the number of indices i we are looking for is the sum of the following three values:

  • The number of indices i < \min(i_1, i_2) such that j = i + M[i] - m[i].
  • The number of indices \min(i_1, i_2) \le i < \max(i_1, i_2) such that j - M[j] = i - m[i] (or j + m[j] = i + M[i], depending on whether i_1 < i_2 or not).
  • The number of indices \max(i_1, i_2) \le i such that j - M[j] + m[j] = i.

In all cases, we must include the restriction i \le \min(k, j + 1 - w).

The third one is easy to compute: we just check whether the value j - M[j] + m[j] is within the range [\max(i_1, i_2), \min(k, j + 1 - w)] and add 1 to the answer if so. But what about the remaining two?

Let’s define three new arrays S_1, S_2 and S_3, where S_1[i] = i + M[i] - m[i], S_2[i] = i - m[i] and S_3[i] = i + M[i]. Notice that in the remaining two cases we are counting the number of times that some value (e.g. j, j - M[j] or j + m[j]) appears in some subarray of S_1, S_2 or S_3. Therefore, we must be able to answer these queries quickly, i.e. we should build a structure on top of an array S that answers the following query:

  • Given a subrange [i,j] and a value v, how many times does v appear in S[i..j]?

However, notice that in our case, we already know all the queries beforehand, so the structure described above can be implemented with a simple array! First, we replace all queries by prefix queries: since the number of times v appears in S[i..j] is equal to the number of times it appears in S[1..j] minus the number of times it appears in S[1..i-1], we can replace a subrange query [i,j] with two queries [1,j] and [1,i-1]. Next, we sort and process the queries [1,j] in increasing order of j. We will use an array of frequencies C, initialized to zero. As we process each subrange [1,j], we ensure that C[v] holds the number of times v appears in S[1..j], so we can answer a query interval [1,j] by just returning C[v]. Note that between any two consecutive queries [1,j] and [1,j'], the values S[j+1], S[j+2], \ldots, S[j'] must be incorporated into the array C, i.e. the values C[S[j+1]], C[S[j+2]], \ldots, C[S[j']] need to be incremented.
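A sketch of this offline sweep (hypothetical helper, 0-based, assuming S is non-empty with non-negative values so a plain list can serve as the frequency array C):

```python
def count_occurrences_offline(S, queries):
    """Answer queries (j, v) = "how many times does v appear in S[0..j]?"
    offline: process queries in increasing j, sweeping S once while a
    frequency array C counts the prefix seen so far."""
    ans = [0] * len(queries)
    order = sorted(range(len(queries)), key=lambda q: queries[q][0])
    C = [0] * (max(S) + 1)   # frequency array over the values of S
    pos = 0                  # next element of S to incorporate into C
    for qi in order:
        j, v = queries[qi]
        while pos <= j:      # absorb S[pos..j] into the prefix counts
            C[S[pos]] += 1
            pos += 1
        ans[qi] = C[v] if 0 <= v < len(C) else 0
    return ans
```

A subrange count over [i, j] is then the answer for (j, v) minus the answer for (i-1, v), exactly as in the prefix-query reduction above.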

Sorting the queries can be done in O(k \log k) = O(n \log n) time, but notice that it can also be done in O(n) time using bucket sort with k buckets.

Also, how do we compute i_1 and i_2 for all j? We can do that in O(n \log n) by doing binary searches, but we can do it faster by relying on the fact that as j decreases, i_1 and i_2 never decrease (i.e. each either increases or stays the same). Therefore, we just maintain two pointers for i_1 and i_2, and as we iterate j in decreasing order, we just update i_1 and i_2. Since i_1 and i_2 never decrease, the overall process runs in O(n) time.

If we assume that the running time of the solution is T(n), then we have the recurrence T(n) = 2T(n/2) + O(n), because there are two recursive calls at half the size, and an O(n) combination step. This recurrence is similar to the recurrence of merge sort, thus we have T(n) = O(n \log n). If implemented well, this should pass the time limit of this problem!
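For concreteness, here is a hypothetical 0-based Python sketch of the whole divide and conquer. One deliberate simplification: instead of the bucket-sorted frequency sweep described above, it stores a sorted index list per value of S_1, S_2 and S_3 and counts with binary search, which adds a log factor inside the combine step but keeps the sketch short:

```python
import bisect
from collections import defaultdict

def count_good(a, w):
    """Count good subarrays of the permutation a with length >= w."""
    def solve(lo, hi):
        if hi - lo + 1 < w:
            return 0
        if lo == hi:
            return 1                    # reachable only when w <= 1
        k = (lo + hi) // 2
        total = solve(lo, k) + solve(k + 1, hi)

        # Suffix max/min over the left half: for t = 0..L-1 and i = lo + t,
        # M[t] = max(a[i..k]) (weakly decreasing), m[t] = min(a[i..k]) (increasing).
        L = k - lo + 1
        M = [0] * L; m = [0] * L
        M[L - 1] = m[L - 1] = a[k]
        for t in range(L - 2, -1, -1):
            M[t] = max(a[lo + t], M[t + 1])
            m[t] = min(a[lo + t], m[t + 1])

        def index_lists(vals):
            d = defaultdict(list)
            for t, v in enumerate(vals):
                d[v].append(t)          # appended in increasing t: already sorted
            return d

        S1 = index_lists([lo + t + M[t] - m[t] for t in range(L)])
        S2 = index_lists([lo + t - m[t] for t in range(L)])
        S3 = index_lists([lo + t + M[t] for t in range(L)])

        def count_in(d, v, t_lo, t_hi):  # number of t in [t_lo, t_hi] with value v
            if t_lo > t_hi or v not in d:
                return 0
            lst = d[v]
            return bisect.bisect_right(lst, t_hi) - bisect.bisect_left(lst, t_lo)

        Mj = mj = a[k + 1]
        t1 = t2 = L   # t1: first t with M[t] < Mj; t2: first t with m[t] > mj
        for j in range(k + 1, hi + 1):
            Mj = max(Mj, a[j]); mj = min(mj, a[j])
            while t1 > 0 and M[t1 - 1] < Mj:   # two pointers: only ever move left
                t1 -= 1
            while t2 > 0 and m[t2 - 1] > mj:
                t2 -= 1
            t_max = min(L - 1, j - w + 1 - lo)  # enforce length >= w and i <= k
            if t_max < 0:
                continue
            t_b = max(t1, t2)
            # Range 1: both max and min come from the left half
            total += count_in(S1, j, 0, min(t1, t2) - 1 if min(t1, t2) - 1 < t_max else t_max)
            # Range 2: exactly one of max / min comes from the right half
            if t1 < t2:
                total += count_in(S2, j - Mj, t1, min(t2 - 1, t_max))
            elif t2 < t1:
                total += count_in(S3, j + mj, t2, min(t1 - 1, t_max))
            # Range 3: both come from the right half; the start i is forced
            t3 = j - Mj + mj - lo
            if t_b <= t3 <= t_max:
                total += 1
        return total

    return solve(0, len(a) - 1) if a else 0
```

Cross-checking against a brute force over all small permutations is an easy way to validate such an implementation before worrying about constant factors.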

Time Complexity:

O(n \log n)

AUTHOR’S AND TESTER’S SOLUTIONS:

Setter
Tester 1
Tester 2


#2

Not able to access the setter’s solution.


#3

Really excellent editorial!