FEWDIFF - Editorial

Author: Anton Trygub
Tester: Alexander Morozov
Editorialist: Colin Galen

Medium

Observations, DP

PROBLEM:

You’re given an array of n integers. A subarray of this array of length k is called good if there are at most 1 + \dfrac{k}{2} distinct values in this subarray. In one operation, you can change one element’s value to any value you want. Find the minimum number of operations to make all subarrays good.

QUICK EXPLANATION:

Observation 1

If all subarrays of length 3 are good, then all subarrays are good.

We can get an O(n^3) solution with simple dynamic programming - storing the minimum number of changes to reach a certain state. The only information we need in a state is the position and the previous two elements. If the previous two are equal, the transition is O(n) (we can change a_i to anything), but there are only O(n^2) such states; all other transitions are O(1) (we only have two options for a_i).

Observation 2

Because all subarrays we care about are length 3, the final array can be shown to have a special structure. Specifically, it’s a collection of blocks of consecutive equal elements, separated by alternating “runs” of elements. For example, an array can be aaaaaabababbbbbbbbaaaaaacacaccc.

The conclusion from this is that any alternating run must already be present in the original array. The reason for this is demonstrated in the full solution.

We can exploit the special structure from the second observation to optimize our DP from subtask 1. I don’t really know how to explain this briefly, so please just see the full explanation for subtask 2.

Main solution

There’s always an optimal solution where each element is either unchanged or equal to the original value of one of its neighbors. That means each element can only take on 3 possible values.

From this, we can create a dynamic programming solution - the only necessary information for the state is the value of the previous two elements a_{i - 2} and a_{i - 1}, and the transition is choosing the value for a_i. We use it to compute the minimum number of changes for a given prefix and given values of the previous two elements. Since each element has only 3 possibilities, we now only have O(n) states.

EXPLANATION:

A lot of this explanation was given by the author himself, to give credit where it’s due (although I modified it somewhat).

Observation 1

An important question to ask is, which subarrays can even be bad?

Denote by \lim(x) the limit on the number of distinct elements we can have in a subarray of length x: 1 + \dfrac{x}{2}, and by d(x) the maximum possible number of distinct elements in a subarray of length x (this is just for formality; obviously it’s equal to x). All subarrays of length 1 are automatically good, as d(1) = 1 \leq \lim(1) = 1.5. The same holds for all subarrays of length 2: d(2) = 2 \leq \lim(2) = 2.

Subarrays of length 3 are the first interesting length. d(3) = 3 > \lim(3) = 2.5, so a subarray of length 3 is good if and only if its three elements are not all distinct. But if no 3 consecutive elements can be distinct, what does that mean for the whole array? Imagine constructing an array so that all subarrays of length 3 are good, not caring about other lengths. We’re going to append elements to the end of the array, one by one. What exactly can we append?

• If there are less than two elements, we can do whatever we want.
• If the previous two elements are equal, we can do whatever we want.
• Otherwise, the previous two elements are distinct, so the new number has to be equal to one of those two numbers.

Now let’s think of it from a different perspective: how many distinct elements can we get in the worst case?

• If there are less than two elements, we can add another distinct element.
• If the previous two elements are equal, we can add another distinct element.
• Otherwise, the previous two elements are distinct, so the new number has to equal one of those two numbers, and we can’t add another distinct element.

So every time we want to add another distinct element (after the first two elements have been added), we have to first add an element that’s equal to the previous one. In other words, in the worst-case, around half of the elements aren’t distinct from other ones. I say “around” because we have a bit of freedom for the first couple of elements, but it ends up not making a difference.

For the first element, it’s distinct, so we have 1 distinct element. Then, the best we can do is add a different number, but then next time we have to add that number again to be able to add another distinct element again. After that, we can add another distinct number, and so on. The total number of distinct elements is 1 + \left\lceil \dfrac{x - 1}{2} \right\rceil = 1 + \left\lfloor \dfrac{x}{2} \right\rfloor \leq 1 + \dfrac{x}{2} = \lim(x), meaning that any array created with all subarrays of length 3 being good is also good itself. Every subarray of the final array can itself be viewed as an array built under these restrictions, so the bound applies to all subarrays simultaneously.

For some visualization, the worst-case array I describe will look like this: 1, 2, 2, 3, 3, 4, 4, 5, 5, \dots

Now, we figure out how to use this information…

Imagine that we’ve already constructed a prefix of the first i - 1 elements, satisfying that every subarray of length 3 in that prefix is good. Now, for deciding what the i-th element should be, we really only care about the previous two elements, because anything before that doesn’t affect any subarray of length 3 containing position i. Therefore, in dynamic programming fashion, the only information we need in a state is the position we’re at and the two previous values.

There are O(n) possible values, so we can store the indices of values rather than the values themselves. This gives us O(n^3) states. What are the transitions? Either the previous two values are equal, or they’re different.

If they’re different, then a_i has to be equal to one of them; otherwise, we’d have 3 distinct elements in a row. So this transition is O(1), as there are only two possible values for a_i. Otherwise, they’re the same, and we have complete freedom over what a_i can be. It’s enough to note that there exists an optimal solution where a_i equals some value in the original array: a value not in the array differs from every original element, so it can never do better than a value from the array. So the transition when they’re equal is O(n). However, there are only O(n^2) states where they’re equal (O(n) positions, O(n) values), so these transitions consume a total of O(n^3) time.

Observation 2

Because all subarrays we care about have length 3, the final array can be shown to have a special structure. Since no 3 consecutive elements can be pairwise distinct, the array can be viewed as a series of “blocks” of equal elements, separated by alternating sequences built from the elements of adjacent blocks. This is vague to describe in words, so we’ll say it always looks like this type of structure:

xxxxx|yxyxyxyx|yyyyyyy|xyx|yy|xx|zzzzz

where the dividers represent the different blocks and alternating sequences. Note that alternating sequences aren’t exactly well-defined like this, so we can call any element a part of an alternating sequence if it’s adjacent to an element different from it.

Now comes the reasoning for this structure. Of all optimal solutions, take the one that maximizes the number of adjacent equal elements. For example, say the array is 1 2 3 2 1 and we’re considering the final sequence 1 2 1 2 1. While this is a valid optimal sequence, we can do better: changing the third element to 2 instead gets rid of that alternating sequence and increases the number of adjacent equal elements, without using extra changes. In general, we can make it so that the only time we have alternating elements is when they’re already present in the original array. All other elements will be surrounded by equal elements.

For subtask 2, O(n^3) isn’t enough anymore, but we can make use of the ideas from the second observation. It turns out to be enough to only store the DP states where the previous two elements are equal. So let dp(i, c) be the minimum number of moves to create a prefix of i elements, with the last two elements both equal to c. We can restore the answer from dp(i, c) by assuming this is the last adjacent equal pair - that is, the remaining suffix is a single alternating run already present in the original array, contributing no changes. Checking all such suffixes takes O(n^2) time in total. If this isn’t the last adjacent pair, the answer will be found from a later state.

It remains to find the transitions. For notation, let the value of [x] on a boolean variable x be 1 if the variable is true, and 0 otherwise. We’ll also use 1-indexing throughout. We’ll focus on a_i (the i-th element of the prefix of length i), and there are two possibilities.

• If the final value at position i - 2 is also c (the block extends), we get the transition dp(i - 1, c) + [a_i \neq c]
• Otherwise, there are two possible transitions:
• a_i is part of an adjacent equal block: then the string looks like ...kkcc, where k is anything, possibly equal to c. Thus the transition is dp(i - 2, k) + [a_{i - 1} \neq c] + [a_i \neq c], where k is chosen to minimize dp(i - 2, k). This minimum can be computed a single time after we fully compute all of the states for i - 2.
• Or, a_i is part of an alternating sequence. Both of these are possible, so they should both be tried. We know a_{i - 2} \neq c, since otherwise, we would be in the first transition. So let d = a_{i - 2}. Then the sequence will look like ...dcdcdcdcdc. There’s only a single value of c that satisfies this: the value of a_{i - 3}, so we end up with only 1 state to update here for a value of i. This state has an O(n) transition, since we need to go back in the array and check the minimum over all of the points where it’s still alternating, since we know all of them have a cost of 0.

All of these transitions sum up to O(n^2) time in total.

Main solution

So you see how complicated subtask 2 is. How can we possibly reduce this further? Well, the main solution comes from an even more outlandish observation: there exists an optimal solution where every element in the final array is either unchanged or equal to the original value of one of its direct neighbors. While this isn’t necessarily obvious to think of, the proof comes down to casework:

We already know any element in an alternating sequence is unchanged. So we just need to handle blocks of equal elements. Let’s say a block takes on the form: xyxy | xxxxxxx | zxzx, where the length of xxxxxxx is arbitrary.

That length can’t be 1 because we would have yxz, which isn’t allowed.

If the length is 2, then at least one of these two positions must originally hold x; otherwise we could change the pair to yz instead, keeping the number of changes while creating more adjacent equal elements.

Otherwise, if the length is greater than 2, we can apply a different argument to both ends: the first x must equal either its own original value or the original value of one of its neighbors, and the same holds for the last x. If it didn’t, we could change the two elements on the relevant end to an original value instead, saving a change. Similarly, if some 3 consecutive positions inside the block originally held no x, say their original values are abc, we could change those xxx to xbb or bbx and be strictly better off in terms of changes.

So each element only has 3 possibilities - the original value of the element to the left, its own original value, and the original value of the element to the right. We can extend the DP solution from subtask 1, but now there are only 9 combinations for the previous two elements and 3 possible transitions. Therefore the solution takes only O(n) time - not the greatest constant, but it’s definitely enough to pass comfortably.

TIME COMPLEXITY:

Subtask 1

O(n^3) for the DP in total - O(n^3) states, with O(1) transitions for most states and O(n) transitions for the O(n^2) states where the previous two elements are equal.

Subtask 2

O(n^2) for the DP in total - O(n^2) states with amortized O(1) transitions, plus O(n^2) for restoring the answer.

Main solution

O(n) for the DP in total - O(n) states and O(1) transitions each.

SOLUTIONS:

Setter's Solution
#include <bits/stdc++.h>
using namespace std;

const int N = 100005;

int n, t, a[N], dp[N][3][3];

// Good iff a, b, c are not all pairwise distinct.
bool check(int a, int b, int c) {
    return (a != b) + (b != c) + (c != a) < 3;
}

int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    cin >> t;
    while (t--) {
        cin >> n;
        for (int i = 1; i <= n; i++) {
            cin >> a[i];
        }
        a[n + 1] = 0; // sentinel: a value distinct from everything in the array
        if (n <= 2) {
            cout << "0\n";
            continue;
        }
        // dp[i][cur + 1][lst + 1]: min changes for the prefix ending at i, where
        // a[i] takes the original value a[i + cur] and a[i - 1] takes a[i - 1 + lst],
        // with offsets cur, lst in {-1, 0, 1} (left neighbor, itself, right neighbor).
        for (int cur = -1; cur <= 1; cur++) {
            for (int lst = -1; lst <= 1; lst++) {
                dp[2][cur + 1][lst + 1] = (cur != 0) + (lst != 0);
            }
        }
        for (int i = 3; i <= n; i++) {
            for (int cur = -1; cur <= 1; cur++) {
                for (int lst = -1; lst <= 1; lst++) {
                    dp[i][cur + 1][lst + 1] = N;
                    for (int prv = -1; prv <= 1; prv++) {
                        if (check(a[i - 2 + prv], a[i - 1 + lst], a[i + cur])) {
                            dp[i][cur + 1][lst + 1] = min(dp[i][cur + 1][lst + 1], dp[i - 1][lst + 1][prv + 1] + (cur != 0));
                        }
                    }
                }
            }
        }
        int ans = N;
        for (int cur = -1; cur <= 1; cur++) {
            for (int lst = -1; lst <= 1; lst++) {
                ans = min(ans, dp[n][cur + 1][lst + 1]);
            }
        }
        cout << ans << '\n';
    }
}

Tester's Solution
#include <bits/stdc++.h>

using namespace std;
using nagai = long long;

int main() {
    cin.tie(0);
    ios::sync_with_stdio(false);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> a(n);
        for (auto& x : a)
            cin >> x;
        if (n <= 2) {
            cout << 0 << '\n';
            continue;
        }
        // Indices j, k, l are offsets shifted by S: index j represents offset j - S,
        // i.e. the element takes the original value of a neighbor within distance S.
        int S = 2;
        int SZ = S + S + 1;
        const int oo = 0x3f3f3f3f;
        vector<vector<vector<int>>> dp(n + 1, vector<vector<int>>(SZ, vector<int>(SZ, oo)));
        dp[0][0][0] = 0;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < SZ; ++j)
                for (int k = 0; k < SZ; ++k) {
                    if (dp[i][j][k] == oo) continue;
                    for (int l = 0; l < SZ; ++l) {
                        if (i + (l - S) < 0 || i + (l - S) >= n)
                            continue;
                        if (i >= 2) {
                            int a1 = a[i - 2 + (j - S)], a2 = a[i - 1 + (k - S)], a3 = a[i + (l - S)];
                            if (!(a1 == a2 || a1 == a3 || a2 == a3))
                                continue;
                        }
                        int add = (l != S); // changing a[i] to a neighbor's value costs one operation
                        dp[i + 1][k][l] = min(dp[i + 1][k][l], dp[i][j][k] + add);
                    }
                }
        int ans = oo;
        for (int i = 0; i < SZ; ++i)
            for (int j = 0; j < SZ; ++j)
                ans = min(ans, dp[n][i][j]);
        cout << ans << '\n';
    }
    return 0;
}


Video Editorial

My video: https://youtu.be/TB_krkk_U9A?t=2932