PROBLEM LINK:
Practice
Contest: Division 1
Contest: Division 2
Contest: Division 3
Contest: Division 4
Author: raysh07
Tester: sushil2006
Editorialist: iceknight1093
DIFFICULTY:
Medium
PREREQUISITES:
Familiarity with bitwise XOR
PROBLEM:
You’re given an array A and an integer X. You can do the following operation as many times as you like:
- Increase any element of A by X.
Find the minimum possible value of the bitwise XOR of the elements of A.
EXPLANATION:
Let K be the largest integer such that 2^K divides X.
In terms of binary representation, this means X has K trailing zeros.
Observe that adding X to any A_i cannot change the last K binary digits of A_i at all, since they’re all 0 in X.
So, no matter what we do, the last K binary digits of every A_i will remain unchanged, and contribute to the final XOR.
With this in mind, let’s compute the XOR of the last K digits of every A_i first, and call it Y.
We can now right-shift each A_i and X by K digits, solve this reduced problem, then multiply its answer by 2^K and add Y to obtain the final answer.
Note that, in particular, once X has been right-shifted by K digits, it becomes odd.
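For concreteness, here is a minimal Python sketch of this reduction step (the helper name reduce_problem is my own, purely for illustration; the editorialist's full solution appears at the end of the post):

def reduce_problem(a, x):
    # illustrative helper, not from the original solution
    # K = number of trailing zero bits of x, i.e. 2^K is the largest power of 2 dividing x
    k = (x & -x).bit_length() - 1
    # Y = XOR of the lowest K bits of every element; these bits can never change
    y = 0
    for v in a:
        y ^= v & ((1 << k) - 1)
    # right-shift everything by K bits; the shifted x is odd
    return [v >> k for v in a], x >> k, k, y

# if the reduced problem's answer is r, the original answer is (r << k) + y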
The “reduced” problem still seems not-so-easy, so maybe it’s in our best interests to work out a few small examples to see what’s going on.
First, consider N = 1.
There’s only one element in A, and we can only increase it - so it’s best to do nothing since we want to minimize the XOR (which is just A_1).
The answer in this case is A_1 itself.
Next, look at N = 2.
Our best-case scenario is if we’re able to make A_1 = A_2, so that their XOR becomes 0.
This is possible if and only if X divides |A_1 - A_2|, so what do we do when it isn’t?
Intuitively, it seems like a good idea to make A_1 and A_2 as close to each other as possible - if they’re far apart, there are likely to be higher bits set in one but not the other, which is not ideal for us.
Specifically, let d = |A_1 - A_2|. Then, the smallest difference we can obtain by adding X to A_1 and/or A_2 is:
M = \min(d \bmod X, (X - d) \bmod X).
The fact that we’re dealing with the difference |A_1 - A_2| is rather helpful, because there’s a useful (to us) relation between the XOR and difference of two numbers:
For any non-negative integers x \leq y we have x\oplus y \geq y-x.
Further, equality holds if and only if x is a submask of y.
This is fairly easy to prove: if a bit 2^i is set (or unset) in both x and y it contributes 0 to both the XOR and the difference; if it’s set in y but not x it contributes 2^i to both sides, and if it’s set in x but not y it contributes 2^i to the XOR but -2^i to y-x.
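If you’d like a quick sanity check of this fact, the short brute force below (purely illustrative, not part of the solution) verifies both the inequality and the equality condition for all small pairs:

# check: for 0 <= x <= y, x ^ y >= y - x, with equality exactly when x is a submask of y
for y in range(256):
    for x in range(y + 1):
        assert (x ^ y) >= y - x
        assert ((x ^ y) == y - x) == ((x & y) == x)   # "x is a submask of y" means x & y == x
print("verified for all 0 <= x <= y < 256")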
What this tells us is that the minimum possible XOR definitely cannot be smaller than the minimum possible difference between A_1 and A_2.
That is, we know \text{ans} \geq M.
If we’re now able to construct a solution with the XOR being equal to M, we know for sure it’s optimal.
This is indeed always possible.
Construction
Recall that we’ve reduced the problem to where X is odd, which means \gcd(2, X) = 1.
Euler’s theorem now tells us that 2^{\varphi(X)} \equiv 1 \pmod X.
Since \varphi(X) \gt 0, there are in fact infinitely many powers of 2 that are congruent to 1 modulo X: just consider exponents that are multiples of \varphi(X).
Next, observe that A_1 can be turned into any value \geq A_1 that shares the same remainder modulo X; same with A_2.
So, we can instead focus on finding appropriate final values for them - as long as these are large enough we’ll be fine.
Without loss of generality, let A_1 - A_2 \equiv M \bmod X (if it’s A_2 - A_1 instead, swap the roles of A_1 and A_2).
Let d_1 = A_1 \bmod X be the desired remainder. We start out with B_1 = M and B_2 = 0.
These values have a XOR and a difference both equal to M, which is exactly what we want.
Now, every new bit we add must be set in both values, so that neither the XOR nor the difference changes.
A rather simple construction arises: while B_1 \bmod X \neq d_1, choose some sufficiently high power of 2 which is 1 mod X and not set in either number, and set this bit in both B_1 and B_2.
This doesn’t change the XOR or the difference, but does increase the remainders of both B_1 and B_2 by 1.
Repeat this operation till B_1 \bmod X = d_1 is reached; this also simultaneously satisfies the remainder condition for B_2, since B_1 - B_2 stays equal to M \equiv A_1 - A_2 \bmod X throughout.
If B_1 \lt A_1 or B_2 \lt A_2 at this point (possible if no common powers of 2 were added), simply choose any X distinct large powers of 2 (which are 1 mod X) and give them to both, and we’re done: B_1 and B_2 are the final values which A_1 and A_2 must take, and getting them there gives us a XOR of M as desired.
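To make this concrete, here is an illustrative Python sketch of the construction (the names order_of_two and construct_pair are my own; it assumes the reduced X is odd and greater than 1, uses the multiplicative order of 2 instead of \varphi(X) for the exponent step, and follows the argument literally rather than efficiently):

def order_of_two(x):
    # smallest e > 0 with 2^e ≡ 1 (mod x); exists since x is odd and x > 1
    e = 1
    while pow(2, e, x) != 1:
        e += 1
    return e

def construct_pair(a1, a2, x):
    # returns (b1, b2) with b1 >= a1, b2 >= a2, b1 ≡ a1 and b2 ≡ a2 (mod x),
    # and b1 ^ b2 equal to the minimum achievable difference M
    d = abs(a1 - a2) % x
    m = min(d, (x - d) % x)
    if (a1 - a2) % x != m:                     # WLOG A_1 - A_2 ≡ M (mod x); otherwise swap roles
        b2, b1 = construct_pair(a2, a1, x)
        return b1, b2
    e = order_of_two(x)
    b1, b2 = m, 0                              # XOR and difference both equal M
    t = max(a1, a2, m).bit_length() // e + 1   # exponents t*e give fresh powers ≡ 1 (mod x)
    while b1 % x != a1 % x:                    # each step raises both remainders by 1
        p = 1 << (t * e)
        b1 += p
        b2 += p
        t += 1
    while b1 < a1 or b2 < a2:                  # pad with x common powers: adds x ≡ 0 (mod x) to each remainder
        for _ in range(x):
            p = 1 << (t * e)
            b1 += p
            b2 += p
            t += 1
    return b1, b2

For example, under these assumptions, construct_pair(5, 9, 3) returns (320, 321), whose XOR is 1 - matching M = \min(4 \bmod 3, (3 - 4) \bmod 3) = 1.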
Next, we consider N = 3.
We have a bit more freedom now: so much, in fact, that it’s always possible to make the bitwise XOR 0.
This applies to every N \geq 3 as well, which finishes the problem.
Proof
Just as in the N = 2 case, we’ll use the fact that X is odd, so there are an infinite number of powers of 2 that are 1 modulo X.
The idea is similar: we’ll create an array B such that A_i \equiv B_i \bmod X, A_i \leq B_i, and B_1 \oplus \ldots \oplus B_N = 0.
The array B represents the final value each A_i will take.
To do this, let’s start out with a large enough (in terms of value) array whose XOR is 0.
One way to do this is to let M = 2^{1000}, and then use B = [M, M, \ldots, M] if N is even, or
B = [M, 2M, 3M, M, M, \ldots, M] if N is odd.
This takes care of both the bitwise XOR = 0 condition and ensuring that each B_i is large, so only the remainder condition needs to be satisfied now.
Consider an index i such that B_i \not\equiv A_i \bmod X.
Let j, k be any two other indices.
The phrase “add 1 to B_i” will mean “choose a sufficiently large, previously unused, power of 2 that’s congruent to 1 modulo X and add it to B_i”.
Now,
- Add 1 to B_i and B_j (using the same power of 2, so that XOR = 0 is preserved). Note that all further operations are also done on a pair of indices, with the same power of two used for both, to ensure that the XOR doesn’t change.
- Add 1 to B_i and B_k. At this point, B_i has increased its remainder modulo X by 2, while B_j and B_k have increased theirs by 1 each.
- Next, add 1 to B_j and B_k, X-1 times. This resets the remainders of B_j and B_k to their initial values, so the only net change is that the remainder of B_i has increased by 2.

Yet again, we (ab)use the fact that X is odd: repeatedly increasing a remainder by 2 allows us to cycle through all the remainders modulo X.
So, we’re able to repeat the above operation several times till B_i \equiv A_i \bmod X is reached, without affecting any other remainders and without affecting the XOR.
Once this is done for every index, we have a valid array B and we’re done!
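For those curious, this procedure can also be turned into code. The sketch below is only an illustration of the proof (my own names, assuming the reduced X is odd and greater than 1, and following the argument literally rather than efficiently); the actual solution never needs to build B, since the answer for the high bits is simply 0 when N \geq 3.

def order_of_two(x):
    # smallest e > 0 with 2^e ≡ 1 (mod x); exists since x is odd and x > 1
    e = 1
    while pow(2, e, x) != 1:
        e += 1
    return e

def construct_zero_xor(a, x):
    # returns b with b[i] >= a[i], b[i] ≡ a[i] (mod x) for all i, and XOR of b equal to 0
    n = len(a)
    assert n >= 3 and x % 2 == 1 and x > 1
    e = order_of_two(x)
    big = 1 << (max(a).bit_length() + 1)
    # a large starting array whose XOR is 0
    b = [big] * n if n % 2 == 0 else [big, 2 * big, 3 * big] + [big] * (n - 3)
    t = (3 * big).bit_length() // e + 1        # next unused exponent (taken as a multiple of e)

    def add_one(i, j):
        # add the same fresh power of 2 (≡ 1 mod x) to b[i] and b[j]; keeps the XOR at 0
        nonlocal t
        p = 1 << (t * e)
        b[i] += p
        b[j] += p
        t += 1

    for i in range(n):
        j, k = [idx for idx in range(n) if idx != i][:2]
        while b[i] % x != a[i] % x:
            add_one(i, j)                      # remainders: b[i] +1, b[j] +1
            add_one(i, k)                      # remainders: b[i] +1, b[k] +1
            for _ in range(x - 1):             # restore b[j], b[k]; net effect: b[i]'s remainder moves up by 2
                add_one(j, k)
    return b

# quick check of the invariants on a small example
a = [3, 5, 7]
b = construct_zero_xor(a, 5)
assert b[0] ^ b[1] ^ b[2] == 0
assert all(bi >= ai and bi % 5 == ai % 5 for ai, bi in zip(a, b))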
In the end, we have a very simple (to implement) solution to the problem:
- Compute K, the number of trailing zero bits of X, i.e. the largest integer such that 2^K divides X.
- Solve the smallest K bits separately, noting that these can’t be changed in any A_i. Let this value be Y.
- For the larger bits:
  - If N = 1, the optimal solution is to do nothing.
  - If N = 2, the answer is \min(d \bmod X, (X - d) \bmod X), where d = |A_1 - A_2|. Note that this is after A_1, A_2, X have been right-shifted by K.
  - If N \geq 3, the answer is 0.
- The final answer is obtained by multiplying the answer to the reduced problem by 2^K, and then adding Y.
TIME COMPLEXITY:
\mathcal{O}(N) per testcase.
CODE:
Editorialist's code (PyPy3)
for _ in range(int(input())):
    n, x = map(int, input().split())
    a = list(map(int, input().split()))
    # k = number of trailing zero bits of x
    k = 0
    while ~x & (1 << k): k += 1
    if n == 1: print(a[0])
    elif n == 2:
        # XOR of the lowest k bits, which can never change
        ans = 0
        for y in a: ans ^= y & ((1 << k) - 1)
        # minimum achievable difference of the right-shifted values
        d = abs((a[0] >> k) - (a[1] >> k)) % (x >> k)
        print(ans + (min(d, (x >> k) - d) << k))
    else:
        # for n >= 3, the high bits can always be made to XOR to 0
        ans = 0
        for y in a: ans ^= y & ((1 << k) - 1)
        print(ans)