### PROBLEM LINK:

**Author:** Kevin Atienza

**Testers:** Sergey Kulik and Vasya Antoniuk

**Translators:** Vasya Antoniuk (Russian), Team VNOI (Vietnamese) and Hu Zecong (Mandarin)

**Editorialist:** Kevin Atienza

### DIFFICULTY:

Simple

### PREREQUISITES:

Greatest common divisors, Euclid’s algorithm, Linear combinations, Bézout’s identity

### PROBLEM:

Alvin has A candies and Berto has B candies. You may give Alvin C candies at a time and Berto D candies at a time, each any number of times (possibly zero). What is the minimum possible absolute difference between their candy counts that can be achieved?

### QUICK EXPLANATION:

Let G = \gcd(C,D). Then every possible absolute difference is of the form |A - B + kG| for some integer k, and every difference of that form can be achieved. The lowest difference that can be achieved is \min((A - B) \bmod G, (B - A) \bmod G).

### EXPLANATION:

First, any absolute value is nonnegative, so the lowest conceivable answer in any test case is 0. Sometimes this can be achieved, as in the first sample input. But in other cases it cannot. For example, in the second sample input, A = 1 and B = C = D = 2. Here you can only increase A and B by even amounts, so they always have different parity, and you can never make them equal.

Here’s another example. Suppose A = 11, B = 35, C = 50 and D = 130. Notice that \gcd(C,D) = 10. Thus, no matter what you do, *you cannot change the last digits of A and B*. The difference between any pair of numbers whose last digits are 1 and 5 respectively ends in either 4 or 6. Thus, the lowest possible difference we can get is 4, and it turns out we can achieve it by adding C to A three times and D to B one time: |(11 + 3\cdot 50) - (35 + 1\cdot 130)| = |161 - 165| = 4.
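We can sanity-check this example by brute force. The following sketch (my own, not part of the editorial's intended solution) tries small counts m (times we add C to A) and n (times we add D to B) and records the smallest absolute difference seen:

```python
from math import gcd

# Brute-force check of the example above over small m, n.
A, B, C, D = 11, 35, 50, 130
best = min(abs((A + m * C) - (B + n * D))
           for m in range(100) for n in range(100))
print(best)        # 4, matching the hand computation above
print(gcd(C, D))   # 10, so the difference can only change in steps of 10
```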

We can generalize the insight we got from the previous example. Suppose we add C to A m times and D to B n times. Then the absolute difference that we get is |(A + mC) - (B + nD)| = |(A - B) + (mC - nD)|. Every number of the form mC - nD (a linear combination) is divisible by \gcd(C,D)! Thus, we can state the following:

**If G = \gcd(C,D), then every possible absolute difference is of the form |A - B + kG| for some integer k.**

The smallest possible value of the form |A - B + kG| depends on the sign of the number inside the absolute value. If it’s positive, then the smallest you can get is

(A - B) \bmod G,

and if it’s negative, then it is

(B - A) \bmod G.

(Remember that x \bmod G is always nonnegative even if x is negative.) Thus, the answer is at least

\min((A - B) \bmod G, (B - A) \bmod G).
Unfortunately, we haven’t yet proved that every integer of the form |A - B + kG| can be achieved as a difference, so “\min((A - B)\bmod G, (B - A)\bmod G)” is just a lower bound at this point. Thus, we want to know whether the following is true:

**Let G = \gcd(C,D). Can every possible value of the form |A - B + kG| for any integer k be expressed in the form |A - B + (mC - nD)| for some nonnegative integers m and n?**

If we can show this, then we establish that the answer is \min((A - B)\bmod G, (B - A)\bmod G). If not, then we’ll have to find other ways.

Note that the word “nonnegative” is pretty important; without it, we could easily answer the problem using Bézout’s identity:

**If G = \gcd(C,D), then there exist integers x and y such that Cx + Dy = G.**

Using this theorem, we can now achieve any difference |A - B + kG| by setting m = kx and n = -ky. Unfortunately, the theorem doesn’t guarantee anything about the signs of x and y.

But actually, there’s a way to convert any choice (m,n) into a new choice (m',n') such that *both are nonnegative*, by noticing that if (m,n) is a valid choice, then (m+D,n+C) is also valid! It’s simply because

(m+D)C - (n+C)D = mC - nD + DC - CD = mC - nD.

But m+D and n+C are strictly greater than m and n, respectively, so we can apply this rule over and over again until both m and n become nonnegative!

This last piece proves that **the answer is \min((A - B) \bmod G, (B - A) \bmod G)**.
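The final formula translates directly into code. Here is a short Python sketch (the function name is mine; Python’s % operator already returns a nonnegative result for a positive modulus, matching the convention used above):

```python
from math import gcd

def min_difference(A: int, B: int, C: int, D: int) -> int:
    # Every reachable difference is |A - B + k*G| for some integer k,
    # so the answer is the smaller of the two residues mod G.
    G = gcd(C, D)
    return min((A - B) % G, (B - A) % G)

print(min_difference(11, 35, 50, 130))  # 4, as in the worked example
print(min_difference(1, 2, 2, 2))       # 1: parities can never match
```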

Another way to look at it is that we can (partially) strengthen Bézout’s identity to the following two statements:

**1. If G = \gcd(C,D) and C and D are positive, then there exist integers x > 0 and y < 0 such that Cx + Dy = G.**

**2. If G = \gcd(C,D) and C and D are positive, then there exist integers x < 0 and y > 0 such that Cx + Dy = G.**

The proof goes similarly: for any (x,y) satisfying Cx + Dy = G, the pairs (x + D, y - C) and (x - D, y + C) also satisfy it, so we can keep applying these shifts until we get the signs that we want. And since we choose (m,n) = (kx,-ky), then:

- If k \ge 0, then we can choose x > 0 and y < 0 to make (m,n) nonnegative.
- If k < 0, then we can choose x < 0 and y > 0 to make (m,n) nonnegative.
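The repeated-shift argument can be sketched in Python. This is a hypothetical helper (names mine): it takes any Bézout pair from the extended Euclidean algorithm and shifts it by (D, -C) or (-D, C) until x has the requested sign:

```python
def signed_bezout(C: int, D: int, positive_x: bool = True):
    """Return (g, x, y) with C*x + D*y == g == gcd(C, D),
    where x > 0 if positive_x is True, and x < 0 otherwise."""
    # Any Bézout pair, via the recursive extended Euclidean algorithm.
    def ext_gcd(a, b):
        if b == 0:
            return a, 1, 0
        g, x1, y1 = ext_gcd(b, a % b)
        return g, y1, x1 - (a // b) * y1

    g, x, y = ext_gcd(C, D)
    # (x + D, y - C) satisfies the same identity since C*D - D*C == 0,
    # so shift until x has the sign we want (the argument above).
    while (x <= 0) if positive_x else (x >= 0):
        x, y = (x + D, y - C) if positive_x else (x - D, y + C)
    return g, x, y
```

For positive C and D, forcing x > 0 makes y < 0 automatically (and vice versa), since Cx + Dy = G ≤ min(C, D) leaves no room for both terms to be large and positive.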

### Appendix: Bézout’s identity

Here we’ll prove Bézout’s identity:

**If G = \gcd(C,D), then there exist integers x and y such that Cx + Dy = G.**

We will actually prove this constructively, that is, by giving an algorithm that *finds* such x and y.

Our solution here will involve **Euclid’s algorithm** to compute GCDs. The basic Euclid recursion for the GCD is

\gcd(a,b) = \gcd(b, a \bmod b).

Euclid’s algorithm simply applies this over and over again until b becomes 0, in which case \gcd(a,0) = a.

Another way of looking at it is that we’re generating a sequence of numbers r_0, r_1, r_2, \ldots such that:

- r_0 = a
- r_1 = b
- r_i = r_{i-2} \bmod r_{i-1}

And we halt this sequence when r_i becomes 0, in which case r_{i-1} is the GCD!
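The remainder sequence can be generated directly. A minimal Python sketch (function name mine):

```python
def gcd_sequence(a: int, b: int):
    """Return the remainder sequence r_0, r_1, ... ending at 0;
    the GCD of a and b is the last nonzero term."""
    seq = [a, b]
    while seq[-1] != 0:
        seq.append(seq[-2] % seq[-1])
    return seq

print(gcd_sequence(602, 427))  # [602, 427, 175, 77, 21, 14, 7, 0]
```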

For example, suppose we want to find the GCD of C = 602 and D = 427. Then:

- r_0 = 602
- r_1 = 427
- r_2 = 602 \bmod 427 = 175
- r_3 = 427 \bmod 175 = 77
- r_4 = 175 \bmod 77 = 21
- r_5 = 77 \bmod 21 = 14
- r_6 = 21 \bmod 14 = 7
- r_7 = 14 \bmod 7 = 0

Since r_7 = 0, we find that the GCD is r_6, or 7.

But notice that “a\bmod b” is equivalent to “a - qb”, where q = \lfloor a/b \rfloor. Thus, every number is a linear combination of the numbers above it: r_i = r_{i-2} - qr_{i-1}, where q = \left\lfloor \frac{r_{i-2}}{r_{i-1}} \right\rfloor. By also noticing that C and D can be written as linear combinations of themselves: C = C\cdot 1 + D\cdot 0 and D = C\cdot 0 + D\cdot 1, we can express each subsequent value as a linear combination of C and D too, and by doing so we’re able to **extend** Euclid’s algorithm to actually find the coefficients x and y!

For example, using C = 602 and D = 427 again:

- 602 = C\cdot 1 + D\cdot 0
- 427 = C\cdot 0 + D\cdot 1
- 175 = C\cdot 1 + D\cdot (-1)
- 77 = C\cdot (-2) + D\cdot 3
- 21 = C\cdot 5 + D\cdot (-7)
- 14 = C\cdot (-17) + D\cdot 24
- 7 = C\cdot 22 + D\cdot (-31)

Thus, we establish that \gcd(C,D) = 7, and C\cdot 22 + D\cdot (-31) = 7! It’s easy to see that this method can also be applied to any pair (C,D). This is one version of the **extended Euclid’s algorithm**.
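The coefficient-tracking idea can be written iteratively: carry along, for each remainder, its expression as a linear combination of the two original numbers. A Python sketch (function name mine):

```python
def extended_gcd(a: int, b: int):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b).
    Invariant: old_r == a*old_x + b*old_y and r == a*x + b*y."""
    old_r, r = a, b
    old_x, x = 1, 0  # a = a*1 + b*0
    old_y, y = 0, 1  # b = a*0 + b*1
    while r != 0:
        q = old_r // r
        # each new remainder is old_r - q*r, so update the
        # coefficients with the same linear recurrence
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

print(extended_gcd(602, 427))  # (7, 22, -31)
```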

### Time Complexity:

O(\log \min(C, D))