KOL1511 - Editorial



Author: Piyush Kumar
Tester: Kevin Atienza
Editorialist: Kevin Atienza


Prerequisites: Euler’s totient function, Euclidean algorithm, gcd


Problem:

Given A, B and N satisfying A > B \ge 1, \text{gcd}(A,B) = 1, and N \ge 1, compute the following:

\sum_{i=1}^N \sum_{j=1}^N \text{gcd}(A^i - B^i, A^j - B^j)


Quick Explanation:

The sum is equivalent to:

\sum_{1 \le i, j \le N} \left[A^{\text{gcd}(i,j)} - B^{\text{gcd}(i,j)}\right]


Define:

\begin{aligned} S(N) &= \sum_{{ 1\le i,j\le N, \text{gcd}(i,j)=1}} 1 \\ F_a(N) &= \sum_{1 \le i, j \le N} a^{\text{gcd}(i,j)} \end{aligned}

Then the answer is F_A(N) - F_B(N), and we also have the following:

\begin{aligned} S(N) &= N^2 - \sum_{g=2}^N S(\left\lfloor N/g \right\rfloor) \\ F_a(N) &= \sum_{g=1}^N a^g S(\left\lfloor N/g \right\rfloor) \end{aligned}

Such a sum can be split into two parts, one with \lfloor N/g \rfloor \ge s and one with \lfloor N/g \rfloor < s, where s = \lfloor \sqrt{N} \rfloor; each part can then be computed in O(N^{1/2 + \varepsilon}) time.

By precomputing S(n) for n = 1 \ldots s and S(\left\lfloor N/g \right\rfloor) for g = s \ldots 1, F_a(N) (and thus the answer) can be computed in O(N^{3/4}) time. With the help of a sieve for smaller values, this can be reduced to O(N^{2/3} (\log \log N)^{1/3}).


Explanation:

Let’s study \text{gcd}(A^i - B^i, A^j - B^j) first, because obviously we won’t be able to reach anywhere without simplifying this somehow. (The numbers involved are very large!) Without loss of generality, assume i \ge j.

You might notice that (A - B) is a common factor of A^i - B^i and A^j - B^j because the following is true for any n:

A^n - B^n = (A - B)(A^{n-1} + A^{n-2}B + A^{n-3}B^2 + \ldots + AB^{n-2} + B^{n-1})


However, (A - B) is not necessarily the gcd! For one, if both i and j are even, then by the same principle (but replacing A by A^2 and B by B^2), A^2 - B^2 is a common factor too. In fact, we can generalize this:
If x divides y, then A^x - B^x divides A^y - B^y. Thus, we find that if g is the greatest common divisor of i and j, then A^g - B^g is a common divisor of A^i - B^i and A^j - B^j. But is this the greatest?
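As a quick numerical sanity check of this divisibility claim (Python, with arbitrary coprime sample values A = 7, B = 3 chosen here for illustration):

```python
A, B = 7, 3  # sample coprime values; any A > B >= 1 with gcd(A, B) = 1 works

# If x divides y, then A^x - B^x divides A^y - B^y.
for x in range(1, 8):
    for y in range(x, 30, x):  # y runs over multiples of x
        assert (A**y - B**y) % (A**x - B**x) == 0
print("divisibility verified")
```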

Let’s run the Euclidean algorithm on \text{gcd}(A^i - B^i, A^j - B^j) to find out. The main property that makes this algorithm true is the following:

Nice property: For any x, y and s, we have \text{gcd}(x,y) = \text{gcd}(x-sy,y)

This holds true even for nonpositive x, y and s, although one has to be careful with interpreting \text{gcd}(x,y). (For example, define \text{gcd}(x,y) as the positive gcd. There are other elegant ways to handle this.)

The Euclidean algorithm is just repeated application of this, with s = \left\lfloor x/y \right\rfloor. Noting that x \bmod y = x - \left\lfloor x/y \right\rfloor y, we have:

\begin{aligned} \text{gcd}(x,y) &= \text{gcd}(x \bmod y,y) \\ \text{gcd}(x,0) &= x \end{aligned}

The second one is just the base case.

So let’s use the nice property:

\begin{aligned} \text{gcd}(A^i - B^i, A^j - B^j) &= \text{gcd}(A^i - B^i - A^{i-j}(A^j - B^j), A^j - B^j) \\ &= \text{gcd}(A^i - B^i - A^i + A^{i-j} B^j, A^j - B^j) \\ &= \text{gcd}(A^{i-j} B^j - B^i, A^j - B^j) \\ &= \text{gcd}(B^j (A^{i-j} - B^{i-j}), A^j - B^j) \end{aligned}

What can we say about B^j and A^j - B^j? Note that we can use the nice property again to get \text{gcd}(B^j, A^j - B^j) = \text{gcd}(B^j, (A^j - B^j) + B^j) = \text{gcd}(B^j, A^j). But it’s given that A and B are coprime, so we know that \text{gcd}(B^j, A^j - B^j) = \text{gcd}(B^j,A^j) = 1. Thus, B^j and A^j - B^j have no common factors other than 1, so we find that \text{gcd}(B^j (A^{i-j} - B^{i-j}), A^j - B^j) is just \text{gcd}(A^{i-j} - B^{i-j}, A^j - B^j). Thus, we have the following identity:

\text{gcd}(A^i - B^i, A^j - B^j) = \text{gcd}(A^{i-j} - B^{i-j}, A^j - B^j)

Applying this rule s times, we get:

\text{gcd}(A^i - B^i, A^j - B^j) = \text{gcd}(A^{i-sj} - B^{i-sj},A^j - B^j)

Notice the similarity with the nice property! If we let s = \left\lfloor i / j \right\rfloor, we get the following:

\text{gcd}(A^i - B^i, A^j - B^j) = \text{gcd}(A^{i \bmod j} - B^{i \bmod j},A^j - B^j)

It feels like we’re taking the gcd of the exponents instead. In fact, A^0 - B^0 = 0, so even the base case holds: \text{gcd}(A^i - B^i, A^0 - B^0) = \text{gcd}(A^i - B^i, 0) = A^i - B^i, which means that, if we let F(x) = A^x - B^x, then:

\begin{aligned} \text{gcd}(F(i), F(j)) &= \text{gcd}(F(i \bmod j),F(j)) \\ \text{gcd}(F(i), F(0)) &= F(i) \end{aligned}

It really feels like we’re doing the Euclidean gcd on the exponents! In fact, using these two, we can prove by induction that:

\text{gcd}(F(i), F(j)) = F(\text{gcd}(i,j))

Thus, we have just shown that if g is the gcd of i and j, then A^g - B^g is the greatest common divisor of A^i - B^i and A^j - B^j!
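This identity is easy to spot-check numerically (Python, with arbitrary coprime sample values A = 10, B = 3):

```python
from math import gcd

def F(x, A, B):
    # F(x) = A^x - B^x, as in the editorial
    return A**x - B**x

A, B = 10, 3  # any coprime pair works
for i in range(1, 20):
    for j in range(1, 20):
        # gcd(F(i), F(j)) = F(gcd(i, j))
        assert gcd(F(i, A, B), F(j, A, B)) == F(gcd(i, j), A, B)
print("identity verified")
```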

Now, we can proceed with the sum.

\sum_{1 \le i, j \le N} \text{gcd}(A^i-B^i, A^j-B^j) = \sum_{1 \le i, j \le N} \left[A^{\text{gcd}(i,j)} - B^{\text{gcd}(i,j)}\right]

We can separate this into two sums:

= \sum_{1 \le i, j \le N} A^{\text{gcd}(i,j)} - \sum_{1 \le i, j \le N} B^{\text{gcd}(i,j)}

Let’s define F_a(N) = \sum_{1 \le i, j \le N} a^{\text{gcd}(i,j)}. Thus, the answer is F_A(N) - F_B(N), so we can just focus on computing F_a(N) for a given a and N.

Computing F_a(N) quickly

To compute F_a(N) = \sum_{1 \le i, j \le N} a^{\text{gcd}(i,j)}, we can instead loop over all possible gcds g and count the number of pairs 1 \le i, j \le N whose gcd is g. In other words:

\begin{aligned} F_a(N) &= \sum_{1 \le i, j \le N} a^{\text{gcd}(i,j)} \\ &= \sum_{g=1}^N \sum_{{1 \le i, j \le N, \text{gcd}(i,j) = g}} a^g \\ &= \sum_{g=1}^N a^g \sum_{{1 \le i, j \le N, \text{gcd}(i,j) = g}} 1 \end{aligned}

But if \text{gcd}(i,j)=g, then i and j are multiples of g, so let i = i'g and j = j'g, to get:

\begin{aligned} &= \sum_{g=1}^N a^g \sum_{{1 \le i'g, j'g \le N, \text{gcd}(i'g,j'g) = g}} 1 \\ &= \sum_{g=1}^N a^g \sum_{{1 \le i', j' \le \left\lfloor N/g \right\rfloor, \text{gcd}(i',j') = 1}} 1 \end{aligned}

Define two new functions:

\begin{aligned} S(N) &= \sum_{{ 1\le i,j\le N, \text{gcd}(i,j)=1}} 1 \\ T(N) &= \sum_{1 \le i, j \le N} 1 \end{aligned}

With these, F_a(N) is just \sum_{g=1}^N a^g S(\left\lfloor N/g \right\rfloor). Thus, we want to compute the values S(\left\lfloor N/g \right\rfloor) quickly.

The former, S(N), is just the number of coprime pairs in the [1,N]\times[1,N] grid of lattice points, while T(N) is just the total number of pairs, which is N^2. However, by again looping over all possible gcds g, we also have:

\begin{aligned} T(N) &= \sum_{1 \le i, j \le N} 1 \\ &= \sum_{g=1}^N \sum_{{1 \le i, j \le N, \text{gcd}(i,j) = g}} 1 \\ &= \sum_{g=1}^N \sum_{{1 \le i', j' \le \left\lfloor N/g \right\rfloor, \text{gcd}(i',j') = 1}} 1 \\ &= \sum_{g=1}^N S(\left\lfloor N/g \right\rfloor) \end{aligned}

This gives us the following recurrence:

S(N) = N^2 - \sum_{g=2}^N S(\left\lfloor N/g \right\rfloor)
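As a sanity check, this recurrence can be verified against a brute-force coprime-pair count for small N (Python):

```python
from math import gcd

def S_brute(n):
    # count coprime pairs (i, j) with 1 <= i, j <= n
    return sum(1 for i in range(1, n + 1)
                 for j in range(1, n + 1) if gcd(i, j) == 1)

# S(N) = N^2 - sum_{g=2}^{N} S(N // g)
for N in range(1, 40):
    assert S_brute(N) == N * N - sum(S_brute(N // g) for g in range(2, N + 1))
print("recurrence verified")
```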

Amazingly, we can convert this into an algorithm to compute S(N) in o(N) (sublinear) time! To do so, notice that there aren’t very many distinct values possible for \left\lfloor N/g \right\rfloor. Indeed, when g \ge \sqrt{N}, N/g becomes \le \sqrt{N}, so after the first \sqrt{N} terms, there are only \sqrt{N} distinct values left for \left\lfloor N/g \right\rfloor to take. Thus, there are at most 2\sqrt{N} distinct values of \left\lfloor N/g \right\rfloor!
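A quick empirical check of this bound (Python):

```python
from math import isqrt

N = 10**6
# all distinct values taken by floor(N/g)
values = {N // g for g in range(1, N + 1)}
assert len(values) <= 2 * isqrt(N)
print(len(values))  # far fewer than N
```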

To use this to simplify the sum, we group together the summands with the same \left\lfloor N/g \right\rfloor value. First, we split it into two halves. Let s = \left\lfloor \sqrt{N} \right\rfloor. Then:

S(N) = N^2 - \sum_{{g=2, \left\lfloor N/g \right\rfloor \ge s}}^N S(\left\lfloor N/g \right\rfloor) - \sum_{{g=2, \left\lfloor N/g \right\rfloor < s}}^N S(\left\lfloor N/g \right\rfloor)

The first summation only has at most \sqrt{N} terms, so it can be computed naïvely. Let’s focus on the second sum:

\sum_{{g=2, \left\lfloor N/g \right\rfloor < s}}^N S(\left\lfloor N/g \right\rfloor)

We can instead iterate over all distinct values of \left\lfloor N/g \right\rfloor.

\begin{aligned} &= \sum_{v=1}^{s-1} \sum_{{g=2, \left\lfloor N/g \right\rfloor = v}}^N S(v)\\ &= \sum_{v=1}^{s-1} S(v) \sum_{{g=2, \left\lfloor N/g \right\rfloor = v}}^N 1 \end{aligned}

Which values of g satisfy \left\lfloor N/g \right\rfloor = v? Simple manipulation gives the following list:

\left\lfloor \frac{N}{v+1} \right\rfloor + 1, \left\lfloor \frac{N}{v+1} \right\rfloor + 2, \left\lfloor \frac{N}{v+1} \right\rfloor + 3, \ldots, \left\lfloor \frac{N}{v} \right\rfloor - 1, \left\lfloor \frac{N}{v} \right\rfloor

There are \left\lfloor \frac{N}{v} \right\rfloor - \left\lfloor \frac{N}{v+1} \right\rfloor such numbers. Therefore, the sum above is equal to:

\sum_{v=1}^{s-1} S(v) \left(\left\lfloor \frac{N}{v} \right\rfloor - \left\lfloor \frac{N}{v+1} \right\rfloor \right)

Such a sum can be computed in O(s) = O(\sqrt{N}) time! Thus, S(N) can be computed in O(\sqrt{N}) time assuming we have the necessary smaller values of S.
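The range of g used above is easy to verify empirically (Python; `g_range` is a helper name introduced here purely for illustration):

```python
def g_range(N, v):
    """Return the (possibly empty) range of g with N // g == v."""
    return range(N // (v + 1) + 1, N // v + 1)

N = 1000
for v in range(1, N + 1):
    R = g_range(N, v)
    for g in range(1, N + 1):
        # g satisfies floor(N/g) = v exactly when it lies in the range
        assert (N // g == v) == (g in R)
print("range formula verified")
```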

By first computing S(n) for n = 1, 2, \ldots, \sqrt{N} and then S(\left\lfloor N/g \right\rfloor) for g = \left\lfloor \sqrt{N} \right\rfloor, \left\lfloor \sqrt{N} \right\rfloor -1, \ldots, 1 using this sum, one can compute S(N) quickly. (And indeed, all S(\left\lfloor N/g \right\rfloor) together with it.) One can work out that the running time of this algorithm is O(N^{3/4}), which is sublinear in N!
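A minimal sketch of this computation (Python): S is memoized over the distinct values \left\lfloor N/g \right\rfloor, and the grouping trick above is applied inside the recurrence. `make_S` is a name introduced here for illustration.

```python
from math import gcd

def make_S():
    memo = {}
    def S(n):
        # S(n) = n^2 - sum_{g>=2} S(n // g), grouping equal values of n // g
        if n <= 1:
            return n  # S(0) = 0, S(1) = 1
        if n in memo:
            return memo[n]
        res = n * n
        g = 2
        while g <= n:
            v = n // g
            g2 = n // v            # largest g' with n // g' == v
            res -= (g2 - g + 1) * S(v)
            g = g2 + 1
        memo[n] = res
        return res
    return S

S = make_S()
# cross-check against brute force on small n
for n in range(1, 60):
    brute = sum(1 for i in range(1, n + 1)
                  for j in range(1, n + 1) if gcd(i, j) == 1)
    assert S(n) == brute
print(S(10**6))  # runs in roughly O(N^{3/4}) time
```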

What about F_a(N) itself? Remember that:

F_a(N) = \sum_{g=1}^N a^g S(\left\lfloor N/g \right\rfloor)

We can group the summands into two cases similarly as above, but this time, the second sum is more complicated:

\sum_{{g=1, \left\lfloor N/g \right\rfloor < s}}^N a^g S(\left\lfloor N/g \right\rfloor)

If we proceed similarly as above, we will get the following equivalent sum:

= \sum_{v=1}^{s-1} S(v) \sum_{{g=1, \left\lfloor N/g \right\rfloor = v}}^N a^g

Thankfully, the values of g satisfying \left\lfloor N/g \right\rfloor = v form a run of consecutive integers, as we have seen above. And summing a^g over a range of consecutive integers can be done with a simple geometric series formula:

a^{x+1} + a^{x+2} + \ldots + a^y = \frac{a^{y+1} - a^{x+1}}{a - 1}

Such a number can be computed with fast modular exponentiation and computing the modular inverse of (a-1). (If a = 1, then a-1 is not invertible, but in that case we simply have F_a(N) = N^2.) Thus, F_a(N) can be computed in O(\sqrt{N} \log N) time! (Assuming we have already computed the S(\left\lfloor N/g \right\rfloor) s.)
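A sketch of this geometric-range sum under a prime modulus (Python; the modulus 10^9 + 7 is an assumption here, the actual problem statement fixes its own):

```python
MOD = 10**9 + 7  # assumed prime modulus

def geom_range(a, x, y, mod=MOD):
    """Return (a^(x+1) + a^(x+2) + ... + a^y) mod `mod`."""
    if a % mod == 1:
        return (y - x) % mod  # a = 1: the terms are all 1
    num = (pow(a, y + 1, mod) - pow(a, x + 1, mod)) % mod
    # divide by (a - 1) via Fermat's little theorem (mod must be prime)
    return num * pow(a - 1, mod - 2, mod) % mod

# sanity check against direct summation
a, x, y = 3, 2, 7
assert geom_range(a, x, y) == sum(pow(a, g, MOD) for g in range(x + 1, y + 1)) % MOD
print(geom_range(a, x, y))  # 3^3 + ... + 3^7 = 3267
```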

The overall running time is O(N^{3/4}), dominated by the computation of the S(\left\lfloor N/g \right\rfloor) s.


One can improve this running time by sieving the small values of S. First, remember that:

T(N) = \sum_{g=1}^N S\left(\left\lfloor \frac{N}{g} \right\rfloor\right)

Now, consider T(N) - T(N-1). Let’s look at T(N-1) first:

T(N-1) = \sum_{g=1}^N S\left(\left\lfloor \frac{N-1}{g} \right\rfloor\right)

We have allowed g to go up to N because \left\lfloor \frac{N-1}{g} \right\rfloor will become 0 anyway. Thus:

T(N) - T(N-1) = \sum_{g=1}^N \left(S\left(\left\lfloor \frac{N}{g} \right\rfloor\right) - S\left(\left\lfloor \frac{N-1}{g} \right\rfloor\right)\right)

Now, \left\lfloor \frac{N-1}{g} \right\rfloor is usually equal to \left\lfloor \frac{N}{g} \right\rfloor. The only times this doesn’t happen are when g is a divisor of N, in which case we have \left\lfloor \frac{N-1}{g} \right\rfloor = \left\lfloor \frac{N}{g} \right\rfloor - 1. Thus:

\begin{aligned} T(N) - T(N-1) &= \sum_{g=1}^N \left[S\left(\left\lfloor \frac{N}{g} \right\rfloor\right) - S\left(\left\lfloor \frac{N-1}{g} \right\rfloor\right)\right] \\ &= \sum_{g|N} \left[S\left(\left\lfloor \frac{N}{g} \right\rfloor\right) - S\left(\left\lfloor \frac{N-1}{g} \right\rfloor\right)\right] \\ &= \sum_{g|N} \left[S\left(\left\lfloor N/g \right\rfloor\right) - S\left(\left\lfloor N/g \right\rfloor - 1\right)\right] \\ &= \sum_{g|N} \left[S\left(N/g\right) - S\left(N/g - 1\right)\right] \\ &= \sum_{g|N} \left[S\left(g\right) - S\left(g - 1\right)\right] \end{aligned}

We can extract S(N) - S(N-1) from that last equality to get:

S(N) - S(N-1) = T(N) - T(N-1) - \sum_{{g|N, g < N}} \left[S\left(g\right) - S\left(g - 1\right)\right]

This equality allows us to create a sieve to compute S(n) for n = 1 \ldots U quickly, as shown below (noting that T(N) - T(N-1) is just 2N - 1):

S = array[0..U]
S[0] = 0
for n in 1..U:
    S[n] = 2*n - 1

for g in 1..U:
    # at this point, S[g] holds the difference S(g) - S(g-1)
    for n in 2*g..U by g:
        S[n] -= S[g]
    S[g] += S[g-1]

After running this code, S[n] now contains S(n) for n = 1 \ldots U. And since for each g we are only iterating through multiples of g, the running time is O(U/1 + U/2 + U/3 + \ldots + U/U) = O(U \log U).
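For reference, a direct Python translation of this sieve, cross-checked against brute force for small n (`sieve_S` is a name introduced here):

```python
from math import gcd

def sieve_S(U):
    """Sieve S(1..U) via S(n) - S(n-1) = (2n - 1) - sum_{g|n, g<n} [S(g) - S(g-1)]."""
    S = [0] * (U + 1)
    for n in range(1, U + 1):
        S[n] = 2 * n - 1              # start from T(n) - T(n-1)
    for g in range(1, U + 1):
        # here S[g] already equals the difference S(g) - S(g-1)
        for n in range(2 * g, U + 1, g):
            S[n] -= S[g]              # remove this divisor's contribution
        S[g] += S[g - 1]              # turn the difference into S(g) itself
    return S

U = 200
S = sieve_S(U)
for n in range(1, 61):
    assert S[n] == sum(1 for i in range(1, n + 1)
                         for j in range(1, n + 1) if gcd(i, j) == 1)
print("sieve verified")
```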

To compute all S(\left\lfloor N/g \right\rfloor) quicker than O(N^{3/4}), we first choose a value for U, then compute S(n) for n = 1 \ldots U using the sieve above, and finally compute S(\left\lfloor N/g \right\rfloor) for \left\lfloor N/g \right\rfloor > U using our original O(\sqrt{N/g}) algorithm above. It turns out that the optimal choice for U is \Theta\left(\left(\frac{N}{\log N}\right)^{2/3}\right), which gives us an overall running time of O(N^{2/3} (\log N)^{1/3}).
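Putting the pieces together, here is a sketch of the end-to-end O(N^{3/4}) solution (without the sieve refinement), computing F_A(N) - F_B(N) under an assumed prime modulus 10^9 + 7; the actual problem fixes its own modulus, and `solve` is a name introduced here for illustration.

```python
from math import gcd

MOD = 10**9 + 7  # assumed prime modulus

def solve(A, B, N, mod=MOD):
    memo = {}
    def S(n):
        # S(n) mod p via S(n) = n^2 - sum_{g>=2} S(n // g), grouping equal n // g
        if n <= 1:
            return n
        if n in memo:
            return memo[n]
        res = n % mod * (n % mod) % mod
        g = 2
        while g <= n:
            v = n // g
            g2 = n // v
            res = (res - (g2 - g + 1) * S(v)) % mod
            g = g2 + 1
        memo[n] = res
        return res

    def geom(a, lo, hi):
        # a^lo + a^(lo+1) + ... + a^hi (mod p)
        if a % mod == 1:
            return (hi - lo + 1) % mod
        return (pow(a, hi + 1, mod) - pow(a, lo, mod)) * pow(a - 1, mod - 2, mod) % mod

    def F(a):
        # F_a(N) = sum_g a^g S(N // g), grouping runs of g with equal N // g
        total, g = 0, 1
        while g <= N:
            v = N // g
            g2 = N // v
            total = (total + S(v) * geom(a, g, g2)) % mod
            g = g2 + 1
        return total

    return (F(A) - F(B)) % mod

# brute-force cross-check on a small input
A, B, N = 3, 2, 12
brute = sum(gcd(A**i - B**i, A**j - B**j)
            for i in range(1, N + 1) for j in range(1, N + 1)) % MOD
assert solve(A, B, N) == brute
print(solve(A, B, N))
```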

Finally, we mention that in fact there’s a slightly faster sieve than the above, which runs in O(U \log \log U) time rather than O(U \log U). With such a sieve, and with a different choice for U, the running time becomes O(N^{2/3} (\log \log N)^{1/3}). We leave finding this sieve to the reader. (Hint: Express S(n) using Euler’s totient function.)

Time Complexity:

O(N^{3/4}), O(N^{2/3} (\log N)^{1/3}), or O(N^{2/3} (\log \log N)^{1/3})


