MGCHPERM - Editorial

PROBLEM LINK:

Contest
Practice

Author: Misha Chorniy
Tester: Sergey Kulik
Editorialist: Kevin Atienza

PREREQUISITES:

Convex polygon, linear recurrences, polynomial interpolation, precomputation

PROBLEM:

Let F(a_1, \ldots, a_k) = 1 if it is possible to form a convex polygon with side lengths a_1, \ldots, a_k, and 0 otherwise.

Given k and n, find the sum of F(a_1, \ldots, a_k) among all distinct size-k subsets \{a_1, \ldots, a_k\} of \{1, 2, \ldots, n\}.

Note that the problem has nothing to do with permutations.

QUICK EXPLANATION:

If a_1 < a_2 < \ldots < a_k, then they can form a convex polygon if and only if a_1 + a_2 + \ldots + a_{k-1} > a_k. Thus, we’re counting subsets that satisfy this inequality.

It’s easier to count the number of subsets that satisfy a_1 + a_2 + \ldots + a_{k-1} \le a_k, and just subtract this count from the total number of subsets, which is {n \choose k}.

Let C_k(n) be the number of size-k sets of positive integers whose sum is \le n.
Let T_k(n) be the sum of all elements of all such sets.

The answer is {n \choose k} - \left((n+1)C_{k-1}(n) - T_{k-1}(n)\right).

We can compute these functions using the following recurrences:

C_k(n) = \begin{cases} 0 & \text{if $n < k$} \\ 1 & \text{if $k = 0$} \\ C_k(n-k) + C_{k-1}(n-k) & \text{otherwise} \end{cases}
T_k(n) = \begin{cases} 0 & \text{if $n < k$} \\ 0 & \text{if $k = 0$} \\ kC_k(n) + T_k(n-k) + T_{k-1}(n-k) & \text{otherwise} \end{cases}

Also, for each k and r:

  • C_k(x k! + r) is a polynomial in x of degree k
  • T_k(x k! + r) is a polynomial in x of degree k+1

So for every k \le 7 and 0 \le r < k!, precompute the coefficients of the polynomials C_k(x k! + r) and T_k(x k! + r). With these coefficients, we can compute C_k(n) and T_k(n) in O(k) time each (by using the correct polynomial), so each test case can be answered in O(k) time.

EXPLANATION:

Computing F(a_1, \ldots, a_k)

We want to answer the following question:

Given the numbers a_1, \ldots, a_k, is it possible to form a convex polygon with these side lengths?

Triangle inequality

The case k = 3 is a bit easier. Given three numbers a_1, a_2 and a_3, can we form
a triangle with these sides? The famous [triangle inequality](https://en.wikipedia.org/wiki/Triangle_inequality#Euclidean_geometry) says that this is possible if and only if the sum of the lengths of any two sides is greater than the length of the remaining side. In other words:

a_1 + a_2 > a_3, \quad a_2 + a_3 > a_1, \quad a_3 + a_1 > a_2

If we assume that a_3 is the largest side, then only one inequality is needed:

a_1 + a_2 > a_3

It’s easy to see why this is necessary: let S_1, S_2 and S_3 be the sides with lengths a_1, a_2 and a_3 respectively. Now, S_1 and S_2 share an endpoint, and the other endpoints of S_1 and S_2 must be the endpoints of S_3. The shortest path connecting the endpoints of S_3 is through S_3 itself, so a_3 is the length of the shortest path. Since S_1 followed by S_2 forms another path between the endpoints of S_3, it must be the case that a_1 + a_2 \ge a_3. Finally, a_1 + a_2 = a_3 if and only if the path through S_1 and S_2 coincides with S_3 itself, but then the triangle is degenerate (i.e. it has zero area), and such polygons are not counted in the problem. So it must be that a_1 + a_2 > a_3.

We can easily generalize this proof to obtain an inequality for general polygons: The length of the longest side must be less than the sum of the lengths of the other sides.

But is this condition sufficient? That is, is it possible to form the convex polygon, assuming the lengths satisfy our polygon inequality? The answer is yes, and we will give a construction here.

Constructing the polygon

Let a_1, a_2, \ldots, a_{k-1}, a_k be the lengths satisfying the polygon inequality. Also, assume that a_k is the largest value. Thus, a_1 + a_2 + \ldots + a_{k-1} > a_k. First, we will construct a [polygonal path](https://en.wikipedia.org/wiki/Polygonal_chain) with vertices V_0, V_1, V_2, \ldots, V_k such that |V_i - V_{i-1}| = a_i for 1 \le i \le k.

We start with a horizontal line, and place the points V_0, \ldots, V_k on it. First, place V_0 anywhere. Then place V_1 to the right of V_0 such that |V_1 - V_0| = a_1. Then place V_2 to the right of V_1 such that |V_2 - V_1| = a_2. Continue doing this until V_{k-1}. Finally, place V_k, but this time to the left of V_{k-1}, such that |V_k - V_{k-1}| = a_k. Note that because of the polygon inequality, V_k is strictly between V_0 and V_{k-1}.

Now, bend the line into an arc (initially with very low curvature), but keep the points on the arc and maintain the distances between them. Then gradually shrink the radius of the arc. As you do this, V_k gets closer and closer to V_0. Stop when V_k coincides with V_0. Once V_0 and V_k coincide, we have our polygon with sides a_1, \ldots, a_k. Moreover, since the vertices lie on a circle, this polygon is convex. (Such a polygon is called a [cyclic polygon](https://en.wikipedia.org/wiki/Circumscribed_circle#Cyclic_n-gons).)

Thus, assuming that a_1, \ldots, a_k are sorted, we have just shown that:

F(a_1, \ldots, a_k) = \begin{cases} 1 & \text{if $a_1 + a_2 + \ldots + a_{k-1} > a_k$} \\ 0 & \text{otherwise} \end{cases}

We can express this succinctly with the [Iverson bracket](https://en.wikipedia.org/wiki/Iverson_bracket) notation:

F(a_1, \ldots, a_k) = [a_1 + a_2 + \ldots + a_{k-1} > a_k]

This assumes that a_1, \ldots, a_k are sorted. If they’re not, simply sort them first. The order of the $a_i$’s doesn’t matter anyway.
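
To make this concrete, here is a minimal Python sketch of F (the function name and the test values are ours, not from the problem):

```python
def F(sides):
    """Return 1 if the side lengths can form a (non-degenerate) convex polygon.

    Sort the lengths, then check that the largest one is strictly smaller
    than the sum of all the others.
    """
    a = sorted(sides)
    return 1 if sum(a[:-1]) > a[-1] else 0

assert F([3, 4, 5]) == 1   # an ordinary triangle
assert F([1, 2, 3]) == 0   # degenerate: 1 + 2 = 3
```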

Computing \sum_a F(a_1, \ldots, a_k)

We want to know the sum of F(a_1, \ldots, a_k) among all size-k subsets \{a_1, \ldots, a_k\} of \{1, \ldots, n\}. This sum is equivalent to the number of subsets having F(a_1, \ldots, a_k) = 1. Let G(k,n) be this sum. Since we are talking about subsets, it doesn’t matter what order a_1, \ldots, a_k are in, so we can assume they’re sorted, and so G(k,n) is just the number of increasing sequences (a_1, \ldots, a_k) such that a_1 + a_2 + \ldots + a_{k-1} > a_k.

It’s actually easier to count the number of sequences such that a_1 + a_2 + \ldots + a_{k-1} \le a_k. Let’s call this number L(k,n). Note that G(k,n) + L(k,n) is simply the number of subsets of \{1, \ldots, n\} of size k, which is just {n \choose k}, so once we have L(k,n) we can easily get G(k,n) as {n \choose k} - L(k,n). So let’s focus on L(k,n).

Using the Iverson bracket notation, we have:

L(k,n) = \sum_{1 \le a_1 < a_2 < \ldots < a_k \le n} [a_1 + a_2 + \ldots + a_{k-1} \le a_k]

Next, we separate a _ k into its own summation:

L(k,n) = \sum_{1 \le a_1 < a_2 < \ldots < a_{k-1} \le n} \sum_{a_k} [a_{k-1} < a_k \le n][a_1 + \ldots + a_{k-1} \le a_k]

However, since a_k is already \ge the sum of all the other lengths, the condition a_{k-1} < a_k is redundant, so we have:

\begin{aligned} L(k,n) &= \sum_{1 \le a_1 < a_2 < \ldots < a_{k-1}} \sum_{a_k} [a_k \le n][a_1 + \ldots + a_{k-1} \le a_k] \\ &= \sum_{1 \le a_1 < a_2 < \ldots < a_{k-1}} \sum_{a_k} [a_1 + \ldots + a_{k-1} \le a_k \le n] \\ &= \sum_{\substack{1 \le a_1 < a_2 < \ldots < a_{k-1} \\ a_1 + \ldots + a_{k-1} \le n}} (n - (a_1 + \ldots + a_{k-1}) + 1) \\ &= \sum_{\substack{1 \le a_1 < a_2 < \ldots < a_{k-1} \\ a_1 + \ldots + a_{k-1} \le n}} ((n+1) - (a_1 + \ldots + a_{k-1})) \\ &= (n+1)\left(\sum_{\substack{1 \le a_1 < a_2 < \ldots < a_{k-1} \\ a_1 + \ldots + a_{k-1} \le n}} 1\right) - \left(\sum_{\substack{1 \le a_1 < a_2 < \ldots < a_{k-1} \\ a_1 + \ldots + a_{k-1} \le n}} (a_1 + \ldots + a_{k-1})\right) \end{aligned}

Let’s define two new functions:

\begin{aligned} C_k(n) &= \sum_{\substack{1 \le a_1 < a_2 < \ldots < a_k \\ a_1 + \ldots + a_k \le n}} 1 \\ T_k(n) &= \sum_{\substack{1 \le a_1 < a_2 < \ldots < a_k \\ a_1 + \ldots + a_k \le n}} (a_1 + \ldots + a_k) \end{aligned}

C_k(n) is simply the number of positive increasing sequences of length k whose sum is \le n, and T_k(n) is the sum of the elements of all such sequences.

Using these functions, we now have:

L(k,n) = (n+1) C_{k-1}(n) - T_{k-1}(n)

Thus, to compute L(k,n) we need to be able to compute C_k(n) and T_k(n) quickly.

Computing C_k and T_k

Let’s first derive a formula for C_k(n). Let (a_1, \ldots, a_k) be a positive increasing sequence satisfying:

a_1 + a_2 + \ldots + a_k \le n

Let b_i = a_i - 1. Thus, we have:

\begin{aligned} (b_1+1) + (b_2+1) + \ldots + (b_k+1) &\le n \\ (b_1 + b_2 + \ldots + b_k) + k &\le n \\ b_1 + b_2 + \ldots + b_k &\le n - k \end{aligned}

The $a_i$’s are positive and increasing, so a_i \ge 2 for i \ge 2, which means the $b_i$’s must be at least 1, except possibly b_1. So we consider two cases, depending on whether b_1 is zero or not:

  • If b_1 is zero, then (b_2, \ldots, b_k) is a size-(k-1) positive increasing sequence whose sum is \le n-k.
  • If b_1 is nonzero, then (b_1, b_2, \ldots, b_k) is a size-k positive increasing sequence whose sum is \le n-k.

Since any sequence (a_1, \ldots, a_k) with sum \le n corresponds to exactly one of these cases, we can count these two kinds of sequences and combine the counts to obtain C_k(n).

  • There are C_{k-1}(n-k) sequences corresponding to the case b_1 = 0.
  • There are C_k(n-k) sequences corresponding to the case b_1 \not= 0.

So we have:

C_k(n) = C_{k-1}(n-k) + C_k(n-k)

This gives us a neat recurrence for C_k(n)! For base cases, we have C_k(n) = 0 if n < \frac{k(k+1)}{2}, and C_0(n) = 1. (The latter corresponds to the empty tuple.)

Now, we can do the same for T_k(n), except that this time we sum the sequences, giving T_k(n-k) + T_{k-1}(n-k). However, note that the sum of the $b_i$’s is always k less than the sum of the corresponding $a_i$’s, so we need to add k back once for every sequence. Since there are C_k(n) such sequences, we have:

T_k(n) = kC_k(n) + T_{k-1}(n-k) + T_k(n-k)

For base cases, we have T_k(n) = 0 if n < \frac{k(k+1)}{2}, and T_0(n) = 0.

With these recurrences, a dynamic programming approach to computing C_k(n) and T_k(n) is now possible, and we can solve the problem in O(nk) time! We can also precompute these values in O(n_{\max} k_{\max}) time before processing any queries, so that computing C_k(n) and T_k(n) takes only O(1) after that.
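
Here is a minimal Python sketch of this DP (the function names are ours); answer combines the tables with the identity G(k,n) = {n \choose k} - L(k,n) derived above:

```python
from math import comb

def build_tables(kmax, nmax):
    """DP tables with C[k][n] = C_k(n) and T[k][n] = T_k(n), via
       C_k(n) = C_{k-1}(n-k) + C_k(n-k)
       T_k(n) = k*C_k(n) + T_{k-1}(n-k) + T_k(n-k),
    with C_0(n) = 1, T_0(n) = 0, and both equal to 0 when n < k."""
    C = [[0] * (nmax + 1) for _ in range(kmax + 1)]
    T = [[0] * (nmax + 1) for _ in range(kmax + 1)]
    for n in range(nmax + 1):
        C[0][n] = 1
    for k in range(1, kmax + 1):
        for n in range(k, nmax + 1):
            C[k][n] = C[k - 1][n - k] + C[k][n - k]
            T[k][n] = k * C[k][n] + T[k - 1][n - k] + T[k][n - k]
    return C, T

def answer(k, n, C, T):
    """G(k,n) = C(n,k) - L(k,n), where L(k,n) = (n+1)*C_{k-1}(n) - T_{k-1}(n)."""
    return comb(n, k) - ((n + 1) * C[k - 1][n] - T[k - 1][n])

C, T = build_tables(7, 1000)
assert answer(3, 4, C, T) == 1   # among 3-subsets of {1..4}, only {2,3,4} works
```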

The only drawback of this algorithm is that it doesn’t work for the last subtask, because of time and memory constraints. For that, we will need a few additional tricks.

Faster way to compute C_k and T_k

C_k(n) and T_k(n) both satisfy linear recurrences, which means matrix exponentiation methods can be used. However, due to the constraints of the problem and the sizes of the matrices involved, these methods may be slow.

For example, there is an O(k^6 \log n)-time algorithm, obtained by constructing a \frac{k(k+1)}{2}\times \frac{k(k+1)}{2} matrix and raising it to the power n. Using more [advanced](https://en.wikipedia.org/wiki/Berlekamp%E2%80%93Massey_algorithm) recurrence-based techniques, one can reduce this to O(k^4 \log n) or even O(k^2 \log k \log n). However, there is a much simpler solution that doesn’t use any of these techniques, which we’ll describe below.

First, let’s try to derive a formula for C_1(n). We have C_1(n) = C_0(n-1) + C_1(n-1), but C_0(n-1) is simply 1, so:

\begin{aligned} C_1(n) &= C_0(n-1) + C_1(n-1) \\ &= 1 + C_1(n-1) \\ &= 1 + 1 + C_1(n-2) \\ &= 1 + 1 + 1 + C_1(n-3) \\ &\;\;\vdots \\ &= \underbrace{1 + 1 + 1 + \cdots + 1}_{n} \\ &= n \end{aligned}

which gives us the formula C_1(n) = n. Great!

Next, let’s try T_1(n). Similarly, we have T_1(n) = 1 \cdot C_1(n) + T_0(n-1) + T_1(n-1), but T_0(n-1) is just 0, and we already know C_1(n) = n, so:

\begin{aligned} T_1(n) &= n + T_1(n-1) \\ &= n + (n-1) + T_1(n-2) \\ &= n + (n-1) + (n-2) + T_1(n-3) \\ &\;\;\vdots \\ &= n + (n-1) + (n-2) + \cdots + 1 \\ &= \frac{n(n+1)}{2} \end{aligned}

So we have T_1(n) = \frac{n(n+1)}{2}. Great!

Let’s continue. Suppose we want to get C_2(n). We have C_2(n) = C_1(n-2) + C_2(n-2), but we already know that C_1(n-2) = n-2, so:

\begin{aligned} C_2(n) &= (n-2) + C_2(n-2) \\ &= (n-2) + (n-4) + C_2(n-4) \\ &= (n-2) + (n-4) + (n-6) + C_2(n-6) \\ &\;\;\vdots \\ &= (n-2) + (n-4) + (n-6) + \ldots \end{aligned}

This is a little trickier, because, unlike C_1(n) and T_1(n), the result is not a single polynomial. So we consider two cases, depending on whether n is even or odd.

  • If n is even, let n = 2m. Then:
\begin{aligned} C_2(2m) &= (2m-2) + (2m-4) + (2m-6) + \ldots + 2 \\ &= 2[(m-1) + (m-2) + (m-3) + \ldots + 1] \\ &= 2\cdot\frac{m(m-1)}{2} \\ &= m(m-1) \end{aligned}
  • If n is odd, let n = 2m+1. Then:
\begin{aligned} C_2(2m+1) &= (2m-1) + (2m-3) + (2m-5) + \ldots + 1 \\ &= [(2m-2) + (2m-4) + (2m-6) + \ldots + 2] + \underbrace{1 + 1 + \ldots + 1}_m \\ &= m(m-1) + m \\ &= m^2 \end{aligned}

Thus, we have the formulas

\begin{aligned} C_2(2m) &= m(m-1) \\ C_2(2m+1) &= m^2 \end{aligned}

which are nice simple polynomials! Similarly, we can derive the following formulas:

\begin{aligned} T_2(2m) &= \frac{m(m-1)(8m+5)}{6} \\ T_2(2m+1) &= \frac{m(m+1)(8m+1)}{6} \end{aligned}

which are also nice simple polynomials!

These formulas can actually be combined into one:

\begin{aligned} C_2(n) &= \frac{2n^2-4n+1 - (-1)^n}{8} \\ T_2(n) &= \frac{8n^3-6n^2-14n+3 - (-1)^n(6n+3)}{48} \end{aligned}

However, note that these are not polynomials.
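
As a sanity check, we can compare both closed forms against brute force (the integer divisions below are exact):

```python
from itertools import combinations

def brute_C2(n):
    # number of pairs a < b with a + b <= n
    return sum(1 for a, b in combinations(range(1, n + 1), 2) if a + b <= n)

def brute_T2(n):
    # total of a + b over those same pairs
    return sum(a + b for a, b in combinations(range(1, n + 1), 2) if a + b <= n)

for n in range(2, 60):
    s = (-1) ** n
    assert brute_C2(n) == (2 * n * n - 4 * n + 1 - s) // 8
    assert brute_T2(n) == (8 * n**3 - 6 * n * n - 14 * n + 3 - s * (6 * n + 3)) // 48
```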

At this point, we start to see a pattern: the formulas that compose C_k and T_k seem to be polynomials of degree k and degree k+1, respectively.

Unfortunately, more problems arise once we proceed to C_3(n) and T_3(n). For example, let’s try C_3(n). Since C_3(n) = C_2(n-3) + C_3(n-3), we can derive the following:

C_3(n) = C_2(n-3) + C_2(n-6) + C_2(n-9) + \cdots

Now, our first instinct would again be to split it into three cases, C_3(3m), C_3(3m+1) and C_3(3m+2), hoping that they’re polynomials too. Unfortunately, they’re not. This is because the terms of the sequence (n-3, n-6, n-9, \ldots) alternate between even and odd, so they need both the even and odd cases of C_2(n).

We actually need to separate C_3(n) into 6 cases, C_3(6m+r) for r = 0, 1, \ldots, 5, in order to get polynomial closed-form formulas. It’s a bit tedious, but you may check for yourself that these are all polynomials of degree 3. Similarly, the T_3(6m+r) for r = 0, 1, \ldots, 5 are all polynomials of degree 4, which confirms our suspicion about C_k and T_k.
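
One way to check such claims without grinding out all six formulas: the fourth finite difference of any degree-3 polynomial sequence is identically zero, so we can test C_3(6m+r) numerically. A small sketch, using a memoized version of the recurrence:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def C(k, n):
    if n < k * (k + 1) // 2:   # smallest possible sum is 1 + 2 + ... + k
        return 0
    if k == 0:
        return 1
    return C(k, n - k) + C(k - 1, n - k)

def nth_difference(seq, d):
    for _ in range(d):
        seq = [b - a for a, b in zip(seq, seq[1:])]
    return seq

for r in range(6):
    samples = [C(3, 6 * m + r) for m in range(12)]
    # the 4th finite difference of a degree-3 polynomial vanishes
    assert all(v == 0 for v in nth_difference(samples, 4))
```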

In fact, it turns out that our suspicion was right all along! The following general statement about C_k and T_k is true:

For a fixed k \ge 0 and r, C_k(x k! + r) is a polynomial in x of degree k, and T_k(x k! + r) is a polynomial in x of degree k+1.

For example, one can verify that:

C_4(24x+11) = 576x^4 + 624x^3 + 238x^2 + 37x + 2

which is a polynomial in x of degree 4.

The proof of this is just a natural generalization of the things we did above, coupled with a few facts about sums of polynomials. See the appendix for more about the proof.

From this theorem, we now have another way to compute C_k(n):

  • Let m and r be the quotient and remainder of the division n / k!, respectively, so we have 0 \le r < k! and n = m \cdot k! + r.
  • Compute the values C_k(x k! + r) for 0 \le x \le k, using our previous algorithm for C_k(n).
  • Compute the coefficients of the polynomial P(x) := C_k(x k! + r) from the values computed in the previous step. (One way to do this is with Gaussian elimination, which runs in O(k^3) time. See the appendix for more details.)
  • C_k(n) can now be obtained as P(m).

There’s also a very similar algorithm for T_k(n).
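
Below is a compact sketch of the whole procedure for C_k(n) (all names are ours). For simplicity it evaluates P(m) directly with Lagrange’s interpolation formula on the consecutive sample points x = 0, \ldots, k, instead of explicitly extracting the coefficients with Gaussian elimination as described above; both give the same value:

```python
from fractions import Fraction
from math import factorial

def C_fast(k, n):
    """Evaluate C_k(n): write n = m*k! + r, sample the degree-k polynomial
    P(x) = C_k(x*k! + r) at x = 0..k with the DP, then interpolate at x = m.
    """
    f = factorial(k)
    m, r = divmod(n, f)
    nmax = k * f + r                     # largest argument we need to sample
    # small DP table for C, as in the recurrence above
    C = [[0] * (nmax + 1) for _ in range(k + 1)]
    for j in range(nmax + 1):
        C[0][j] = 1
    for i in range(1, k + 1):
        for j in range(i, nmax + 1):
            C[i][j] = C[i - 1][j - i] + C[i][j - i]
    if m <= k:                           # n is small enough for a direct lookup
        return C[k][n]
    ys = [C[k][x * f + r] for x in range(k + 1)]
    # Lagrange interpolation at x = m through (0, ys[0]), ..., (k, ys[k])
    total = Fraction(0)
    for i in range(k + 1):
        term = Fraction(ys[i])
        for j in range(k + 1):
            if i != j:
                term *= Fraction(m - j, i - j)
        total += term
    return int(total)                    # the sum is always an exact integer

assert C_fast(1, 10**9) == 10**9         # C_1(n) = n
assert C_fast(2, 101) == 50 * 50         # C_2(2m+1) = m^2
```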

The slowest part of this algorithm is computing the values C_k(x k! + r) in the second step, which takes O\left(\left(k+2\right)!\right) time, so this is also the complexity of the whole algorithm. One way to improve it is to precompute the values C_k(n) for k \le k_{\max} and n \le \left(k_{\max}+1\right)! in a table beforehand. This precomputation runs in O\left(\left(k_{\max}+2\right)!\right) time, and afterwards we can simply perform table lookups to get the values we want. With this, answering a test case runs in O(k^3) time, which is good enough for the last subtask.

Finally, we can take this even further: precompute the coefficients of all the polynomials C_k(x k! + r) themselves. With this, the running time per test case drops to a very fast O(k).
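
With the coefficients in hand, a query is just a Horner evaluation. (A sketch; the problem’s output format isn’t discussed here, so the optional mod parameter is our assumption about reducing modulo a prime if needed.)

```python
def eval_poly(coeffs, x, mod=None):
    """Evaluate p_0 + p_1*x + ... + p_k*x^k with Horner's rule: O(k) work."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * x + c
        if mod is not None:
            acc %= mod
    return acc

# the editorial's example polynomial for C_4(24x + 11), lowest degree first:
coeffs = [2, 37, 238, 624, 576]
assert eval_poly(coeffs, 0) == 2
assert eval_poly(coeffs, 1) == 576 + 624 + 238 + 37 + 2
```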

Appendix: Computing the coefficients of a polynomial given its values

Suppose we know the degree of a polynomial P(x), say k, and k+1 of its values, P(a_0) = b_0, P(a_1) = b_1, \ldots, P(a_k) = b_k, but we don’t know its coefficients. How do we compute the coefficients? In other words, how do we compute the $p_i$’s, where:

P(x) = \sum_{i=0}^k p_i x^i = p_0 + p_1 x + p_2 x^2 + \ldots + p_k x^k

Let’s write down the things we know:

\begin{aligned} p_0 + p_1 a_0 + p_2 a_0^2 + \ldots + p_k a_0^k &= b_0 \\ p_0 + p_1 a_1 + p_2 a_1^2 + \ldots + p_k a_1^k &= b_1 \\ p_0 + p_1 a_2 + p_2 a_2^2 + \ldots + p_k a_2^k &= b_2 \\ &\;\;\vdots \\ p_0 + p_1 a_k + p_2 a_k^2 + \ldots + p_k a_k^k &= b_k \end{aligned}

But this is just a [system of linear equations](https://en.wikipedia.org/wiki/System_of_linear_equations) in k+1 variables! Thus, standard algorithms such as Gaussian elimination work well here. Gaussian elimination runs in O(k^3) time.
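
A sketch of this step in Python, using exact rational arithmetic so nothing is lost to rounding; the final assertion reuses the example polynomial for C_4(24x+11) given earlier:

```python
from fractions import Fraction

def poly_coeffs(xs, ys):
    """Coefficients p_0..p_k of the unique polynomial of degree <= k with
    P(xs[i]) = ys[i], by Gaussian elimination on the Vandermonde system."""
    k = len(xs) - 1
    # augmented rows: [1, x, x^2, ..., x^k | y]
    A = [[Fraction(x) ** j for j in range(k + 1)] + [Fraction(y)]
         for x, y in zip(xs, ys)]
    for col in range(k + 1):
        piv = next(r for r in range(col, k + 1) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]       # bring a nonzero pivot up
        p = A[col][col]
        A[col] = [v / p for v in A[col]]      # scale the pivot row to 1
        for r in range(k + 1):                # clear the column elsewhere
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k + 1] for i in range(k + 1)]  # [p_0, p_1, ..., p_k]

P = lambda x: 576 * x**4 + 624 * x**3 + 238 * x**2 + 37 * x + 2
assert poly_coeffs(list(range(5)), [P(x) for x in range(5)]) == [2, 37, 238, 624, 576]
```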

We can also view this as a matrix inversion problem, because:

\left(\begin{array}{ccccc} 1 & a_0 & a_0^2 & \cdots & a_0^k \\ 1 & a_1 & a_1^2 & \cdots & a_1^k \\ 1 & a_2 & a_2^2 & \cdots & a_2^k \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & a_k & a_k^2 & \cdots & a_k^k \end{array}\right) \left(\begin{array}{c} p_0 \\ p_1 \\ p_2 \\ \vdots \\ p_k \end{array}\right) = \left(\begin{array}{c} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_k \end{array}\right)

(The matrix on the left-hand side is a [Vandermonde matrix](https://en.wikipedia.org/wiki/Vandermonde_matrix).)

Since we want to compute the $p_i$’s, we multiply both sides by the inverse of the matrix on the left. Computing the inverse also requires O(k^3) time, which is a bit slower than a single Gaussian elimination, but it has the advantage that the inverse can be computed once and reused many times, so that answering each case afterwards takes only O(k^2) time.

Appendix: Proof that C_k(x k! + r) and T_k(x k! + r) are polynomials

We want to prove the following statement:

For a fixed k \ge 0 and r, C_k(x k! + r) is a polynomial in x of degree k, and T_k(x k! + r) is a polynomial in x of degree k+1.

Note that there are no restrictions on r.

The proof is by induction, so let’s assume that the statement holds for k-1, and we’ll try to prove it for k. (For the base case, note that C_1(n) = n and T_1(n) = \frac{n(n+1)}{2}, which we computed earlier, are polynomials of degrees 1 and 2.)

Assume first that 0 \le r < k!.

From the recurrence, C_k(x k! + r) = C_{k-1}(x k! + r - k) + C_k(x k! + r - k). By expanding this sum like we did before (and using k \cdot (k-1)! = k!), we get:

\begin{aligned} C_k(x k!+r) &= C_{k-1}(x k!+r-k) + C_k(x k! + r-k) \\ &= C_{k-1}(x k!+r-k) + C_{k-1}(x k! + r-2k) + C_k(x k! + r-2k) \\ &= C_{k-1}(x k!+r-k) + C_{k-1}(x k!+r-2k) + C_{k-1}(x k!+r-3k) + \ldots \\ &= C_k(r) + \sum_{i=1}^{x(k-1)!} C_{k-1}(x k!+r-ik) \\ &= C_k(r) + \sum_{i=0}^{x(k-1)!-1} C_{k-1}(x k!+r-k-ik) \end{aligned}

After x(k-1)! expansions the leftover C_k term has argument exactly r, and C_k(r) is a constant that doesn’t depend on x, so it cannot affect whether the whole expression is a polynomial in x, nor its degree. (Terms whose argument is too small to admit any sequence are simply 0.) Next, we split this sum into (k-1)! sums, depending on i \bmod (k-1)!. We let q and s be numbers such that i = q(k-1)! + s and 0 \le s < (k-1)!, and manipulate the above to get:

\begin{aligned} C_k(x k!+r) &= C_k(r) + \sum_{s=0}^{(k-1)!-1} \sum_{q=0}^{x-1} C_{k-1}(x k!+r-k-(q(k-1)!+s)k) \\ &= C_k(r) + \sum_{s=0}^{(k-1)!-1} \sum_{q=0}^{x-1} C_{k-1}((x-q)k \cdot (k-1)! + (r-k-sk)) \end{aligned}

Now, consider the inner sum. Let r' = r - k - sk. By induction, C_{k-1}(y(k-1)! + r') is a polynomial in y of degree k-1, say P(y). Thus, the inner sum has the form \sum_{q=0}^{x-1} P((x-q)k). But summing a polynomial this way results in a polynomial one degree higher, so the inner sum is a polynomial in x of degree (k-1)+1 = k. Thus, C_k(x k! + r) is a constant plus a sum of (k-1)! polynomials, each of degree k, so C_k(x k! + r) must be a polynomial of degree k itself.

A similar derivation can be done for T_k(x k! + r).

If r doesn’t satisfy 0 \le r < k!, then we can easily translate the polynomial so that r satisfies it. For example, if 5k! \le r < 6k!, then C_k(x k! + r) = C_k((x+5)k! + (r-5k!)), where 0 \le r - 5k! < k!. The latter is a polynomial of degree k, so C_k(x k! + r) must also be a polynomial of degree k.

Note that a similar argument shows that C_k(x L_k + r) and T_k(x L_k + r) are polynomials, where L_k = \text{lcm}(1,2,\ldots,k).

Time Complexity:

O\left(\left(k_{\max}+2\right)!\right) preprocessing, O(k) per test case

AUTHOR’S AND TESTER’S SOLUTIONS:

Setter
Tester
Editorialist


My solution:

  • notice that, for a given K and the remainder of N modulo a reasonably small number (I used LCM(1…K), but there may be a smaller one), the answer is a polynomial of small (\le K) degree in N
  • compute the answers for all K and N \le 700 with some stupid DP, then use Lagrange polynomial interpolation to compute the answer for any N

Tada!

I also initially assumed the answer is a polynomial for each K, but I didn’t account for the fact that there’s a different polynomial for each K and remainder r. So, of course, the coefficients I got didn’t work. But then I figured that the answer for every N can be computed by matrix exponentiation. I started from a simple O(N * K) DP-like recurrence and turned it into an O(log(N) * K^6) solution (per test case). The matrix had O(K^2) rows and columns (I don’t remember exactly how many - I think for the largest K it had 50+ rows/columns). I only did two optimizations in order to avoid TLE: 1) precompute the powers of 2 of this matrix for each K (and, of course, use them when computing the answer for each test case); 2) use as few modulo operations as possible. These made it run easily within the time limit (I think the worst-case running time was < 0.5 sec).
