PROBLEM LINK:
Author: Misha Chorniy
Tester: Sergey Kulik
Editorialist: Kevin Atienza
PREREQUISITES:
Convex polygon, linear recurrences, polynomial interpolation, precomputation
PROBLEM:
Let F(a _ 1, \ldots, a _ k) = 1 if it is possible to form a convex polygon with side lengths a _ 1, \ldots, a _ k, and 0 otherwise.
Given k and n, find the sum of F(a _ 1, \ldots, a _ k) among all distinct size-k subsets \{a _ 1, \ldots, a _ k\} of \{1, 2, \ldots, n\}.
Note that the problem has nothing to do with permutations.
QUICK EXPLANATION:
If a _ 1 < a _ 2 < \ldots < a _ k, then they can form a convex polygon if and only if a _ 1 + a _ 2 + \ldots + a _ {k-1} > a _ k. Thus, we’re counting subsets that satisfy this inequality.
It’s easier to count the number of subsets that satisfy a _ 1 + a _ 2 + \ldots + a _ {k-1} \le a _ k, and just subtract this from the total number of subsets, or {n \choose k}.
Let C _ k(n) be the number of size-k sets of positive integers whose sum is \le n.
Let T _ k(n) be the sum of all elements of all such sets.
The answer is {n \choose k} - \left((n+1)C _ {k-1}(n) - T _ {k-1}(n)\right).
We can compute these functions using the following recurrences:

C _ k(n) = C _ {k-1}(n-k) + C _ k(n-k)
T _ k(n) = T _ {k-1}(n-k) + T _ k(n-k) + kC _ k(n)

with base cases C _ 0(n) = 1, T _ 0(n) = 0, and C _ k(n) = T _ k(n) = 0 for n < \frac{k(k+1)}{2}.
Also, for each k and r:
- C _ k(xk!+r) is a polynomial in x of degree k
- T _ k(xk!+r) is a polynomial in x of degree k+1

So for every k \le 7 and 0 \le r < k!, precompute the coefficients of the polynomials C _ k(xk!+r) and T _ k(xk!+r). With these coefficients, we can compute C _ k(n) and T _ k(n) in O(k) time each (by using the correct polynomial), so each test case can be answered in O(k) time.
EXPLANATION:
Computing F(a _ 1\ldots a _ k)
We want to answer the following question:
Given the numbers a _ 1 \ldots a _ k, is it possible to form a convex polygon with these side lengths?
Triangle inequality
The case k = 3 is a bit easier. Given three numbers a _ 1, a _ 2 and a _ 3, can we form
a triangle with these sides? The famous [triangle inequality](https://en.wikipedia.org/wiki/Triangle_inequality#Euclidean_geometry) says that this is possible if and only if the sum of the lengths of any two sides is greater than the length of the remaining side. In other words:
a _ 1 + a _ 2 > a _ 3, a _ 2 + a _ 3 > a _ 1, a _ 3 + a _ 1 > a _ 2
If we assume that a _ 3 is the largest side, then only one inequality is needed:
a _ 1 + a _ 2 > a _ 3
It’s easy to see why this is necessary: Let S _ 1, S _ 2 and S _ 3 be the sides with lengths a _ 1, a _ 2 and a _ 3 respectively. Now, S _ 1 and S _ 2 share an endpoint, and the other endpoints of S _ 1 and S _ 2 must be the endpoints of S _ 3. The shortest path connecting the endpoints of S _ 3 is through S _ 3 itself, so a _ 3 is the length of the shortest path. Since S _ 1 followed by S _ 2 is another path between the endpoints of S _ 3, with length a _ 1 + a _ 2, it must be the case that a _ 1 + a _ 2 \ge a _ 3. Finally, a _ 1 + a _ 2 = a _ 3 if and only if this path coincides with S _ 3 itself, but then the triangle is degenerate (i.e. it has zero area), and such polygons are not counted in the problem. So it must be that a _ 1 + a _ 2 > a _ 3.
We can easily generalize this proof to obtain an inequality for general polygons: The length of the longest side must be less than the sum of the lengths of the other sides.
But is this condition sufficient? That is, is it possible to form the convex polygon, assuming the lengths satisfy our polygon inequality? The answer is yes, and we will give a construction here.
Constructing the polygon
Let a _ 1, a _ 2, \ldots, a _ {k-1}, a _ k be the lengths satisfying the polygon inequality. Also, assume that a _ k is the largest value. Thus, a _ 1 + a _ 2 + \ldots + a _ {k-1} > a _ k. First, we will construct a [polygonal path](https://en.wikipedia.org/wiki/Polygonal_chain) with vertices V _ 0, V _ 1, V _ 2, \ldots, V _ k such that |V _ i - V _ {i-1}| = a _ i for 1 \le i \le k.
We start with a horizontal line, and place the points V _ 0, \ldots V _ k there. First, place V _ 0 anywhere. Then place V _ 1 to the right of V _ 0 such that |V _ 1 - V _ 0| = a _ 1. Then place V _ 2 to the right of V _ 1 such that |V _ 2 - V _ 1| = a _ 2. Continue doing this until V _ {k-1}. Finally, place V _ k, but this time to the left of V _ {k-1}, such that |V _ k - V _ {k-1}| = a _ k. Note that because of the polygon inequality, V _ k is strictly between V _ 0 and V _ {k-1}.
Now, bend the line into an arc (initially with very low curvature), but keep the points on the arc and maintain the distances between them. Then gradually shrink the radius of the arc. As you do this, V _ k gets closer and closer to V _ 0. Stop when V _ k coincides with V _ 0. Once V _ 0 and V _ k coincide, we now have the polygon with sides a _ 1 \ldots a _ k! Moreover, since the vertices lie on a circle, this polygon is convex. (Such a polygon is called a [cyclic polygon](https://en.wikipedia.org/wiki/Circumscribed_circle#Cyclic_n-gons).)
Thus, assuming that a _ 1\ldots a _ k are sorted, we have just shown that:

a _ 1\ldots a _ k can form a convex polygon if and only if a _ 1 + a _ 2 + \ldots + a _ {k-1} > a _ k

We can express this succinctly with the [Iverson bracket](https://en.wikipedia.org/wiki/Iverson_bracket) notation:

F(a _ 1\ldots a _ k) = [a _ 1 + a _ 2 + \ldots + a _ {k-1} > a _ k]

This assumes that a _ 1\ldots a _ k are sorted. If they’re not, simply sort them first. The order of the $a _ i$s doesn’t matter anyway.
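As a concrete illustration, here is a minimal C++ sketch of this test (the function name is ours, not from the author's solution): sort the lengths, then compare the largest one against the sum of the rest.

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// F(a_1, ..., a_k) = 1 exactly when the largest length is smaller
// than the sum of all the other lengths (the polygon inequality).
bool can_form_convex_polygon(std::vector<long long> a) {
    if (a.size() < 3) return false;        // need at least 3 sides
    std::sort(a.begin(), a.end());         // order doesn't matter, so sort
    long long rest = std::accumulate(a.begin(), a.end() - 1, 0LL);
    return rest > a.back();
}
```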
Computing \sum _ a F(a _ 1\ldots a _ k)
We want to know the sum of F(a _ 1\ldots a _ k) among all size-k subsets \{a _ 1\ldots a _ k\} of \{1 \ldots n\}. This sum is equivalent to the number of subsets having F(a _ 1\ldots a _ k) = 1. Let G(k,n) be this sum. Since we are talking about subsets, it doesn’t matter what order a _ 1\ldots a _ k are in, so we can assume they’re sorted, and so G(k,n) is just the number of increasing sequences (a _ 1\ldots a _ k) such that a _ 1 + a _ 2 + \ldots + a _ {k-1} > a _ k.
It’s actually easier to count the number of sequences such that a _ 1 + a _ 2 + \ldots + a _ {k-1} \le a _ k. Let’s call this number L(k,n). Note that G(k,n) + L(k,n) is simply the number of subsets of \{1 \ldots n\} of size k, which is just {n \choose k}, so once we have L(k,n) we can easily get G(k,n) as {n \choose k} - L(k,n). So let’s focus on L(k,n).
Using the Iverson bracket notation, we have:

L(k,n) = \sum _ {1 \le a _ 1 < a _ 2 < \ldots < a _ k \le n} [a _ 1 + a _ 2 + \ldots + a _ {k-1} \le a _ k]
Next, we separate a _ k into its own summation:

L(k,n) = \sum _ {a _ k=1}^{n} \sum _ {1 \le a _ 1 < \ldots < a _ {k-1} < a _ k} [a _ 1 + \ldots + a _ {k-1} \le a _ k]
However, since a _ k is already \ge the sum of all the other lengths, the condition a _ {k-1} < a _ k is redundant (for k \ge 3, a _ {k-1} < a _ 1 + \ldots + a _ {k-1} \le a _ k), so we have:

L(k,n) = \sum _ {a _ k=1}^{n} \sum _ {1 \le a _ 1 < \ldots < a _ {k-1}} [a _ 1 + \ldots + a _ {k-1} \le a _ k]
Let’s define two new functions:

C _ k(n) = \sum _ {1 \le a _ 1 < \ldots < a _ k} [a _ 1 + \ldots + a _ k \le n]
T _ k(n) = \sum _ {1 \le a _ 1 < \ldots < a _ k} (a _ 1 + \ldots + a _ k) \cdot [a _ 1 + \ldots + a _ k \le n]
C _ k(n) is simply the number of positive increasing sequences whose sum is \le n, and T _ k(n) is the sum of the elements of all such sequences.
Using these functions, we now have:

L(k,n) = \sum _ {a _ k=1}^{n} C _ {k-1}(a _ k) = (n+1)C _ {k-1}(n) - T _ {k-1}(n)

The last equality holds because a sequence with sum s \le n is counted once for each a _ k from s to n, that is, (n+1) - s times; summing (n+1) - s over all such sequences gives (n+1)C _ {k-1}(n) - T _ {k-1}(n).
Thus, to compute L(k,n) we need to be able to compute C _ k(n) and T _ k(n) quickly.
Computing C _ k and T _ k
Let’s first derive a formula for C _ k(n). Let (a _ 1, \ldots a _ k) be a positive increasing sequence satisfying:

a _ 1 + a _ 2 + \ldots + a _ k \le n
Let b _ i = a _ i - 1. Thus, we have:

0 \le b _ 1 < b _ 2 < \ldots < b _ k \quad \text{and} \quad b _ 1 + b _ 2 + \ldots + b _ k \le n - k
The $a _ i$s are increasing positive integers, so a _ i \ge 2 for i \ge 2, which means the $b _ i$s must be at least 1, except possibly b _ 1. So we consider two cases, depending on whether b _ 1 is zero or not:
- If b _ 1 is zero, then (b _ 2, \ldots b _ k) is a size-(k-1) positive increasing sequence whose sum is \le n-k.
- If b _ 1 is nonzero, then (b _ 1, b _ 2, \ldots b _ k) is a size-k positive increasing sequence whose sum is \le n-k.
Since any sequence (a _ 1, \ldots a _ k) with sum \le n corresponds to exactly one of these cases, we can count the two kinds of sequences and combine the counts to obtain C _ k(n).
- There are C _ {k-1}(n-k) sequences corresponding to the case b _ 1 = 0
- There are C _ k(n-k) sequences corresponding to the case b _ 1 \not= 0
So we have:

C _ k(n) = C _ {k-1}(n-k) + C _ k(n-k)
This gives us a neat recurrence for C _ k(n)! For base cases, we have C _ k(n) = 0 if n < \frac{k(k+1)}{2}, and C _ 0(n) = 1. (The latter corresponds to the empty tuple.)
Now, we can do the same for T _ k(n), except that this time we have to sum the elements of the sequences, which gives T _ {k-1}(n-k) + T _ k(n-k). However, note that the sum of the $b _ i$s is always k less than the sum of the $a _ i$s, so we need to add k back for every sequence. Since there are C _ k(n) such sequences, we have:

T _ k(n) = T _ {k-1}(n-k) + T _ k(n-k) + kC _ k(n)
For base cases, we have T _ k(n) = 0 if n < \frac{k(k+1)}{2}, and T _ 0(n) = 0.
With these recurrences, a dynamic programming approach to computing C _ k(n) and T _ k(n) is now possible, and we can now solve the problem in O(nk) time! We can also precompute these values in O(n _ {\max}k _ {\max}) time before processing any queries, so that computing C _ k(n) and T _ k(n) takes only O(1) after that.
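For concreteness, here is a C++ sketch of this dynamic programming approach. The table sizes and the modulus are illustrative assumptions (the actual limits and modulus come from the problem statement), and the answer is assembled as {n \choose k} - \left((n+1)C _ {k-1}(n) - T _ {k-1}(n)\right), as derived above.

```cpp
// Illustrative constants; the real constraints come from the problem statement.
const long long MOD = 1000000007LL;
const int NMAX = 1000, KMAX = 7;

long long C[KMAX + 1][NMAX + 1], T[KMAX + 1][NMAX + 1], binom[NMAX + 1][KMAX + 1];

void precompute() {
    for (int n = 0; n <= NMAX; n++) {
        C[0][n] = 1;                       // the empty sequence
        T[0][n] = 0;
        binom[n][0] = 1;
        for (int k = 1; k <= KMAX; k++)    // Pascal's triangle for n choose k
            binom[n][k] = n ? (binom[n-1][k-1] + binom[n-1][k]) % MOD : 0;
    }
    for (int k = 1; k <= KMAX; k++)
        for (int n = 0; n <= NMAX; n++) {
            // C_k(n) = T_k(n) = 0 when n < k(k+1)/2 (globals are zero-initialized)
            if (n < k * (k + 1) / 2) continue;
            // C_k(n) = C_{k-1}(n-k) + C_k(n-k)
            C[k][n] = (C[k-1][n-k] + C[k][n-k]) % MOD;
            // T_k(n) = T_{k-1}(n-k) + T_k(n-k) + k*C_k(n)
            T[k][n] = (T[k-1][n-k] + T[k][n-k] + k * C[k][n]) % MOD;
        }
}

// G(k,n) = binom(n,k) - L(k,n), where L(k,n) = (n+1)*C_{k-1}(n) - T_{k-1}(n).
long long answer(int k, int n) {
    long long L = ((n + 1) % MOD * C[k-1][n] - T[k-1][n]) % MOD;
    return ((binom[n][k] - L) % MOD + MOD) % MOD;
}
```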
The only drawback of this algorithm is that it doesn’t work for the last subtask, because of time and memory constraints. For that, we will need a few additional tricks.
Faster way to compute C _ k and T _ k
C _ k(n) and T _ k(n) both satisfy linear recurrences, which means matrix exponentiation methods can be used. However, due to the constraints of the problem and the sizes of the matrices involved, these methods may be slow.
For example, there is an O(k^6 \log n)-time algorithm to do it, by constructing a \frac{k(k+1)}{2}\times \frac{k(k+1)}{2} matrix and raising it to the power n. Using more [advanced](https://en.wikipedia.org/wiki/Berlekamp–Massey_algorithm) recurrence-based techniques, one can reduce this to O(k^4 \log n) or even O(k^2 \log k \log n). However, there is a much simpler solution that doesn’t use any of these techniques, which we’ll describe below.
First, let’s try to derive a formula for C _ 1(n). We have C _ 1(n) = C _ 0(n-1) + C _ 1(n-1), but C _ 0(n-1) is simply 1, so:

C _ 1(n) = C _ 1(n-1) + 1 = C _ 1(n-2) + 2 = \ldots = C _ 1(0) + n
which gives us the formula C _ 1(n) = n. Great!
Next, let’s try T _ 1(n). Similarly, we have T _ 1(n) = 1 \cdot C _ 1(n) + T _ 0(n-1) + T _ 1(n-1), but T _ 0(n-1) is just 0, and we already know C _ 1(n) = n, so:

T _ 1(n) = T _ 1(n-1) + n = T _ 1(n-2) + (n-1) + n = \ldots = 1 + 2 + \ldots + n
So we have T _ 1(n) = \frac{n(n+1)}{2}. Great!
Let’s continue. Suppose we want to get C _ 2(n). We have C _ 2(n) = C _ 1(n-2) + C _ 2(n-2), but we already know that C _ 1(n-2) = (n-2), so:

C _ 2(n) = (n-2) + C _ 2(n-2) = (n-2) + (n-4) + C _ 2(n-4) = \ldots = (n-2) + (n-4) + (n-6) + \ldots

where the sum stops before the terms become nonpositive.
This is a little bit trickier, because this is not a simple polynomial unlike C _ 1(n) and T _ 1(n). So we consider two cases, depending on whether n is even or odd.
- If n is even, let n = 2m. Then:

  C _ 2(2m) = (2m-2) + (2m-4) + \ldots + 2 = 2\left(1 + 2 + \ldots + (m-1)\right) = m^2 - m

- If n is odd, let n = 2m+1. Then:

  C _ 2(2m+1) = (2m-1) + (2m-3) + \ldots + 1 = m^2

Thus, we have the formulas

C _ 2(2m) = m^2 - m
C _ 2(2m+1) = m^2
which are nice simple polynomials! Similarly, we can derive the following formulas:

T _ 2(2m) = \frac{m(m-1)(8m+5)}{6}
T _ 2(2m+1) = \frac{m(m+1)(8m+1)}{6}

which are also nice simple polynomials!

These formulas can actually be combined into one, e.g. for C _ 2:

C _ 2(n) = \left\lfloor \frac{(n-1)^2}{4} \right\rfloor

However, note that these combined formulas are not polynomials.
At this point, we start to see a pattern: the formulas that make up C _ k and T _ k seem to be polynomials of degree k and degree k+1, respectively.
Unfortunately, more problems arise once we proceed to C _ 3(n) and T _ 3(n). For example, let’s try C _ 3(n). Since C _ 3(n) = C _ 2(n-3) + C _ 3(n-3), we can derive the following:

C _ 3(n) = C _ 2(n-3) + C _ 2(n-6) + C _ 2(n-9) + \ldots

where the sum continues while the arguments are large enough for C _ 2 to be nonzero.
Now, our first instinct would again be to split it into three cases: C _ 3(3m), C _ 3(3m+1), C _ 3(3m+2), hoping that they’re polynomials too. Unfortunately, they’re not. This is because the numbers n-3, n-6, n-9, \ldots alternate between even and odd, so they will need both the even and odd cases of C _ 2(n).
We will actually need to separate C _ 3(n) into 6 cases: C _ 3(6m+r) for r = 0, 1, \ldots 5, in order to get polynomial closed-form formulas. It’s a bit tedious, but you may check for yourself that these are all polynomials of degree 3. Similarly, the T _ 3(6m+r) for r = 0, 1, \ldots 5 are all polynomials of degree 4, which confirms our suspicion about C _ k and T _ k.
In fact, it turns out that our suspicion is right all along! The following general statement about C _ k and T _ k is true:
For a fixed k\ge 0 and r, C _ k(x k!+r) is a polynomial in x of degree k, and T _ k(x k!+r) is a polynomial in x of degree k+1.
As an example, one can verify that:

T _ 3(6x) = 27x^4 - 27x^3 + \frac{7}{2}x^2 + \frac{5}{2}x

which is a polynomial in x with degree 4.
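If you want to sanity-check such a formula, a brute-force test like the following C++ snippet works for small x (the helper name is ours; this is a spot check, not a proof):

```cpp
#include <cassert>
#include <cstdio>

// T_3(n): sum of a+b+c over all triples 0 < a < b < c with a+b+c <= n.
long long T3(long long n) {
    long long total = 0;
    for (long long a = 1; a < n; a++)
        for (long long b = a + 1; a + b < n; b++)
            for (long long c = b + 1; a + b + c <= n; c++)
                total += a + b + c;
    return total;
}

int main() {
    // Compare against 27x^4 - 27x^3 + 7x^2/2 + 5x/2, written over a
    // common denominator of 2 (the numerator is always even).
    for (long long x = 0; x <= 6; x++) {
        long long poly = (54*x*x*x*x - 54*x*x*x + 7*x*x + 5*x) / 2;
        assert(T3(6 * x) == poly);
    }
    puts("ok");
}
```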
The proof of this is just a natural generalization of the things we did above, coupled with a few facts about sums of polynomials. See the appendix for more about the proof.
From this theorem, we now have another way to compute C _ k(n):
- Let m and r be the quotient and remainder of the division n / k!, respectively, so we have 0 \le r < k! and n = mk! + r.
- Compute the values C _ k(xk!+r) for 0 \le x \le k, using our previous algorithm for C _ k(n).
- Compute the coefficients of the polynomial P(x) := C _ k(xk!+r), from the values computed in the previous step. (One way to do this is with Gaussian elimination, and it runs in O(k^3) time. See the appendix for more details.)
- C _ k(n) can now be obtained as P(m).
There’s also a very similar algorithm for T _ k(n).
The slowest part of this algorithm is computing the values C _ k(xk!+r) in the second step, which takes O\left(\left(k+2\right)!\right) time, so this is also the complexity of the whole algorithm. One way to improve it would be to precompute the values C _ k(n) for k \le k _ {\max} and n \le \left(k _ {\max}+1\right)! in a table beforehand. This precomputation runs in O\left(\left(k _ {\max}+2\right)!\right) time, and afterwards, we can simply perform table lookups to get the values we want. Using this, answering a test case now runs in O(k^3) time, which is good enough for the last subtask.
Finally, we can take this even further: precompute the coefficients of all the polynomials C _ k(xk!+r) themselves. With this, the running time for a test case becomes O(k), which is very fast!
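As a sketch of the query-answering step, the following C++ snippet evaluates C _ k(n) from the precomputed table of values. Instead of explicitly extracting the coefficients, it evaluates the interpolating polynomial directly at the desired point via Lagrange interpolation over the sample points 0, 1, \ldots, k; the table `Ctable`, the factorial `f`, and the prime modulus `MOD` are assumptions for illustration.

```cpp
#include <vector>

const long long MOD = 1000000007LL;  // assumed prime modulus

long long power(long long b, long long e, long long m) {
    long long r = 1 % m;
    for (b %= m; e; e >>= 1, b = b * b % m)
        if (e & 1) r = r * b % m;
    return r;
}

// Evaluate the degree-k polynomial P with P(x) = vals[x] for x = 0..k
// at the point x = m (mod MOD), by Lagrange interpolation.
long long lagrange_at(const std::vector<long long>& vals, long long m) {
    int k = (int)vals.size() - 1;
    long long result = 0;
    for (int i = 0; i <= k; i++) {
        long long num = 1, den = 1;
        for (int j = 0; j <= k; j++) {
            if (j == i) continue;
            num = num * (((m - j) % MOD + MOD) % MOD) % MOD;
            den = den * ((i - j + MOD) % MOD) % MOD;
        }
        result = (result + vals[i] * num % MOD * power(den, MOD - 2, MOD)) % MOD;
    }
    return result;
}

// Usage sketch for a query (k, n), with f = k! and a precomputed table
// Ctable[k][...] of C_k values modulo MOD:
//     std::vector<long long> vals;
//     for (long long x = 0; x <= k; x++) vals.push_back(Ctable[k][x * f + n % f]);
//     long long Ck_n = lagrange_at(vals, n / f);
```

This direct evaluation costs O(k^2) per query; precomputing the coefficients (or the products appearing in the Lagrange formula) brings it down to the O(k) mentioned above.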
Appendix: Computing coefficients of polynomial given values
Suppose we know the degree of a polynomial P(x), say k, and k+1 of its values P(a _ 0) = b _ 0, P(a _ 1) = b _ 1, P(a _ 2) = b _ 2, \ldots, P(a _ k) = b _ k. But we don’t know its coefficients. How do we compute the coefficients? In other words, how do we compute the $p _ i$s, where:

P(x) = p _ k x^k + p _ {k-1} x^{k-1} + \ldots + p _ 1 x + p _ 0
Let’s try to write down the things we know:

p _ k a _ 0^k + p _ {k-1} a _ 0^{k-1} + \ldots + p _ 1 a _ 0 + p _ 0 = b _ 0
p _ k a _ 1^k + p _ {k-1} a _ 1^{k-1} + \ldots + p _ 1 a _ 1 + p _ 0 = b _ 1
\vdots
p _ k a _ k^k + p _ {k-1} a _ k^{k-1} + \ldots + p _ 1 a _ k + p _ 0 = b _ k
But this is just a [system of linear equations](https://en.wikipedia.org/wiki/System_of_linear_equations) in k+1 variables! Thus, standard algorithms such as Gaussian elimination work well here. Gaussian elimination runs in O(k^3) time.
We can also view this as a matrix inversion problem, because:

\begin{bmatrix} 1 & a _ 0 & a _ 0^2 & \cdots & a _ 0^k \\ 1 & a _ 1 & a _ 1^2 & \cdots & a _ 1^k \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & a _ k & a _ k^2 & \cdots & a _ k^k \end{bmatrix} \begin{bmatrix} p _ 0 \\ p _ 1 \\ \vdots \\ p _ k \end{bmatrix} = \begin{bmatrix} b _ 0 \\ b _ 1 \\ \vdots \\ b _ k \end{bmatrix}

(the matrix on the left-hand side is a [Vandermonde matrix](https://en.wikipedia.org/wiki/Vandermonde_matrix))
Since we want to compute the $p _ i$s, we need to multiply both sides by the inverse of the matrix on the left. Computing the inverse of a matrix also requires O(k^3) time, so this is no faster than Gaussian elimination up front, but it has the advantage that the inverse can be computed once and reused many times, so that answering each case only takes O(k^2) time.
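For reference, here is a short Gauss–Jordan elimination sketch that recovers the coefficients from the samples. It uses floating point for brevity; the actual solution would do the same arithmetic modulo a prime (replacing division with modular inverses).

```cpp
#include <cmath>
#include <vector>

// Recover the coefficients p_0..p_k of a degree-k polynomial from the
// k+1 samples P(a[i]) = b[i], by eliminating the Vandermonde system.
std::vector<double> interpolate_coeffs(const std::vector<double>& a,
                                       const std::vector<double>& b) {
    int n = (int)a.size();                  // n = k+1 equations
    std::vector<std::vector<double>> M(n, std::vector<double>(n + 1));
    for (int i = 0; i < n; i++) {           // row i: p_0 + p_1 a_i + ... + p_k a_i^k = b_i
        double pw = 1;
        for (int j = 0; j < n; j++) { M[i][j] = pw; pw *= a[i]; }
        M[i][n] = b[i];
    }
    for (int col = 0; col < n; col++) {     // Gauss-Jordan with partial pivoting
        int piv = col;
        for (int r = col + 1; r < n; r++)
            if (std::fabs(M[r][col]) > std::fabs(M[piv][col])) piv = r;
        std::swap(M[col], M[piv]);
        for (int r = 0; r < n; r++) {
            if (r == col) continue;
            double f = M[r][col] / M[col][col];
            for (int j = col; j <= n; j++) M[r][j] -= f * M[col][j];
        }
    }
    std::vector<double> p(n);               // the matrix is now diagonal
    for (int i = 0; i < n; i++) p[i] = M[i][n] / M[i][i];
    return p;
}
```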
Appendix: Proof that C _ k(x k!+r) and T _ k(x k!+r) are polynomial
We want to prove the following statement:
For a fixed k\ge 0 and r, C _ k(x k!+r) is a polynomial in x of degree k, and T _ k(x k!+r) is a polynomial in x of degree k+1.
Note that there are no restrictions on r.
The proof is by induction, so let’s assume that the statement holds for k-1, and we’ll try to prove it for k.
Assume first that 0 \le r < k!.
From the recurrence, C _ k(x k!+r) = C _ {k-1}(x k!+r-k) + C _ k(x k! + r-k). By expanding this sum like we did before, we can get:

C _ k(x k!+r) = \sum _ {i \ge 0} C _ {k-1}(x k! + r - k - ik)

where the sum ranges over all i which make the argument to C _ {k-1} in the sum positive. Next, we can split this sum into (k-1)! sums, depending on i \bmod (k-1)!. We let q and s be numbers such that i = q(k-1)!+s and 0 \le s < (k-1)!, and manipulate the above to get:

C _ k(x k!+r) = \sum _ {s=0}^{(k-1)!-1} \sum _ {q \ge 0} C _ {k-1}\left((x-q)k \cdot (k-1)! + (r - k - sk)\right)
Now, consider the inner sum. Let r' = r - k - sk. By induction, C _ {k-1}(x(k-1)!+r') is a polynomial of degree k-1, say P(x). Thus, the inner sum is of the form \sum _ q P((x-q)k). But summing a polynomial this way results in a polynomial one degree higher, so in fact the inner sum is some polynomial of degree (k-1)+1 = k. Thus, C _ k(x k!+r) is just the sum of (k-1)! polynomials, each of degree k, so C _ k(x k!+r) must be a polynomial of degree k itself.
A similar derivation can be done for T _ k(x k!+r).
If r doesn’t satisfy 0 \le r < k!, then we can easily translate the polynomial so that r satisfies it. For example, if 5k! \le r < 6k!, then C _ k(x k!+r) = C _ k((x+5)k!+(r-5k!)), where 0 \le r-5k! < k!. The latter is a polynomial of degree k, so C _ k(x k!+r) must also be a polynomial of degree k.
Note that a similar argument can show that C _ k (x L_k + r) and T _ k (x L_k + r) are polynomials, where L_k = \text{lcm}(1,2,\ldots,k).
Time Complexity:
O\left(\left(k _ {\max}+2\right)!\right) preprocessing, O(k) per test case