

LUCASTH - Editorial




Author: Zhuolin Yang
Tester: Hanlin Ren
Editorialist: Hanlin Ren




Fast Fourier Transform, Divide and Conquer, Lucas Theorem


Let $f(n,k)$ be the sum of products of all size-$k$ subsets of $[n]=\{1,2,\dots,n\}$. Formally $$f(n,k)=\sum_{S\subseteq[n],|S|=k}\prod_{x\in S}x.$$ Given $n\le 10^{501}$ and a prime $p\le 10^5$, for how many $k$ with $0\le k\le n$ is $f(n,k)$ not divisible by $p$?


  • The answer is the number of nonzero coefficients of $P(x)=\prod_{i=1}^n(x+i)$ modulo $p$;
  • Let $c=\lfloor n/p\rfloor$, $b=n\bmod p$. Then $P(x)\equiv f(x)g(x) \pmod p$, where $f(x)=[\prod_{i=1}^p(x+i)]^c$ and $g(x)=\prod_{i=1}^b(x+i)$;
  • Observe that $\prod_{i=1}^p(x+i)\equiv (x^p-x)\pmod p$;
  • Let $\Delta(P)$ be the number of nonzero coefficients of a polynomial $P$; then $\Delta(P)=\Delta(f)\Delta(g)$, since $f$ has "gap" $p-1$ and $\deg(g) < p-1$;
  • $\Delta(f)$ can be computed by Lucas's Theorem; $g$ can be computed by divide-and-conquer and FFT;
  • The time complexity is the sum of two parts: converting $n$ to base $p$, which costs $O(\log^2 n)$, and computing $g$, which costs $O(p\log^2 p)$.


subtask 1

In this subtask, $n\le 5000$. Let $f[i][j]$ be the sum of products of all size-$j$ subsets of $\{1,2,\dots,i\}$. Then we obtain the following DP equation: $$f[i][j]=f[i-1][j]+i\cdot f[i-1][j-1].$$ This is because, if we choose $i$ into our subset, the contribution is $i\cdot f[i-1][j-1]$; otherwise it is $f[i-1][j]$.

We can compute $f[\cdot][\cdot]$ in $O(n^2)$ time and then compute the answer.
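As a sketch, the DP above can be implemented with a rolling 1-D array (a minimal Python version; `count_nonzero` is a name chosen here for illustration):

```python
# Subtask-1 sketch: compute f(n, j) mod p for all j with the DP
# f[i][j] = f[i-1][j] + i*f[i-1][j-1], using a rolling 1-D array,
# then count the coefficients that are nonzero mod p.
def count_nonzero(n, p):
    f = [1] + [0] * n            # f[j] = f(i, j) mod p for the current i
    for i in range(1, n + 1):
        for j in range(min(i, n), 0, -1):   # descending j keeps the update in place
            f[j] = (f[j] + i * f[j - 1]) % p
    return sum(1 for j in range(n + 1) if f[j] != 0)
```

Iterating $j$ downwards lets one row overwrite the previous one, so the memory is $O(n)$ while the time stays $O(n^2)$.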

subtask 2

In this subtask, $n\le 200000$. We need an observation: $f(n,k)$ is the coefficient of $x^{n-k}$ in the polynomial $$\prod_{i=1}^n(x+i)=(x+1)(x+2)\dots(x+n).$$ Why is that? Think about how you multiply polynomials: the coefficient of $x^{n-k}$ is obtained by picking the constant term from exactly $k$ of the factors and $x$ from the rest, so it equals the sum of products of all size-$k$ subsets of $\{1,2,\dots,n\}$, which is precisely the definition of $f(n,k)$.

Now the task becomes computing the product of $n$ polynomials of the form $(x+i)$. This can be done by divide and conquer together with FFT. Let $A(x)$ be the product of the first half of the $(x+i)$'s, and $B(x)$ the product of the rest. We compute $A(x)$ and $B(x)$ recursively, then compute $A(x)\cdot B(x)$ by FFT. The time complexity satisfies $T(n)=2T(n/2)+O(n\log n)$, so $T(n)=O(n\log^2 n)$. The idea is illustrated by the following pseudocode, where $MUL(S)$ computes $\prod_{s\in S}(x+s)$.

  if (|S| == 1) then return (x+s[0])
  partition S into two sets S_1, S_2, each of size roughly |S|/2
  A = MUL(S_1)
  B = MUL(S_2)
  return A*B //A*B is computed by FFT

After that, we simply count the number of coefficients that are not divisible by $p$.
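A minimal Python sketch of $MUL$ (my own illustration: schoolbook multiplication stands in for the FFT step, so this version is $O(n^2)$ overall rather than $O(n\log^2 n)$):

```python
# Divide-and-conquer product of (x + s) factors, coefficients mod p,
# stored in increasing order of degree. poly_mul is schoolbook O(nm);
# the real solution replaces it with an FFT/NTT-based multiply.
def poly_mul(a, b, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return res

def mul(s, p):
    # computes prod_{v in s} (x + v) mod p
    if len(s) == 1:
        return [s[0] % p, 1]
    mid = len(s) // 2
    return poly_mul(mul(s[:mid], p), poly := mul(s[mid:], p), p)[:] if False else \
           poly_mul(mul(s[:mid], p), mul(s[mid:], p), p)
```

For example, `mul([1, 2, 3, 4], p)` gives the coefficients of $(x+1)(x+2)(x+3)(x+4)=x^4+10x^3+35x^2+50x+24$ reduced mod $p$.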

subtask 3

Following what we did before, let $P(x)=(x+1)(x+2)\dots (x+n)=a_0+a_1x+a_2x^2+\dots+a_nx^n$; we know that the answer is the number of $a_i$'s not divisible by $p$. Let $c=\lfloor n/p\rfloor$ and $b=n\bmod p$. Modulo $p$, we have: $$P(x)\equiv[(x+1)(x+2)\dots (x+p)]^c(x+1)(x+2)\dots (x+b).$$ One can observe that (see the second-to-last paragraph of Finite field) $$(x+1)(x+2)\dots (x+p)\equiv (x^p-x)\pmod p.$$ Therefore $$P(x)\equiv(x^p-x)^c\cdot (x+1)(x+2)\dots(x+b)\pmod p.$$ Since we only want to count the number of nonzero coefficients (after reducing modulo $p$), it makes no difference if we replace $(x^p-x)$ by $(x^{p-1}-1)$. We let $f(x)=(x^{p-1}-1)^c$ and $g(x)=(x+1)(x+2)\dots(x+b)$. For convenience, if $b=p-1$ we let $f(x)=(x^{p-1}-1)^{c+1}$ and $g(x)=1$. Let $\Delta(f)$ be the number of nonzero coefficients of $f$; then $\Delta(f\cdot g)=\Delta(f)\Delta(g)$, since $g$ has degree $< p-1$ and $f$ has nonzero coefficients only on the $x^{i(p-1)}$ terms.
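The identity $(x+1)(x+2)\dots(x+p)\equiv x^p-x\pmod p$ can be checked numerically for a small prime (a throwaway Python check; `poly_mul` is a helper defined here for illustration):

```python
# Check that (x+1)(x+2)...(x+p) == x^p - x (mod p) for p = 7.
def poly_mul(a, b, p):
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return res

p = 7
poly = [1]                         # coefficients in increasing degree
for i in range(1, p + 1):
    poly = poly_mul(poly, [i, 1], p)   # multiply by (x + i)
# x^p - x mod p has coefficients [0, p-1, 0, ..., 0, 1]
```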

Solving $\Delta(f)$

Suppose $f(x)=(x^{p-1}-1)^k$ for some integer $k$. Then $$f(x)=\sum_{i=0}^kx^{i(p-1)}(-1)^{k-i}\binom{k}{i}.$$ We write $k,i$ in base $p$, say $k=k_1k_2\dots k_l$ and $i=i_1i_2\dots i_l$. By Lucas's Theorem, $\binom{k}{i}\not\equiv 0\pmod p$ if and only if $i_j\le k_j$ for all $1\le j\le l$. Therefore $\Delta(f)=(k_1+1)(k_2+1)\dots(k_l+1)$, since the $j$-th digit of $i$ has $(k_j+1)$ options.
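This digit product is a few lines of Python (a sketch; `delta_f` is a name chosen here for illustration):

```python
# Δ((x^{p-1} - 1)^k) = product over the base-p digits k_j of k of (k_j + 1),
# by Lucas's Theorem: C(k, i) is nonzero mod p iff every digit of i is <= the
# corresponding digit of k.
def delta_f(k, p):
    ans = 1
    while k:
        ans *= (k % p) + 1
        k //= p
    return ans
```

For instance, $k=4=11_3$ in base $3$ gives $\Delta(f)=(1+1)(1+1)=4$, matching the four binomials $\binom{4}{i}$ ($i=0,1,3,4$) that are nonzero mod $3$.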

Solving $\Delta(g)$

$\Delta(g)$ can be computed by the divide-and-conquer with FFT from subtask 2.

Time complexity

We need $O(len^2)$ time to convert $n$ into base $p$, where $len=O(\log n)$ is the number of digits of $n$. We also need one divide-and-conquer with FFT, whose time complexity is $O(p\log^2 p)$. Thus the total time complexity is $O(len^2+p\log^2 p)$.
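Putting the pieces together, the whole solution can be sketched for moderate $n$ (a hedged illustration: the $\Delta(g)$ part below uses a direct $O(b^2)$ product in place of the FFT-based divide-and-conquer, and `solve` is a name chosen here):

```python
# Answer = Δ(f) * Δ(g), following the decomposition in this editorial.
def solve(n, p):
    c, b = divmod(n, p)
    if b == p - 1:                  # edge case from the editorial: fold g into f
        c, b = c + 1, 0
    # Δ(f) for f = (x^{p-1} - 1)^c, via Lucas's Theorem
    df, k = 1, c
    while k:
        df *= (k % p) + 1
        k //= p
    # Δ(g) for g = (x+1)...(x+b): direct product here
    # (the real solution uses divide-and-conquer with FFT)
    g = [1]
    for i in range(1, b + 1):
        ng = [0] * (len(g) + 1)
        for j, cj in enumerate(g):
            ng[j] = (ng[j] + cj * i) % p      # constant term of (x + i)
            ng[j + 1] = (ng[j + 1] + cj) % p  # x term of (x + i)
        g = ng
    dg = sum(1 for cj in g if cj)
    return df * dg
```

Against the subtask-1 brute force this agrees on small cases, e.g. $n=7$, $p=3$ gives $3\cdot 2=6$.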


Please feel free to share your approaches :)


Author's solution can be found here.
Tester's solution can be found here.


asked 27 Jan '18, 14:03 by r_64, edited 12 Feb '18, 15:12 by admin

There is a way to "solve" it using the $O(n^2)$ algorithm; I'm both proud of and disgusted by my solution.

From plotting the function $f(i,j)\bmod P$ as a matrix, you can see that it has a triangular shape that after $P$ rows repeats itself in a pattern that looks a lot like the Sierpinski triangle. I then knew that I only needed the $(N\bmod P)$-th row of the triangle to solve the problem. This means I had to calculate $f(N\bmod P,j)$ for all possible values of $j$.

Using the $O(n^2)$ algorithm this can be done in $O((N\bmod P)^2)$, which was far too slow if $N\bmod P$ was big. I thought the author had probably anticipated this and made $N\bmod P$ really big.

However, there is a simple way to calculate $f(N\bmod P,j)$ for large values of $N\bmod P$. The function behaves periodically, so I simply started out on row $P$ and then went backwards up to row $N\bmod P$. This worked =P. My passing solution had the time complexity $O(\min(N\bmod P,\,P-(N\bmod P))^2)$.
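The backward walk can be sketched as follows (my own reconstruction, not the original code): start from row $P$, whose coefficients mod $P$ are those of $x^P-x$, and invert the DP $f(i,j)=f(i-1,j)+i\cdot f(i-1,j-1)$ one row at a time.

```python
# Return [f(b, 0), ..., f(b, b)] mod p, walking backwards from row p.
def row_mod_p(b, p):
    # row p of the triangle mod p is (x+1)...(x+p) == x^p - x:
    # f(p,0) = 1 and f(p,p-1) = -1, everything else 0
    row = [0] * (p + 1)
    row[0], row[p - 1] = 1, p - 1
    for i in range(p, b, -1):
        # invert f(i,j) = f(i-1,j) + i*f(i-1,j-1) to recover row i-1:
        # f(i-1,j) = f(i,j) - i*f(i-1,j-1), computed left to right
        prev = [1] + [0] * (i - 1)
        for j in range(1, i):
            prev[j] = (row[j] - i * prev[j - 1]) % p
        row = prev
    return row
```

For example, `row_mod_p(3, 5)` recovers row 3 of the triangle, $[1,6,11,6]\bmod 5=[1,1,1,1]$, without ever building rows $1$ through $2$.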


answered 13 Feb '18, 01:19 by gorre_morre

Can you please explain how you calculated the answer from the $(N\bmod P)$-th row? I too saw some triangles but wasn't able to deduce anything.

(13 Feb '18, 11:23) vivek_1998299

To illustrate what gorre_morre's post is talking about, here are two images showing a plot of $f(n,k)\bmod p\ne 0$ (where black is true, white is false), with $n$ along the vertical axis and $k$ along the horizontal axis.

Here $p=5$ shows the Sierpinski fractal/gasket (large image, so expand at will):


And here $p=101$ shows the detail of a single element, where we note that all base triangles are identical.

We're interested in the number of black pixels in some row. So, just by finding the answer for the base triangle, we can use the structure of the Sierpinski fractal to compute the answer for an arbitrary row. Specifically, this involves interpreting the row index as a number in base $p$ to see how many base triangles there are for a given index.


answered 13 Feb '18, 22:38 by algmyr, edited 13 Feb '18, 22:41

@algmyr About the time complexity of this: if $N$ were equal to $P/2$, what would the time complexity be?

(13 Feb '18, 23:09) userhacker

@userhacker The case $n=p/2$ is not actually the worst case; it's more like $n=p/\sqrt{2}$ (so about 70000 for the largest allowed prime). As for time complexity: finding the correct row is quadratic in $n\bmod p$. Finding the number of triangles for a given row is more or less $O(\log n)$, at least small enough to be insignificant.

If you are clever about searching from the top or bottom depending on which row you want, you end up passing the test cases, although you can easily devise cases that kill such a solution, like $n\bmod p\approx p/\sqrt{2}$.

(13 Feb '18, 23:24) algmyr

@gorre_morre stated the time complexity, so accordingly I chose $N=49999$ and $p=99991$; then we have $O(\min(49999,49992)^2)=O(49992^2)=O(2499200064)$, and we have 4 test cases?

(13 Feb '18, 23:36) userhacker

I think you're getting close to the main solution. In the main solution we find a general pattern for every $N$ according to $p,p^2,p^4,p^8,p^{16},\dots<10^{501}$. This has just one exception, for which we need to calculate $F(\text{value of exception},p)$ with FFT. That is the fastest way; your way is near it.

(13 Feb '18, 23:58) userhacker

That time complexity is correct asymptotically, but the actual number of operations is a bit different, since we work on a triangle. The point where working from above becomes more expensive is a bit lower than half the triangle: essentially the horizontal line that cuts the triangle into two pieces of equal area.

This way is nowhere near the expected one complexity-wise. It's just pure luck that this approach ends up working on the test cases; $n\bmod p$ never ended up close to the worst case for this algorithm.

(14 Feb '18, 00:32) algmyr

Check this test case for TLE: 4 test cases, with $(N,p)$ equal to (49999, 99991), (49999, 99971), (49999, 99989), (51999, 99991). Goodbye man, good luck.

(14 Feb '18, 01:08) userhacker

On my laptop just the first case takes about 6 seconds (pypy), which wouldn't by itself be a TLE, since pypy for some reason has the same time multiplier as python, 5x. I think your test could actually work if run on the server. However, 50k is not the worst case, as stated. From what I measured on single cases, as $(p,\text{time})$ pairs: (50000, 6.3), (60000, 9.1), (70000, 36.7), (80000, 14.3), (90000, 7.6).

(14 Feb '18, 01:51) algmyr

Nice problem! One nitpick in the description: it's not always true that $\deg(g)<p-1$, as $n$ can be $-1 \bmod p$. This case can be handled by adding 1 to $n$, as this just introduces an extra factor of $x \pmod p$ to the polynomial product, which doesn't affect the number of non-zero coefficients. (I think your solution actually does this.)

You can reduce the time taken to work out $\Delta(g)$ to $O(p\log p)$ operations if you want to, by using a single FFT of order $p-1$ in the multiplicative group mod $p$. That's because this FFT converts the coefficients of a polynomial (of degree less than $p-1$) to the set of values it takes. So the inverse FFT, also an FFT, reconstructs the coefficients of $g$ from the values the polynomial takes: $g(1),\dots,g(p-1)$. These values can be worked out in $O(p)$ time because $g(r)=\frac{r+b}{r}\,g(r-1)$. The required FFT isn't a power-of-2 size, but it can be converted to three power-of-2 FFTs using a variant of Bluestein's algorithm that doesn't need $(2N)$-th roots of unity.
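This evaluation-interpolation idea can be sketched as follows (my own illustration, not the actual solution: the inverse transform below is a naive $O(p^2)$ DFT over the multiplicative group, which the real speed-up would replace with Bluestein-style power-of-2 FFTs):

```python
# Recover the coefficients of g(x) = (x+1)...(x+b) mod p (0 <= b < p-1)
# from its values g(1), ..., g(p-1), via an inverse DFT with a generator
# w of the multiplicative group mod p as the root of unity.
def g_coeffs(b, p):
    # values via the recurrence g(r) = g(r-1) * (r+b) / r mod p
    vals = [0] * p
    v = 1
    for t in range(2, b + 2):          # g(1) = 2 * 3 * ... * (b+1)
        v = v * t % p
    vals[1] = v
    for r in range(2, p):
        v = v * ((r + b) % p) % p * pow(r, p - 2, p) % p
        vals[r] = v
    # brute-force search for a generator w of (Z/pZ)*
    n = p - 1
    def is_gen(w):
        x, seen = 1, set()
        for _ in range(n):
            x = x * w % p
            seen.add(x)
        return len(seen) == n
    w = next(w for w in range(2, p) if is_gen(w))
    # naive inverse DFT of size p-1 over F_p: c_j = n^{-1} * sum_t V_t w^{-jt}
    winv = pow(w, p - 2, p)
    coeffs = []
    for j in range(n):
        s = 0
        for t in range(n):
            s = (s + vals[pow(w, t, p)] * pow(winv, j * t, p)) % p
        coeffs.append(s * (p - 1) % p)   # n = p-1 == -1, so n^{-1} == p-1 (mod p)
    return coeffs
```

For example, with $p=7$, $b=3$ this recovers $(x+1)(x+2)(x+3)=x^3+6x^2+11x+6\equiv x^3+6x^2+4x+6$.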

This is how one of my solutions worked. It made things maybe two or three times faster compared with a divide-and-conquer approach. (The gain wasn't huge because the overheads are such that p around 100,000 still counts as 'small'!) It takes 0.26 seconds using Python+numpy.


answered 13 Feb '18, 01:27 by alexthelemon, edited 14 Feb '18, 21:02

The time limit for this problem should have been 1 or 2 seconds (it's a hard problem), because I see some approaches passing that were not intended for this problem.


answered 13 Feb '18, 03:04 by userhacker, edited 13 Feb '18, 03:08

@userhacker No, 1 sec is also too long... it should be 0.000...1 nanoseconds.


answered 14 Feb '18, 00:01 by square_root_pi

Using OEIS, I noticed the given example was a sequence produced by Stirling numbers of the first kind. I then looked for formulas for Stirling numbers. I used FFT for numbers less than 100K.


answered 14 Feb '18, 08:13 by wmoise

Can anyone explain this line: "Let $\Delta(f)$ be the number of nonzero coefficients of $f$, then $\Delta(f\cdot g)=\Delta(f)\Delta(g)$, since $g$ has degree $<p-1$ and $f$ has nonzero coefficients only on $x^{i(p-1)}$ terms."

I mean, how is $\Delta(f\cdot g)=\Delta(f)\Delta(g)$?


answered 14 Feb '18, 08:33 by vivek_1998299

Please if anyone could help...

(14 Feb '18, 21:53) vivek_1998299

For example, suppose that $p=7$, and $f=x^{12}+2x^{6}+3$, and $g=x^5+x$. Then $f$ only has non-zero coefficients on $x^{6i}$ terms. $g$ has degree $<6$.

$$fg = x^{17}+x^{13}+2x^{11}+2x^{7}+3x^{5}+3x$$

$\Delta (fg) = 6 = 3 \times 2 = \Delta(f) \Delta(g)$.

(15 Feb '18, 02:03) john_smith_3

If possible, can you please provide a mathematical proof of this: $\Delta(f\cdot g)=\Delta(f)\Delta(g)$?

Thank you

(15 Feb '18, 15:17) vivek_1998299

A careful proof will be a maze of notation, and probably not helpful. Perhaps a sketch will help.

$f$ is the sum of $\Delta(f)$ terms, each of the form $a x^{r(p-1)}$.
$g$ is the sum of $\Delta(g)$ terms, each of the form $b x^s$, with $s<p-1$.

When calculating $fg$, we multiply each of the $\Delta(f)$ terms by each of the $\Delta(g)$ terms, and add the products together.

How many terms are there in $fg$? Certainly no more than $\Delta(f)\Delta(g)$. But could there be fewer? No, because each of the products is non-zero, and all the resulting powers $x^{r(p-1)+s}$ are different.
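The counting can also be checked numerically with the example from earlier in this thread ($p=7$, $f=x^{12}+2x^6+3$, $g=x^5+x$):

```python
# Multiply f and g mod 7, tracking terms as {exponent: coefficient},
# and confirm that no two products collide on the same exponent.
p = 7
f = {12: 1, 6: 2, 0: 3}        # x^12 + 2x^6 + 3: exponents are multiples of p-1 = 6
g = {5: 1, 1: 1}               # x^5 + x: degree < p-1
fg = {}
for df, a in f.items():
    for dg, b in g.items():
        fg[df + dg] = (fg.get(df + dg, 0) + a * b) % p
nonzero = sorted(d for d, c in fg.items() if c)
# every one of the 3*2 products lands on a distinct exponent with a nonzero coefficient
```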

(16 Feb '18, 03:17) john_smith_3

Got it!!!!

Thank you so much!!!

(16 Feb '18, 10:18) vivek_1998299

How can you be sure that, after multiplying $f$ and $g$, no coefficient becomes zero? I understand that all terms will have different powers, but that does not stop a coefficient from becoming $0 \bmod p$.


answered 26 Feb '18, 19:42 by chinmay0906


How can a coefficient be $0$? For convenience, let's say $q=p-1$. $f$ is a polynomial whose terms all have exponents that are multiples of $q$; $g$ is a polynomial of degree $<q$. Say $i$ is the exponent of a term in the result; then $i=jq+k$, where $x^{jq}$ is a power present in $f$ and $x^k$ is a power present in $g$.

So, as you can see, $i$ is always distinct: each multiplication gives a distinct term. Hence, for a coefficient to be $0$, $(a\cdot b)\bmod p$ would have to be $0$ (where $a$ is a coefficient in $f$ and $b$ one in $g$), which is impossible when $a\ne 0$, $b\ne 0$ and $p$ is prime.

(26 Feb '18, 20:08) vivek_1998299

Oh sorry, I get it now! This holds because $p$ is prime, so $a\cdot b\bmod p$ is never $0$. Thanks a lot!

(27 Feb '18, 10:38) chinmay0906