### PROBLEM LINKS

### DIFFICULTY

medium-hard

### PREREQUISITES

dp, matrix exponentiation, newton’s identities, elementary symmetric polynomials

### PROBLEM

Let f(x) = {a_1}^x + {a_2}^x + \dots + {a_n}^x \pmod {10^9 + 7}, where a_1, a_2, \dots, a_n are integers. You don’t know a_1, a_2, \dots, a_n. Instead you have the values of f(1), f(2), \dots, f(n). You are to find f(x) for some given x.

### QUICK EXPLANATION

Use Newton's identities to recover the elementary symmetric polynomials e_1, e_2, \dots, e_n from f(1), f(2), \dots, f(n) in \mathcal{O}(n^2) time. These give a linear recurrence with constant coefficients for f, which can then be solved for large X with matrix exponentiation, or in \mathcal{O}(n^2 \log X) per query using the Cayley–Hamilton theorem.

### EXPLANATION

#### Take small examples

Let us take small examples to understand the definition of f(x).

Let n = 1. Then f(x) = {a_1}^x, i.e. f(1) = a_1.

We are given the value of f(1), so we know the value of a_1. Hence we know the function f completely and can evaluate f(x) for any x.

Let n = 2. f(x) = {a_1}^x + {a_2}^x.

f(1) and f(2) are known values. We want to evaluate f(x) for any x.

We know a_1 + a_2 = f(1), and since {f(1)}^2 = f(2) + 2 \, a_1 a_2, we also know a_1 \cdot a_2 = ({f(1)}^2 - f(2)) / 2. Using these two values, we can find a_1 and a_2 as the roots of the quadratic t^2 - (a_1 + a_2) t + a_1 a_2 = 0.
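To make this concrete, here is a small sketch (function names are illustrative, not taken from the solutions) that recovers a_1 and a_2 over the plain integers, ignoring the modulus:

```python
import math

def recover_two(f1, f2):
    """Recover a_1, a_2 (over the integers, ignoring the modulus)
    from f(1) = a_1 + a_2 and f(2) = a_1^2 + a_2^2."""
    s = f1                    # a_1 + a_2
    p = (f1 * f1 - f2) // 2   # a_1 * a_2, since (a_1 + a_2)^2 = f(2) + 2*a_1*a_2
    # a_1 and a_2 are the roots of t^2 - s*t + p = 0
    disc = s * s - 4 * p
    r = math.isqrt(disc)
    return (s - r) // 2, (s + r) // 2

# Example: a_1 = 2, a_2 = 3 gives f(1) = 5, f(2) = 13.
print(recover_two(5, 13))  # -> (2, 3)
```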

#### Elementary symmetric polynomials

Elementary symmetric polynomials are one of the important constructs in mathematics: any symmetric polynomial can be represented as a polynomial in elementary symmetric polynomials. In other words, any symmetric polynomial can be represented by an expression involving only additions and multiplications of elementary symmetric polynomials. There is one elementary symmetric polynomial of each degree d in n variables (d \leq n), formed by adding together all products of d distinct variables. For brevity, we will refer to the elementary symmetric polynomial of degree d in n variables as ESP(n, d).

For example, let us consider only two variables, i.e. n = 2, a_1 and a_2:

ESP(2, 1) = a_1 + a_2

ESP(2, 2) = a_1 a_2

Now let us consider three variables, n = 3: a_1, a_2 and a_3:

ESP(3, 1) = a_1 + a_2 + a_3

ESP(3, 2) = a_1 a_2 + a_1 a_3 + a_2 a_3

ESP(3, 3) = a_1 a_2 a_3

A more complex example:

ESP(4, 2) = a_1 a_2 + a_1 a_3 + a_1 a_4 + a_2 a_3 + a_2 a_4 + a_3 a_4

In other words, ESP(n, d) is the sum of the products over all d-sized subsets of distinct variables. In general, there will be a total of \binom{n}{d} terms in it.
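As a quick illustration of this definition (not part of the solutions), ESP(n, d) can be computed by brute force over all d-sized subsets:

```python
from itertools import combinations
from math import comb, prod

def esp(vals, d):
    """ESP(n, d): sum over all d-sized subsets of the product of their variables."""
    return sum(prod(c) for c in combinations(vals, d))

a = [2, 3, 5, 7]
print(esp(a, 2))  # 2*3 + 2*5 + 2*7 + 3*5 + 3*7 + 5*7 = 101
# The number of terms is C(n, d):
print(len(list(combinations(a, 2))), comb(4, 2))  # both 6
```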

Any symmetric polynomial can be represented in terms of ESPs. Here are some examples:

{a_1}^2 + {a_2}^2 + \dots + {a_n}^2 = ESP(n, 1)^2 - 2 \, ESP(n, 2)

{a_1}^3 + {a_2}^3 + \dots + {a_n}^3 = ESP(n, 1)^3 - 3 \, ESP(n, 1) \, ESP(n, 2) + 3 \, ESP(n, 3)

#### Expressing ESPs in terms of power sums

Let us denote ESP(n, k) as e_k.

We see

e_1 = f(1)

2 \, e_2 = e_1 f(1) - f(2)

3 \, e_3 = e_2 f(1) - e_1 f(2) + f(3)

In general (these are Newton's identities), with e_0 = 1:

k \, e_k = \sum_{i = 1}^{k} (-1)^{i - 1} e_{k - i} f(i)

Since 10^9 + 7 is prime, the division by k is done by multiplying with the modular inverse of k. Using these relations, we can obtain the values of e_0, e_1, \dots, e_n from the values of f(1), f(2), \dots, f(n) in total \mathcal{O}(n^2) time.
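A sketch of this \mathcal{O}(n^2) computation (names are illustrative; the modulus is the problem's 10^9 + 7):

```python
MOD = 10**9 + 7

def esps_from_power_sums(f):
    """Given the power sums as a list f[0] = f(1), ..., f[n-1] = f(n),
    return [e_0, e_1, ..., e_n] modulo MOD via Newton's identities:
        k * e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * f(i)
    Division by k uses a modular inverse (MOD is prime)."""
    n = len(f)
    e = [1] + [0] * n  # e[0] = 1
    for k in range(1, n + 1):
        s = 0
        for i in range(1, k + 1):
            term = e[k - i] * f[i - 1] % MOD
            s = (s + term if i % 2 == 1 else s - term) % MOD
        e[k] = s * pow(k, MOD - 2, MOD) % MOD
    return e

# Example: a = (2, 3) gives f(1) = 5, f(2) = 13,
# and indeed e_1 = 2 + 3 = 5, e_2 = 2 * 3 = 6.
print(esps_from_power_sums([5, 13]))  # -> [1, 5, 6]
```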

#### Expressing power sums in terms of ESPS

Rewriting the above equations from the previous section, we get

f(1) = e_1

f(2) = e_1 f(1) - 2 \, e_2

f(3) = e_1 f(2) - e_2 f(1) + 3 \, e_3

In general,

f(k) = \sum_{i = 1}^{k - 1} (-1)^{i - 1} e_i f(k - i) + (-1)^{k - 1} k \, e_k

As we know, e_i for i > n will be zero.

So, for k > n we get

f(k) = e_1 f(k - 1) - e_2 f(k - 2) + \dots + (-1)^{n - 1} e_n f(k - n)

For instance, for k = n + 1 the trailing term (-1)^n (n + 1) e_{n + 1} vanishes, as e_{n + 1} = 0.

We see that the right-hand side has only n terms. In general, f(X) for every X > n can be computed from the previous n values using this representation in terms of ESPs.
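A naive sketch that extends the power sums one step at a time using this recurrence (illustrative only; iterating k up to X is fine for small X but not for X up to 10^{18}):

```python
MOD = 10**9 + 7

def extend_power_sums(f, e, X):
    """Given f(1..n) and e_1..e_n (lists f and e of length n), compute f(X)
    for X > n by iterating the recurrence
        f(k) = sum_{i=1}^{n} (-1)^(i-1) * e[i] * f(k-i)  (mod MOD).
    Naive O((X - n) * n) time."""
    n = len(f)
    vals = list(f)  # vals[k-1] = f(k)
    for k in range(n + 1, X + 1):
        s = 0
        for i in range(1, n + 1):
            term = e[i - 1] * vals[k - i - 1] % MOD
            s = (s + term if i % 2 == 1 else s - term) % MOD
        vals.append(s)
    return vals[X - 1]

# Example: a = (2, 3), so f(x) = 2^x + 3^x, with e_1 = 5, e_2 = 6.
print(extend_power_sums([5, 13], [5, 6], 5))  # -> 275 = 2^5 + 3^5
```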

#### Solving recurrence relations

Now, we see that f(X) for X > n satisfies a recurrence relation with n terms. The coefficients (-1)^{i - 1} e_i in this recurrence are constants, so it is a linear recurrence with constant coefficients.

**Direct matrix multiplication for each query**

Using direct matrix exponentiation, we get \mathcal{O}(n^3 \log X) time for a single query of f(X). There are Q queries, so the total time will be \mathcal{O}(Q \, n^3 \log X). This can solve up to three subtasks.
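A sketch of this approach, assuming the standard companion-matrix construction (function names are illustrative):

```python
MOD = 10**9 + 7

def mat_mul(A, B):
    """Multiply two square matrices modulo MOD in O(n^3)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % MOD
             for j in range(n)] for i in range(n)]

def mat_pow(M, p):
    """Compute M^p by binary exponentiation: O(n^3 log p)."""
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while p:
        if p & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        p >>= 1
    return R

def f_of(f, coeffs, X):
    """f(X) via matrix exponentiation, where coeffs[i-1] is the coefficient
    of f(k - i) in the recurrence (here coeffs = [(-1)^(i-1) e_i mod MOD])."""
    n = len(f)
    if X <= n:
        return f[X - 1] % MOD
    # Companion matrix of the recurrence: first row holds the coefficients,
    # the rows below shift the state down by one.
    M = [coeffs[:]] + [[int(j == i) for j in range(n)] for i in range(n - 1)]
    R = mat_pow(M, X - n)
    state = f[::-1]  # state vector (f(n), f(n-1), ..., f(1))
    return sum(R[0][j] * state[j] for j in range(n)) % MOD

# Example: f(x) = 2^x + 3^x, recurrence f(k) = 5 f(k-1) - 6 f(k-2).
print(f_of([5, 13], [5, MOD - 6], 10))  # -> 60073 = 2^10 + 3^10
```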

**Slightly faster solution**

We can precompute M, M^2, M^4, M^8, M^{16}, \dots, M^{2^p}, where p is the largest number satisfying 2^p \leq 10^{18}.

Multiplying a matrix with a vector takes \mathcal{O}(n^2) time. A query f(X) can therefore be answered with at most \log X multiplications of the precomputed matrices with a vector, i.e. \mathcal{O}(n^2 \log X) time per query.

The overall time is dominated by the precomputation, as the number of queries is small. So, we get \mathcal{O}(n^3 \log 10^{18}) time.

Let us take a small example to describe this in detail: the simplest example of finding Fibonacci numbers, f(k) = f(k - 1) + f(k - 2) with f(0) = 0, f(1) = 1.

Let

M = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}

and the vector v be

v = \begin{pmatrix} f(1) \\ f(0) \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}

Assume we have precomputed M, M^2, M^4, M^8, etc.

We want to compute f(8). We know that f(8) is the first component of M^7 v. Since 7 = 1 + 2 + 4, we can write M^7 as M^1 \cdot M^2 \cdot M^4, so f(8) is the first component of

M^1 \cdot M^2 \cdot M^4 \cdot v

Multiply the matrices in the **order from right to left** instead of left to right, so that each multiplication is a matrix times a vector. Going from right to left, we do a total of 3 matrix-vector multiplications, each taking \mathcal{O}(n^2) time. If we went from left to right, we would need two matrix-matrix multiplications (\mathcal{O}(n^3) each) and one matrix-vector multiplication.
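A sketch of the precomputation and the right-to-left query evaluation, using the Fibonacci example (names are illustrative):

```python
MOD = 10**9 + 7

def mat_mul(A, B):
    """Matrix-matrix product modulo MOD: O(n^3)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % MOD
             for j in range(n)] for i in range(n)]

def mat_vec(M, v):
    """Matrix-vector product modulo MOD: O(n^2)."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) % MOD
            for i in range(len(M))]

def precompute_powers(M, max_log):
    """Precompute [M, M^2, M^4, ..., M^(2^max_log)]."""
    P = [M]
    for _ in range(max_log):
        P.append(mat_mul(P[-1], P[-1]))
    return P

def apply_power(P, p, v):
    """Compute M^p v using the precomputed powers P[i] = M^(2^i),
    multiplying matrix-by-vector from right to left: O(n^2 log p)."""
    i = 0
    while p:
        if p & 1:
            v = mat_vec(P[i], v)
        p >>= 1
        i += 1
    return v

M = [[1, 1], [1, 0]]          # Fibonacci companion matrix
P = precompute_powers(M, 60)  # enough for exponents up to 10^18
v = [1, 0]                    # (f(1), f(0))
print(apply_power(P, 7, v)[0])  # -> 21 = f(8), the 8th Fibonacci number
```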

#### Subtask 4

We need to avoid the expensive precomputation step. Using the Cayley–Hamilton theorem (often called the Kitamasa method), we can find f(X) in \mathcal{O}(n^2 \log X) time per query, without any \mathcal{O}(n^3) matrix multiplications, for a total of \mathcal{O}(Q \, n^2 \log X). A nice explanation of this can be found here.
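A sketch of this approach (illustrative, not the setter's code): compute x^{X - 1} modulo the characteristic polynomial of the recurrence by binary exponentiation, with naive \mathcal{O}(n^2) polynomial multiplication.

```python
MOD = 10**9 + 7

def poly_mulmod(a, b, rec):
    """Multiply polynomials a, b (coefficient lists, lowest degree first) and
    reduce modulo the characteristic polynomial
        x^n - rec[0] x^(n-1) - ... - rec[n-1],
    using x^n == rec[0] x^(n-1) + ... + rec[n-1]. O(n^2)."""
    n = len(rec)
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                prod[i + j] = (prod[i + j] + ai * bj) % MOD
    for d in range(len(prod) - 1, n - 1, -1):  # reduce high degrees
        c = prod[d]
        if c:
            prod[d] = 0
            for i in range(n):
                prod[d - 1 - i] = (prod[d - 1 - i] + c * rec[i]) % MOD
    return prod[:n]

def kitamasa(f, rec, X):
    """f(X) for the linear recurrence f(k) = sum_i rec[i-1] * f(k-i),
    given base values f = [f(1), ..., f(n)]. O(n^2 log X)."""
    n = len(rec)
    if X <= n:
        return f[X - 1] % MOD
    # Compute x^(X-1) modulo the characteristic polynomial.
    result, base, p = [1], [0, 1], X - 1
    while p:
        if p & 1:
            result = poly_mulmod(result, base, rec)
        base = poly_mulmod(base, base, rec)
        p >>= 1
    # If x^(X-1) == sum_j d_j x^j, then f(X) == sum_j d_j f(j + 1).
    return sum(d * f[j] for j, d in enumerate(result)) % MOD

# Example: f(x) = 2^x + 3^x, i.e. f(k) = 5 f(k-1) - 6 f(k-2).
print(kitamasa([5, 13], [5, MOD - 6], 10))  # -> 60073 = 2^10 + 3^10
```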

### SETTER’S SOLUTION

Can be found here.

### TESTER’S SOLUTION

Can be found here.