PRECPAIR & BIPPAIR - Editorial

Problem Link:

Setter: Zac Friggstad
Tester: Danya Smelskiy

Difficulty:

Super-Hard

Problem Statement: Paraphrased

Given an undirected graph G = (V,E) where each edge e \in E has an integral value v_e, determine all k such that there is a perfect matching in G with total value k.

Below we describe an algorithm with running time O(\nu \cdot n^4) where \nu = \max_{e \in E} v_e and n = |V|. Furthermore, the bottleneck of the running time is fast in practice (the constant is small), as it is mostly just some nested loops performing Gaussian elimination.

NOTE: Much of this editorial might seem very new to most readers. But it follows from a solid understanding of algebraic approaches to matching theory. If this is very new to you, consider it a topic to read up on!

Algorithm

The algorithm is summarized below, though it won’t mean much if you don’t have the proper background. An exposition explaining the algorithm follows.

The overall idea is to use Tutte matrices with polynomial entries in a way that separates the contributions of matchings of different values into coefficients of different terms of a polynomial we can compute. For the case of BIPPAIR, one can use a simpler approach that still has you compute a polynomial, but with less overhead in the details.

Pseudocode

  • Fix a large prime p (2^{31}-1 suffices for the bounds on this problem).

  • For each e \in E, let x_e be a random integer mod p.

  • Let \nu denote the maximum value of any edge.

  • For y = 0, \ldots, \nu \cdot n:

    • Let M be a V \times V matrix with entries being integers mod p. Set:

      • M_{u,v} = 0 if uv is not an edge.
      • M_{u,v} = y^{v_e} \cdot x_e if e = uv has u < v (according to some fixed ordering of the nodes, like the indexing in the problem statement).
      • M_{u,v} = -y^{v_e} \cdot x_e if e = uv has v < u.
        The point is that M is skew-symmetric (M^T = -M).

    • Evaluate the determinant \det M modulo p and call this value z_y.

  • Interpolate a polynomial g() of degree \leq \nu \cdot n such that g(y) = z_y for each y as above.

  • Compute a polynomial f such that f^2 = g (a square root of g; one always exists in this context). You can normalize g beforehand to make it monic, which makes this step easier to implement; we are only concerned with which coefficients are nonzero.

  • Output all k such that the coefficient of y^k in f() is nonzero.

The overall running time is O(\nu \cdot n^4 + \nu^2 \cdot n^2), where the second term comes from the interpolation step; since \nu \leq n for the bounds in this problem, this is O(\nu \cdot n^4).

The bottleneck O(\nu \cdot n^4) comes from computing O(\nu \cdot n) determinants, each via standard Gaussian elimination in O(n^3) time. This does not account for the cost of computing inverses mod p, which take O(\log p) arithmetic steps each; a careful approach uses only O(\nu \cdot n) inverses overall. Mine was less careful and used O(\nu \cdot n^2) of them, which was still fast enough as it was not the bottleneck in the running time.

Comment: This is a randomized algorithm that succeeds with high probability: at least 1 - \frac{\nu n^2}{p}. For the bounds in the problem and the prime mentioned above, this is very close to 1. It is an open problem to find a deterministic, polynomial time algorithm that solves the problem even in the very special case of bipartite graphs and edge weights 0 or 1.

References

The main reference for the core idea is: Matching is as Easy as Matrix Inversion, K. Mulmuley, U. V. Vazirani, and V. V. Vazirani, Combinatorica 7, 107–113, 1987.

https://dl.acm.org/doi/pdf/10.1145/28395.383347

They briefly comment that one can solve this problem by computing the Pfaffian of a particular matrix using polynomial interpolation (see Section 6 under the heading Exact Matching). The idea is attributed to L. Lovász in the paper. More details are sketched in the rest of this editorial.

But the algorithm in this paper is probably impractical. It uses a highly interesting, but difficult to use in practice, idea called the isolation lemma. One can circumvent this difficulty by using a different approach based on the Schwartz-Zippel lemma (see below). So, the expected solution combines the theory of Tutte matrices, random sampling, and polynomial interpolation.

Course notes with this specific approach are here:
http://www-math.mit.edu/~goemans/18438S14/lec4-algmat.pdf

These notes only focus on the case where all weights are 0 or 1 (i.e. red edges or blue edges). Still, the approach generalizes readily to this problem with larger, but still bounded, weights.

Another reference you might want to follow up with is the Wikipedia page on Pfaffians:
https://en.wikipedia.org/wiki/Pfaffian

Warmup: Bipartite Graphs

First: Just Matchings - No Value

We show an alternate way to decide if a bipartite graph has any perfect matching. The values will be incorporated later. The goal of this section is to present/remind the reader of an algebraic technique for determining if a bipartite graph has a perfect matching using determinants and random sampling. Throughout, let n = |V|.

Suppose G is bipartite with sides L, R such that |L| = |R| = n/2. We consider the bi-adjacency matrix M as being the matrix with rows indexed by L and columns indexed by R. But instead of using 1 to indicate the presence of an edge, we use a variable x_e that is specific to that edge.

That is, M_{u,v} = 0 for u \in L, v \in R if there is no edge uv \in E. Otherwise, M_{u,v} = x_e.

The key is to look at determinants. Recall for a square matrix A of order n that the determinant can be expressed as a polynomial in the entries of the matrix

\det A = \sum_{\sigma \in S_n} {\rm sgn}(\sigma) \cdot \prod_{i=1}^n A_{i, \sigma(i)}.

Here, S_n is the set of all permutations/bijections \sigma : \{1, \ldots, n\} \rightarrow \{1, \ldots, n\} and {\rm sgn}(\sigma) is the signature of \sigma, which is simply \pm 1 depending on whether \sigma has an odd (-1) or even (+1) number of inversions. Equivalently, looking at a cycle decomposition of \sigma, {\rm sgn}(\sigma) is -1 if there are an odd number of even-length cycles and +1 if there are an even number of even-length cycles. For example, the permutation with cycle decomposition (1\,2\,3)(4\,5) has exactly one even-length cycle, so its signature is -1. If you truly want to understand this editorial, you would do well to remind yourself what is meant by this.

Consider the bi-adjacency matrix M. We can view a permutation \sigma in the definition of \det M as being a bijection L \rightarrow R. Note, a bijection is simply a proposed matching that matches u \in L with \sigma(u) \in R. The product

\prod_{u \in L} M_{u, \sigma(u)} will be a non-zero polynomial if and only if \sigma corresponds to an actual perfect matching in graph G (i.e. u\sigma(u) \in E for every u \in L).

That is,

\det M = \sum_{\sigma : \text{ a matching in } G} {\rm sgn}(\sigma) \cdot \prod_{u \in L} x_{u\sigma(u)}.

Furthermore, two different matchings involve different monomials, so there is no cancellation. Thus, \det M is a nonzero polynomial if and only if G has at least one perfect matching.
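As a tiny worked example (not from the problem), take L = \{1, 2\}, R = \{3, 4\} with all four edges present. Then

M = \begin{pmatrix} x_{13} & x_{14} \\ x_{23} & x_{24} \end{pmatrix} \qquad \text{and} \qquad \det M = x_{13}x_{24} - x_{14}x_{23},

and the two monomials correspond exactly to the two perfect matchings \{13, 24\} and \{14, 23\}; if, say, edge 14 were absent, the monomial x_{14}x_{23} would simply not appear. In general, \det M contains one monomial per perfect matching of G.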

Now, this polynomial can have exponentially many terms so it is hopeless to compute it explicitly. But we just want to know if it is the zero polynomial. For any given assignments to the variables x_e we can evaluate \det(M) in O(n^3) steps. If we get a nonzero value, we know the graph has a perfect matching! But if this determinant was 0, were we just unlucky? That is: if the polynomial is not zero, can we pick the x_e values such that the determinant will not vanish?

It turns out plugging in random values to the x_e will almost certainly cause a nonzero polynomial to not vanish, which is what we want. See the next bit for details.

Schwartz-Zippel Lemma

Let \mathbb F denote a field and S \subset \mathbb F be a finite set. The relevant case in this problem is when \mathbb F is the field of integers modulo p for a large prime number p, in which case we take S = \mathbb F.

The degree of a multivariate monomial x_1^{a_1}x_2^{a_2} \cdot \ldots \cdot x_k^{a_k} is \sum_{i=1}^k a_i (the sum of all the exponents). The degree of a nonzero multivariate polynomial is the maximum degree of its monomials. For example, the degree of x\cdot y^2 + 2 \cdot x^3\cdot y is 4 (look at the last monomial).

The following is called a “lemma” for historical purposes, but it is a fundamental result.

Schwartz-Zippel Lemma

Let f \in \mathbb F[x_1, \ldots, x_n] be a nonzero polynomial of degree d. If we sample z_1, \ldots, z_n independently and uniformly at random from S, then

{\bf Pr}[f(z_1, \ldots, z_n) = 0] \leq \frac{d}{|S|}.

Note, the special case of univariate polynomials (n = 1) is simply saying a nonzero polynomial of degree d has at most d roots.

See a short proof here:
https://eccc.weizmann.ac.il/report/2010/096/revision/1/download/

Summary

To summarize the last two sections: to determine if a bipartite graph G has a perfect matching one can take the following algebraic approach: pick a large prime p. For each e \in E, sample x_e randomly from \{0, \ldots, p-1\} and form the bi-adjacency matrix M using these sampled values.

  • If G has no perfect matching, then \det M = 0 always (for any sampled values) because the underlying polynomial is the zero polynomial.

  • If G has a perfect matching, then by the Schwartz-Zippel Lemma the probability \det M is nonzero is at least 1-\frac{n/2}{p} because every monomial in the determinant polynomial (before sampling the x_e) has degree n/2. Since n \leq 100 in the problem, taking p = 2^{31}-1 more than suffices for this to succeed with immensely high probability.
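To make the summary concrete, below is a minimal sketch of this randomized test. It is illustrative only: it assumes the vertices are numbered so that 1, \ldots, n/2 form L and n/2+1, \ldots, n form R (an assumption about the input format, not something from the problem statement), and det_mod_p is just a bare-bones Gaussian elimination.

// Minimal sketch: randomized perfect-matching test for a bipartite graph.
// Assumes vertices 1..n/2 are the left side L and n/2+1..n are the right side R.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
 
const ll p = 2147483647LL; // 2^31 - 1, the prime suggested above
 
// Determinant of M modulo p via Gaussian elimination (M is taken by value).
ll det_mod_p(vector<vector<ll>> M) {
  int h = M.size();
  ll det = 1;
  for (int c = 0; c < h; ++c) {
    int piv = c;
    while (piv < h && M[piv][c] == 0) ++piv;
    if (piv == h) return 0;                         // no pivot: det is 0
    if (piv != c) { swap(M[piv], M[c]); det = p - det; }
    ll inv = 1, b = M[c][c], e = p - 2;             // invert the pivot (Fermat)
    while (e) { if (e & 1) inv = inv * b % p; b = b * b % p; e >>= 1; }
    det = det * M[c][c] % p;
    for (int r = c + 1; r < h; ++r) {
      ll f = M[r][c] * inv % p;                     // factor eliminating M[r][c]
      for (int cc = c; cc < h; ++cc)
        M[r][cc] = ((M[r][cc] - f * M[c][cc]) % p + p) % p;
    }
  }
  return det;
}
 
int main() {
  int n, m;
  cin >> n >> m;                                    // n vertices (n even), m edges
  int h = n / 2;
  mt19937_64 rng(chrono::steady_clock::now().time_since_epoch().count());
  vector<vector<ll>> M(h, vector<ll>(h, 0));
  for (int i = 0; i < m; ++i) {
    int u, v;
    cin >> u >> v;                                  // u in {1..h}, v in {h+1..n}
    M[u - 1][v - h - 1] = rng() % p;                // the random value x_e
  }
  // A nonzero determinant certifies that a perfect matching exists;
  // zero means there is (with high probability) no perfect matching.
  cout << (det_mod_p(M) ? "has a perfect matching" : "probably has no perfect matching") << '\n';
  return 0;
}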

Bipartite Graphs - Now With Values!

Now we consider the problem at hand with the only restriction being that G is bipartite. That is, edges e now have values v_e. We use the same variables x_e for edges e \in E. But now we add one more variable y that will be used to encode the weight of an edge. We will take advantage of the fact that all edge weights are very small integers.

Form the following bi-adjacency matrix for G where, again, L and R are the two sides of the bipartite graph. For u \in L, v \in R we set:

  • M_{u,v} = 0 if uv is not an edge in G.
  • M_{u,v} = y^{v_e} \cdot x_e if uv is an edge in G (recall the graph is simple, so there are no parallel edges).

Now consider the polynomial \det M. As the maximum edge weight is \nu, we have the total weight of any matching is at most \nu\cdot \frac{n}{2}. For each 0 \leq k \leq \nu \cdot \frac{n}{2} let S^k_{\text {match}} be all bijections \sigma : L \rightarrow R corresponding to matchings in G of total weight exactly k. Using some simple manipulations, one can see:

\det M = \sum_{k=0}^{\nu \cdot \frac{n}{2}} y^k \cdot \sum_{\sigma \in S^k_{\text{match}}} {\rm sgn}(\sigma) \prod_{u \in L} x_{u,\sigma(u)}.

Let’s simplify this as \sum_{k=0}^{\nu \cdot \frac{n}{2}} y^k \cdot f_k({\bf x}) where {\bf x} refers to the collection of variables \{x_e\}_{e \in E}. Then f_k({\bf x}) is not the zero polynomial if and only if G has a perfect matching with value exactly k. Let N = \{k : f_k({\bf x}) \text{ is not the zero polynomial}\} be the set of indices we want to compute!

If we plug in random integers (mod p again) for the x_e variables but leave y as an indeterminate, we see that \det M becomes a univariate polynomial in y. Furthermore, for each k such that f_k({\bf x}) is not the zero polynomial, the coefficient of y^k remains nonzero with probability at least 1 - \frac{n/2}{p}. Taking the union bound over all possible k, we see:

{\bf Pr}[f_k({\bf x}) = 0 \text{ for some } k \in N] \leq |N| \cdot \frac{n/2}{p} \leq \frac{\nu n^2}{p}.

Since n \leq 100 and \nu \leq 20, this is still plenty small for p = 2^{31}-1.

But how can we compute \det M when the x_e are sampled but y is left as an indeterminate? Well, one could perform Gaussian elimination over the field of rational functions in y. But this is annoying (and possibly too inefficient) to implement. A cleaner solution is to interpolate the polynomial.

Interpolating Polynomials

Now consider the following interpolation problem. Given d+1 pairs of values (x_0, y_0), \ldots, (x_d, y_d) where all x-values are distinct (but not necessarily the y values), can we find a degree \leq d polynomial f such that f(x_i) = y_i for all 0 \leq i \leq d? If so, it must be unique (distinct polynomials of degree \leq d cannot agree on d+1 different inputs).

There are many methods that do this; perhaps the simplest to explain is the method of Lagrangian interpolation. For each 0 \leq i \leq d let g_i(x) = \prod_{j\neq i} (x-x_j) (i.e. over all 0 \leq j \leq d except j = i). Observe that since the x-values are distinct, g_i(x_i) \neq 0. Now consider:

f(x) = \sum_{i=0}^d y_i \cdot \frac{g_i(x)}{g_i(x_i)}.

It is easy to verify f(x_i) = y_i by observing that g_i(x_j) = 0 for j \neq i. Furthermore, one can construct this f(x) polynomial from (x_0, y_0), \ldots, (x_d, y_d) in O(d^2) steps. For example, one could compute \prod_{i=0}^d (x-x_i) as a preprocessing step and then divide it by x-x_i using polynomial long division in O(d) time to obtain each g_i(x) when required, for a total of O(d^2) steps.
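As a tiny sanity check of the formula (an illustrative example, not from the problem), interpolating through the points (0, 1), (1, 2), (2, 5) gives

f(x) = 1 \cdot \frac{(x-1)(x-2)}{(0-1)(0-2)} + 2 \cdot \frac{x(x-2)}{(1-0)(1-2)} + 5 \cdot \frac{x(x-1)}{(2-0)(2-1)} = x^2 + 1,

which indeed satisfies f(0) = 1, f(1) = 2 and f(2) = 5.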

Putting It All Together

Here is the whole algorithm for bipartite graphs.

  • fix a large prime p (substantially larger than \nu \cdot n^2)
  • sample x_e \sim \{0, \ldots, p-1\} independently for each e \in E
  • {\bf for} y = 0, 1, \ldots, \nu \cdot \frac{n}{2} {\bf do}
    • Form a matrix M with row indices L and column indices R by setting M_{u,v} = 0 if uv \notin E and M_{u,v} = y^{v_e} \cdot x_e if e = uv \in E.
    • Evaluate \det M (an integer mod p) and call this value z_y.
  • Use polynomial interpolation with the values \{(y, z_y)\}_{0 \leq y \leq \nu \cdot \frac{n}{2}} to construct a polynomial f(y).
  • Writing f = \displaystyle \sum_{i=0}^{\nu \cdot \frac{n}{2}} a_i \cdot y^i, output all i such that a_i \neq 0.

If there is no matching of value k, for sure a_k = 0. If there is a matching of value k, then a_k \neq 0 with very high probability. Overall, with probability at least 1-\frac{\nu n^2}{p}, the set \{k : a_k \neq 0\} is the set of all values k such that a perfect matching with value k exists.

General (non-Bipartite) Graphs

The idea for non-bipartite graphs is similar; the biggest change is in how we handle matchings in non-bipartite graphs using determinants. Thankfully, this is possible with very little algorithmic overhead! But the idea is a bit more involved.

Truly understanding this approach requires the following technical definitions. We assume n = |V| is even (which it is for this problem). The Pfaffian, defined next, corresponds loosely to the determinant of a matrix with the permutations restricted to products of 2-cycles, though this is not quite an exact interpretation.

Pfaffians of Matrices

Definition: Pfaffian
Let A be a square matrix of size n \times n with n even. Write k = n/2. The {\bf Pfaffian} of A, denoted {\rm Pf}(A), is

\frac{1}{2^k \cdot k!} \sum_{\sigma \in S_n} {\rm sgn}(\sigma) \cdot \prod_{i=1}^k A_{\sigma(2i-1), \sigma(2i)}.

That is, the product only has n/2 terms instead of the n terms appearing in the determinant. The Pfaffian is hard to evaluate from this definition in general. But for skew-symmetric matrices (i.e. A^T = -A where A^T is the transpose of A), we have the following fundamental result due to Cayley:

Theorem
For a skew-symmetric matrix A, \det A = ({\rm Pf}(A))^2.

See the lecture notes mentioned in the references section for a terse proof. This is the biggest omission from this editorial.
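For instance, for a generic 4 \times 4 skew-symmetric matrix (a standard small example):

A = \begin{pmatrix} 0 & a & b & c \\ -a & 0 & d & e \\ -b & -d & 0 & f \\ -c & -e & -f & 0 \end{pmatrix}, \qquad {\rm Pf}(A) = af - be + cd, \qquad \det A = (af - be + cd)^2.

Note the three monomials af, be, cd correspond to the three ways of pairing up the index set \{1,2,3,4\}: namely \{12, 34\}, \{13, 24\} and \{14, 23\}.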

So we can compute the Pfaffian of A up to the “sign” \pm 1 by computing \det A and then taking its square root.

The relevance of the Pfaffian is apparent when you realize the terms in the product range over a matching of the index set: \sigma(2i-1) is “matched” with \sigma(2i) in each term. More on this below.

General Matching - No Values

We again consider how to tell if a graph has a perfect matching using algebraic tricks: no edge weights are involved here. We fix an arbitrary ordering of vertices and now think of them as being numbered 1, \ldots, n.

Now let x_e be a variable for each edge. Construct the skew-symmetric adjacency matrix M with variables x_e as follows. For each u < v pair of vertices,

  • M_{u,v} = 0 if uv is not an edge.
  • M_{u,v} = x_e if e = uv is an edge.
  • M_{v,u} = -x_e if e = uv is an edge.

That is, the upper triangle has positive occurrences of x_e and the lower triangle has negative occurrences of x_e.

Theorem
{\rm Pf}(M) is not the zero polynomial if and only if G has a perfect matching.

Proof Sketch
Consider a nonzero term of {\rm Pf}(M) corresponding to some permutation \sigma. Then M_{\sigma(2i-1), \sigma(2i)} is nonzero for each 1 \leq i \leq k, so \sigma describes a perfect matching in G.

Conversely, for any perfect matching we can write such a term. For example, if u and v are matched then we make sure some 1 \leq i \leq k has \sigma(2i-1) = u and \sigma(2i) = v. There are many ways to construct a permutation this way (swap the order of u and v, or rearrange the matching edges so they appear with different indices i),

but they all contribute the same term: the coefficient {\rm sgn}(\sigma) \cdot \frac{1}{2^k \cdot k!} times the same monomial. To see the contributions agree, note that rearranging two indices i, j (i.e. swapping \sigma(2i-1) with \sigma(2j-1) and \sigma(2i) with \sigma(2j)) is equivalent to two swaps in \sigma, which does not change its signature, while swapping \sigma(2i-1) with \sigma(2i) negates the signature but also negates M_{\sigma(2i-1), \sigma(2i)} by skew-symmetry. Since each perfect matching corresponds to exactly 2^k \cdot k! permutations, these contributions exactly cancel the \frac{1}{2^k \cdot k!} in front, and each perfect matching contributes exactly one monomial to {\rm Pf}(M).
End of Proof Sketch
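To make this concrete, consider the 4-cycle on vertices 1, 2, 3, 4 with edges 12, 23, 34, 14 (a small illustrative example). Its skew-symmetric matrix has M_{12} = x_{12}, M_{14} = x_{14}, M_{23} = x_{23}, M_{34} = x_{34} (and M_{13} = M_{24} = 0), so

{\rm Pf}(M) = M_{12}M_{34} - M_{13}M_{24} + M_{14}M_{23} = x_{12}x_{34} + x_{14}x_{23},

with one monomial for each of the two perfect matchings \{12, 34\} and \{14, 23\}.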

So we do the same as before: sample random x_e values as integers mod p for a large enough prime p and evaluate \det M. Since \det M = ({\rm Pf}(M))^2, if G has a perfect matching then \det M is nonzero with probability at least 1 - \frac{n/2}{p} (i.e. the Pfaffian does not vanish with this probability).

Finally - General Matching with Values

Form M as in the last section, except instead of x_e (or -x_e), use y^{v_e} x_e (or -y^{v_e} x_e) where y is a single new variable. Similar to the bipartite case, but with more care in the argument, one can show that for these variables \{x_e\}_{e \in E} and y we have:

{\rm Pf}(M) = \sum_{k=0}^{\nu \cdot n/2} y^k \cdot g_k({\bf x})

where g_k({\bf x}) is a nonzero polynomial iff G has a perfect matching of value exactly k.

Sample a random x_e value independently at random from \{0, \ldots, p-1\} for every edge e. Then \det M is a polynomial in y and g_k({\bf x}) does not vanish with high probability if there is a perfect matching with value k.

Interpolate \det M. At this point, the polynomial f(y) we get is the square of the univariate (in y) polynomial {\rm Pf}(M). So we simply compute a square root g(y) of f(y) to get the Pfaffian polynomial with these random x values but y still an indeterminate. We read off the coefficients of this g(y) to see which are not zero.

The only thing to comment on here is that computing a square root of f(y) directly would involve taking a modular square root (mod p) of its leading coefficient, which can be done quickly with the Tonelli-Shanks algorithm. But we can avoid this: normalize f(y) so it is monic. Given a monic polynomial that is a perfect square, we can compute its square root using an elementary algorithm that recovers one coefficient at a time from the already-known higher coefficients.
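Concretely, writing [f]_i for the coefficient of y^i in f, this is the recurrence that sqroot_monic_poly() in the setter's code below implements: if f(y) = g(y)^2 where g(y) = y^k + \sum_{j < k} g_j y^j is monic of degree k, then comparing the coefficients of y^{k+j} on both sides gives

[f]_{k+j} = 2 g_j + \sum_{i=j+1}^{k-1} g_i \, g_{k+j-i}, \qquad \text{so} \qquad g_j = \frac{1}{2}\left( [f]_{k+j} - \sum_{i=j+1}^{k-1} g_i \, g_{k+j-i} \right),

and the coefficients g_{k-1}, g_{k-2}, \ldots, g_0 can be recovered in this order using only the previously computed higher coefficients and a single inverse of 2 mod p.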

SOLUTIONS:

Setter
/*
  precisepairing solution
  Zachary Friggstad, 2020
 
  See the editorial.
*/
 
#include <bits/stdc++.h>
 
using namespace std;
 
using ll = long long;
using vl = vector<ll>;
 
const ll P = 1000000007; // a prime
 
// reduces v mod P to the range {0,...,P-1} even if v is negative
ll modp(ll v) { return ((v%P)+P)%P; }
 
// Computes the inverse mod P using the extended Euclidean algorithm.
ll invert(ll a) {
  ll r0 = P, r1 = a, t0 = 0, t1 = 1;
 
  while (r1) {
    ll q = r0/r1;
    r0 -= q*r1; swap(r0, r1);
    t0 -= q*t1; swap(t0, t1);
  }
 
  t0 = modp(t0);
  assert(modp(t0*a) == 1);
  return modp(t0);
}
 
// Computes the determinant of matrix M modulo P using Gaussian elimination.
// Assumes the matrix is square.
ll determinant(vector<vl>& M) {
  int n = M.size();
 
  ll det = 1;
  for (int r = 0; r < n; ++r) {
    // find the pivot row
    int piv = r;
    while (piv < n && M[piv][r] == 0) ++piv;
    if (piv == n) return 0; // terminate Gaussian elimination early, we know det == 0
 
    // swap the pivot row to r and negate the determinant
    if (piv != r) {
      det = modp(-det);
      swap(M[r], M[piv]);
    }
 
    // normalize this "pivot" row to have a leading 1
    ll inv = invert(M[r][r]);
    det = modp(det*M[r][r]);
    for (int c = n-1; c >= r; --c) M[r][c] = modp(M[r][c]*inv);
 
    // subtract multiples of this row from other rows
    for (int rr = r+1; rr < n; ++rr)
      if (M[rr][r] != 0) // saves some in practice for sparse instances
        for (int c = n-1; c >= r; --c)
          M[rr][c] = modp(M[rr][c] - M[r][c]*M[rr][r]);
  }
 
  return det;
}
 
// multiply two polynomials and trim leading 0s
// if the result is 0, then only a single 0 will be left
vl mulpoly(const vl& a, const vl& b) {
  vl prod(a.size()+b.size()+1, 0);
  for (int i = 0; i < a.size(); ++i)
    for (int j = 0; j < b.size(); ++j)
      prod[i+j] = modp(prod[i+j] + a[i]*b[j]);
  while (prod.size() > 1 && prod.back() == 0) prod.pop_back();
  return prod;
}
 
// evaluate a polynomial at point x using Horner's rule
ll eval_horner(const vl& poly, ll x) {
  ll val = 0;
  for (int i = poly.size()-1; i >= 0; --i) val = modp(val*x + poly[i]);
  return val;
}
 
// fast modular exponentiation - uses 0^0 == 1
ll fastexp(ll x, ll m) {
  ll res = 1;
  while (m) {
    if (m&1) res = modp(res*x);
    x = modp(x*x);
    m >>= 1;
  }
  return res;
}
 
// Assumes x.size() == y.size()
// Returns the unique polynomial f() with degree < x.size()
// such that f(x[i]) == y[i]
// Uses Lagrangian interpolating polynomials
vl interpolate(const vl& x, const vl& y) {
  int degree = x.size()-1;
  assert(y.size()-1 == degree);
 
  // construct the polynomial F(x) = (x-x[0])*(x-x[1])*...*(x-x[degree])
  vl poly(degree+1, 0), numer(1, 1);
  for (int i = 0; i <= degree; ++i) {
    // multiply numer by x-x[i]
    numer.push_back(1);
    for (int j = i; j > 0; --j) numer[j] = modp(numer[j-1]-x[i]*numer[j]);
    numer[0] = modp(-x[i]*numer[0]);
  }
 
  // For brevity, let G_i(x) denote the polynomial F(x)/(x-x[i])
  // Computes sum_{i=0, 1, ..., degree} G_i(x) * y[i]/G_i(x[i])
  // This is the interpolating polynomial
  for (int i = 0; i <= degree; ++i) {
    // compute G_i(x) := F(x)/(x-x[i]) using long division
    vl quo, rem = numer;
 
    for (int j = degree+1; j >= 1; --j) {
      quo.push_back(rem[j]);
      rem[j-1] = modp(rem[j-1] + rem[j]*x[i]);
    }
    reverse(quo.begin(), quo.end());
    assert(!rem[0]);
 
    assert(mulpoly(quo, {modp(-x[i]), 1}) == numer);
 
    // evaluate G_i(x[i])
    ll den = eval_horner(quo, x[i]);
 
    // compute y[i] / G_i(x[i])
    den = modp(y[i]*invert(den));
 
    // and add the polynomial G_i(x) * y[i]/G_i(x[i]) to the running total
    for (int j = 0; j <= degree; ++j) poly[j] = modp(poly[j] + quo[j]*den);
  }
 
  // reduce so leading coefficient is nonzero, unless it is the zero polynomial
  // in which case we just leave a single zero
  while (poly.size() > 1 && poly.back() == 0) poly.pop_back();
 
  // Just testing it worked properly.
  for (int i = 0; i <= degree; ++i) assert(eval_horner(poly, x[i]) == y[i]);
 
  return poly;
}
 
// Computes a square root of a monic polynomial (assuming one exists)
// Returns the answer with leading coefficient 1 (not -1) mod P.
// This is just an elementary algorithm that computes the coefficients
// of the square root one at a time starting with the leading coefficient.
vl sqroot_monic_poly(const vl& poly) {
  int degree = poly.size()-1;
  assert(degree%2 == 0);
 
  // now use the square root algorithm for polynomials
  vl sqroot(degree/2+1, 0);
  sqroot[degree/2] = 1; // we know leading coefficient is 1
 
  for (int j = degree/2-1; j >= 0; --j) {
    sqroot[j] = poly[degree/2+j];
    for (int k = 1; j+k < degree/2; ++k)
      sqroot[j] = modp(sqroot[j] - sqroot[j+k]*sqroot[degree/2-k]);
    sqroot[j] = modp(sqroot[j]*invert(2*sqroot[degree/2]));
  }
 
  // just checking!
  assert(mulpoly(sqroot, sqroot) == poly);
 
  return sqroot;
}
 
struct edge {
  int u, v;
  ll w, x; // x-value for random assignment
};
 
int main() {
//  freopen("17.in","r",stdin);
//  freopen("17.out","w",stdout);
  int n, m;
  cin >> n >> m;
 
  srand(time(0));
 
  // read the graph and sample random x-values for each edge
  vector<edge> e(m);
  ll maxw = 0;
  for (auto& f : e) {
    cin >> f.u >> f.v >> f.w;
    f.x = modp(rand());
    maxw = max(maxw, f.w);
  }
 
  // try enough y-values to interpolate the determinant polynomial
  int degree = n*maxw;
 
  vl detvals, yvals;
 
  for (ll y = 0; y <= degree; ++y) {
    yvals.push_back(y);
 
    // construct the Tutte matrix with the weight offsets
    vector<vl> tutte(n, vl(n, 0));
    for (const auto& f : e) {
      tutte[f.u][f.v] = modp(fastexp(y, f.w)*f.x);
      tutte[f.v][f.u] = modp(-fastexp(y, f.w)*f.x);
    }
 
    // and get the determinant!
    detvals.push_back(determinant(tutte));
  }
 
  vl poly = interpolate(yvals, detvals);
 
  // if the polynomial is zero, there is probably no perfect matching
  if (poly == vl(1,0)) {
    cout << "YOU ARE IN TROUBLE" << endl;
    return 0;
  }
 
  // now take square root of poly
  // only the *locations* of the nonzero coefficients matter, so we normalize
  // by the leading coefficient to make it monic (the leading coefficient of poly
  // is a quadratic residue mod P).
 
  assert(fastexp(poly.back(), (P-1)/2) == 1); // double-check we have a QR
  ll norm_factor = invert(poly.back());
  for (auto& c : poly) c = modp(c*norm_factor);
 
  vl sqroot = sqroot_monic_poly(poly);
 
  // FINALLY: find the indices of the nonzero coefficients in the square root
  // Report these!
  vl k_vals;
  for (int k = 0; k < sqroot.size(); ++k)
    if (sqroot[k])
      k_vals.push_back(k);
 
  cout << k_vals.size() << endl;
  for (int i = 0; i < k_vals.size(); ++i)
    cout << k_vals[i] << (i+1==k_vals.size() ? '\n' : ' ');
 
  return 0;
}
Tester
#include <iostream>
#include <vector>
#include <algorithm>
#include <cstdlib>
#include <cstdio>
#include <cassert>
using namespace std;
 
const long long md = (1ll << 31) - 1;
 
long long pw(long long x, long long y) {
    long long res = 1;
    while (y) {
        if (y & 1ll)
            res = (res * x) % md;
        x = (x * x) % md;
        y >>= 1;
    }
    return res;
}
 
//           MATRIX
 
struct matrix {
    vector<vector<long long> > v;
    matrix(int n, int m) {
        assert(n >= 0 && m >= 0);
        v.resize(n, vector<long long>(m, 0));
    }
    int get_n() {
        return (int)v.size();
    }
    int get_m() {
        return (v.empty() ? 0 : (int)v[0].size());
    }
    void swap_rows(int x, int y) {
        if (x != y) {
            swap(v[x], v[y]);
        }
    }
};
 
//          POLYNOMIALS
 
 
struct polynomial {
    vector<long long> p;
    polynomial () {}
    polynomial(int n) {
        p.resize(max(n, 0), 0ll);
    }
    polynomial(int n, long long value) {
        p.resize(max(n, 0), value);
    }
    int size() const {
        return (int)p.size();
    }
    long long operator [](int i) const {
        assert(i < p.size());
        return p[i];
    }
    long long& operator [](int i) {
        assert(i < p.size());
        return p[i];
    }
    void shrink() {
        while (p.size() > 1 && p.back() == 0ll)
            p.pop_back();
    }
    long long back() const {
        return p.back();
    }
    long long eval(long long X) const {
        long long res = 0;
        for (int i = this->size() - 1; i >= 0; --i) {
            res = (res * X) % md + p[i];
            if (res >= md) res -= md;
        }
        return res;
    }
    bool operator == (const polynomial& rhs) {
        return (this->p == rhs.p);
    }
    void print() const {
        cout << "{";
        for (int i = size() - 1; i >= 0; --i) {
            cout << p[i];
            if (i) cout << " ";
        }
        cout << "}\n";
    }
    void operator = (const polynomial& rhs) {
        p = rhs.p;
    }
};
 
polynomial operator + (const polynomial& lhs, const polynomial& rhs) {
    polynomial sum(max((int)lhs.size(), (int)rhs.size()), 0ll);
    for (int i = 0; i < sum.size(); ++i) {
        long long cur = (i < lhs.size() ? lhs[i] : 0) + (i < rhs.size() ? rhs[i] : 0);
        if (cur >= md) cur -= md;
        sum[i] = cur;
    }
    return sum;
}
 
polynomial operator * (polynomial lhs, long long C) {
    for (auto& i : lhs.p) {
        i = (i * C) % md;
    }
    return lhs;
}
 
polynomial operator * (const polynomial& lhs, const polynomial& rhs) {
    int d1 = lhs.size() - 1;
    int d2 = rhs.size() - 1;
    polynomial multiply(d1 + d2 + 1, 0ll);
    long long cur = 0;
    for (int i = 0; i <= d1; ++i) {
        for (int j = 0; j <= d2; ++j) {
            cur = (lhs[i] * rhs[j]) % md;
            multiply[i + j] += cur;
            if (multiply[i + j] >= md) multiply[i + j] -= md;
        }
    }
    multiply.shrink();
    return multiply;
}
 
polynomial divide_by_binomial(polynomial lhs, polynomial rhs) {
    assert(rhs.size() == 2);
    polynomial result((int)lhs.size() - 1, 0);
    long long inv = pw(rhs.back(), md - 2);
    while (lhs.size() >= 2) {
        int deg = (int)lhs.size() - 1;
        long long cur = (lhs.back() * inv) % md;
        result[deg - 1] += cur;
        if (result[deg - 1] >= md) result[deg - 1] -= md;
        lhs[deg] -= (cur * rhs[1]) % md;
        if (lhs[deg] < 0) lhs[deg] += md;
        lhs[deg - 1] -= (cur * rhs[0]) % md;
        if (lhs[deg - 1] < 0) lhs[deg - 1] += md;
        lhs.shrink();
    }
    result.shrink();
    return result;
}
 
////////////////////////////////////////////////////////////////////////////////////////////////
 
struct edge_t {
    int u, v;
    int len, x;
};
 
long long find_determinant(matrix M) {
    long long det = 1, cur = 0;
    for (int j = 0; j < M.get_n(); ++j) {
        int row = -1;
        for (int i = j; i < M.get_n(); ++i)
            if (M.v[i][j] != 0) {
                row = i;
                break;
            }
        if (row == -1) return 0;
        if (row != j) {
            M.swap_rows(row, j);
            det = (md - det) % md;
        }
        long long inv = pw(M.v[j][j], md - 2);
        det = (det * M.v[j][j]) % md;
        for (int k = j; k < M.get_n(); ++k)
            M.v[j][k] = (M.v[j][k] * inv) % md;
        for (int i = j + 1; i < M.get_n(); ++i)
            if (M.v[i][j] != 0) {
                long long q = M.v[i][j];
                for (int k = j; k < M.get_n(); ++k) {
                    cur = (M.v[j][k] * q) % md;
                    M.v[i][k] -= cur;
                    if (M.v[i][k] < 0) M.v[i][k] += md;
                }
            }
    }
    return det;
}
 
polynomial interpolate(const vector<pair<long long, long long> >& val) {
    int deg = (int)val.size() - 1;
    polynomial F(1, 1);
    for (int i = 0; i <= deg; ++i) {
        polynomial bin(2, 1);
        bin[0] = (md - val[i].first) % md;
        F = F * bin;
    }
    polynomial result;
    for (int i = 0; i <= deg; ++i) {
        polynomial binomial(2, 1);
        binomial[0] = (md - val[i].first) % md;
        polynomial G = divide_by_binomial(F, binomial);
        
        long long z = G.eval(val[i].first);
        z = pw(z, md - 2);
        
        polynomial cur = G * val[i].second;
        cur = cur * z;
        result = result + cur;
    }
    result.shrink();
    return result;
}
 
polynomial get_square_root(polynomial p) {
    int deg = p.size() - 1;
    polynomial result((deg / 2) + 1, 0ll);
    int rdeg = result.size() - 1;
    result[rdeg] = 1;
    for (int i = rdeg - 1; i >= 0; --i) {
        long long cur = p[rdeg + i];
        for (int j = i + 1; j < rdeg; ++j) {
            cur = (cur - (result[j] * result[rdeg + i - j]) % md + md) % md;
        }
        long long q = pw(2, md - 2) % md;
        cur = (cur * q) % md;
        result[i] = cur;
    }
//    result.print();
//    (result * result).print();
//    p.print();
//    if (!(result * result == p)) cout << ">" << '\n';
    return result;
}
 
int n, m;
vector<edge_t > edges;
int max_v;
 
 
int main(int argc, const char * argv[]) {
    #ifdef __APPLE__
        freopen("/Users/danya.smelskiy/Documents/Danya/Resources/input.txt","r",stdin);
        //freopen("/Users/danya.smelskiy/Documents/Danya/Danya/output.out", "w", stdout);
    #endif
    ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0);
    srand(time(0));
    cin >> n >> m;
    for (int i = 0; i < m; ++i) {
        int u, v, len;
        cin >> u >> v >> len;
        if (u > v) swap(u, v);
        int x = (rand()) % md;
        edges.push_back({u, v, len, x});
        max_v = max(max_v, len);
    }
    vector<pair<long long, long long> > _values;
    for (int y = 0; y <= max_v * n; ++y) {
        matrix M(n, n);
        for (int i = 0; i < edges.size(); ++i) {
            int u = edges[i].u;
            int v = edges[i].v;
            int w = (pw(y, edges[i].len) * 1ll * edges[i].x) % md;
            M.v[u][v] = w;
            M.v[v][u] = (md - w) % md;
        }
        long long det = find_determinant(M);
        _values.push_back({y, det});
    }
    polynomial pl = interpolate(_values);
    if (pl.p.empty() || pl.p == vector<long long>{0}) {
        cout << "YOU ARE IN TROUBLE\n";
        return 0;
    }
    long long inv = pw(pl.back(), md - 2);
    for (int i = 0; i < pl.size(); ++i) {
        pl[i] = (pl[i] * inv) % md;
    }
    polynomial sq_root = get_square_root(pl);
    
    vector<int> ans;
    for (int i = 0; i < sq_root.size(); ++i)
        if (sq_root[i] != 0) ans.push_back(i);
    cout << ans.size() << '\n';
    for (int i = 0; i < ans.size(); ++i) {
        if (i) cout << " ";
        cout << ans[i];
    }
    cout << '\n';
    return 0;
}
 
 