How to solve CSTREE from AUG Long Challenge?

What’s your approach to solving CSTREE from the August Long Challenge? I could only get 15 pts by using Kirchhoff’s Theorem and Gaussian elimination to find the determinant.

2 Likes

I found a paper about the lexicographic graph product that models this problem.
Plug in the formula, and you get an AC verdict.

Can you please post a link to the paper you mentioned?

Well, if you have reached the determinant part, then you are not very far from the actual solution of the problem.
First, define the given matrix as A1, and define B1 as a matrix of the same dimensions as the original matrix but with all elements equal to -1.
Now imagine the (NK X NK) Laplacian matrix that is formed. To find the number of spanning trees by Kirchhoff’s theorem (refer to Wikipedia: Kirchhoff's theorem - Wikipedia), eliminate the first row and column. We can then write the remaining matrix as a matrix divided into 4 blocks:
A : (N-1) X (N-1)
B : (N-1) X N(K-1)
C : N(K-1) X (N-1)
D : N(K-1) X N(K-1)

Now, to find the determinant of this block matrix, you can use the identity for the determinant of a 2x2 block matrix when one of the diagonal blocks is invertible (Link : Determinant - Wikipedia); with D as the invertible block it reads det = det(D) * det(A - B * D^(-1) * C) -------- (1)

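In case it helps, here is a quick numeric sanity check of identity (1) in the form used below, with D as the invertible block. This is just a minimal NumPy sketch with made-up block sizes, not part of the original argument:

```python
# Numeric check of identity (1): if D is invertible, then
# det([[A, B], [C, D]]) = det(D) * det(A - B * D^{-1} * C).
import numpy as np

rng = np.random.default_rng(3)
A = rng.integers(-3, 4, (2, 2)).astype(float)
B = rng.integers(-3, 4, (2, 3)).astype(float)
C = rng.integers(-3, 4, (3, 2)).astype(float)
D = rng.integers(-3, 4, (3, 3)).astype(float) + 10 * np.eye(3)  # keep D invertible

M = np.block([[A, B], [C, D]])
lhs = np.linalg.det(M)
rhs = np.linalg.det(D) * np.linalg.det(A - B @ np.linalg.inv(D) @ C)
print(lhs, rhs)  # the two values agree up to floating point error
```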
Now, for the determinant of D: since D can itself be written as a block matrix whose blocks come from the given matrix, my approach was to first manipulate it like an ordinary matrix, then treat it as a block matrix in terms of the original matrix, and in the final equation take the determinant of the expression formed in terms of the original matrix. (Maybe somebody can prove it rigorously here :D)
So I present the result directly here:
If a block matrix has diagonal blocks ‘a’ and all remaining blocks ‘b’ (both square matrices of the same dimensions), and the block matrix has dimensions K X K when the blocks are viewed as elements, then its determinant is:
det((a+(k-1)b)) * det((a-b))^(k-1) ------ (2)

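Here is a small numeric sanity check of identity (2), a minimal NumPy sketch with arbitrary 3x3 blocks and k = 4 (not part of the original post, just a verification):

```python
# Numeric check of identity (2): for a k x k block matrix with 'a' on the
# block diagonal and 'b' everywhere else,
# det(M) = det(a + (k-1)*b) * det(a - b)^(k-1).
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 4
a = rng.integers(-3, 4, (n, n)).astype(float)
b = rng.integers(-3, 4, (n, n)).astype(float)

# Assemble the k x k block matrix: 'a' on the block diagonal, 'b' elsewhere.
M = np.block([[a if i == j else b for j in range(k)] for i in range(k)])

lhs = np.linalg.det(M)
rhs = np.linalg.det(a + (k - 1) * b) * np.linalg.det(a - b) ** (k - 1)
print(lhs, rhs)  # the two values agree up to floating point error
```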
Now, done with the determinant of the first factor, we move on to the next determinant (see (1)).
The second term in (1) looks tricky, but if you work it out for the given matrices, it turns out to be:
Trace(D inverse) * matrix of all ones of dimensions (N-1 X N-1)

Now we just need to find the trace of D inverse, right?
Well, for that I used the property that the trace of a matrix is equal to the sum of its eigenvalues.
You can look up eigenvalues of matrices on Google; it’s a pretty standard topic.

To find the eigenvalues of a matrix M, we solve:
det(M - vI) = 0, where v is an eigenvalue of that matrix, and I is the identity matrix.
Also, the eigenvalues of the inverse of a matrix are just the reciprocals of the eigenvalues of the original matrix.
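A quick NumPy illustration of both facts (trace = sum of eigenvalues, and the eigenvalues of the inverse are the reciprocals), using an arbitrary symmetric matrix just for the demonstration:

```python
# trace(M) equals the sum of the eigenvalues of M, and
# trace(M^{-1}) equals the sum of the reciprocals of the eigenvalues of M.
import numpy as np

rng = np.random.default_rng(2)
B = rng.integers(1, 5, (4, 4)).astype(float)
M = B + B.T + 30 * np.eye(4)          # symmetric and comfortably invertible

eig = np.linalg.eigvalsh(M)           # real eigenvalues of a symmetric matrix
print(np.trace(M), eig.sum())                          # equal up to FP error
print(np.trace(np.linalg.inv(M)), (1 / eig).sum())     # equal up to FP error
```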

Let’s find the eigenvalues of D first.
Now, to find the eigenvalues of the matrix D, we just need to solve for the determinant of D-vI, right? We also know that D-vI will be a block matrix of block dimension (K-1) with blocks (A1 - vI) and (B1 - vI), so it is the same kind of matrix as we talked about in (2). So we write the determinant of the matrix D-vI using equation (2), and we get the new equation:
det((A1+(k-2)B1) - (k-1)vI ) * det((A1-B1))^(k-2) = 0
To get the values of ‘v’, we can say that the matrix D has the same eigenvalues as the matrix
(A1 + (k-2)B1)/(k-1) (instead of dividing, you can multiply by the modular inverse of k-1). So to find the eigenvalues, or more precisely the sum of the reciprocals of the eigenvalues of D, we can find the sum of the reciprocals of the eigenvalues of (A1 + (k-2)B1)/(k-1), or simply the sum of the elements of the inverse of (A1 + (k-2)B1)/(k-1). This was our final requirement, after which equation (1) can be solved easily, I believe.

3 Likes

Thanks, it’s hard for me to grasp right now. Will go through it once again. :slightly_smiling_face:

[Formula image (Imgur) from the paper, with the following notation:]
n is the number of vertices in G
l_i = 1 for all i, since a complete graph is a multipartite graph with all independent sets of size one
p = \sum_i l_i k
\delta_k, for k > 1, are the nonzero eigenvalues of the Laplacian matrix of G

1 Like

I used Kirchhoff’s Theorem and then applied elementary row and column operations on the determinant to simplify it first. The value of the determinant actually turned out to be:
value1 * value2 * (value3)^(k-2)
where value1, value2, and value3 are determinants of size n (n <= 20). So I wrote code to find value1, value2, and value3 and computed the final product.

1 Like

My approach :

Some properties that will be useful :

Property 1: The determinant of the Laplacian Matrix is zero

Property 2: According to Kirchhoff’s theorem, all cofactors of the Laplacian Matrix are equal to each other, and they are equal to the number of spanning trees of the graph. Let’s denote the number of spanning trees by X = any cofactor of the Laplacian matrix.

The Laplacian matrix of this graph will be a matrix of dimensions (NK)\times (NK). For simplicity, let’s divide this matrix into block matrices of size N \times N. You can see that there are two types of blocks: a block consisting of only -1 (let’s call this block matrix -J_N, as the “all-ones matrix” is denoted by J_N), and another block that depends on the edges of the given graph (let’s call this block matrix A). The diagonal blocks are all equal to A, and all the remaining blocks are -J_N.

For example, for K = 3 :
L = \begin{pmatrix} A & -J_N & -J_N \\ -J_N & A & -J_N\\ -J_N & -J_N & A \end{pmatrix}

So, we know that |L|=0. Now, what happens if we add 1 to all the elements of the Laplacian matrix? We will get something like this:

L + J_{NK} = \begin{pmatrix} A + J_N & 0_N & 0_N\\ 0_N& A + J_N & 0_N\\ 0_N & 0_N& A + J_N \end{pmatrix}

where 0_N is a square matrix N \times N with all elements equal to zero.

Now the determinant of this new matrix is easy to calculate: it is the product of the determinants of the diagonal blocks.

Equation 3 : |L + J_{NK}| = |A + J_N|^{K}

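As a sanity check of Equation 3, here is a small NumPy sketch. It assumes the big graph is K copies of G with every pair of vertices from different copies joined by an edge (which is how I read the -J_N off-diagonal blocks); G_edges is just an arbitrary example graph:

```python
# Build the NK x NK Laplacian of K copies of G joined completely, then check
# that |L + J_{NK}| equals |A + J_N|^K, where A is the first diagonal block.
import numpy as np

N, K = 3, 3
G_edges = [(0, 1), (1, 2)]                       # small example graph G

adj = np.zeros((N * K, N * K))
for c in range(K):                                # edges of G inside each copy
    for u, v in G_edges:
        adj[c * N + u, c * N + v] = adj[c * N + v, c * N + u] = 1
for i in range(N * K):                            # all edges between different copies
    for j in range(N * K):
        if i // N != j // N:
            adj[i, j] = 1

L = np.diag(adj.sum(axis=1)) - adj                # Laplacian of the big graph
A = L[:N, :N]                                     # diagonal block
lhs = np.linalg.det(L + np.ones((N * K, N * K)))
rhs = np.linalg.det(A + np.ones((N, N))) ** K
print(lhs, rhs)  # the two values agree up to floating point error
```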
But how does this help us? That is not the quantity we want.

Well, let’s see if we can find another formula for |L + J_{NK}|, this time in terms of |L|.

In general, if we have a matrix M and we add 1 to one of its elements, what can we say about the determinant of this new matrix? We can use Laplace expansion. For a better visualization of Laplace expansion, in case you don’t know it:

|M|= \begin{vmatrix} a & b & c\\ d& e& f\\ g& h & i \end{vmatrix} =a \times \begin{vmatrix} e& f\\ h & i \end{vmatrix} - b \times \begin{vmatrix} d& f\\ g & i \end{vmatrix} + c \times \begin{vmatrix} d& e\\ g& h \end{vmatrix}

Now if we add 1 to the first element, the new determinant will be :
|M'| = |M| + cofactor_{1,1}
where cofactor_{1,1} = \begin{vmatrix} e& f\\ h & i \end{vmatrix}

So we can generalize this idea, and it can be proved that:
|L + J_{NK}| = |L| + \text{sum of all cofactors of } L

So now, using properties 1 and 2:

|L + J_{NK}| = 0 + (N \times K)^2 \times X

Using equation 3 :
|A + J_N|^{K} = (N \times K)^2 \times X

Then X =\dfrac{|A + J_N|^{K}}{(N \times K)^2}

You can calculate |A + J_N| in O(N^3) using Gaussian elimination.
Then you can use fast exponentiation and modular inverse to get the final result.
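For reference, here is a minimal Python sketch of this approach. It assumes the answer is needed modulo 10^9 + 7, that G is given as an edge list with 0-indexed endpoints, and that the diagonal block is A = L_G + N(K-1)·I (the Laplacian of G plus the degrees contributed by the cross-copy edges); check the actual statement for the exact input format and modulus:

```python
MOD = 10**9 + 7  # assumed modulus; check the statement

def det_mod(mat, mod=MOD):
    """Determinant of a square matrix modulo a prime, via Gaussian elimination."""
    n = len(mat)
    mat = [row[:] for row in mat]
    det = 1
    for col in range(n):
        # find a row with a nonzero pivot in this column
        pivot = next((r for r in range(col, n) if mat[r][col] % mod), None)
        if pivot is None:
            return 0
        if pivot != col:
            mat[col], mat[pivot] = mat[pivot], mat[col]
            det = -det                          # a row swap flips the sign
        det = det * mat[col][col] % mod
        inv = pow(mat[col][col], mod - 2, mod)  # modular inverse of the pivot
        for r in range(col + 1, n):
            factor = mat[r][col] * inv % mod
            for c in range(col, n):
                mat[r][c] = (mat[r][c] - factor * mat[col][c]) % mod
    return det % mod

def count_spanning_trees(n, k, edges):
    # Diagonal block of the big Laplacian: A = L_G + N(K-1) * I  (my reading).
    a = [[n * (k - 1) if i == j else 0 for j in range(n)] for i in range(n)]
    for u, v in edges:                          # edges of G, 0-indexed
        a[u][u] += 1
        a[v][v] += 1
        a[u][v] -= 1
        a[v][u] -= 1
    for i in range(n):                          # add the all-ones matrix J_N
        for j in range(n):
            a[i][j] = (a[i][j] + 1) % MOD
    # X = |A + J_N|^K / (N*K)^2  (mod p)
    nk2 = pow(n * k, 2, MOD)
    return pow(det_mod(a), k, MOD) * pow(nk2, MOD - 2, MOD) % MOD
```

For small N and K you can sanity-check the output against a brute-force Kirchhoff computation on the full NK x NK Laplacian.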

Link to my solution

4 Likes

Thanks for the explanation, @diegohdmgz!

1 Like

@diegocruzo, thanks a lot for the elaborate answer.

  1. Can you please provide a proof of why |L| is 0? I did some pen-and-paper work to find the determinant of L in terms of A and J_N, but I didn’t get 0.
  2. Can you please explain in a bit more detail how you came up with this generalization? It would be helpful:

Thanks a lot :slight_smile:

@vivek_shah98 Of course, no problem.

1. |L| = 0
(It works for any general graph, not only for the graph of the problem)

Proof :
The Laplacian matrix can be defined in the following way :
L_{ii} = degree of the node i

L_{ij} = -1 if there is an edge between i and j, otherwise L_{ij} = 0, for all i \neq j.

Another interesting property of this matrix is that every row sum and column sum is zero. Why? Because the degree of the vertex is summed with a “-1” for each neighbour (from Wikipedia).

Also, you may know that if we add one row to another, the determinant of the matrix remains the same. So let’s add all the rows to the first one. Then the first row will be full of zeros (because each element of the first row will be the sum of its column, which is zero). Thus, the determinant is zero.
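For a concrete illustration, here is a tiny NumPy sketch on an arbitrary small example graph (not part of the proof):

```python
# Build the Laplacian of a small graph, then check that every row sums to zero
# and that the determinant is zero.
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # any small example graph
n = 4
adj = np.zeros((n, n))
for u, v in edges:
    adj[u, v] = adj[v, u] = 1
L = np.diag(adj.sum(axis=1)) - adj

print(L.sum(axis=1))      # all zeros
print(np.linalg.det(L))   # zero up to floating point error
```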

  2. In general, |A + J| = |A| + sum of all cofactors of A
    (A is a general square matrix, not necessarily the Laplacian matrix)

Proof:
Let’s denote the i , j cofactor as cof_{ij}
Using Laplace expansion :

|A| = \begin{vmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{vmatrix} \\ |A| =a_{11} \times cof_{1,1} + a_{12} \times cof_{1,2} + \dots + a_{1,n} \times cof_{1,n}

So now, what happens if we add 1 to every element of the first row ?

|A'| = \begin{vmatrix} a_{11} + 1 & a_{12} + 1 & \dots & a_{1n} + 1 \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} \end{vmatrix} \\ |A'| = (a_{11} + 1) \times cof_{1,1} + ( a_{12} + 1) \times cof_{1,2} + \dots + (a_{1,n} + 1) \times cof_{1,n} \\

\text{Eq . 1} :|A'| = |A| +\text{sum of cofactors of first row} = |A| + \sum_{i=1}^{n} cof_{1,i}

And now we can generalize adding 1 to every element of the matrix :

|A + J| = \begin{vmatrix} a_{11} + 1 & a_{12} + 1 & \dots & a_{1n} + 1 \\ a_{21} + 1 & a_{22} + 1 & \dots & a_{2n} + 1 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} + 1& a_{n2} + 1 & \dots & a_{nn} + 1 \end{vmatrix} \\

Using equation 1 on the first row :
|A + J| = \begin{vmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} + 1 & a_{22} + 1 & \dots & a_{2n} + 1 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} + 1& a_{n2} + 1 & \dots & a_{nn} + 1 \end{vmatrix} + \sum_{i=1}^{n} cof_{1,i} \\

Using equation 1 on the second row :

|A + J| = \begin{vmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} + 1& a_{n2} + 1 & \dots & a_{nn} + 1 \end{vmatrix} + \sum_{i=1}^{n} cof_{1,i} + \sum_{i=1}^{n} cof_{2,i} \\

If you continue this process until the last row:

|A + J| = |A| + \text{sum of all cofactors of } A
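If you want to convince yourself numerically, here is a small NumPy sketch of this identity. It uses the adjugate adj(A) = det(A) * A^{-1}, whose entries are exactly the cofactors of A (transposed, which does not change their total sum); the random A is assumed to be invertible so the adjugate can be computed this way:

```python
# Numeric check of |A + J| = |A| + (sum of all cofactors of A).
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.integers(-3, 4, (n, n)).astype(float)   # almost surely invertible

lhs = np.linalg.det(A + np.ones((n, n)))
adjugate = np.linalg.det(A) * np.linalg.inv(A)  # entries are the cofactors of A
rhs = np.linalg.det(A) + adjugate.sum()
print(lhs, rhs)  # the two values agree up to floating point error
```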

Also, I’m not diegocruzo :sweat_smile: But you’re welcome.

1 Like

@diegohdmgz, thanks a lot for the reply. :grinning: Understood your solution. Thanks for sharing your knowledge of linear algebra with everyone. :slightly_smiling_face:

1 Like

I think that is a clever solution, because it avoids the use of generating functions, interpolation, and convolutions. Well played, bro!

1 Like