Matrices, and why they are useful

I read somewhere that there are no good blogs on forming matrix relations, on why matrix multiplication is associative, and that matrices in general are not intuitive. So I’ll try outlining how matrices work, and how to make them feel “natural” in a sense.


It’s very easy to prove they are associative by just brute-forcing the algebra, but there is a nice analogy that makes the proof feel more natural.
How do we do matrix multiplication?
Say a[i][j] is the element in the i-th row and j-th column of a. When we multiply a \times b = c,
c[i][j]=\sum_{k} a[i][k] \times b[k][j]
The formula probably feels like nothing more than math jargon.
But you can think of it as counting paths: a[i][k] is the number of ways to go from the i-th row to the k-th column. From the k-th column of a, I can go to the k-th row of b, then from some l-th column of b to the l-th row of the next matrix, and so on. If you think about it carefully, you can see that you can’t change the order of the matrices, because you must always step from a column to the matching row of the next one, but the number of paths is independent of which product is calculated first. That independence is exactly associativity.
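To make the formula and the path-counting picture concrete, here is a minimal Python sketch (the function name `mat_mul` is my own):

```python
# A direct implementation of c[i][j] = sum over k of a[i][k] * b[k][j].
def mat_mul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    c = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            for j in range(p):
                c[i][j] += a[i][k] * b[k][j]
    return c

# Path interpretation: if adj[i][j] is the number of edges i -> j,
# then (adj * adj)[i][j] counts length-2 walks from i to j.
adj = [[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]]  # a directed 3-cycle: 0 -> 1 -> 2 -> 0
adj2 = mat_mul(adj, adj)  # adj2[0][2] == 1: the only 2-step walk from 0
```

You can also check associativity numerically: `mat_mul(mat_mul(a, b), c)` equals `mat_mul(a, mat_mul(b, c))` for any matrices with compatible sizes, matching the path argument above.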

Forming a transformation matrix

Apparently it’s a bit hard to visualise the relations in matrices, but luckily, there is a standard method to map a linear recurrence relation to a matrix.
Firstly, the size depends on the number of previous terms the recurrence uses; let that be k.
Let the dp relation be
dp[i]=\sum_{j=1}^{k} a_j\,dp[i-j] Note that some a_j may be 0.
We have a standard matrix:
\begin{bmatrix} dp[i-k] & dp[i-k+1] & dp[i-k+2] & \dots & dp[i-1] \end{bmatrix} \times\begin{bmatrix}0 & 0 & 0 & \dots & 0 & a_k\\ 1 & 0& 0 & \dots & 0 & a_{k-1}\\ 0 & 1 & 0 & \dots & 0 & a_{k-2}\\ 0 & 0 & 1 & \dots & 0 & a_{k-3}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & a_1 \end{bmatrix} = \begin{bmatrix} dp[i-k+1] & dp[i-k+2] & \dots & dp[i-1] & dp[i] \end{bmatrix}
This may not seem entirely obvious at first, but it’s easy to understand why it works. Basically, we are trying to advance every dp state by 1. The single 1 in each of the first k-1 columns multiplies all values by 0 except one: for the first column, the surviving value is dp[i-k+1], for the second column it is dp[i-k+2], and so on. This way, we have shifted all the dp values one place to the left. The last value, the one we just created, is \sum_j a_j\,dp[i-j], which is exactly dp[i].
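The construction above can be sketched in Python. This is a minimal version with names of my own (`kth_term`, `mat_pow`); it builds the transition matrix exactly as written above and raises it to a power by repeated squaring:

```python
# Sketch: compute dp[n] for dp[i] = a_1*dp[i-1] + ... + a_k*dp[i-k],
# given the base values dp[0..k-1].

def mat_mul(x, y):
    n, m, p = len(x), len(y), len(y[0])
    z = [[0] * p for _ in range(n)]
    for i in range(n):
        for t in range(m):
            for j in range(p):
                z[i][j] += x[i][t] * y[t][j]
    return z

def mat_pow(m, e):
    n = len(m)
    r = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            r = mat_mul(r, m)
        m = mat_mul(m, m)
        e >>= 1
    return r

def kth_term(coeffs, base, n):
    """coeffs = [a_1, ..., a_k], base = [dp[0], ..., dp[k-1]]."""
    k = len(coeffs)
    if n < k:
        return base[n]
    # Transition matrix: 1s just below the diagonal, last column a_k ... a_1.
    trans = [[0] * k for _ in range(k)]
    for i in range(1, k):
        trans[i][i - 1] = 1
    for i in range(k):
        trans[i][k - 1] = coeffs[k - 1 - i]
    m = mat_pow(trans, n - k + 1)
    # Row vector [dp[0], ..., dp[k-1]] times trans^(n-k+1) ends in dp[n];
    # we only need the last entry of the product.
    return sum(base[i] * m[i][k - 1] for i in range(k))
```

For example, with coeffs [1, 1] and base [0, 1] this computes Fibonacci numbers in O(k^3 log n) time, which is what makes the matrix form worth the trouble.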

What if it isn't a direct relation but multiple dp states

Unfortunately, these kinds of relations rely more on guesswork: you need to be able to construct the matrix yourself, and that ability comes only with practice.
Let’s try a very simple example.
Say each index has two states, and suppose the recurrence we want is dp[n][0]=dp[n-1][0]+dp[n-1][1] and dp[n][1]=dp[n-1][0]. We keep the row vector
\begin{bmatrix} dp[n-1][0] & dp[n-1][1]\end{bmatrix}
Now we make a 2 \times 2 matrix
\begin{bmatrix} a & b\\ c & d \end{bmatrix}
Multiplying the row vector by this matrix gives dp[n][0]=a\times dp[n-1][0]+c\times dp[n-1][1] and
dp[n][1]=b\times dp[n-1][0]+d\times dp[n-1][1]
We compare coefficients to find that a=1,b=1,c=1,d=0.


Question 1: Consider a grid of size 2 \times n. In how many ways can you arrange blocks of size 1\times1 and 1\times2 in it, such that it is fully filled?
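If you want to experiment with Question 1, here is a slow column-by-column DP of my own for checking small n (it is deliberately not the fast matrix solution the hard version asks for):

```python
from functools import lru_cache

def count_tilings(n):
    # Tile a 2 x n grid with 1x1 and 1x2 blocks.
    @lru_cache(maxsize=None)
    def go(col, mask):
        # mask: bitmask of cells in this column already covered by a
        # horizontal 1x2 block sticking out of the previous column.
        if col == n:
            return 1 if mask == 0 else 0
        def fill(cell, nxt):
            # Cover every free cell of this column; nxt collects the
            # cells of the next column we reach into.
            if cell == 2:
                return go(col + 1, nxt)
            if mask >> cell & 1:                          # already covered
                return fill(cell + 1, nxt)
            ways = fill(cell + 1, nxt)                    # place a 1x1
            if col + 1 < n:                               # horizontal 1x2
                ways += fill(cell + 1, nxt | 1 << cell)
            if cell == 0 and not mask >> 1 & 1:           # vertical 1x2
                ways += fill(2, nxt)
            return ways
        return fill(0, 0)
    return go(0, 0)
```

Computing the first few values (for example, count_tilings(1) = 2 and count_tilings(2) = 7) lets you guess a linear recurrence, which you can then turn into a matrix as described above.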

Question 1 hard: solve the linked problem in less than 0.05 seconds.
CodeChef: Practical coding for everyone

Feel free to include any questions you think are good for the topic.


Can you provide some problems with multiple dp states?

Added a question.