
Linear Recursive Sequence

We derive a general technique for solving linear recursive sequences whose characteristic roots are distinct, so that the associated matrix has a full set of eigenvectors. Formally, a kth-order linear recursive sequence is defined as

     $$ x_n = a_1 x_{n-1} + a_2 x_{n-2} + a_3 x_{n-3} + \cdots + a_k x_{n-k} $$
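
For example, the Fibonacci sequence is the k = 2 case with a_1 = a_2 = 1:

     $$ x_n = x_{n-1} + x_{n-2}, \qquad x_0 = 0, \; x_1 = 1 $$

We will use it as a running check throughout.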

In vector form, we can use the notation

    \[v_n = \left(\begin{array}{c} x_{n+k-1} \\ x_{n+k-2} \\ \vdots \\ x_{n+1} \\ x_n \end{array}\right)\]

In this sense, v_0 is just the initial condition vector

    \[v_0 = \left(\begin{array}{c} x_{k-1} \\ x_{k-2} \\ \vdots \\ x_{1} \\ x_0 \end{array}\right)\]

whereas v_1 can be expressed as

    \[v_1 = \left(\begin{array}{c} x_{k} \\ x_{k-1} \\ \vdots \\ x_{2} \\ x_1 \end{array}\right)\]

If you’ve studied linear algebra before, you may recognize this immediately as v_1 = A v_0 for some matrix A. Let’s find what the transformation A is.

    \[\underbrace{\left(\begin{array}{c c c c c} a_1 & a_2 & \cdots & a_{k-1} & a_k \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{array}\right)}_{A} \underbrace{\left(\begin{array}{c} x_{k-1} \\ x_{k-2} \\ \vdots \\ x_{1} \\ x_0 \end{array}\right)}_{v_0} = \underbrace{\left(\begin{array}{c} x_{k} \\ x_{k-1} \\ \vdots \\ x_{2} \\ x_1 \end{array}\right)}_{v_1}\]
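
As a sanity check, here is a minimal NumPy sketch (the helper name companion_matrix is our own) that builds A from the coefficients and verifies A v_0 = v_1 for the Fibonacci case:

    import numpy as np

    def companion_matrix(coeffs):
        """Companion matrix A for x_n = a_1 x_{n-1} + ... + a_k x_{n-k}."""
        k = len(coeffs)
        A = np.zeros((k, k))
        A[0, :] = coeffs              # top row holds the recurrence coefficients
        A[1:, :-1] = np.eye(k - 1)    # subdiagonal of ones shifts the window down
        return A

    A = companion_matrix([1, 1])      # Fibonacci: a_1 = a_2 = 1
    v0 = np.array([1.0, 0.0])         # v_0 = (x_1, x_0)
    print(A @ v0)                     # [1. 1.] = (x_2, x_1), i.e. v_1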

More generally, applying A repeatedly gives

    \[v_n = A^n v_0\]
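
Continuing the sketch, np.linalg.matrix_power lets us check v_n = A^n v_0 against direct iteration of the recurrence:

    print(np.linalg.matrix_power(A, 10) @ v0)  # [89. 55.] = (x_11, x_10)

    # naive iteration of the recurrence for comparison
    x = [0, 1]
    for _ in range(10):
        x.append(x[-1] + x[-2])
    print(x[11], x[10])                         # 89 55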

So the problem reduces to computing A^n. Supposing that A has a full set of distinct eigenvalues, consider the characteristic polynomial

    \begin{align*} P_A(\lambda) &= \left|\lambda I - A \right| \\ &= \left|\begin{array}{c c c c c} \lambda - a_1 & -a_2 & \cdots & -a_{k-1} & -a_k \\ -1 & \lambda \\ & -1 & \lambda \\ & & \ddots & \ddots \\ & & & -1 & \lambda \end{array} \right| \\ \intertext{Expanding along the first row, every minor is triangular up to block structure, so its determinant is just the product of its diagonal entries; the minor of the $(1,j)$ entry works out to $(-1)^{j-1}\lambda^{k-j}$, and the cofactor signs cancel the $(-1)^{j-1}$:} &= (\lambda - a_1)\lambda^{k-1} - a_2\lambda^{k-2} - a_3\lambda^{k-3} - \cdots - a_k \\ &= \lambda^k - \sum_{i=1}^k a_i \lambda^{k-i} \end{align*}
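
As a numerical spot check, np.poly returns the characteristic polynomial coefficients of a square matrix; for the Fibonacci companion matrix these should match \lambda^2 - \lambda - 1:

    print(np.poly(A))                 # [ 1. -1. -1.], i.e. λ^2 - λ - 1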

Setting P_A(\lambda) = 0, the distinct roots \lambda_1, \lambda_2, \cdots, \lambda_k satisfying the equation are the eigenvalues of A. Since A has a full set of k linearly independent eigenvectors, it must be the case that A is diagonalizable, that is A = S \Lambda S^{-1} \implies A^n = S \Lambda^n S^{-1}. Therefore
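
In code, np.linalg.eig produces \Lambda and S at once, so a sketch of A^n = S \Lambda^n S^{-1} (valid under the distinct-eigenvalue assumption above) is:

    lam, S = np.linalg.eig(A)          # eigenvalues and matrix of eigenvectors
    A10 = S @ np.diag(lam**10) @ np.linalg.inv(S)
    print(np.real_if_close(A10 @ v0))  # [89. 55.], matching matrix_power above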

    \[v_n = A^n v_0 = S \Lambda^n S^{-1} v_0\]

shows that x_n is a linear combination of \lambda_1^n, \lambda_2^n, \cdots, \lambda_k^n, the diagonal entries of \Lambda^n. That is,

    \[x_n = \alpha_1 \lambda_1^n + \alpha_2 \lambda_2^n + \cdots + \alpha_k \lambda_k^n\]
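
For the Fibonacci case, the roots of \lambda^2 - \lambda - 1 = 0 are \varphi = \frac{1 + \sqrt{5}}{2} and \psi = \frac{1 - \sqrt{5}}{2}, and matching the initial values x_0 = 0, x_1 = 1 recovers Binet's formula:

    \[x_n = \frac{1}{\sqrt{5}}\left(\varphi^n - \psi^n\right)\]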

The coefficients \alpha_i can be found from the initial conditions, which form a Vandermonde system

    \begin{align*} \alpha_1 + \alpha_2 + \cdots + \alpha_k &= x_0 \\ \alpha_1 \lambda_1 + \alpha_2 \lambda_2 + \cdots + \alpha_k \lambda_k &= x_1 \\ \alpha_1 \lambda_1^2 + \alpha_2 \lambda_2^2 + \cdots + \alpha_k \lambda_k^2 &= x_2 \\ &\;\,\vdots \\ \alpha_1 \lambda_1^{k-1} + \alpha_2 \lambda_2^{k-1} + \cdots + \alpha_k \lambda_k^{k-1} &= x_{k-1} \end{align*}

so that

    \[\left(\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_k \end{array}\right) = \left(\begin{array}{c c c c} 1 & 1 & \cdots & 1 \\ \lambda_1 & \lambda_2 & \cdots & \lambda_k \\ \vdots & \vdots & \ddots & \vdots \\ \lambda_1^{k-1} & \lambda_2^{k-1} & \cdots & \lambda_k^{k-1} \end{array}\right)^{-1} \left(\begin{array}{c} x_0 \\ x_1 \\ \vdots \\ x_{k-1} \end{array}\right)\]
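
Numerically, np.vander builds this matrix (transposed relative to NumPy's row convention) and np.linalg.solve recovers the \alpha_i; for Fibonacci they come out to \pm 1/\sqrt{5}:

    V = np.vander(lam, increasing=True).T              # row i holds λ_1^i, ..., λ_k^i
    alphas = np.linalg.solve(V, np.array([0.0, 1.0]))  # right-hand side: x_0, x_1
    print(alphas)                      # ≈ [ 0.447 -0.447 ] (order follows eig's)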

Therefore x_n = \left\langle \left(\begin{array}{c} \lambda_1^n \\ \vdots \\ \lambda_k^n \end{array}\right), \left(\begin{array}{c} \alpha_1 \\ \vdots \\ \alpha_k \end{array}\right) \right\rangle = \sum_{i=1}^{k} \alpha_i \lambda_i^n. \blacksquare
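
Putting the pieces together, a minimal end-to-end sketch (solve_recurrence is our own name; it reuses companion_matrix from above and assumes distinct roots) returns a closed-form evaluator:

    def solve_recurrence(coeffs, initial):
        """Closed form for x_n given coefficients a_1..a_k and x_0..x_{k-1}."""
        A = companion_matrix(coeffs)
        lam = np.linalg.eigvals(A)
        V = np.vander(lam, increasing=True).T
        alphas = np.linalg.solve(V, np.asarray(initial, dtype=complex))
        return lambda n: np.real_if_close(alphas @ lam**n)

    fib = solve_recurrence([1, 1], [0, 1])
    print(fib(10))                     # ≈ 55.0, matching the iteration above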

Posted By Lee
