Linear homogeneous system with repeated eigenvalues

In summary, a linear homogeneous system with repeated eigenvalues occurs when the coefficient matrix has an eigenvalue that appears more than once. This situation can lead to complications in finding a complete set of linearly independent eigenvectors, often necessitating the use of generalized eigenvectors. The system's solutions are then expressed as a combination of eigenvectors and generalized eigenvectors, resulting in a solution space that can be more complex than that of systems with distinct eigenvalues. Understanding the algebraic and geometric multiplicities of the eigenvalues is crucial for analyzing the system's behavior and stability.
  • #1
psie
Homework Statement
Solve the IVP ##x'=Ax, x(0)=x_0##, where ##A=\begin{pmatrix} 1&1&-1\\ 0&2&-2\\-1&1&-1\end{pmatrix}## and ##x_0=\begin{pmatrix} -1 \\ 1\\ 0\end{pmatrix}##.
Relevant Equations
Determinants, Gaussian elimination, generalized eigenvectors, matrix exponential, etc.
I've solved this problem using a fairly involved technique, where I compute the matrix ##e^{tA}## (the fundamental matrix of the system) with a method derived from the Cayley–Hamilton theorem. It is a cool method that I believe always works, but it can be a lot of work sometimes. It involves finding a polynomial, which you then evaluate at the matrix ##A##. Once you've found ##e^{tA}##, the solution to the IVP is ##e^{tA}x_0##.
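For concreteness, here is a minimal plain-Python sketch (function names are my own) of what that interpolation-based computation of ##e^{tA}## could look like for this particular ##A##, checked against a truncated power series. It uses the eigenvalue data stated further down in the thread: ##2## simple, ##0## a defective double eigenvalue.

```python
import math

# Matrix from the problem statement. Its eigenvalues are 2 (simple) and
# 0 (double, defective), so the minimal polynomial is z^2 (z - 2) and, by
# Cayley-Hamilton, e^{tA} = a0(t) I + a1(t) A + a2(t) A^2, where the
# quadratic p(z) = a0 + a1 z + a2 z^2 Hermite-interpolates f(z) = e^{tz}:
#   p(0) = 1,   p'(0) = t,   p(2) = e^{2t}.
A = [[1, 1, -1], [0, 2, -2], [-1, 1, -1]]
I3 = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A2 = matmul(A, A)

def expm_cayley_hamilton(t):
    a0 = 1.0                                         # p(0) = 1
    a1 = t                                           # p'(0) = t
    a2 = (math.exp(2.0 * t) - 1.0 - 2.0 * t) / 4.0   # from p(2) = e^{2t}
    return [[a0 * I3[i][j] + a1 * A[i][j] + a2 * A2[i][j] for j in range(3)]
            for i in range(3)]

def expm_series(t, terms=40):
    # Independent reference value: truncated power series sum_k (tA)^k / k!
    total = [row[:] for row in I3]
    term = [row[:] for row in I3]
    for k in range(1, terms):
        term = matmul(term, [[t * a / k for a in row] for row in A])
        total = [[total[i][j] + term[i][j] for j in range(3)] for i in range(3)]
    return total
```

The two routines agree to floating-point accuracy, which is exactly the point of the Cayley–Hamilton shortcut: the infinite series collapses to a quadratic in ##A##.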

Anyway, I've been presented another solution and I'm not really sure what formula they used in the end. To be honest, their whole approach is very new to me and I'd be grateful for any references where this is explained in more detail.

So the eigenvalues of ##A## are ##2## and ##0##, the latter with multiplicity ##2##. The solution then goes as follows:

The generalized eigenvectors for ##\lambda=0## are given by the null space of $$A^2=\begin{pmatrix} 2&2&-2 \\ 2&2&-2\\0&0&0\end{pmatrix}$$ which has a basis ##(0,1,1)## and ##(1,0,1)##. The eigenvector for ##\lambda=2## is given by the null space of $$A-2I=\begin{pmatrix} -1&1&-1 \\ 0&0&-2\\-1&1&-3\end{pmatrix}$$ which gives the eigenvector ##(1,1,0)##. The general solution is \begin{align} x(t)&=(I+tA)(c_1(0,1,1)+c_2(1,0,1))+c_3e^{2t}(1,1,0) \tag1 \\&=c_1(0,1,1)+c_2((1,0,1)-2t(0,1,1))+c_3e^{2t}(1,1,0) \tag2\end{align}

The initial value ##(-1,1,0)=(0,1,1)-(1,0,1)## is a generalized eigenvector for ##\lambda=0## so we get the solution ##x(t)=(I+tA)(-1,1,0)=(-1,1,0)+t(0,2,2)##.
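As a quick sanity check (plain Python, using only numbers already in the thread), the claimed solution has constant derivative ##(0,2,2)##, which must equal ##Ax(t)## for every ##t##:

```python
A = [[1, 1, -1], [0, 2, -2], [-1, 1, -1]]

def matvec(M, w):
    return [sum(M[i][j] * w[j] for j in range(3)) for i in range(3)]

def x(t):
    # Claimed solution: x(t) = (-1, 1, 0) + t (0, 2, 2)
    return [-1.0, 1.0 + 2.0 * t, 2.0 * t]

# x'(t) = (0, 2, 2) for all t; check that A x(t) agrees at a few times,
# and that the initial condition x(0) = (-1, 1, 0) holds.
for t in (0.0, 0.7, 2.0, 10.0):
    Ax = matvec(A, x(t))
    assert all(abs(Ax[i] - c) < 1e-10 for i, c in enumerate((0.0, 2.0, 2.0)))
```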

1. What formula are they using in ##(1)##, i.e. where does the ##(I+tA)## come from?
2. The initial value is a linear combination of generalized eigenvectors. How do they conclude that ##x(t)=(I+tA)(-1,1,0)##?

Grateful if someone could answer both questions and possibly refer to some text where this method is explained in more detail.
 
  • #2
1) If [itex]A[/itex] has an eigenvalue [itex]\lambda[/itex] with algebraic multiplicity 2 but only one independent eigenvector, then there exist non-zero vectors [itex]u[/itex] and [itex]v[/itex] such that [tex]\begin{split}
(A - \lambda I)v &= u, \\
(A - \lambda I)u &= 0.\end{split}[/tex] It follows that if [itex]x = a(t)u + b(t)v[/itex] and [itex]\dot x = Ax[/itex] then [tex]
\begin{split} \dot a u + \dot b v &= A(au + bv) \\
&= \lambda a u + b (\lambda v + u) \end{split}[/tex] so that [tex]\begin{split}
\dot a &= \lambda a + b \\
\dot b &= \lambda b \end{split}[/tex] and [tex]\begin{split}
x(t) &= a_0 e^{\lambda t} u + b_0 e^{\lambda t} (tu + v) \\
&= a_0 e^{\lambda t} u + b_0 e^{\lambda t} (t(A - \lambda I) + I)v. \end{split}[/tex] Since [itex](A - \lambda I)u = 0[/itex] we can add [itex]a_0e^{\lambda t}t(A - \lambda I)u = 0[/itex] to the right hand side to get [tex]
\begin{split}
x(t) &= a_0 e^{\lambda t} (t(A - \lambda I) + I)u + b_0 e^{\lambda t} (t(A - \lambda I) + I)v \\
&= e^{\lambda t}(t(A - \lambda I) + I)(a_0 u + b_0 v). \end{split}[/tex]

2) You need to solve [tex]
(-1, 1, 0) = x(0) = c_1(0,1,1) + c_2(1,0,1) + c_3 (1,1,0).[/tex] It happens that [itex](c_1, c_2, c_3) = (1, -1, 0)[/itex].
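That expansion can also be verified mechanically; a short plain-Python check of the claimed coefficients:

```python
# Basis vectors from the thread: generalized eigenvectors v1, v2 for
# lambda = 0 and the eigenvector u for lambda = 2.
v1, v2, u = [0, 1, 1], [1, 0, 1], [1, 1, 0]
c1, c2, c3 = 1, -1, 0  # claimed coefficients
x0 = [c1 * v1[i] + c2 * v2[i] + c3 * u[i] for i in range(3)]
assert x0 == [-1, 1, 0]  # matches the initial value x(0)
```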
 
  • #3
A better explanation is that here [itex]\mathbb{R}^3 = \ker A^2 \oplus \ker (A - 2I)[/itex], so that any [itex]x \in \mathbb{R}^3[/itex] can be written uniquely as the sum of a vector [itex]v \in \ker A^2[/itex] and a vector [itex]u \in \ker (A - 2I)[/itex].

For [itex]v \in \ker A^2[/itex] we have [itex]A^2 v = 0[/itex] so that [itex]e^{tA}v = (tA + I)v[/itex] and for [itex]u \in \ker (A - 2I)[/itex] we have [itex]Au = 2u[/itex] so that [itex]e^{tA}u = e^{2t}u[/itex]. Hence [tex]\begin{split}
e^{tA}x &= e^{tA}v + e^{tA}u \\
&= (tA + I)v + e^{2t}u.\end{split}[/tex]
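To see this splitting at work on a vector lying in neither subspace alone, take for instance ##x=(1,0,0)##. The plain-Python sketch below (the decomposition ##x=v+u## was worked out by hand from the basis vectors in the thread) compares the closed form ##(tA+I)v+e^{2t}u## with a truncated power series for ##e^{tA}x##:

```python
import math

A = [[1, 1, -1], [0, 2, -2], [-1, 1, -1]]

def matvec(M, w):
    return [sum(M[i][j] * w[j] for j in range(3)) for i in range(3)]

def expm_times(t, w, terms=40):
    # Reference value via truncated series: e^{tA} w = sum_k t^k A^k w / k!
    result, term = w[:], w[:]
    for k in range(1, terms):
        term = [t * c / k for c in matvec(A, term)]
        result = [result[i] + term[i] for i in range(3)]
    return result

# x = (1, 0, 0) splits as v + u with v in ker A^2 and u in ker(A - 2I):
v = [0.5, -0.5, 0.0]   # = -1/2 (0,1,1) + 1/2 (1,0,1), lies in ker A^2
u = [0.5, 0.5, 0.0]    # = 1/2 (1,1,0), eigenvector for lambda = 2

def closed_form(t):
    # (tA + I) v + e^{2t} u
    Av = matvec(A, v)
    return [v[i] + t * Av[i] + math.exp(2.0 * t) * u[i] for i in range(3)]
```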
 

FAQ: Linear homogeneous system with repeated eigenvalues

What is a linear homogeneous system with repeated eigenvalues?

A linear homogeneous system with repeated eigenvalues refers to a system of linear differential equations where the coefficient matrix has eigenvalues that are not distinct. This means that at least one eigenvalue appears more than once in the characteristic equation of the matrix.

How do you solve a linear homogeneous system with repeated eigenvalues?

To solve such a system, you first find the eigenvalues and corresponding eigenvectors. If the eigenvalues are repeated, you also need to find generalized eigenvectors to construct the complete solution. The solution involves terms that include polynomials multiplied by exponential functions of the eigenvalues.
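As a small illustration (not tied to the matrix in the thread above): for ##A=\begin{pmatrix}1&1\\0&1\end{pmatrix}##, the only eigenvalue is ##\lambda=1## with the single eigenvector ##v=(1,0)##. A generalized eigenvector is ##w=(0,1)##, since ##(A-I)w=v##, and the general solution of ##x'=Ax## is $$x(t)=c_1e^{t}\begin{pmatrix}1\\0\end{pmatrix}+c_2e^{t}\left(t\begin{pmatrix}1\\0\end{pmatrix}+\begin{pmatrix}0\\1\end{pmatrix}\right),$$ which exhibits the polynomial-times-exponential structure directly.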

What is the significance of generalized eigenvectors in systems with repeated eigenvalues?

Generalized eigenvectors are crucial when dealing with repeated eigenvalues because they provide the necessary additional linearly independent solutions to form a complete basis for the solution space. Without them, you would not have enough independent solutions to fully describe the system.

Can a system with repeated eigenvalues be diagonalized?

A matrix with repeated eigenvalues can be diagonalized only if, for every eigenvalue, the geometric multiplicity (number of linearly independent eigenvectors) equals the algebraic multiplicity (number of times the eigenvalue appears in the characteristic polynomial). When this fails, the matrix is called defective and cannot be diagonalized; instead it is put into Jordan canonical form, which includes Jordan blocks for the repeated eigenvalues.

What is the Jordan canonical form and how is it used in solving systems with repeated eigenvalues?

The Jordan canonical form is a block-diagonal matrix that simplifies the representation of a linear system with repeated eigenvalues. Each block corresponds to an eigenvalue and includes both eigenvectors and generalized eigenvectors. This form is used to decouple the system into simpler sub-systems that can be solved more easily, allowing for the construction of the general solution.
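For example, a single ##2\times2## Jordan block ##J=\begin{pmatrix}\lambda&1\\0&\lambda\end{pmatrix}## has exponential $$e^{tJ}=e^{\lambda t}\begin{pmatrix}1&t\\0&1\end{pmatrix},$$ so each Jordan block contributes terms of the form ##t^k e^{\lambda t}## to the solution.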
