Use matrix exponential to solve linear differential equations

In summary: first find an invertible matrix P such that P^{-1}AP = J is in Jordan form; X(t) is then found by solving the three simple equations that result.
  • #1
syj

Homework Statement



Consider the system X'(t)=AX+B(t) where:

A =
[0 0 -1]
[1 0 -3]
[0 1 -3]

and B(t)=
[e^{-3t}]
[e^{t}]
[3]

Find X(t) by first determining P such that P^{-1}AP=J is in Jordan form and then solving the three simple equations directly.

Homework Equations





The Attempt at a Solution


I know how to find J
but then I'm totally lost.
I need an example to follow.
I have no idea how to solve this.
Or if someone could please direct me to a page on the web that will help.
thanks
 
  • #2
You say you are able to find P such that [itex]P^{-1}AP= J[/itex], the Jordan form. Excellent- that's most of the hard work!

Think of it this way: multiply both sides of the equation by [itex]P^{-1}[/itex] to get [itex]P^{-1}dX/dt= d(P^{-1}X)/dt= P^{-1}AX+ P^{-1}B[/itex]. You can do that because all the matrices here are constant. Now [itex]PP^{-1}= I[/itex], of course, so we can insert it between A and X:
[tex]\frac{d(P^{-1}X)}{dt}= P^{-1}A(PP^{-1})X+ P^{-1}B[/tex]
[tex]\frac{d(P^{-1}X)}{dt}= (P^{-1}AP)(P^{-1}X)+ P^{-1}B[/tex]
[tex]\frac{d(P^{-1}X)}{dt}= J(P^{-1}X)+ P^{-1}B[/tex]

Now, let [itex]Y= P^{-1}X[/itex] and the equation becomes
[tex]\frac{dY}{dt}= JY+ C[/tex]
where [itex]C= P^{-1}B[/itex].

That will be at least a "partially separated" system of equations. Of course, once you have found Y, [itex]X= PY[/itex].
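The similarity transform above can be sanity-checked numerically. A minimal sketch, assuming NumPy, and using the matrix P that gets worked out later in this thread:

```python
import numpy as np

A = np.array([[0., 0., -1.],
              [1., 0., -3.],
              [0., 1., -3.]])

# Columns: the eigenvector <1, 2, 1>, then two generalized eigenvectors
P = np.array([[1., 1., 1.],
              [2., 1., 0.],
              [1., 0., 0.]])

J = np.linalg.inv(P) @ A @ P
print(np.round(J))  # a single Jordan block for lambda = -1
```

The printed J is upper triangular, which is what makes the transformed system "partially separated".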
 
  • #3
Ok,
please check if I am correct so far:
I found the characteristic polynomial of A to be [itex](\lambda+1)^3[/itex]
this gives me the eigenvalue [itex]\lambda=-1[/itex] (multiplicity 3)
the eigenvector I got was
[1 2 1]^T

NOW I am kinda stuck :blushing:
I thought I would have to create the matrix P
[1 2 1]
[1 2 1]
[1 2 1]

and then find its inverse.
However this is proving impossible for me.
I am certain I have either done something wrong in finding my characteristic polynomial OR there is a different method to use when I have repeated roots.

Please help!:cry:
 
  • #4
syj said:
Ok,
please check if I am correct so far:
I found the characteristic polynomial of A to be [itex](\lambda+1)^3[/itex]
this gives me the eigenvalue [itex]\lambda=-1[/itex] (multiplicity 3)
the eigenvector I got was
[1 2 1]^T

NOW I am kinda stuck :blushing:
I thought I would have to create the matrix P
[1 2 1]
[1 2 1]
[1 2 1]

and then find its inverse.
However this is proving impossible for me.
I am certain I have either done something wrong in finding my characteristic polynomial OR there is a different method to use when I have repeated roots.

Please help!:cry:

If J is the Jordan form, so that A = P*J*P^(-1), then exp(t*A) = exp(P*(t*J)*P^(-1)) = P*exp(t*J)*P^(-1), and computing exp(t*J) is easy.
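That identity can be checked numerically. A sketch assuming NumPy; the truncated Taylor series below is only for checking, not a serious way to compute a matrix exponential:

```python
import numpy as np

def expm_taylor(M, terms=40):
    # Truncated Taylor series sum_k M^k / k! -- adequate for small matrices and times
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0., 0., -1.],
              [1., 0., -3.],
              [0., 1., -3.]])
P = np.array([[1., 1., 1.],
              [2., 1., 0.],
              [1., 0., 0.]])
Pinv = np.linalg.inv(P)
J = Pinv @ A @ P

t = 0.7  # arbitrary test time
lhs = expm_taylor(t * A)
rhs = P @ expm_taylor(t * J) @ Pinv
print(np.allclose(lhs, rhs))  # True
```

The agreement is exact term by term, since (PJP^{-1})^k = P J^k P^{-1} for every power k.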

RGV
 
  • #5
If the characteristic polynomial is [itex](\lambda+1)^3[/itex]
then am I correct in saying the Jordan form of the matrix is J=
[-1 0 0]
[0 -1 0]
[0 0 -1]

and so e^J=
[e^{-1} 0 0]
[0 e^{-1} 0]
[0 0 e^{-1}]
 
  • #6
No, that is not correct. That would be the case if the matrix were "diagonalizable" - that is, if there were three independent eigenvectors. Ray Vickson, you can't use the same vector repeatedly. In order that P be invertible, the rows have to be independent vectors. Eigenvectors corresponding to distinct eigenvalues are always independent, so a matrix having three distinct eigenvalues is always diagonalizable, but that is not the case here.

Yes, [itex]\lambda= -1[/itex] is a triple root of the characteristic equation. That means that [itex](A+ I)^3v= 0[/itex] for every vector v (a matrix always satisfies its own characteristic equation). In particular, there must exist v such that Av= -v; i.e., v is an eigenvector. It is easy to show that v= <1, 2, 1> is such an eigenvector, as you say, but we only get one such vector. That is why the matrix is not diagonalizable. At this point we know that the Jordan normal form is
[tex]\begin{bmatrix}-1 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & -1\end{bmatrix}[/tex].
(If there had been three independent eigenvectors, it would be diagonal:
[tex]\begin{bmatrix}-1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1\end{bmatrix}[/tex]
If there had been two independent eigenvectors, it would have had a single 2×2 "block" together with a 1×1 block:
[tex]\begin{bmatrix}-1 & 1 & 0 \\ 0 & -1 & 0\\ 0 & 0 & -1\end{bmatrix}[/tex].)

Since there are no other independent eigenvectors, we look for "generalized eigenvectors": vectors v such that [itex](A+ I)v\ne 0[/itex] but [itex](A+ I)^2v= 0[/itex]. That is the same as [itex](A+ I)u= 0[/itex] where [itex]u= (A+ I)v[/itex], so u is an eigenvector. Since every eigenvector is a multiple of <1, 2, 1>, we look for a vector v such that (A+ I)v= <1, 2, 1>.

The matrix equation, (A+ I)v= <1, 2, 1> or
[tex]\begin{bmatrix}1 & 0 & -1 \\ 1 & 1 & -3\\ 0 & 1 & -2\end{bmatrix}\begin{bmatrix}x \\ y \\ z\end{bmatrix}= \begin{bmatrix}x- z \\ x+ y- 3z \\ y- 2z\end{bmatrix}= \begin{bmatrix}1 \\ 2 \\ 1\end{bmatrix}[/tex]
gives the three equations x- z= 1, x+ y- 3z= 2, and y- 2z= 1. From the first equation, x= z+ 1. From the third equation, y= 2z+ 1. Putting those into the second equation, z+ 1+ 2z+ 1- 3z= 2 is satisfied for any z. Taking z= 1, x= 2, and y= 3. One "generalized eigenvector" is <2, 3, 1>.

But since [itex](A+ I)^3v= 0[/itex] for all v, there must also be v such that neither [itex](A+ I)v= 0[/itex] nor [itex](A+I)^2v= 0[/itex] but [itex](A+I)^3v= 0[/itex]; that, in turn, means [itex](A+ I)v[/itex] must be a generalized eigenvector such as <2, 3, 1>.
The last independent generalized eigenvector must satisfy (A+ I)v= <2, 3, 1>, or
[tex]\begin{bmatrix}1 & 0 & -1 \\ 1 & 1 & -3\\ 0 & 1 & -2\end{bmatrix}\begin{bmatrix}x \\ y \\ z\end{bmatrix}= \begin{bmatrix}x- z \\ x+ y- 3z \\ y- 2z\end{bmatrix}= \begin{bmatrix}2 \\ 3\\ 1\end{bmatrix}[/tex]
Solve for x, y, z and construct the matrix P using those vectors as columns.
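Working through that last system the same way as before gives x = z + 2 and y = 2z + 1 with z free; taking z = 0 gives <2, 1, 0> as one admissible choice (any multiple of the eigenvector may be added, so this representative is a choice, not the unique answer). The whole chain can be verified numerically; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[0., 0., -1.],
              [1., 0., -3.],
              [0., 1., -3.]])
N = A + np.eye(3)            # (A + I)

v1 = np.array([1., 2., 1.])  # eigenvector:  N v1 = 0
v2 = np.array([2., 3., 1.])  # generalized:  N v2 = v1
v3 = np.array([2., 1., 0.])  # generalized:  N v3 = v2 (one choice, z = 0)

print(np.allclose(N @ v1, 0))   # True
print(np.allclose(N @ v2, v1))  # True
print(np.allclose(N @ v3, v2))  # True
```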
 
  • #7
I am confused: I did not use any vectors, repeatedly or not. I did use your relationship A = P.J.P^(-1) from your earlier post. Then I just pointed out that exp(tA) = P.exp(tJ).P^(-1). This is well known, and holds for any analytic function f(A), not just for exp(tA).

RGV
 
  • #8
I have found P
P=
1 1 1
0 1 2
0 0 1

so that P^{-1}AP = J =
-1 0 0
1 -1 0
0 1 -1

I am going to try to follow HallsofIvy's explanation.

...
 
  • #9
HELP
I am totally lost ...
I don't know where to go from here ...
All I have is P
and looking at HallsofIvy's steps I am stuck at the last equation.
How do I find Y?
:(
the question in my textbook says to find P and then solve the three simple equations.
What 3 simple equations?
:(
please help
 
  • #10
Try calculating the first few powers of J. Look for a pattern.

Also, for P, reverse the order of the columns, that is, use
[tex]P=\begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}[/tex]
so you'll get
[tex]J=\begin{pmatrix} -1 & 1 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & -1 \end{pmatrix}[/tex]
Jordan normal form is an upper triangular matrix.
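The pattern hinted at above: J = -I + N where N is strictly upper triangular and hence nilpotent, so the powers of J (and therefore exp(tJ)) reduce to a finite sum. A sketch, assuming NumPy:

```python
import numpy as np

J = np.array([[-1., 1., 0.],
              [0., -1., 1.],
              [0., 0., -1.]])
N = J + np.eye(3)    # strictly upper triangular, so nilpotent

print(N @ N)         # a single nonzero entry, in the top-right corner
print(N @ N @ N)     # the zero matrix: N^3 = 0
# Since -I and N commute, exp(tJ) = e^{-t} (I + t N + (t^2/2) N^2), a finite sum.
```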
 
  • #11
What did I do wrong that led to my having the columns in the wrong order for P?
I took [1 0 0]^T as my starting vector and then used a "chain" method I found online.
The method defines x2=(A+I)x1
and then x3=(A+I)x2

Please help me understand.
I am also still struggling to solve the linear differential equation.
Can someone recommend a textbook that has a worked example.
The text I am using is Matrices and Linear Transformations by Cullen. There are no worked examples for this section. :(
There is something on pg269 of the text. If anyone can check it out on google books and maybe help me out?
thanks a mil
 
  • #12
That's backwards. The first column x1 should be the eigenvector of A. The next column should be the solution to (A+I)x2=x1 (that is, (A-λI)x2=x1 with λ=-1), and so on.

Where are you getting stuck solving the system?
 
  • #13
Ok, I have discovered the error of my ways.
I now have the following:
P = [tex]\begin{pmatrix}1&1&1\\2&1&0\\1&0&0\end{pmatrix}[/tex]

P^{-1} = [tex]\begin{pmatrix}0&0&1\\0&1&-2\\1&-1&1\end{pmatrix}[/tex]

so that P^{-1}AP = J = [tex]\begin{pmatrix}-1&1&0\\0&-1&1\\0&0&-1\end{pmatrix}[/tex]

is that correct so far?
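Those matrices can be checked numerically, and they also give the forcing term C = P^{-1}B(t) for the "three simple equations": because J is upper triangular, the last equation involves only y3, so you solve it first and back-substitute. A sketch of the setup, assuming NumPy:

```python
import numpy as np

P = np.array([[1., 1., 1.],
              [2., 1., 0.],
              [1., 0., 0.]])
Pinv = np.linalg.inv(P)

def B(t):
    return np.array([np.exp(-3 * t), np.exp(t), 3.0])

def C(t):
    # Forcing term of dY/dt = J Y + C after the substitution Y = P^{-1} X
    return Pinv @ B(t)

# With J = [[-1,1,0],[0,-1,1],[0,0,-1]] the transformed system is triangular:
#   y1' = -y1 + y2 + C1(t)
#   y2' = -y2 + y3 + C2(t)
#   y3' = -y3      + C3(t)   <- solve this one first, then back-substitute
print(C(0.0))
```

Each scalar equation is first order and linear, so an integrating factor e^t solves it directly; X = PY recovers the answer at the end.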
 

FAQ: Use matrix exponential to solve linear differential equations

How does the matrix exponential method work to solve linear differential equations?

The matrix exponential method involves using the exponential of a matrix to solve a linear differential equation. This method is based on the fact that the solution to a linear differential equation can be represented as a linear combination of exponential functions.
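For instance, here is a minimal sketch assuming NumPy, using a small diagonalizable example matrix chosen for illustration (not the matrix from the thread):

```python
import numpy as np

# X' = A X has the closed-form solution X(t) = exp(tA) X(0).
# For a diagonalizable A, exp(tA) = V diag(e^{t lambda_i}) V^{-1}.
A = np.array([[0., 1.],
              [-2., -3.]])  # eigenvalues -1 and -2
lam, V = np.linalg.eig(A)
X0 = np.array([1., 0.])

def X(t):
    # (V * row) scales the columns of V by e^{t lambda_i}
    return (V * np.exp(lam * t)) @ np.linalg.solve(V, X0)

# Verify X'(t) = A X(t) at a sample point via a central difference
t, h = 0.3, 1e-6
deriv = (X(t + h) - X(t - h)) / (2 * h)
print(np.allclose(deriv, A @ X(t), atol=1e-4))  # True
```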

What are the advantages of using the matrix exponential method to solve linear differential equations?

One advantage of the matrix exponential method is that it can be used to solve systems of linear differential equations, which may have multiple variables. It also provides a closed-form solution, meaning there is no need for numerical approximation. Additionally, the matrix exponential method can be easily implemented with computer software.

What is the difference between using the matrix exponential method and other methods for solving linear differential equations?

Compared to other methods, such as the power series method or the variation of parameters method, the matrix exponential method is more efficient for solving systems of linear differential equations. It also provides a simpler and more elegant solution for certain types of linear differential equations.

Are there any limitations to using the matrix exponential method for solving linear differential equations?

The matrix exponential method may not be suitable for all types of linear differential equations. It is most effective for systems of linear differential equations with constant coefficients. Additionally, it may not be practical for solving higher-order differential equations or non-linear differential equations.

Can the matrix exponential method be used for real-world applications?

Yes, the matrix exponential method has many real-world applications in fields such as engineering, physics, and economics. It can be used to model and analyze various systems, including electrical circuits, chemical reactions, and population dynamics. It is also commonly used in control theory to design controllers for systems with multiple inputs and outputs.
