Matrix Methods for Difference Problems II

In summary, the problem with using the diagonalization method is that we are changing the basis for each value of n.
  • #1
topsquark
This is related to a recent (mainly unserious) post I made. I've since done more work on a similar problem, and I'd like to bounce an idea off you about why this approach doesn't work. I'm really not sure I'm right, so I'd appreciate a comment.

I am working with some simple systems of difference equations. For example, written in matrix form:
\(\displaystyle \left ( \begin{matrix} a_{n + 1} \\ b_{n + 1} \end{matrix} \right ) = \left ( \begin{matrix} 0 & n \\ 1 & 0 \end{matrix} \right ) \left ( \begin{matrix} a_n \\ b_n \end{matrix} \right )\)

The idea is to use matrix methods to simplify the system: if I diagonalize the matrix and transform the vectors into the eigenbasis, I can decouple the a's and b's. So I do that, solve the now-diagonal recursion, and transform back to the original basis. Done, right? Wrong! The solution doesn't check.

I know we can do this for a system with constant matrix elements, so my question is: does the fact that we have a non-constant coefficient matrix (i.e., it depends on n) change the recursion when we change to the eigenbasis?

-Dan
 
  • #2
Transforming to PF ## \LaTeX ##:
$$ \begin{pmatrix} a_{n + 1} \\ b_{n + 1} \end{pmatrix} = \begin{pmatrix} 0 & n \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_n \\ b_n \end{pmatrix} $$
topsquark said:
Done, right?
Unless you show your workings, it is impossible to tell.

topsquark said:
I know we can do this with a system with constant matrix elements so my question is: Does the fact that we have a non-constant transformation matrix (ie it depends on n) change the recursion when we change to the eigenbases?
I think you have probably answered your own question. But if you think about it, the eigenvectors of ## T_n ## are meaningless because they tell you about ## T_n(T_n(s)) ## but you are actually looking for ## T_{n + 1}(T_n(s)) ##.
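This point can be checked numerically: the map actually applied over two consecutive steps is ## T_{n+1} T_n ##, not ## T_n^2 ##, and for the matrix in the OP the two differ (in fact ## T_{n+1} T_n ## turns out to be diagonal). A minimal sketch in Python/NumPy, with an arbitrary test value of ## n ##:

```python
import numpy as np

# Coefficient matrix from the OP: (a_{n+1}, b_{n+1}) = T(n) (a_n, b_n)
def T(n):
    return np.array([[0.0, float(n)], [1.0, 0.0]])

n = 3  # arbitrary test value

# The map applied over two consecutive steps is T(n+1) T(n), not T(n)^2,
# so the eigenvectors of a single T(n) do not control the iteration.
two_step = T(n + 1) @ T(n)
assert not np.allclose(two_step, T(n) @ T(n))

# For this particular matrix, the true two-step map is already diagonal:
# T(n+1) T(n) = diag(n+1, n).
assert np.allclose(two_step, np.diag([n + 1.0, float(n)]))
```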
 
  • #3
Actually, for this aspect of the problem, the derivation of the original matrix equation is irrelevant. I was simply looking for an explanation of why the diagonalization method didn't work.

As it happens, the diagonalization method won't work because (at least the way I was approaching it) we were changing the basis for each value of n, so taking the product of the diagonal matrices alone made no sense. I was eventually able to write out an example for a few values of n in W|A and verified that it does work if you insert the correct change-of-basis matrix between the diagonal matrices for each n, but the process is mind-bogglingly intense. I have given up for now and, frankly, since there is a better way to approach the problem, I'm not likely to get back to it.
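The verification described above can also be sketched numerically (Python/NumPy rather than W|A; the initial values here are arbitrary). The product starts at n = 1 because ##T(0)## has a double zero eigenvalue and is not diagonalizable. The eigen-factored product with the change-of-basis factors kept in place matches direct iteration, while keeping only the diagonal factors in a single basis does not:

```python
import numpy as np

def T(n):
    # Coefficient matrix of the recursion: (a_{n+1}, b_{n+1}) = T(n) (a_n, b_n)
    return np.array([[0.0, float(n)], [1.0, 0.0]])

x0 = np.array([2.0, 3.0])   # arbitrary starting vector, taken here as (a_1, b_1)
steps = range(1, 6)

# Direct iteration of the recursion.
direct = x0.copy()
for n in steps:
    direct = T(n) @ direct

# Eigen-factored product T(n) = S(n) D(n) S(n)^{-1}, with the change-of-basis
# factors S(n)^{-1} S(n-1) left in place between the diagonal matrices.
factored = x0.copy()
for n in steps:
    eigvals, S = np.linalg.eig(T(n))
    factored = S @ np.diag(eigvals) @ np.linalg.inv(S) @ factored

assert np.allclose(direct, factored)

# The naive shortcut -- multiplying only the diagonal factors and changing
# basis once at each end -- gives a different (wrong) answer.
diag_prod = np.eye(2)
for n in steps:
    eigvals, _ = np.linalg.eig(T(n))
    diag_prod = np.diag(eigvals) @ diag_prod
_, S_first = np.linalg.eig(T(1))
_, S_last = np.linalg.eig(T(5))
naive = S_last @ diag_prod @ np.linalg.inv(S_first) @ x0
assert not np.allclose(direct, naive)
```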

Thanks for the reply!

-Dan
 
  • #4
I mean, in this particular case you can diagonalise the system by letting
$$
A_n = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix} ^n\begin{pmatrix} a_n \\ b_n\end{pmatrix}
$$
 
  • #5
Orodruin said:
I mean, in this particular case you can diagonalise the system by letting
$$
A_n = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix} ^n\begin{pmatrix} a_n \\ b_n\end{pmatrix}
$$
Okay, let me be a bit more clear about the issue and what I was trying to do. If we have the system with constant coefficients
##\left ( \begin{matrix} a_{n+1} \\ b_{n+1} \end{matrix} \right ) = \left ( \begin{matrix} w & x \\ y & z \end{matrix} \right ) \left ( \begin{matrix} a_n \\ b_n \end{matrix} \right )##

Then you can just diagonalize the coefficient matrix and multiply out, like you said:
##\left ( \begin{matrix} a_n \\ b_n \end{matrix} \right ) = \left ( \begin{matrix} p & 0 \\ 0 & q \end{matrix} \right ) ^n \left ( \begin{matrix} a_0 \\ b_0 \end{matrix} \right )##

But since we are using a coefficient matrix that depends on n, we have to use a change-of-basis transformation every time we change n. So the formula becomes
$$\left ( \begin{matrix} a_n \\ b_n \end{matrix} \right ) = T(n-1) A_{n-1} T^{-1}(n-1) \, T(n-2) A_{n-2} T^{-1}(n-2) \cdots T(0) A_0 T^{-1}(0) \left ( \begin{matrix} a_0 \\ b_0 \end{matrix} \right )$$
where the ##A_k## are the diagonalized coefficient matrices and the ##T(k)## are the transformation matrices that diagonalize them.

It gets very ugly, very fast!

-Dan
 
  • #6
topsquark said:
Then you can just diagonalize the coefficient matrix and multiply out, like you said:
That is not what I said.
 
  • #7
To be a bit more specific, the particular case of the OP can be rewritten by instead using
Orodruin said:
$$
A_n = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix} ^n\begin{pmatrix} a_n \\ b_n\end{pmatrix}
$$
as your variables. Introducing the notation
$$
P = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix}
$$
and
$$
x_n = \begin{pmatrix} a_n \\ b_n\end{pmatrix}
$$
you would have ##A_n = P^n x_n## with ##P^2 = 1##, and your recursion relation is of the form
$$
x_{n+1} = T_n x_n
$$
leading to
$$
A_{n+1} = P^{n+1} T_n P^n A_n
$$
where in the case of the OP, ##P^{n+1} T_n P^n## is diagonal.

An alternative approach to the problem in the OP is to consider that
$$
b_{n+2} = a_{n+1} = n b_n
$$
and therefore
$$
b_{2n} = (2n-2)(2n-4) \cdots 2 \cdot 0 \, b_0 = 0 \quad (n \geq 1)
$$
and
$$
b_{2n+1} = (2n-1)(2n-3) \cdots 1 \, b_1 = (2n-1)!! \, a_0
$$
with the ##a_n## following from ##a_n = b_{n+1}##.
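Both observations can be checked numerically. A short sketch in Python/NumPy (the initial values are arbitrary), using that ##P^2 = 1##, so ##P^n## is either the identity or ##P##:

```python
import numpy as np

P = np.array([[0.0, 1.0], [1.0, 0.0]])

def T(n):
    # Coefficient matrix of the recursion: (a_{n+1}, b_{n+1}) = T(n) (a_n, b_n)
    return np.array([[0.0, float(n)], [1.0, 0.0]])

def P_pow(n):
    # P^2 = I, so P^n is the identity for even n and P for odd n.
    return P if n % 2 else np.eye(2)

# P^{n+1} T(n) P^n is diagonal for every n, so the A_n = P^n x_n
# recursion decouples.
for n in range(8):
    M = P_pow(n + 1) @ T(n) @ P_pow(n)
    assert np.allclose(M, np.diag(np.diag(M)))

# Check the closed form b_{2n} = 0 (n >= 1) and b_{2n+1} = (2n-1)!! a_0
# by iterating the recursion directly from arbitrary (a_0, b_0).
a, b = 2.0, 3.0                       # arbitrary (a_0, b_0)
bs = [b]
for n in range(9):
    a, b = n * b, a                   # a_{n+1} = n b_n, b_{n+1} = a_n
    bs.append(b)
assert bs[2] == 0.0 and bs[4] == 0.0              # even-index b's vanish
assert bs[3] == 1 * 2.0                           # 1!!  * a_0
assert bs[5] == 3 * 2.0                           # 3!!  * a_0
assert bs[7] == 15 * 2.0                          # 5!!  * a_0
```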
 
  • #8
I generally see what you are saying, though I'm not certain how you are using the ##A_n## and ##P^n##. But it looks to me like we are, at least, in general agreement. (I haven't been able to find material online about non-constant-coefficient cases, so you may be applying a method with more details than I'm familiar with.) I can do the substitution method, though. I was trying to find a method to "generalize" the work I was doing on first-order non-constant-coefficient recursions, in a similar fashion to how matrix methods generalize the solution method for systems of equations. It turns out it didn't work all that well!

Thanks for putting in the time.

-Dan
 

FAQ: Matrix Methods for Difference Problems II

What are matrix methods for difference problems?

Matrix methods for difference problems are mathematical techniques used to solve difference equations or systems of equations that evolve in discrete time steps. These methods involve representing the equations in matrix form and using matrix operations to solve for the unknown sequences.

What is the difference between matrix methods and other numerical methods?

Matrix methods are specifically designed for difference problems, which involve discrete time steps. Other numerical methods, such as finite difference or finite element methods, are more general: they discretize continuous problems and can be applied to a wider range of equations.

What types of difference problems can be solved using matrix methods?

Matrix methods can be used to solve a variety of difference problems, including initial value problems, boundary value problems, and eigenvalue problems. They are particularly useful for problems arising from discretized partial differential equations.

What are the advantages of using matrix methods for difference problems?

Matrix methods offer several advantages over other numerical methods, including their ability to handle complex systems of equations, their efficiency in solving large problems, and their accuracy in approximating solutions.

Are there any limitations to using matrix methods for difference problems?

While matrix methods are powerful tools for solving difference problems, they do have some limitations. They may not be suitable for problems with irregular boundaries or highly nonlinear equations. Additionally, they may require a significant amount of computational resources for large or complex problems.
