How Does Spectral Decomposition Apply to Matrix A?

In summary, we verify that A has a spectral decomposition by finding its eigenvalues and corresponding eigenvectors, and we then use the orthogonal projections onto the eigenspaces to check the result against the spectral theorem. The decomposition can be written A = QDQ^{-1}, where Q is an invertible matrix whose columns are eigenvectors of A and D is a diagonal matrix with the eigenvalues of A along the diagonal. Acting on a vector, Q^{-1} changes it into the eigenbasis, D scales each coordinate by the corresponding eigenvalue, and Q changes the basis back, so the product agrees with A.
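For this A, with eigenvalues 3 and -1 and eigenvectors (1,1) and (-1,1), one such choice is

[tex]
A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}
\begin{bmatrix} 3 & 0 \\ 0 & -1 \end{bmatrix}
\cdot \frac{1}{2}\begin{bmatrix} 1 & 1 \\ -1 & 1 \end{bmatrix}
[/tex]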
  • #1
Shackleford
[itex]A =
\begin{bmatrix}
1 & 2 \\
2 & 1 \\
\end{bmatrix}[/itex]

(1) Verify that LA possesses a spectral decomposition.
(2) For each eigenvalue of LA, explicitly define the orthogonal projection on the corresponding eigenspace.
(3) Verify your results using the spectral theorem.

The eigenvalues of A are 3 and -1. The eigenvectors are (1,1) and (-1,1), respectively. I'm not sure how to proceed.
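A quick check confirms these are eigenpairs:

[tex]
A\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}3\\3\end{bmatrix}=3\begin{bmatrix}1\\1\end{bmatrix},
\qquad
A\begin{bmatrix}-1\\1\end{bmatrix}=\begin{bmatrix}1\\-1\end{bmatrix}=-\begin{bmatrix}-1\\1\end{bmatrix}
[/tex]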
 
  • #2
Anyone?
 
  • #3
The characteristic polynomial of what you wrote is [tex]x^{2}-2x-3[/tex]; try using that, it will probably make things easier for you.
 
  • #4
mtayab1994 said:
The characteristic polynomial of what you wrote is [tex]x^{2}-2x-3[/tex]; try using that, it will probably make things easier for you.

I don't know how to show it possesses spectral decomposition.

What do I orthogonally project onto the eigenspaces?
 
  • #5
Well, I'd say you should sketch the parabola I gave you, and it should look easier.
 
  • #6
mtayab1994 said:
Well, I'd say you should sketch the parabola I gave you, and it should look easier.

I don't see how the characteristic polynomial comes into play. Am I projecting the parabola onto the lines associated with the eigenspaces?
 
  • #7
1) What does it mean for something to have a spectral decomposition?
Since you have your eigenvectors, how can you show that it has one?
2) You should be able to find the projections easily enough.
3) I'm not quite sure what 'use the spectral theorem' means, since there is no single 'spectral theorem' on its own. I assume it means to just work out the decomposition and show that it agrees with A.
 
  • #8
genericusrnme said:
1) What does it mean for something to have a spectral decomposition?
Since you have your eigenvectors, how can you show that it has one?
2) You should be able to find the projections easily enough.
3) I'm not quite sure what 'use the spectral theorem' means, since there is no single 'spectral theorem' on its own. I assume it means to just work out the decomposition and show that it agrees with A.

The set {λ1, λ2, ..., λk} of eigenvalues of T is the spectrum of T.

T = λ1T1 + λ2T2 + ... + λkTk

where Ti is the orthogonal projection of V onto the eigenspace Wi.
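For this problem, with the normalized eigenvectors (1,1)/√2 and (-1,1)/√2, those projections should work out to

[tex]
T_1 = \frac{1}{2}\begin{bmatrix}1 & 1\\ 1 & 1\end{bmatrix},
\qquad
T_2 = \frac{1}{2}\begin{bmatrix}1 & -1\\ -1 & 1\end{bmatrix}
[/tex]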
 
  • #9
Yes, and what does that matrix T look like?
(I should say, what does T look like in a certain basis ;))
 
  • #10
genericusrnme said:
Yes, and what does that matrix T look like?
(I should say, what does T look like in a certain basis ;))

Is T a diagonal matrix with its eigenvalues along the diagonal?
 
  • #11
Yes, you just need to show that this is so
 
  • #12
genericusrnme said:
Yes, you just need to show that this is so

So, just find an invertible matrix Q such that D = Q^{-1}AQ?
 
  • #13
Yes
Although I wouldn't say 'find', you've already got everything written down that you need to know in order to write Q out.
 
  • #14
genericusrnme said:
Yes
Although I wouldn't say 'find', you've already got everything written down that you need to know in order to write Q out.

Q is composed of the eigenvectors of A, right?
 
  • #15
Shackleford said:
Q is composed of the eigenvectors of A, right?

Yes, do you understand WHY this is so?
Do you know what Q is actually doing other than sitting there being an invertible matrix?
 
  • #16
genericusrnme said:
Yes, do you understand WHY this is so?
Do you know what Q is actually doing other than sitting there being an invertible matrix?

Well, A is normal and so it is diagonalizable. What is Q doing?
 
  • #17
The Q is changing the basis you are working in.
If [itex]a[/itex], [itex]b[/itex] are the eigenvectors of [itex]T[/itex] and [itex]\lambda_1[/itex], [itex]\lambda_2[/itex] the eigenvalues, we have

[itex]Ta = \lambda_1 a[/itex]
[itex]Tb = \lambda_2 b[/itex]

If we use the basis where the vector (1,0) ~ a and (0,1) ~ b, then T is obviously diagonal. What [itex]Q^{-1}[/itex] does is take the vectors a and b and turn them into (1,0) and (0,1) respectively.

If we have a as the first column and b as the second column of Q, then [itex]Q.(1,0) = a[/itex] and [itex]Q.(0,1) = b[/itex]. So then [itex]Q^{-1}.a = (1,0)[/itex] and [itex]Q^{-1}.b = (0,1)[/itex].

What your [itex]T=Q\ D\ Q^{-1}[/itex] decomposition does when it acts on a vector x is this:
1. [itex]Q^{-1}[/itex] rewrites x in terms of the eigenbasis a and b
2. The diagonal matrix applies a simple multiplication, [itex]a \rightarrow \lambda_1 a[/itex], [itex]b \rightarrow \lambda_2 b[/itex]
3. Q changes the result back to the (1,0), (0,1) basis
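For the matrix in this thread, one such choice would be

[tex]
Q = \begin{bmatrix}1 & -1\\ 1 & 1\end{bmatrix},\qquad
D = \begin{bmatrix}3 & 0\\ 0 & -1\end{bmatrix},\qquad
Q^{-1} = \frac{1}{2}\begin{bmatrix}1 & 1\\ -1 & 1\end{bmatrix}
[/tex]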
 
  • #18
genericusrnme said:
The Q is changing the basis you are working in.
If [itex]a[/itex], [itex]b[/itex] are the eigenvectors of [itex]T[/itex] and [itex]\lambda_1[/itex], [itex]\lambda_2[/itex] the eigenvalues, we have

[itex]Ta = \lambda_1 a[/itex]
[itex]Tb = \lambda_2 b[/itex]

If we use the basis where the vector (1,0) ~ a and (0,1) ~ b, then T is obviously diagonal. What [itex]Q^{-1}[/itex] does is take the vectors a and b and turn them into (1,0) and (0,1) respectively.

If we have a as the first column and b as the second column of Q, then [itex]Q.(1,0) = a[/itex] and [itex]Q.(0,1) = b[/itex]. So then [itex]Q^{-1}.a = (1,0)[/itex] and [itex]Q^{-1}.b = (0,1)[/itex].

What your [itex]T=Q\ D\ Q^{-1}[/itex] decomposition does when it acts on a vector x is this:
1. [itex]Q^{-1}[/itex] rewrites x in terms of the eigenbasis a and b
2. The diagonal matrix applies a simple multiplication, [itex]a \rightarrow \lambda_1 a[/itex], [itex]b \rightarrow \lambda_2 b[/itex]
3. Q changes the result back to the (1,0), (0,1) basis

Are you calling my original matrix A matrix T?
 
  • #19
Yes, sorry for mixing notations :shy:

But what I wrote is still true for any matrix which has a full set of linearly independent eigenvectors.
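For your A, writing it as eigenvalues times the projections onto the eigenspaces spanned by (1,1) and (-1,1) gives

[tex]
A = 3\cdot\frac{1}{2}\begin{bmatrix}1 & 1\\ 1 & 1\end{bmatrix}
+(-1)\cdot\frac{1}{2}\begin{bmatrix}1 & -1\\ -1 & 1\end{bmatrix}
= \begin{bmatrix}1 & 2\\ 2 & 1\end{bmatrix}
[/tex]

which agrees with the original matrix, as the spectral theorem requires.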
 

FAQ: How Does Spectral Decomposition Apply to Matrix A?

What is spectral decomposition?

Spectral decomposition is a mathematical method used to break down a complex signal into its individual frequency components. It is commonly used in signal processing, data analysis, and machine learning.

How does spectral decomposition work?

Spectral decomposition involves transforming a signal from the time domain to the frequency domain using techniques such as Fourier transform or wavelet transform. This allows for the identification and isolation of specific frequency components within the signal.
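For example, the discrete Fourier transform writes a length-N signal x_n as a sum of frequency components:

[tex]
X_k = \sum_{n=0}^{N-1} x_n\, e^{-2\pi i k n/N}, \qquad k = 0, 1, \ldots, N-1
[/tex]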

What are the applications of spectral decomposition?

Spectral decomposition has a wide range of applications in various fields such as image processing, speech recognition, seismic data analysis, and finance. It is particularly useful in identifying hidden patterns and structures within complex data sets.

What are the benefits of using spectral decomposition?

Using spectral decomposition allows for a deeper understanding of the underlying components of a signal, which can help in making more accurate predictions and decisions. It also allows for noise reduction and feature extraction, making it a valuable tool in data analysis and machine learning.

Are there any limitations to spectral decomposition?

One limitation of spectral decomposition is that it assumes the signal is stationary, meaning it does not change over time. It may also be affected by noise and artifacts, which can impact the accuracy of the results. Additionally, the choice of transform and parameters can greatly affect the outcome of the decomposition.
