Linear Algebra and Eigenvalues

In summary, to compute A^2 for a diagonalizable n×n matrix A whose only eigenvalues are 1 and -1, we can write A = PDP^-1 using the definition of a diagonalizable matrix. Then A^2 = (PDP^-1)(PDP^-1) = PD^2P^-1. Every diagonal entry of D is either 1 or -1, so changing the algebraic multiplicities changes the form of D but not of D^2: in every case D^2 = I, and therefore A^2 = PIP^-1 = I. P plays no role in the final answer.
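For readers who want to check this numerically, here is a minimal sketch (added for illustration, not part of the original thread). The particular P and the multiplicities on D's diagonal below are arbitrary assumptions; any invertible P and any mix of 1s and -1s give the same result.

    # Minimal numerical sketch: D and P are arbitrary illustrative choices.
    import numpy as np

    D = np.diag([1.0, -1.0, -1.0])        # eigenvalues 1 and -1, some choice of multiplicities
    P = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0],
                  [1.0, 0.0, 1.0]])       # any invertible matrix works here
    A = P @ D @ np.linalg.inv(P)

    print(np.allclose(A @ A, np.eye(3)))  # True: A^2 = P D^2 P^-1 = P I P^-1 = I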
  • #1
Bluesman01
Suppose A is a diagonalizable n×n matrix where 1 and -1 are the only eigenvalues (algebraic multiplicity is not given). Compute A^2.

The only thing I could think to do with this question is set A = PDP^-1 (definition of a diagonalizable matrix) and then A^2 = (PDP^-1)(PDP^-1) = PD^2P^-1.

This is how I left it on the test, but I'm sure this isn't right. How can you solve this without having the original matrix A or the algebraic multiplicity of the eigenvalues?
 
  • #2
Bluesman01 said:
Suppose A is a diagonalizable n×n matrix where 1 and -1 are the only eigenvalues (algebraic multiplicity is not given). Compute A^2.

The only thing I could think to do with this question is set A = PDP^-1 (definition of a diagonalizable matrix) and then A^2 = (PDP^-1)(PDP^-1) = PD^2P^-1.
Since A is diagonalizable and its only eigenvalues are 1 and -1, what form must D take? Even more to the point, what does D^2 have to be?
Bluesman01 said:
This is how I left it on the test, but I'm sure this isn't right. How can you solve this without having the original matrix A or the algebraic multiplicity of the eigenvalues?
 
  • #3
But if the multiplicity of either eigenvalue is more than one, D changes form. Both P and D do. That's where I got stuck.
 
  • #4
Bluesman01 said:
But if the multiplicity of either eigenvalue is more than one, D changes form. Both P and D do. That's where I got stuck.

Of course D changes form. D^2 doesn't. What is it? Once you figure out what it is, P won't matter.
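To see this point concretely, here is a short check (added for illustration, not from the thread): whatever algebraic multiplicities the eigenvalues 1 and -1 have, the diagonal matrix D built from them squares to the identity, so P drops out of A^2 = PD^2P^-1. The size n = 4 is an arbitrary choice for the demo.

    # Illustrative check: for every split of multiplicities between 1 and -1, D^2 = I.
    import numpy as np

    n = 4                                         # size chosen arbitrarily
    for k in range(n + 1):                        # k = algebraic multiplicity of eigenvalue 1
        D = np.diag([1.0] * k + [-1.0] * (n - k))
        assert np.allclose(D @ D, np.eye(n))      # D^2 = I in every case
    print("D^2 = I for every multiplicity split")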
 
  • #5
Ah I see it now. Thanks to all who replied.
 
  • #6
That would be all two of us: Dick and myself...
 

FAQ: Linear Algebra and Eigenvalues

What is Linear Algebra?

Linear Algebra is a branch of mathematics that studies linear equations and their properties. It involves the manipulation and analysis of mathematical objects called vectors and matrices, as well as their operations and transformations.

What are Eigenvalues and Eigenvectors?

Eigenvalues and Eigenvectors are important concepts in Linear Algebra. Eigenvectors are non-zero vectors that a linear transformation only scales: they stay on the same line through the origin, although their direction flips when the scaling factor is negative. The Eigenvalue is that scaling factor, so an eigenpair satisfies Av = λv.
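As a quick illustration of the definition (the 2×2 matrix below is an arbitrary example, not one discussed in the thread), multiplying an Eigenvector by the matrix only rescales it by its Eigenvalue:

    # Example matrix chosen for illustration; its eigenvalues are 3 and 1.
    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigenvalues, eigenvectors = np.linalg.eig(A)   # eigenvectors are the columns

    for lam, v in zip(eigenvalues, eigenvectors.T):
        print(np.allclose(A @ v, lam * v))         # True: A v = lambda v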

Why are Eigenvalues and Eigenvectors important?

Eigenvalues and Eigenvectors are important because they help us understand the behavior of a linear transformation. They can be used to simplify complex systems and make predictions about the future behavior of a system. They also have applications in various fields such as engineering, physics, and computer science.

How do you find Eigenvalues and Eigenvectors?

To find Eigenvalues and Eigenvectors, we solve the characteristic equation of the matrix A, obtained by setting the determinant of A − λI (where λ is a variable and I is the identity matrix) equal to zero. The values of λ that satisfy det(A − λI) = 0 are the Eigenvalues, and the corresponding Eigenvectors are the non-zero solutions of (A − λI)v = 0 for each Eigenvalue λ.
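A short sketch of this procedure, using a 2×2 matrix chosen purely for illustration: the characteristic polynomial det(A − λI) gives the Eigenvalues, and each Eigenvector satisfies (A − λI)v = 0 (recovered here with numpy.linalg.eig).

    # Example matrix for illustration: det(A - lambda*I) = lambda^2 - 7*lambda + 10.
    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    coeffs = np.poly(A)               # characteristic polynomial coefficients: [1, -7, 10]
    eigenvalues = np.roots(coeffs)    # roots of det(A - lambda*I) = 0, i.e. 5 and 2

    vals, vecs = np.linalg.eig(A)     # eigenvectors are the columns of vecs
    for lam, v in zip(vals, vecs.T):
        print(lam, np.allclose(A @ v, lam * v))   # each eigenpair satisfies A v = lambda v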

What are some real-world applications of Linear Algebra and Eigenvalues?

Linear Algebra and Eigenvalues have various applications in the real world. Some examples include image and signal processing, machine learning, data compression, and solving systems of differential equations. They are also used in computer graphics to manipulate and transform objects in 3D space. Additionally, Eigenvalues and Eigenvectors have applications in physics for solving problems related to quantum mechanics and vibrations.
