Linear algebra: eigenvalues, kernel

In summary: If the kernel of a linear map is nonzero, of any dimension, then 0 is an eigenvalue, and every nonzero vector in the kernel is an eigenvector with eigenvalue 0. In fact, the kernel is exactly the eigenspace for the eigenvalue 0, so its dimension is the geometric multiplicity of 0.
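For instance, a diagonal matrix with a two-dimensional kernel still has 0 as an eigenvalue, and every nonzero kernel vector is an eigenvector for it:
$$A=\operatorname{diag}(0,0,1,2),\qquad \ker A=\operatorname{span}\{e_1,e_2\},\qquad Ae_1=0\cdot e_1,\quad Ae_2=0\cdot e_2.$$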
  • #1
Felafel

Homework Statement


I've tried to solve the following exercise, but I don't have the solutions and I'm a bit uncertain about the result. Could someone please tell me if it's correct?
Given the endomorphism ##\phi## in ##\mathbb{E}^4## such that:
##\phi(x,y,z,t)=(x+y+t,x+2y,z,x+z+2t)## find:
A) ## M_{\phi}^{\epsilon \epsilon}##
B) ##\ker(\phi)##
C) eigenvalues and multiplicities
D) eigenspaces
E) is ##\phi## self-adjoint or not? Explain

The Attempt at a Solution



A)
$$M_{\phi}^{\epsilon \epsilon}=\begin{pmatrix} 1 & 1 & 0 & 1 \\ 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 2 \end{pmatrix}$$
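A quick way to double-check this matrix is to apply ##\phi## to the standard basis vectors and read off the columns; a minimal sketch, assuming NumPy is available:

Code:
import numpy as np

# phi as given in the problem statement
def phi(v):
    x, y, z, t = v
    return np.array([x + y + t, x + 2*y, z, x + z + 2*t])

# the columns of the matrix in the standard basis are phi(e1), ..., phi(e4)
M = np.column_stack([phi(e) for e in np.eye(4)])
print(M)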

B) ##\ker(\phi)## = solutions of ##MX=0##:
x + y + t = 0
x + 2y = 0
z = 0
x + z + 2t = 0
##\Rightarrow \ker(\phi)=\operatorname{span}\{(-2,1,0,1)\}##
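The kernel can also be checked symbolically; a minimal sketch, assuming SymPy is available:

Code:
import sympy as sp

M = sp.Matrix([[1, 1, 0, 1],
               [1, 2, 0, 0],
               [0, 0, 1, 0],
               [1, 0, 1, 2]])

# nullspace() returns a basis of ker(M); here a single vector
# proportional to (-2, 1, 0, 1)
print(M.nullspace())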

C) To find the eigenvalues, I compute the characteristic polynomial as the determinant of ##M-TI##:

$$M-TI=\begin{pmatrix} 1-T & 1 & 0 & 1 \\ 1 & 2-T & 0 & 0 \\ 0 & 0 & 1-T & 0 \\ 1 & 0 & 1 & 2-T \end{pmatrix} \;\Rightarrow\; T^4-3T^3+4T^2-6T+4=0,$$
where the only eigenvalue is 1 with multiplicity 1.

D) eigenspace:
I substitute T = 1:
$$M-I=\begin{pmatrix} 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \end{pmatrix} \;\Rightarrow\; v=(-1,1,-2,-1)$$
which is self-adjoint because the multiplicity equals the dimension of the eigenspace
 
  • #2
Felafel said:

Homework Statement


I've tried to solve the following exercise, but I don't have the solutions and I'm a bit uncertain about the result. Could someone please tell me if it's correct?
Given the endomorphism ##\phi## in ##\mathbb{E}^4## such that:
##\phi(x,y,z,t)=(x+y+t,x+2y,z,x+z+2t)## find:
A) ## M_{\phi}^{\epsilon \epsilon}##
B) ##\ker(\phi)##
C) eigenvalues and multiplicities
D) eigenspaces
E) is ##\phi## self-adjoint or not? Explain

The Attempt at a Solution



A)
$$M_{\phi}^{\epsilon \epsilon}=\begin{pmatrix} 1 & 1 & 0 & 1 \\ 1 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 2 \end{pmatrix}$$

B) ##\ker(\phi)## = solutions of ##MX=0##:
x + y + t = 0
x + 2y = 0
z = 0
x + z + 2t = 0
##\Rightarrow \ker(\phi)=\operatorname{span}\{(-2,1,0,1)\}##
Yes, that is correct. And the fact that the kernel is one-dimensional tells you that there exists an eigenvector with eigenvalue 0.
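Concretely, with ##M## the matrix from part A and ##v=(-2,1,0,1)##:
$$Mv=\begin{pmatrix}-2+1+0+1\\-2+2+0+0\\0\\-2+0+0+2\end{pmatrix}=\begin{pmatrix}0\\0\\0\\0\end{pmatrix}=0\cdot v,$$
so ##v## is an eigenvector of ##\phi## with eigenvalue 0.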

C) To find the eigenvalues, I compute the characteristic polynomial as the determinant of ##M-TI##:

$$M-TI=\begin{pmatrix} 1-T & 1 & 0 & 1 \\ 1 & 2-T & 0 & 0 \\ 0 & 0 & 1-T & 0 \\ 1 & 0 & 1 & 2-T \end{pmatrix} \;\Rightarrow\; T^4-3T^3+4T^2-6T+4=0,$$
where the only eigenvalue is 1 with multiplicity 1.
That's impossible. This is a 4 by 4 matrix, so the algebraic multiplicities must add to 4. Further, you have the polynomial wrong. As I said before, the fact that the kernel is one-dimensional tells you that 0 is an eigenvalue (of geometric multiplicity 1), so T = 0 must be a root of the polynomial and the constant term must be 0. There is no "+4" term.

In fact, the eigenvalues are 0, 1, 2, and 3.
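One way to check this is to expand ##\det(M-TI)## along the third row, which has a single nonzero entry:
$$\det(M-TI)=(1-T)\begin{vmatrix}1-T&1&1\\1&2-T&0\\1&0&2-T\end{vmatrix}=(1-T)\left[(1-T)(2-T)^2-2(2-T)\right]$$
$$=(1-T)(2-T)\left[(1-T)(2-T)-2\right]=(1-T)(2-T)\,T(T-3)=T(T-1)(T-2)(T-3),$$
which has the four simple roots 0, 1, 2, 3.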

D) eigenspace:
I substitute T = 1:
$$M-I=\begin{pmatrix} 0 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 \end{pmatrix} \;\Rightarrow\; v=(-1,1,-2,-1)$$
which is self-adjoint because the multiplicity equals the dimension of the eigenspace
IF it were true that 1 were the only eigenvalue, this would have proven that this matrix is NOT self-adjoint: the "multiplicity" of the eigenvalue (its algebraic multiplicity) would be 4, while the dimension of the eigenspace (its geometric multiplicity) would be 1.

However, that is not true. Here there are four distinct eigenvalues, so there are four linearly independent eigenvectors, which means each eigenspace is one-dimensional.
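A quick numerical sanity check of the eigenvalues, and of whether the matrix equals its transpose (self-adjointness with respect to the standard inner product); a minimal sketch, assuming NumPy:

Code:
import numpy as np

M = np.array([[1, 1, 0, 1],
              [1, 2, 0, 0],
              [0, 0, 1, 0],
              [1, 0, 1, 2]], dtype=float)

eigenvalues, eigenvectors = np.linalg.eig(M)
print(np.sort(eigenvalues.real))   # approximately [0, 1, 2, 3]
print(np.allclose(M, M.T))         # True only if M is symmetric

The first print should give values close to 0, 1, 2, and 3, matching the eigenvalues above.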
 
  • #3
Now I understand, thank you! :)
Just one more question: what if the kernel were multi-dimensional? Would there still exist an eigenvector with eigenvalue 0, or should I assume something else?
 

FAQ: Linear algebra: eigenvalues, kernel

1. What are eigenvalues and eigenvectors in linear algebra?

In linear algebra, eigenvalues and eigenvectors describe how a linear transformation acts on certain special vectors. An eigenvector is a nonzero vector whose direction is unchanged (or simply reversed) by the transformation, and the corresponding eigenvalue is the scalar factor by which that vector is stretched or shrunk.
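For example, for
$$A=\begin{pmatrix}2&0\\0&3\end{pmatrix},\qquad A\begin{pmatrix}1\\0\end{pmatrix}=2\begin{pmatrix}1\\0\end{pmatrix},\qquad A\begin{pmatrix}0\\1\end{pmatrix}=3\begin{pmatrix}0\\1\end{pmatrix},$$
the eigenvalues are 2 and 3, with eigenvectors ##(1,0)## and ##(0,1)##.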

2. How are eigenvalues and eigenvectors used in real-life applications?

Eigenvalues and eigenvectors are used in various fields such as physics, engineering, and computer science. They are used to analyze and solve systems of differential equations, determine stability in dynamic systems, and compress data in machine learning algorithms.

3. What is the kernel of a matrix in linear algebra?

The kernel of a matrix is the set of all vectors that are mapped to the zero vector by the matrix. In other words, it is the set of all solutions to the homogeneous equation Ax = 0, where A is the matrix. The kernel is also known as the null space of a matrix.
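For example,
$$A=\begin{pmatrix}1&2\\2&4\end{pmatrix}:\qquad Ax=0\iff x_1+2x_2=0,\qquad\text{so }\ker A=\operatorname{span}\{(-2,1)\}.$$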

4. How is the kernel related to eigenvalues?

The kernel of a matrix is exactly the eigenspace for the eigenvalue 0: a vector v is in the kernel precisely when Av = 0 = 0·v. So a matrix has a nontrivial kernel if and only if 0 is one of its eigenvalues, and the dimension of the kernel equals the geometric multiplicity of the eigenvalue 0 (which can be smaller than its algebraic multiplicity).
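For example, the matrix
$$A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$$
has characteristic polynomial ##T^2##, so the eigenvalue 0 has algebraic multiplicity 2, but its kernel ##\ker A=\operatorname{span}\{(1,0)\}## is only one-dimensional.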

5. Can a matrix have more than one eigenvalue and eigenvector?

Yes, a matrix can have multiple eigenvalues and corresponding eigenvectors. In fact, an n × n matrix has at most n distinct eigenvalues and at most n linearly independent eigenvectors.
