Homework Statement
When generating a matrix from eigenvectors, does it matter in which order
the columns are placed?
squaremeplz said: Sorry, I am trying to diagonalize the matrix h.
I used MATLAB to check my results:
h =
2 1 0
1 2 1
0 0 2
>> [e,r] = eig(h)
e =
0.7071 -0.7071 -0.7071
0.7071 0.7071 0
0 0 0.7071
r =
3 0 0
0 1 0
0 0 2
The eigenvalues I get are 1, 2, 3, and my eigenvector matrix is a bit different from e when I do it out by hand:
1 -1 -1
1 1 0
0 0 1
Yes, did you try it?

But since any c[1; 1; 0], where c is a nonzero scalar, is still an eigenvector, I am assuming my matrix is the same thing as e due to the ratios. However, you are saying that I can use the matrix r instead? I know the arithmetic; this is just a bit confusing. Thanks again.
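A quick check in MATLAB (using the h above; the name v is just for illustration) confirms that any nonzero scalar multiple of an eigenvector is still an eigenvector for the same eigenvalue:

>> v = [1; 1; 0]; % hand-computed eigenvector for lambda = 3
>> h*v % returns [3; 3; 0], which is 3*v
>> h*(5*v) % returns [15; 15; 0], which is 3*(5*v)

So normalizing or rescaling a column changes only its length, never which eigenvalue it belongs to.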
I think r might be my end result, but the way I am trying to get it is
e^(-1) * h * e = r
is this correct?
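Yes, that is the standard similarity transform. A minimal MATLAB sketch, putting the hand-computed (unnormalized) eigenvectors into a matrix called p here, confirms it and also answers the original question about column order:

>> p = [1 -1 -1; 1 1 0; 0 0 1]; % hand-computed eigenvectors as columns
>> inv(p)*h*p % returns diag(3, 1, 2), the same r MATLAB gave
>> q = p(:, [2 3 1]); % permute the columns...
>> inv(q)*h*q % ...and the diagonal permutes with them: diag(1, 2, 3)

So the column order matters only in that the i-th column of the eigenvector matrix pairs with the i-th diagonal entry of the result.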
Eigenvalues and eigenvectors are mathematical concepts used to describe the properties of a linear transformation. An eigenvector is a nonzero vector whose direction is unchanged by the transformation, and its eigenvalue is the scalar factor by which that eigenvector is scaled when the transformation is applied.
Eigenvalues and eigenvectors are important because they help us understand how a linear transformation affects a vector. They also have many practical applications in fields such as physics, engineering, and computer science.
To find eigenvalues and eigenvectors, you need to solve the characteristic equation of a matrix. This involves finding the values of lambda that satisfy the equation det(A-lambda*I) = 0, where A is the matrix and I is the identity matrix. The corresponding eigenvectors can then be found by solving the equation (A-lambda*I)x = 0.
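For the h in this thread, a short sketch of that computation in base MATLAB (poly returns the coefficients of the characteristic polynomial, and null solves the homogeneous system):

>> c = poly(h) % returns [1 -6 11 -6], i.e. lambda^3 - 6*lambda^2 + 11*lambda - 6
>> roots(c) % the eigenvalues 1, 2, 3 (in some order)
>> null(h - 3*eye(3)) % normalized eigenvector for lambda = 3: [0.7071; 0.7071; 0], up to sign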
Eigenvalues and eigenvectors are closely related: each eigenvalue has at least one corresponding eigenvector, and the eigenvector gives a direction along which the transformation acts as pure scaling by that eigenvalue. Eigenvectors belonging to distinct eigenvalues are always linearly independent, but they are guaranteed to be orthogonal only when the matrix is symmetric (more generally, normal).
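The h in this thread illustrates the caveat: it is not symmetric, and the eigenvectors MATLAB returned for it are not mutually orthogonal, whereas a symmetric matrix does give an orthonormal set:

>> e(:,1)'*e(:,3) % about -0.5, not 0, so not orthogonal
>> [v, d] = eig([2 1; 1 2]); % a symmetric matrix, by contrast...
>> v'*v % ...gives orthonormal eigenvectors: the 2x2 identity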
Eigenvalues and eigenvectors are used in data analysis to reduce the dimensionality of a dataset. This is done by finding the principal components, which are linear combinations of the original variables that explain the most variance in the data. These principal components are the eigenvectors of the covariance matrix of the data, and their corresponding eigenvalues represent the amount of variance explained by each component.
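A minimal sketch of this in MATLAB, with x as made-up two-dimensional data (all names here are illustrative, not from the thread):

>> x = randn(100, 2) * [2 1; 0 0.5]; % hypothetical correlated 2-D data
>> [v, d] = eig(cov(x)); % eigen-decomposition of the covariance matrix
>> [vars, idx] = sort(diag(d), 'descend'); % explained variance, largest first
>> pc = v(:, idx); % principal component directions
>> scores = (x - mean(x)) * pc; % project the centered data onto them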