By multiplying both sides by P.

In summary, the conversation discusses diagonalizing a symmetric matrix A to obtain a diagonal matrix D, where the principal axes of A are no longer the Cartesian axes but the basis made up of the eigenvectors of A. The diagonal entries of D are the eigenvalues of A in the new basis formed by the eigenvectors, and the columns of P are the eigenvectors making up that basis. The significance of DP^T = P^TA is discussed, with the understanding that the columns of P are the eigenvectors. The conversation also touches on the representation of linear transformations by matrices and the importance of orthonormal eigenvectors in this process.
  • #1
quietrain

Homework Statement



Let's say I have a symmetric matrix A.

I diagonalize it to P^-1AP = D.

Question 1)
Am I right to say that the principal axes of D are no longer Cartesian, as for matrix A, but are now the basis made up of the eigenvectors of A, which are the columns of P?

so if my diagonal D takes the form of say

(1,0,0)
(0,2,0)
(0,0,3)

Question 2)
The 1, 2, 3 are actually 1, 2, 3 units in a new basis formed by my respective eigenvectors?

i.e,

(1,0,0)(eigenvector of A corresponding to eigenvalue 1)
(0,2,0)(eigenvector of A corresponding to eigenvalue 2)
(0,0,3)(eigenvector of A corresponding to eigenvalue 3)

so that it is a (3×3) × (3×1) = (3×1) product

in a same sense as in a cartesian system

(1,0,0)(x)
(0,2,0)(y)
(0,0,3)(z)

so that I get x i + 2y j + 3z k, right?

Question 3
if now my (eigenvector of A corresponding to eigenvalue 1) is given by (x,y,z)

so that
(1,0,0)(x1,y1,z1)
(0,2,0)(x2,y2,z2)
(0,0,3)(x3,y3,z3)

so i get a 3x3 matrix?

where the resulting 3 rows are my 3 eigenvectors making up the new basis, except each has changed its magnitude due to the diagonal matrix. Is that all right?

So isn't this actually DP^T? So it's equal to P^TA, as per the very first point above? So aren't the rows of P^T my 3 principal axes? Does this step have any significance? I think there is, but I can't seem to see any. What does DP^T = P^TA tell me?

Question 4)

so if now i have
(1,2,3)(x1,y1,z1)
(4,5,6)(x2,y2,z2)
(7,8,9)(x3,y3,z3)

this is telling me that I have 3 vectors whose components are given by the matrix product above, right? The 3 vectors are the rows of the resulting matrix, right?

Thanks a lot!
 
  • #2
quietrain said:

Homework Statement



Let's say I have a symmetric matrix A.

I diagonalize it to P^-1AP = D.

Question 1)
Am I right to say that the principal axes of D are no longer Cartesian, as for matrix A, but are now the basis made up of the eigenvectors of A, which are the columns of P?
I don't know what you mean by "no longer Cartesian". The principal axes of A or D are in the direction of the eigenvectors of A. The linear transformation represented by A in a given basis is represented by D in the basis made up of eigenvectors of A.

so if my diagonal D takes the form of say

(1,0,0)
(0,2,0)
(0,0,3)

Question 2)
The 1, 2, 3 are actually 1, 2, 3 units in a new basis formed by my respective eigenvectors?

i.e,

(1,0,0)(eigenvector of A corresponding to eigenvalue 1)
(0,2,0)(eigenvector of A corresponding to eigenvalue 2)
(0,0,3)(eigenvector of A corresponding to eigenvalue 3)
Not exactly. The eigenvectors of A, in the original basis, are the columns of the matrix P above. In the new coordinate system, in which the matrix representing A is diagonal, the basis vectors are the unit eigenvectors and so are (1, 0, 0) for eigenvalue 1, (0, 1, 0) for eigenvalue 2, and (0, 0, 1) for eigenvalue 3. Of course, any nonzero scalar multiple of an eigenvector is also an eigenvector so, yes, (1, 0, 0), (0, 2, 0), and (0, 0, 3) are eigenvectors. But so are (-45, 0, 0), (0, pi, 0), etc.
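This is easy to check numerically. The following is a sketch using numpy (the matrix A here is made up for illustration; numpy.linalg.eigh is the symmetric-matrix eigensolver and returns the eigenvectors as the columns of P):

```python
import numpy as np

# A small symmetric matrix (made up for illustration).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# eigh is for symmetric matrices: eigenvalues in ascending order,
# eigenvectors returned as the COLUMNS of P.
eigvals, P = np.linalg.eigh(A)

# Each column of P satisfies A v = lambda v ...
for lam, v in zip(eigvals, P.T):
    assert np.allclose(A @ v, lam * v)
    # ... and so does any nonzero scalar multiple of it.
    assert np.allclose(A @ (-45 * v), lam * (-45 * v))
```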

so that it is a (3x3) x (3x1) = (3x1) matrix
Please don't use pronouns without antecedents! What does "it" refer to? Since your original matrix is 3 by 3, you have a linear transformation from a three-dimensional vector space to itself. All such linear transformations are represented by 3 by 3 matrices, and all vectors by 3-columns. That has nothing to do with eigenvectors.

in a same sense as in a cartesian system
Again, what do you mean by a "cartesian system"? A vector space with a given orthonormal basis?

(1,0,0)(x)
(0,2,0)(y)
(0,0,3)(z)

so that I get x i + 2y j + 3z k, right?
Yes, that is standard matrix multiplication. Again, it has nothing to do with eigenvectors.
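That multiplication can be sketched numerically (a numpy sketch, with the made-up values x = y = z = 1):

```python
import numpy as np

D = np.diag([1.0, 2.0, 3.0])
v = np.array([1.0, 1.0, 1.0])   # x = y = z = 1

# diag(1,2,3) @ (x,y,z) = (x, 2y, 3z); with x = y = z = 1
# this gives 1i + 2j + 3k.
print(D @ v)   # [1. 2. 3.]
```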

Question 3
if now my (eigenvector of A corresponding to eigenvalue 1) is given by (x,y,z)

so that
(1,0,0)(x1,y1,z1)
(0,2,0)(x2,y2,z2)
(0,0,3)(x3,y3,z3)

so i get a 3x3 matrix?
That makes no sense. Did you mean that the eigenvector corresponding to eigenvalue 1 is given by (x1, y1, z1), the eigenvector corresponding to eigenvalue 2 by (x2, y2, z2), etc.? As I said before, in any basis a vector is represented by a 3-column, because you are representing the linear operation as multiplication on the left by the matrix. If you had represented the linear operation as multiplication on the right, vectors would instead be represented by 3-rows, as in
[tex]\begin{bmatrix}x & y & z\end{bmatrix}\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix}= \begin{bmatrix}xa_{11}+ ya_{21}+ za_{31} & xa_{12}+ ya_{22}+ za_{32} & xa_{13}+ ya_{23}+ za_{33}\end{bmatrix}[/tex]
In either case, if you are representing linear transformations by matrices, a vector is never itself represented by a matrix.
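The two conventions can be compared numerically (a sketch; the matrix A below is arbitrary made-up data):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
v = np.array([1.0, 1.0, 1.0])

col = A @ v          # vector as a column: multiply on the left by A
row = v @ A          # vector as a row: multiply on the right by A

# The two conventions are related by a transpose: v A = (A^T v)^T.
assert np.allclose(row, A.T @ v)
```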

where the resulting 3 rows are my 3 eigenvectors making up the new basis, except each has changed its magnitude due to the diagonal matrix. Is that all right?

So isn't this actually DP^T? So it's equal to P^TA, as per the very first point above? So aren't the rows of P^T my 3 principal axes? Does this step have any significance? I think there is, but I can't seem to see any. What does DP^T = P^TA tell me?
Note that the rows of P^T are exactly the columns of P, so they are indeed the eigenvectors and hence the principal axes. The principal axes of a linear transformation are in the directions of its eigenvectors no matter what the basis vectors are. As for "DP^T = P^TA": from [itex]P^{-1}AP= D[/itex], multiplying on the left by P gives [itex]AP= PD[/itex], while multiplying on the right by [itex]P^{-1}[/itex] gives [itex]P^{-1}A= DP^{-1}[/itex]. Now, if you choose the eigenvectors to be orthonormal (which you can always do when A is symmetric), then [itex]P^{-1}= P^T[/itex], and the latter becomes [itex]P^TA= DP^T[/itex], which is exactly your identity. Row by row it is just the eigenvalue equation transposed: [itex]v_i^T A= \lambda_i v_i^T[/itex], where the [itex]v_i^T[/itex] are the rows of [itex]P^T[/itex].
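For a symmetric matrix these identities can be checked numerically. A sketch (the matrix A is made up; numpy.linalg.eigh returns orthonormal eigenvectors, so P^-1 = P^T here):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

eigvals, P = np.linalg.eigh(A)   # columns of P: orthonormal eigenvectors
D = np.diag(eigvals)

assert np.allclose(P.T @ P, np.eye(3))   # P^{-1} = P^T (P is orthogonal)
assert np.allclose(P.T @ A @ P, D)       # P^{-1} A P = D
assert np.allclose(A @ P, P @ D)         # AP = PD
```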

Question 4)

so if now i have
(1,2,3)(x1,y1,z1)
(4,5,6)(x2,y2,z2)
(7,8,9)(x3,y3,z3)

this is telling me that I have 3 vectors whose components are given by the matrix product above, right? The 3 vectors are the rows of the resulting matrix, right?


thanks a lot!
 
  • #3
HallsofIvy said:
Again, what do you mean by a "cartesian system"? A vector space with a given orthonormal basis?

I meant a Cartesian coordinate system spanned by the x, y, z unit basis vectors? Or is it supposed to be the i, j, k unit basis vectors?
That makes no sense. Did you mean that the eigenvector corresponding to eigenvalue 1 is given by (x1, y1, z1), the eigenvector corresponding to eigenvalue 2 by (x2, y2, z2), etc.? As I said before, in any basis a vector is represented by a 3-column, because you are representing the linear operation as multiplication on the left by the matrix. If you had represented the linear operation as multiplication on the right, vectors would instead be represented by 3-rows, as in
[tex]\begin{bmatrix}x & y & z\end{bmatrix}\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix}= \begin{bmatrix}xa_{11}+ ya_{21}+ za_{31} & xa_{12}+ ya_{22}+ za_{32} & xa_{13}+ ya_{23}+ za_{33}\end{bmatrix}[/tex]
In either case, if you are representing linear transformations by matrices, a vector is never itself represented by a matrix.

Yes, I meant that the eigenvector corresponding to eigenvalue 1 is given by (x1, y1, z1), the eigenvector corresponding to eigenvalue 2 by (x2, y2, z2), ...

I was trying to draw the link between a Cartesian coordinate system (given by the x, y, z basis) versus the new coordinate system (given by the eigenvectors of A as basis).

OK, let's say matrix A has eigenvectors

V1 =
(1)
(2)
(3)

V2 =
(4)
(5)
(6)

V3 =
(7)
(8)
(9)

So, if after diagonalizing A I find that the matrix D is, say,

(1,0,0)
(0,2,0)
(0,0,3)

so D_11, which is 1, is not x = 1 in the Cartesian coordinate system, right? But rather 1 unit in the direction of the corresponding eigenvector V1 above, right?

What I am trying to do: since

Ax = D_11 x, where x is an eigenvector of A and the diagonal entries of D are the eigenvalues of A,

if I represent the eigenvalues of A in matrix form
(1,0,0)
(0,2,0)
(0,0,3),

I am trying to get Ax = 1x, Ax = 2x, Ax = 3x as my eigenvalue equations?

So what will the eigenvectors look like, if I follow the matrix multiplication rule, such that I get back Ax = D_11 x, Ax = D_22 x and Ax = D_33 x?

Is it just

AP = DP, where P has the 3 eigenvectors of A as its columns? Does it work like this?

Or is this idea totally wrong, because I am representing a vector as a matrix?
Maybe I should ask this:

If I have a vector (1,1,1), how do I decompose it into the new basis specified by

V1 =
(1)
(2)
(3)

V2 =
(4)
(5)
(6)

V3 =
(7)
(8)
(9)
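One way to answer this: if the basis vectors are the columns of a matrix P, the coordinates c of a vector v in that basis solve Pc = v. Note, though, that (1,2,3), (4,5,6), (7,8,9) are linearly dependent ((1,2,3) − 2·(4,5,6) + (7,8,9) = 0), so they do not actually form a basis; the sketch below therefore uses a different, invertible P (made up for illustration):

```python
import numpy as np

# Basis vectors as the COLUMNS of P. Chosen invertible: the vectors
# (1,2,3), (4,5,6), (7,8,9) above are linearly dependent and would fail.
P = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([1.0, 1.0, 1.0])

# Coordinates of v in the new basis: solve P c = v.
c = np.linalg.solve(P, v)

# Check: v is recovered as c1*V1 + c2*V2 + c3*V3.
assert np.allclose(P @ c, v)
print(c)   # [0. 0. 1.]
```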
Note that the rows of P^T are exactly the columns of P, so they are indeed the eigenvectors and hence the principal axes. The principal axes of a linear transformation are in the directions of its eigenvectors no matter what the basis vectors are. As for "DP^T = P^TA": from [itex]P^{-1}AP= D[/itex], multiplying on the left by P gives [itex]AP= PD[/itex], while multiplying on the right by [itex]P^{-1}[/itex] gives [itex]P^{-1}A= DP^{-1}[/itex]. Now, if you choose the eigenvectors to be orthonormal (which you can always do when A is symmetric), then [itex]P^{-1}= P^T[/itex], and the latter becomes [itex]P^TA= DP^T[/itex], which is exactly your identity. Row by row it is just the eigenvalue equation transposed: [itex]v_i^T A= \lambda_i v_i^T[/itex], where the [itex]v_i^T[/itex] are the rows of [itex]P^T[/itex].
[/QUOTE]

If P^-1AP = D,

then multiplying by P^-1 on the right of both sides:

P^-1APP^-1 = DP^-1

then P^-1A = DP^-1

which, for orthonormal eigenvectors, is

P^TA = DP^T

So DP^T = P^TA does hold when the eigenvectors are orthonormal, right?
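This derivation can be checked numerically row by row (a sketch with a made-up symmetric A; numpy.linalg.eigh returns orthonormal eigenvectors, so P^-1 = P^T):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
eigvals, P = np.linalg.eigh(A)   # columns of P: orthonormal eigenvectors
D = np.diag(eigvals)

# P^{-1} A = D P^{-1}, with P^{-1} = P^T for orthonormal eigenvectors:
assert np.allclose(P.T @ A, D @ P.T)

# Row by row this is the transposed eigenvalue equation v^T A = lambda v^T:
for lam, v in zip(eigvals, P.T):
    assert np.allclose(v @ A, lam * v)
```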
 

FAQ: By multiplying both sides by P.

What is an eigenvector?

An eigenvector of a matrix is a nonzero vector that, when multiplied by the matrix, yields a scalar multiple of itself. In other words, the direction of the eigenvector does not change when it is multiplied by the matrix.

What is a change of basis?

A change of basis is the process of representing a vector or matrix in terms of a different set of basis vectors. This can be useful in solving certain problems or simplifying calculations.

How are eigenvectors related to change of basis?

Eigenvectors can be used to define a new set of basis vectors for a given matrix. This new basis, known as the eigenvector basis, can make certain calculations and transformations easier to understand and perform.

Why is the eigenvector change of basis important?

The eigenvector change of basis is important because it allows for the simplification of certain calculations and transformations, making them easier to understand and perform. It also provides insight into the behavior of a matrix and its relationship to its eigenvectors.

How is the eigenvector change of basis used in real-world applications?

The eigenvector change of basis is used in a variety of real-world applications, such as image and signal processing, machine learning, and quantum mechanics. It can also be used to understand the behavior of systems in physics and engineering.
