nomadreid
Gold Member
- TL;DR Summary
- I am not sure whether, for a finite Hermitian matrix M, a spectral decomposition M=PDP^-1 guarantees that (a) P can be required to be unitary, and (b) D can be required to be a classic diagonal matrix, i.e., one with all off-diagonal entries zero.
I have a proof in front of me that shows that for a normal matrix M, the spectral decomposition exists with
M = PDP^-1
where P is an invertible matrix and D is a matrix that can be written as a sum, running over the dimension of the matrix, of the eigenvalues λ_i times the outer products of the corresponding vectors v_i of an orthonormal basis, i.e.,
D = Σ_i λ_i |v_i><v_i|
What the theorem does not say is whether, for a Hermitian matrix M (Hermitian matrices are normal), one can require that P be unitary. I seem to recall reading that one can, but I cannot find where I read that, and I do not know how to prove it myself.
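As a numerical sanity check (my own sketch, not part of the theorem), numpy's np.linalg.eigh is designed for Hermitian matrices and returns orthonormal eigenvectors as the columns of P, so the P it produces comes out unitary, with P^-1 = P† (the conjugate transpose); the specific matrix below is just an arbitrary example:

```python
import numpy as np

# A small Hermitian matrix (arbitrary example; M equals its conjugate transpose).
M = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

# eigh is specifically for Hermitian matrices: it returns real eigenvalues
# and an orthonormal set of eigenvectors as the columns of P.
eigvals, P = np.linalg.eigh(M)

# P is unitary: P† P = I, hence P^-1 = P†.
print(np.allclose(P.conj().T @ P, np.eye(2)))   # True

# D is a classic diagonal matrix of the eigenvalues.
D = np.diag(eigvals)

# The decomposition M = P D P^-1 holds with P^-1 = P†.
print(np.allclose(P @ D @ P.conj().T, M))       # True
```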
Also, the classic "diagonal matrix" is one with all the off-diagonal entries zero, but in this theorem "diagonal" just means having the form above (the outer-product sum), which need not have all zero off-diagonal entries when written out in an arbitrary basis. So I do not know whether the theorem would remain valid if one required D to be a classic diagonal matrix, as I do not know how to get from the outer-product sum above to a classic diagonal matrix.
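For reference, here is the step that I think connects the two notions (a sketch of the standard computation, assuming only that the v_i are orthonormal): computing the matrix entries of the outer-product sum in the basis {v_i} itself, orthonormality kills every off-diagonal entry.

```latex
% Matrix entries of D = \sum_k \lambda_k |v_k><v_k| taken in the orthonormal
% basis {v_i}: orthonormality <v_i|v_k> = \delta_{ik} leaves only diagonal terms.
\[
  (D)_{ij}
  = \langle v_i | \Big( \sum_k \lambda_k |v_k\rangle\langle v_k| \Big) | v_j \rangle
  = \sum_k \lambda_k \, \langle v_i | v_k \rangle \langle v_k | v_j \rangle
  = \sum_k \lambda_k \, \delta_{ik} \, \delta_{kj}
  = \lambda_i \, \delta_{ij},
\]
\[
  \text{so in that basis } D = \mathrm{diag}(\lambda_1, \dots, \lambda_n).
\]
```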
[On less solid ground: my intuition (always a bad guide) suggests that one could rotate the orthonormal basis to the standard basis (1,0,0,...), (0,1,0,...), (0,0,1,...), ..., but that seems too reliant on a spatial intuition of perpendicularity. Nagging at me is also the idea that there are Hermitian operators that cannot agree on a common basis, so it seems unlikely that they could all be reduced to the same basis even when sandwiched between P and P^-1. My intuition on this point is fishing around without any rigor, so anything shooting down this lead balloon is also welcome.]
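On that last worry, a small numerical illustration (my own example, not from the theorem): the Pauli matrices σ_z and σ_x do not commute, so they have no common eigenbasis, yet each one separately is diagonalized by its own unitary P. The theorem only ever asks for one P per matrix, not a single P shared by all matrices, so this is not a contradiction.

```python
import numpy as np

# Two Hermitian matrices with no common eigenbasis (they do not commute).
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

print(np.allclose(sigma_z @ sigma_x, sigma_x @ sigma_z))  # False: no shared basis

# Yet each matrix has its own unitary P that diagonalizes it.
for M in (sigma_z, sigma_x):
    eigvals, P = np.linalg.eigh(M)
    D = np.diag(eigvals)
    print(np.allclose(P @ D @ P.conj().T, M))  # True for each matrix separately
```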
Thanks for any help.