Hermitian Operators and Basis Independence: Exploring the Relationship

  • Thread starter homology
  • Start date
  • Tags
    Hermitian
In summary, this conversation discusses whether the hermiticity of an operator depends on the basis used to represent it as a matrix. It is noted that an operator can be self-adjoint even though its matrix is not symmetric: the matrix of a self-adjoint operator equals its conjugate transpose only when the basis is orthonormal, and a change of basis preserves this property only when the transformation is unitary.
  • #1
homology
Let's say you're wandering around P(oo) (which I'll use to represent the space of polynomials of any degree on the interval [-1,1]) and you decide to calculate the matrix X representing the position operator x. Say you do this in the basis 1, t, t^2, ..., t^n, ...: you'll find that the matrix X is not hermitian, even though x is hermitian on P(oo). Then you decide to calculate the matrix again, but now you do it in the orthonormal basis generated from the above basis via the Gram-Schmidt method (i.e. the properly normalized Legendre polynomials). Now you find that X is hermitian. What gives? Should the hermiticity of an operator be independent of its representation in a particular basis? (maybe not and this is super obvious, feel free in that case to slap me).
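For concreteness, here is a rough numerical sketch of what I mean (the truncation of P(oo) to degree < N and the use of numpy's Legendre routines are my own choices; for the monomial basis I take the matrix columns from the action of x on the basis vectors, and for the normalized Legendre basis I use the matrix elements (f_i, x f_j)):

```python
import numpy as np
from numpy.polynomial import legendre as leg

N = 6   # truncate P(oo) to polynomials of degree < N

# Monomial basis 1, t, t^2, ...: x * t^k = t^(k+1), so column k of X has a
# single 1 in row k+1 (the image of the last basis vector is truncated away).
X_mono = np.zeros((N, N))
for k in range(N - 1):
    X_mono[k + 1, k] = 1.0
print("monomial basis:    X = X^T ?", np.allclose(X_mono, X_mono.T))   # False

# Orthonormal basis: normalized Legendre polynomials Ptilde_n = sqrt((2n+1)/2) P_n,
# with matrix elements X_ij = integral_{-1}^{1} Ptilde_i(t) * t * Ptilde_j(t) dt.
def ptilde(n):
    c = np.zeros(n + 1)
    c[n] = np.sqrt((2 * n + 1) / 2)
    return c                       # Legendre-series coefficients of Ptilde_n

X_leg = np.zeros((N, N))
t = np.array([0.0, 1.0])           # the polynomial t, written as a Legendre series
for i in range(N):
    for j in range(N):
        prod = leg.legmul(ptilde(i), leg.legmul(t, ptilde(j)))
        anti = leg.legint(prod)    # antiderivative, again as a Legendre series
        X_leg[i, j] = leg.legval(1.0, anti) - leg.legval(-1.0, anti)
print("orthonormal basis: X = X^T ?", np.allclose(X_leg, X_leg.T))     # True
```

With this truncation, the monomial-basis matrix is just a shift matrix (plainly nonsymmetric), while the normalized-Legendre matrix comes out symmetric tridiagonal.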

Kevin
 
  • #2
homology said:
(maybe not and this is super obvious, feel free in that case to slap me).

SLAP SLAP SLAP :-)

The hermiticity of a matrix does not depend on the basis *as long as the basis is orthonormal*, of course, but your previous basis 1, t, t^2, ... wasn't...

cheers,
Patrick.
 
  • #3
vanesch said:
SLAP SLAP SLAP :-)

The hermiticity of a matrix does not depend on the basis *as long as the basis is orthonormal*, of course, but your previous basis 1, t, t^2, ... wasn't...

cheers,
Patrick.

Please slap me some more :smile: because I didn't know the above. Is it the same with properties like unitarity, symmetry, etc.?

Kevin
 
  • #4
Wait a second?!

A linear operator A is symmetric iff (g,Af) = (Ag,f) for all f,g in Domain(A).

A linear operator A is self-adjoint iff: (i) A is symmetric, (ii) the adjoint A^† exists, and (iii) Domain(A) = Domain(A^†).

(Your uses of the term "Hermitian" coincide with my use of "self-adjoint" in the above [note: the definition I have given is incomplete because the term "adjoint" in (ii) has not been defined].)

Now, let A be self-adjoint, and let f_i be an arbitrary basis. Define the "matrix" of A in this basis by

A_ij = (f_i, A f_j) .

Then,

A_ij = (f_i, A f_j) = (A f_i, f_j) = (f_j, A f_i)* = A_ji* .

So, why does the basis have to be orthonormal?

************************************
* SLAP SLAP SLAP :-) ... on me
*
* These "matrix" elements only make sense for an orthonormal basis!
*************************************

Homology, how are you defining the "matrix" of an operator in the t^n-basis?

*************************************
* Oh, I see ...
*
* since f(x) = Sigma_n { f_n x^n }, define the
* "vectors" as the "infinituples" {f_n} and the "matrices"
* as the maps A_mn acting on the "infinituple" components
* according to
*
* Sigma_n A_mn f_n .
*
* ... In any case, the definition of "self-adjoint" which I gave above
* is the one that is meant in QM and it is quite independent of
* any choice of basis.
*
* Using a definition like this one down here is equivalent to
* changing the "inner product" associated with the Hilbert space
* in the definition I gave above.
*************************************
 
  • #5
to Eye_in_the_Sky

For an inner product (v,w) the matrix form is Transpose(v)*A*w where A is the matrix of the bilinear form defining the inner product. So if we have an operator M such that (Mv,w)=(v,Mw), then the matrix form of the left side is
Transpose(M*v)*A*w = Transpose(v)*Transpose(M)*A*w, and this must be equal to Transpose(v)*A*(M*w), so in order for the matrix version of the operator M to be self-adjoint we need Transpose(M)*A = A*M. Now it may be the case that this implies symmetry of M, but it doesn't seem so.

Some options that occur to me now are: we could try to find a basis in which both A and M are diagonal, and then they would both be symmetric and the above requirement for self-adjointness would be met. Or, if the matrix A can be reduced to the identity by an orthogonal change of basis, then the transformed version of M would be symmetric.

However, I think it is clear from the above that a matrix can be self-adjoint but not symmetric; it really depends on the inner product.

Is this clear to you? Also, I'm not sure what you mean by
Homology, how are you defining the "matrix" of an operator in the t^n-basis?
Do you mean how to arrive at it? Well, one way is to see what the effect of x is on the basis vectors 1, t, t^2, etc. This will define the rows of X. Besides that, it's the result of an exercise in Griffiths's QM book in which he says: "If this is a Hermitian operator (and it is), how come the matrix is not equal to its transpose conjugate?"

Try it: you'll find that the matrix X for the position operator in the basis 1, t, t^2, ... is not hermitian/self-adjoint; however, it is in the orthonormal basis generated from this basis.

Kevin
 
  • #6
First of all, in finite-dimensional vector spaces, hermitian, self-adjoint and symmetrical (conjugate-symmetrical) are the same.
Second, you can only write that the matrix element of an operator A is
a_i,j = (e_i, Ae_j) if {e_i} form an orthonormal set of course, such that
(e_i, e_j) = delta_{i,j}.

Finally, if you transform a matrix A into another basis through a basis transformation S, you have B = S A S^(-1).

The adjoint of B, B' = S^(-1)' A' S'.

If we assume the matrix A to be self-adjoint, then A = A', but in order for B to be equal to B', we need that S^(-1) = S', which is only true for a unitary matrix S; meaning that the new basis is orthonormal.
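As a quick numerical illustration of this (just a sketch with numpy; the random self-adjoint matrix and the two changes of basis are my own choices for demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A self-adjoint matrix: A = A^dagger
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = A + A.conj().T

# Unitary change of basis: Q from a QR decomposition is unitary, so S^(-1) = S'
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
B_unitary = Q @ A @ np.linalg.inv(Q)

# Generic (non-unitary) invertible change of basis
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B_general = S @ A @ np.linalg.inv(S)

print(np.allclose(B_unitary, B_unitary.conj().T))   # True: new basis still orthonormal
print(np.allclose(B_general, B_general.conj().T))   # False in general
```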

cheers,
Patrick.
 
  • #7
vanesch said:
First of all, in finite-dimensional vector spaces, hermitian, self-adjoint and symmetrical (conjugate-symmetrical) are the same.

Again, this is not true, as I have already pointed out. Here is an example. Let's use the following syntax for matrices: B = {{1-1,1-2},{2-1,2-2}}, where the first number indicates row and the second indicates column, so the entry 2-1 in B is in the second row, first column.

Okay, let A={{4,1},{1,1}} define an inner product on R^2. A is positive definite and symmetric so it does the job. What this means is that for two vectors v,w we can evaluate (v,w)=Transpose(v)*A*w (keeping in mind here that this inner product is defined by A). Now let M={{1,2},{10,3}}. I claim that this nonsymmetric matrix is self-adjoint.
(Mv,w) = Transpose(Mv)*A*w = Transpose(v)*Transpose(M)*A*w.

Now, Transpose(M)*A = {{14,11},{11,5}} = A*M, so we can conclude that

Transpose(v)*Transpose(M)*A*w = Transpose(v)*A*M*w = (v,Mw).
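For anyone who wants to check the numbers, here is a small numpy verification (a sketch; the random vectors are only a spot check of (Mv,w) = (v,Mw)):

```python
import numpy as np

A = np.array([[4., 1.], [1., 1.]])    # defines the inner product (v,w) = Transpose(v)*A*w
M = np.array([[1., 2.], [10., 3.]])   # nonsymmetric candidate

print(M.T @ A)    # [[14. 11.] [11.  5.]]
print(A @ M)      # [[14. 11.] [11.  5.]]  -> Transpose(M)*A == A*M

# spot-check (Mv, w) == (v, Mw) for a few random vectors
rng = np.random.default_rng(2)
for _ in range(3):
    v, w = rng.standard_normal(2), rng.standard_normal(2)
    print(np.isclose((M @ v) @ A @ w, v @ A @ (M @ w)))   # True
```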

vanesch said:
Second, you can only write that the matrix element of an operator A is
a_i,j = (e_i, Ae_j) if {e_i} form an orthonormal set of course, such that
(e_i, e_j) = delta_{i,j}.
I haven't used this anywhere; I'm not sure why you bring it up.

vanesch said:
Finally, if you transform a matrix A into another basis through a basis transformation S, you have B = S A S^(-1).

The adjoint of B, B' = S^(-1)' A' S'.

If we assume the matrix A to be self-adjoint, then A = A', but in order for B to be equal to B', we need that S^(-1) = S', which is only true for a unitary matrix S; meaning that the new basis is orthonormal.
This I certainly agree with. If a matrix is symmetric, then it remains so under unitary/orthogonal changes of basis, by just your argument. However, I remain fixed on the fact that self-adjoint only implies symmetric for orthonormal bases, whether finite-dimensional (as considered above in this post) or infinite-dimensional (as considered in my original post).

Thanks again for your feedback, these conversations really help flesh out ideas.

Kevin
 
  • #8
homology said:
Okay, let A={{4,1},{1,1}} define an inner product on R^2. A is positive definite and symmetric so it does the job. What this means is that for two vectors v,w we can evaluate (v,w)=Transpose(v)*A*w (keeping in mind here that this inner product is defined by A). Now let M={{1,2},{10,3}}. I claim that this nonsymmetric matrix is self-adjoint.

Right. This is what I would call putting the cart before the horse :-)
To me, the inner product is the canonical one (given by the matrix A_ij = delta_ij), and orthogonal bases are defined with respect to this inner product.
What you do with your definition of the inner product with matrix A is to work (in my language) in a non-orthogonal basis, and you conclude that the matrix is not symmetrical. I reformulate this by saying that (assuming the canonical inner product) a hermitian operator in a non-orthogonal basis is not symmetrical. But that's saying the same thing!

Defining a general inner product just comes down to saying that you use the canonical inner product, but in a non-orthonormal basis. As any symmetric matrix can be diagonalised, the relationship is straightforward...

cheers,
patrick.
 
  • #9
vanesch said:
Right. This is what I would call putting the cart before the horse :-)
To me, the inner product is the canonical one (given by the matrix A_ij = delta_ij), and orthogonal bases are defined with respect to this inner product.

Ahh, this explains our misunderstanding, since my notion of an inner product is just something induced by a bilinear form with certain properties.

vanesch said:
I reformulate this by saying that (assuming the canonical inner product) a hermitian operator in a non-orthogonal basis is not symmetrical. But that's saying the same thing!

Ha ha, I think we understand one another, and I've figured out my original problem with your help! It was originally such a surprise to me that such things could happen, since I have been raised on inner products that use the identity. However, I now see the importance of an orthonormal basis in the context of these words, hermitian/self-adjoint, and the implications for their matrix representations.

Thanks again for your help!

Kevin
 
  • #10
For Homology: a small point of clarification

homology said:
For an inner product (v,w) the matrix form is Transpose(v)*A*w where A is the matrix of the bilinear form defining the inner product
First off, I'd like to write

Transpose(v)*A*w

as

t(v)*Bw .

Now, it looks to me like the "v" and "w" in "(v,w)" are "abstract" vectors, whereas the "v" and "w" in "t(v)*Bw" are the components of those vectors - relative to some basis - arranged as column matrices. Is that what you mean?

If so, then let's write "vectors" in bold font. Also, let's select a basis, and give it a name, say b_i. Then, using the "summation convention":

v = v_i b_i , w = w_i b_i .

In this notation, we then have for the components

t(v)* B w <--> v_i* B_ij w_j ,

and from (a "consistent" definition of inner product)

t(v)* B w = (v,w) , for all v,w ,

it follows that

B_ij = (b_i, b_j) .

This "connects" the component representation (relative to the basis b_i) to the abstract vector representation.

Next, let L be a linear operator in/on the vector space, and let L <--> L_ij be its matrix representation relative to the basis b_i; that is,

L <--> L_ij ,

where L_ij is defined by

L b_j = L_ij b_i .

You can check that this is the "correct" definition to give the correspondence

Lv <--> L_ij v_j .

Finally, if we define "symmetric" as

(v,Lw) = (Lv,w) , for all v,w in Domain(L) ,

then the equivalent condition in component form - relative to the basis b_i - is given by

t(L)* B = B L , on the domain of L ,

where, to repeat,

B_ij = (b_i, b_j) .
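As a numerical illustration of this condition (a sketch with numpy; the random symmetric operator and non-orthonormal basis are my own choices for demonstration): take an operator whose matrix H is symmetric in an orthonormal basis, re-express it in a non-orthonormal basis b_i (the columns of S), and check that its matrix L is no longer symmetric but still satisfies t(L)*B = BL with B_ij = (b_i, b_j).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A self-adjoint operator, written first in an orthonormal basis
# (so its matrix H is genuinely symmetric there).
H = rng.standard_normal((n, n))
H = H + H.T

# Columns of S form a non-orthonormal basis; B is its Gram matrix,
# B_ij = (b_i, b_j) with the ordinary dot product.
S = rng.standard_normal((n, n)) + 2 * np.eye(n)   # generically invertible
B = S.T @ S

# Matrix of the same operator relative to the new basis (coefficient map):
# H b_j = L_ij b_i  <=>  H S = S L  <=>  L = S^(-1) H S
L = np.linalg.solve(S, H @ S)

print("L symmetric?    ", np.allclose(L, L.T))         # generally False
print("t(L) B == B L ? ", np.allclose(L.T @ B, B @ L)) # True
```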
 
  • #11
Eye_in_the_Sky said:
and from (a "consistent" definition of inner product)

are you implying something?

Eye_in_the_Sky said:
Finally, if we define "symmetric" as

(v,Lw) = (Lv,w) , for all v,w in Domain(L) ,

Perhaps we've mixed up terms. As I have learned it, self-adjoint is the term for linear operators L such that (Lv,w) = (v,Lw) for all v,w in a real vector space V. Hermitian is simply the complex version. If we represent a self-adjoint operator in an orthonormal basis, then the resulting matrix representation will be symmetric, i.e. unchanged under transpose, while in the hermitian case the resulting matrix is unaltered under transpose and conjugation.

But in any event, we're not arguing, are we? You can freely calculate the matrix entries of the position operator in the nonorthonormal basis 1, t, t^2, t^3, ... and in the orthonormal basis derived from it, and you will see that in the nonorthonormal basis the X matrix is not hermitian, even though x (as an abstract operator) certainly is, and that in the orthonormal basis X is hermitian.

Kevin
 
  • #12
Homology:

My (admittedly poor) attempt at offering a small point of clarification has resulted in a larger point of "obscurification". What I wrote was originally intended to be a response to your reply to my earlier post in this thread. But by the time I got to writing it and posting it, I saw that you had already reached a resolution through your exchanges with Vanesch ... so I changed it around and posted it anyway, thinking it would serve as a sort of clarification and summary of the main conclusions.

All I really wanted to do was lay down the basic relationship between the "abstract" vector representation and the "component" representation in clear terms. Specifically, I wanted to say:

The condition

[1] (Lv,w) = (v,Lw) , for all v,w ,

is equivalent to

[2] t(L)*B = BL ,

where

[3] B_ij = (b_i, b_j) .

If a basis b_i is not orthonormal with respect to an inner product ( , ), then by [3] the matrix B is different from the identity. In that case, the hermiticity condition [1], which is equivalent to [2], will not in general coincide with the condition t(L)* = L.

... and that is all.
---------------------------------

Regarding the definition of "symmetric" which I stated, ... well yeah ... just forget about it (... until you see it again somewhere else soon).
 
  • #13
Eye_in_the_Sky said:
The condition

[1] (Lv,w) = (v,Lw) , for all v,w ,

is equivalent to

[2] t(L)*B = BL ,

You betcha! :biggrin: No worries, and again: thanks.

Kevin
 

FAQ: Hermitian Operators and Basis Independence: Exploring the Relationship

What is the difference between Hermitian and non-Hermitian matrices?

Hermitian matrices are square matrices that are equal to their own conjugate transpose. This means that each entry is the complex conjugate of the entry mirrored across the diagonal (a_ij = a_ji*). Non-Hermitian matrices do not have this property: their conjugate transpose differs from the original matrix.

Why are Hermitian matrices important in quantum mechanics?

In quantum mechanics, physical observables such as energy and momentum are represented by Hermitian operators. This is because Hermitian matrices have real eigenvalues, which correspond to the possible outcomes of measurements in quantum systems.
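For instance, a tiny numpy check (the example matrix is my own):

```python
import numpy as np

A = np.array([[2, 1 - 1j], [1 + 1j, 3]])   # Hermitian: A equals its conjugate transpose
print(np.allclose(A, A.conj().T))          # True
print(np.linalg.eigvalsh(A))               # real eigenvalues: [1. 4.]
```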

Can a matrix be both Hermitian and non-Hermitian?

No, a matrix cannot be both. It either has the property of being equal to its own conjugate transpose (Hermitian) or it does not.

How can you tell if a matrix is Hermitian or non-Hermitian?

To determine if a matrix is Hermitian, check whether it is equal to its own conjugate transpose. If it is not, the matrix is not Hermitian. Having only real eigenvalues is a necessary condition (a matrix with a non-real eigenvalue cannot be Hermitian), but it is not by itself sufficient.
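A direct numerical check (a minimal sketch using numpy; the helper name is just illustrative):

```python
import numpy as np

def is_hermitian(A, tol=1e-12):
    """Check whether a square matrix equals its own conjugate transpose."""
    A = np.asarray(A)
    return A.shape[0] == A.shape[1] and np.allclose(A, A.conj().T, atol=tol)

print(is_hermitian([[2, 1 - 1j], [1 + 1j, 3]]))   # True
print(is_hermitian([[0, 1], [2, 0]]))             # False
```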

Are there any real-world applications of Hermitian and non-Hermitian matrices?

Yes, Hermitian and non-Hermitian matrices have applications in fields such as physics, engineering, and computer science. For example, in signal processing Hermitian matrices arise as covariance and correlation matrices, while non-Hermitian matrices are used in the study of non-conservative (open) systems and in models of information flow in networks.
