Eigenvalue of the sum of two non-orthogonal (in general) ket-bras

In summary, the eigenvalues of the matrix ##M = \ket{\psi^{\perp}}\bra{\psi^{\perp}} + \ket{\varphi^{\perp}}\bra{\varphi^{\perp}}## are ##\lambda_{\pm}= 1\pm |\bra{\psi}\ket{\varphi}|##. This can be proven by assuming an eigenvector of the form ##|\psi \rangle + \beta |\varphi \rangle## and solving for ##\lambda##, or by expressing the matrix in the ##|\psi \rangle, |\varphi \rangle## basis and generating the characteristic equation. The perpendicular symbol in this context indicates that both ket vectors belong to ##\mathbb{C}^2##, so that each perpendicular ket is well defined up to a global phase.
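A minimal numerical sketch of this claim, assuming normalised kets in ##\mathbb{C}^2## and the explicit perpendicular construction ##(a, b) \mapsto (-\bar{b}, \bar{a})##:

```python
# Check that M = |psi_perp><psi_perp| + |phi_perp><phi_perp| has
# eigenvalues 1 +/- |<psi|phi>| for random normalised kets in C^2.
import numpy as np

rng = np.random.default_rng(0)

def random_ket():
    """A random normalised ket in C^2."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def perp(ket):
    """A ket orthogonal to (a, b): (-conj(b), conj(a)), unique up to phase."""
    a, b = ket
    return np.array([-np.conj(b), np.conj(a)])

psi, phi = random_ket(), random_ket()

# Sum of the outer products |v><v| of the two perpendicular kets.
M = (np.outer(perp(psi), np.conj(perp(psi)))
     + np.outer(perp(phi), np.conj(perp(phi))))

overlap = abs(np.vdot(psi, phi))                 # |<psi|phi>|
predicted = np.sort([1 - overlap, 1 + overlap])
computed = np.sort(np.linalg.eigvalsh(M))        # M is Hermitian
assert np.allclose(predicted, computed)
print(predicted, computed)
```

Running this with any seed prints two matching pairs of eigenvalues, in line with the formula above.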
  • #1
Jufa
TL;DR Summary
In a maths course the following statement is claimed to be self-evident, and I don't find it so.
We have a matrix ##M = \ket{\psi^{\perp}}\bra{\psi^{\perp}} + \ket{\varphi^{\perp}}\bra{\varphi^{\perp}}##

The claim is that the eigenvalues of such a matrix are ##\lambda_{\pm}= 1\pm |\bra{\psi}\ket{\varphi}|##

Can someone prove this claim? I have been told it is self-evident, but I have been struggling with it for a couple of days already.
 
  • #2
I'm not sure it's self-evident. You could look for an eigenvector of the form ##|\psi \rangle + \beta |\phi \rangle## and deduce an expression for ##\lambda - 1##.

PS That works!
 
  • #3
Another idea is to use the orthonormal basis generated by the standard Gram-Schmidt process, where you replace ##|\phi \rangle## by the normalised vector in the direction
$$|\phi \rangle - |\psi\rangle \langle \psi | \phi \rangle $$

PS or simply express the matrix in the ##|\psi \rangle, |\phi \rangle## basis and generate the characteristic equation. I think that's the simplest way.
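For concreteness, here is a sketch of that computation for the matrix ##\ket{\psi}\bra{\psi} + \ket{\varphi}\bra{\varphi}## (normalised kets assumed; the same algebra applies to the perpendicular kets). In the orthonormal basis ##\{|\psi \rangle, |\psi^\perp \rangle\}##, write ##|\varphi \rangle = c|\psi \rangle + s|\psi^\perp \rangle## with ##c = \langle \psi | \varphi \rangle## and ##|c|^2 + |s|^2 = 1##. Then
$$|\psi\rangle\langle\psi| + |\varphi\rangle\langle\varphi| = \begin{pmatrix} 1 + |c|^2 & c\bar{s} \\ \bar{c}s & |s|^2 \end{pmatrix}$$which has trace ##2## and determinant ##|s|^2 = 1 - |c|^2##. The characteristic equation ##\lambda^2 - 2\lambda + 1 - |c|^2 = 0## then gives ##\lambda_{\pm} = 1 \pm |c|##.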
 
  • #4
PeroK said:
Another idea is to use the orthonormal basis generated by the standard Gram-Schmidt process, where you replace ##|\phi \rangle## by the normalised vector in the direction
$$|\phi \rangle - |\psi\rangle \langle \psi | \phi \rangle $$

PS or simply express the matrix in the ##|\psi \rangle, |\phi \rangle## basis and generate the characteristic equation. I think that's the simplest way.
I guess when you say ##\ket{\phi}## you mean ##\ket{\varphi}##. I don't think it makes sense to compute the entries of the matrix in a non-orthonormal basis, and I don't think the associated characteristic equation has anything to do with the one you get when an orthonormal basis is considered.
 
  • #5
Jufa said:
I don't think the associated characteristic equation has anything to do with the one you get when an orthonormal basis is considered.
Are you sure?
 
  • #6
Here's a simple proof that eigenvalues are basis-invariant.

First note that the definition of an eigenvalue for a linear transformation does not refer to a basis:
$$Tv = \lambda v$$
For this to be well-defined, it must be the case that if ##T## is represented by the matrix ##M## in some basis and ##v## by the column vector ##x## in that basis, then
$$Mx = \lambda x$$
Now suppose we change basis using an invertible transformation matrix ##S##, so that
$$M' = S^{-1}MS \quad \text{and} \quad x' = S^{-1}x$$
where ##M'## and ##x'## are the matrix and vector expressed in the new basis. Then
$$M'x' = S^{-1}MSS^{-1}x = S^{-1}Mx = \lambda S^{-1}x = \lambda x'$$
which confirms that eigenvalues are basis-invariant.
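A quick numerical sketch of this invariance, using a random matrix and a random (almost surely invertible) change-of-basis matrix:

```python
# Eigenvalues of M and of M' = S^{-1} M S agree for any invertible S.
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
S = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

M_prime = np.linalg.inv(S) @ M @ S

# Sort both spectra consistently before comparing.
eig = np.sort_complex(np.linalg.eigvals(M))
eig_prime = np.sort_complex(np.linalg.eigvals(M_prime))
assert np.allclose(eig, eig_prime)
print("spectra match:", eig)
```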
 
  • #7
PeroK said:
Here's a simple proof that eigenvalues are basis-invariant.

First note that the definition of an eigenvalue for a linear transformation does not refer to a basis:
$$Tv = \lambda v$$
For this to be well-defined, it must be the case that if ##T## is represented by the matrix ##M## in some basis and ##v## by the column vector ##x## in that basis, then
$$Mx = \lambda x$$
Now suppose we change basis using an invertible transformation matrix ##S##, so that
$$M' = S^{-1}MS \quad \text{and} \quad x' = S^{-1}x$$
where ##M'## and ##x'## are the matrix and vector expressed in the new basis. Then
$$M'x' = S^{-1}MSS^{-1}x = S^{-1}Mx = \lambda S^{-1}x = \lambda x'$$
which confirms that eigenvalues are basis-invariant.
Oh yes. You are definitely right, I am sorry for my confusion. I will work on your idea then. Thank you very much.
 
  • #8
Jufa said:
Oh yes. You are definitely right, I am sorry for my confusion. I will work on your idea then. Thank you very much.
You perhaps ought to post one of your attempts. The eigenvalues can be calculated in a few lines starting with the general form of the eigenvector: ##\alpha |\psi \rangle + \beta |\varphi \rangle##.

First, we can see that ##\alpha, \beta \ne 0##, and therefore we may take ##\alpha = 1## (as any scalar multiple of an eigenvector is still an eigenvector).

Next we solve the equation:
$$(\ket{\psi}\bra{\psi} + \ket{\varphi}\bra{\varphi})(|\psi \rangle + \beta |\varphi \rangle) = \lambda(|\psi \rangle + \beta |\varphi \rangle)$$And the result follows after a few lines of algebra.
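For the record, those few lines can look like this (writing ##c = \bra{\psi}\ket{\varphi}## and assuming the kets are normalised, linearly independent, and ##c \neq 0##; if ##c = 0## the matrix already acts as the identity on the span, giving ##\lambda = 1##). Expanding the left-hand side gives
$$\ket{\psi}(1 + \beta c) + \ket{\varphi}(\bar{c} + \beta) = \lambda\ket{\psi} + \lambda\beta\ket{\varphi}$$Matching coefficients yields ##1 + \beta c = \lambda## and ##\bar{c} + \beta = \lambda \beta##. The first equation gives ##\beta = (\lambda - 1)/c##; substituting into the second gives ##|c|^2 = (\lambda - 1)^2##, hence ##\lambda_{\pm} = 1 \pm |\bra{\psi}\ket{\varphi}|##.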

That said, I'm not sure what the perpendicular symbol means in this context. It works out assuming the matrix is the sum of the outer products of the two vectors, which I assumed to be normalised.
 
  • #9
Perhaps I should have mentioned that both ket vectors belong to ##\mathbb{C}^2## and thus the perpendicular vectors are well defined up to a global phase. My attempt is the following:

In the basis ## \ket{\psi}, \ket{\varphi}## the matrix looks like a diagonal one, namely:

##M = \mathrm{Diag}\Big( |\bra{\psi}\ket{\varphi^\perp}|^2,\ |\bra{\varphi}\ket{\psi^\perp}|^2 \Big)##

Therefore the eigenvalues are nothing but these diagonal entries. This result does not seem to have anything to do with:

##\lambda_{\pm} = 1 \pm |\bra{\psi} \ket{\varphi}|##
 
  • #10
Jufa said:
Perhaps I should have mentioned that both ket vectors belong to ##\mathbb{C}^2## and thus the perpendicular vectors are well defined up to a global phase. My attempt is the following:

In the basis ## \ket{\psi}, \ket{\varphi}## the matrix looks like a diagonal one, namely:

##M = \mathrm{Diag}\Big( |\bra{\psi}\ket{\varphi^\perp}|^2,\ |\bra{\varphi}\ket{\psi^\perp}|^2 \Big)##

Therefore the eigenvalues are nothing but these diagonal entries. This result does not seem to have anything to do with:

##\lambda_{\pm} = 1 \pm |\bra{\psi} \ket{\varphi}|##
I must admit I don't understand what is meant by the perpendicular vectors in this context.

In any case, I don't see how you have diagonalised ##M##.
 
  • #11
PeroK said:
I must admit I don't understand what is meant by the perpendicular vectors in this context.

In any case, I don't see how you have diagonalised ##M##.
Just an orthogonal vector.
 
  • #12
Jufa said:
Just an orthogonal vector.
There is no such thing. "Orthogonal" is a property of two vectors.

Your fundamental problem here may be that you do not understand what the original statement means. You must be able to give me a clear and unambiguous definition of ##\ket{\varphi^\perp}##. A good test of whether you understand a problem is whether you can explain it to someone else.
 
  • #13
PeroK said:
There is no such thing. "Orthogonal" is a property of two vectors.

Your fundamental problem here may be that you do not understand what the original statement means. You must be able to give me a clear and unambiguous definition of ##\ket{\varphi^\perp}##. A good test of whether you understand a problem is whether you can explain it to someone else.
Yes: the orthogonality is between ##\ket{\psi}## and ##\ket{\psi^\perp}##. These are the two vectors involved. I reckon I am stating the problem in a clear way.
 
  • #14
Jufa said:
Yes: the orthogonality is between ##\ket{\psi}## and ##\ket{\psi^\perp}##. These are the two vectors involved. I reckon I am stating the problem in a clear way.
Does that mean that ##\ket{\psi^\perp}## is any vector orthogonal to ##\ket{\psi}##?
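For reference, one concrete construction in ##\mathbb{C}^2##, unique up to a global phase: if ##\ket{\psi} = (a, b)^T##, take ##\ket{\psi^\perp} = (-\bar{b}, \bar{a})^T##. With this choice ##\bra{\psi^\perp}\ket{\varphi^\perp} = \bra{\varphi}\ket{\psi}##, so the overlap modulus, and with it the spectrum ##1 \pm |\bra{\psi}\ket{\varphi}|##, is unchanged when passing from the kets to their perpendiculars.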
 

FAQ: Eigenvalue of the sum of two non-orthogonal (in general) ket-bras

What is an eigenvalue in the context of ket-bras?

An eigenvalue of an operator ##M## is a scalar ##\lambda## such that ##M\ket{v} = \lambda \ket{v}## for some non-zero ket ##\ket{v}##. A ket-bra ##\ket{\psi}\bra{\psi}## built from a normalised ket is a rank-one projector, with eigenvalues ##1## (eigenvector ##\ket{\psi}##) and ##0##. Eigenvalues are a crucial concept in quantum mechanics because they are the possible outcomes when the corresponding observable is measured.

How is the eigenvalue of the sum of two non-orthogonal ket-bras calculated?

In general it is not obtained by adding the eigenvalues of the individual ket-bras. Instead, one expresses the sum as a matrix in a basis and solves the characteristic equation, or assumes an eigenvector of the form ##\ket{\psi} + \beta \ket{\varphi}## and solves for ##\lambda##. For normalised kets, the eigenvalues of ##\ket{\psi}\bra{\psi} + \ket{\varphi}\bra{\varphi}## are ##\lambda_{\pm} = 1 \pm |\bra{\psi}\ket{\varphi}|##.

What is the significance of the eigenvalue of the sum of two non-orthogonal ket-bras?

Sums of ket-bras appear throughout quantum mechanics, for example as unnormalised density operators or as measurement operators. Here the eigenvalues encode the overlap of the two kets: the spectrum ##1 \pm |\bra{\psi}\ket{\varphi}|## ranges from ##\{0, 2\}## for identical kets to ##\{1, 1\}## for orthogonal kets.

Can the eigenvalue of the sum of two non-orthogonal ket-bras be negative?

No. Each ket-bra ##\ket{v}\bra{v}## is positive semidefinite, since ##\bra{x}\ket{v}\bra{v}\ket{x} = |\bra{v}\ket{x}|^2 \geq 0##, and a sum of positive semidefinite operators is again positive semidefinite. Consistently, ##\lambda_{\pm} = 1 \pm |\bra{\psi}\ket{\varphi}|## lies in ##[0, 2]##.

How does the orthogonality of ket-bras affect the eigenvalue of their sum?

Orthogonality does not break the formula; it simplifies it. If ##\bra{\psi}\ket{\varphi} = 0##, then ##\lambda_{+} = \lambda_{-} = 1##, so the sum acts as the identity on the subspace spanned by the two kets (in ##\mathbb{C}^2##, it is the identity matrix). The larger the overlap ##|\bra{\psi}\ket{\varphi}|##, the further the two eigenvalues spread from ##1##.
