Eigenvalues, eigenvectors and the expansion theorem

In summary, the expansion theorem states that any vector in a Hilbert space can be written as a linear combination of the vectors of a complete orthonormal basis.
  • #1
dyn
If I have an arbitrary ket, I know it can always be expressed as a linear combination of the basis kets. I now have an operator ##A## which has two eigenvalues, +1 and -1.
The corresponding eigenvectors are ##|v\rangle_+ = k|b\rangle + m|a\rangle## and ##|v\rangle_- = n|c\rangle##, where ##|a\rangle##, ##|b\rangle## and ##|c\rangle## are linear combinations of the basis vectors.
The arbitrary ket is expressed as ##|\psi\rangle = a|a\rangle + b|b\rangle + c|c\rangle##, where ##|a|^2## gives the probability of a measurement giving the eigenvalue corresponding to ##|a\rangle##. A question asks: what is the probability of measuring the eigenvalue +1? It gives the answer as ##|b|^2 + |a|^2##.
Finally to my question: how or why does the expansion theorem apply to this situation, as the eigenvector ##|v\rangle_+## only exists as a combination of ##|a\rangle## and ##|b\rangle##?
Hoping you can understand my question. Thanks
 
  • #2
What is the dimension of the vector space?

We know it is at least two, since there are two distinct eigenvalues, but it will be more than that if either of those eigenvalues has multiplicity greater than 1. If so, then the statement about 'corresponding eigenvectors' is inaccurate. An eigenvalue with multiplicity ##k## corresponds to a ##k##-dimensional eigenspace, for which we need ##k## linearly independent vectors to form a basis.

Alternatively, if the dimension is two then the two eigenvectors you list must form a basis, since they are orthogonal and hence linearly independent. Hence we will be able to express any vector/ket as a linear combination of those two vectors.
 
  • #3
##|v\rangle_+## is a perfectly good vector in its own right. The fact that it can be written as the sum of two other vectors doesn't make it some sort of second-class vector that only exists as that sum - any vector can be written as the sum of two other vectors.
 
  • #4
The Hilbert space is ##\mathbb{C}^3##, so it has 3 orthonormal basis vectors. ##|v\rangle_+## exists in a 2-D subspace of ##\mathbb{C}^3##
and ##|v\rangle_-## is in a 1-D subspace of ##\mathbb{C}^3##
 
  • #5
dyn said:
The Hilbert space is ##\mathbb{C}^3##, so it has 3 orthonormal basis vectors. ##|v\rangle_+## exists in a 2-D subspace of ##\mathbb{C}^3##
and ##|v\rangle_-## is in a 1-D subspace of ##\mathbb{C}^3##
Then a second eigenvector in the eigenspace of eigenvalue +1 is needed. Without it, we do not have a specification of an eigenbasis.

However, it's still not clear exactly what your question is. Can you express it more clearly?
 
  • #6
Thank you for your time. It's getting late here. I will try to rephrase my question more clearly tomorrow. I appreciate you trying to understand my question.
 
  • #7
dyn said:
The Hilbert space is ##\mathbb{C}^3##, so it has 3 orthonormal basis vectors.
That makes sense, but it does not necessarily follow that:
##|v\rangle_+## exists in a 2-D subspace of ##\mathbb{C}^3##
and ##|v\rangle_-## is in a 1-D subspace of ##\mathbb{C}^3##
That might be the case, and if it is then as @andrewkirk says, you're missing an eigenvector - the two that you have are not sufficient to span the three-dimensional Hilbert space.

However, there is another possibility: the two eigenvectors of A together span a two-dimensional subspace of the Hilbert space. Might that be the case for this problem? It would be consistent with the answer provided for the probability of measuring +1 for A.
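As a numerical sketch of why the given answer is consistent with a degenerate eigenvalue (the operator and the ket below are illustrative choices, not the original problem's data): take a 3×3 Hermitian ##A## whose +1 eigenvalue is doubly degenerate, expand an arbitrary ket in the eigenbasis, and sum ##|\langle v_i|\psi\rangle|^2## over the +1 eigenspace.

```python
import numpy as np

# Illustrative 3x3 Hermitian operator: eigenvalue +1 doubly degenerate,
# eigenvalue -1 simple, with |a>, |b>, |c> taken as the standard basis.
A = np.diag([1.0, 1.0, -1.0])

# Arbitrary normalized ket |psi> = a|a> + b|b> + c|c>
psi = np.array([0.6, 0.48, 0.64])     # |a|^2 + |b|^2 + |c|^2 = 1

# Eigendecomposition: columns of V are orthonormal eigenvectors
evals, V = np.linalg.eigh(A)

# Probability of measuring +1: sum |<v_i|psi>|^2 over the +1 eigenspace
plus_space = V[:, np.isclose(evals, 1.0)]
p_plus = np.sum(np.abs(plus_space.conj().T @ psi) ** 2)
print(p_plus)  # ≈ 0.5904 = |a|^2 + |b|^2, matching the quoted answer
```

The point is that the probability involves two coefficients precisely because the +1 eigenspace is two-dimensional.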
 
  • #8
Thanks for your replies. I think I have it sorted now. One of the eigenvalues was doubly degenerate and so was represented by two orthonormal basis vectors.

On this point: if an eigenvalue is doubly degenerate, i.e. has two eigenvectors ##|v_1\rangle## and ##|v_2\rangle##, and a measurement returns this eigenvalue, does that mean we have no way of knowing whether an arbitrary ket has collapsed to ##|v_1\rangle##, ##|v_2\rangle## or any linear combination of those two eigenvectors?

On a separate note, I have seen the notation ##|\psi_1 + \psi_2\rangle = |\psi_1\rangle + |\psi_2\rangle##. Is this standard notation? It doesn't seem right to me, as adding kets is not the same as adding wavefunctions.
 
  • #9
The usual collapse assumption is that, if the system has been prepared in the state ##\hat{\rho}## before the measurement, after the measurement, it's in the state
$$\hat{\rho}'=\frac{1}{Z} \sum_{i,j=1}^2 |v_i \rangle \langle v_i|\hat{\rho}|v_j \rangle \langle v_j|, \quad Z=\sum_{i=1}^2 \langle v_i |\hat{\rho}|v_i \rangle.$$
Note that this holds true only for ideal filter measurements a la von Neumann, and that the collapse hypothesis is at least questionable, but I don't want to start another long discussion on the issue. Here it's just about the math ;-)!
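A minimal numerical sketch of this update rule (the vectors ##|v_1\rangle##, ##|v_2\rangle## and the state below are illustrative choices): build the projector onto the measured eigenspace, apply it to a density matrix from both sides, and renormalize by ##Z##.

```python
import numpy as np

# Illustrative basis for the measured (doubly degenerate) eigenspace
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
P = np.outer(v1, v1) + np.outer(v2, v2)   # projector sum_i |v_i><v_i|

# Illustrative pure pre-measurement state rho = |psi><psi|
psi = np.array([0.6, 0.48, 0.64])
rho = np.outer(psi, psi.conj())

Z = np.trace(P @ rho).real                # probability of this outcome
rho_prime = P @ rho @ P / Z               # post-measurement state
print(np.trace(rho_prime).real)           # ≈ 1.0: rho' has unit trace again
```

Note that ##Z = \mathrm{Tr}(\hat P\hat\rho)## is exactly the Born probability of the outcome, which is why ##\hat\rho'## comes out correctly normalized.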
 
  • #10
vanhees71 said:
The usual collapse assumption is that, if the system has been prepared in the state ##\hat{\rho}## before the measurement, after the measurement, it's in the state
$$\hat{\rho}'=\frac{1}{Z} \sum_{i,j=1}^2 |v_i \rangle \langle v_i|\hat{\rho}|v_j \rangle \langle v_j|, \quad Z=\sum_{i=1}^2 \langle v_i |\hat{\rho}|v_i \rangle.$$
Note that this holds true only for ideal filter measurements a la von Neumann, and that the collapse hypothesis is at least questionable, but I don't want to start another long discussion on the issue. Here it's just about the math ;-)!

Thanks. Does this mathematical statement agree with the statement from my previous post?

Also, any thoughts on ##|\psi_1 + \psi_2\rangle = |\psi_1\rangle + |\psi_2\rangle##?
 
  • #11
dyn said:
Also, any thoughts on ##|\psi_1 + \psi_2\rangle = |\psi_1\rangle + |\psi_2\rangle##?
What you put inside a ket is just a label, so that expression is tautologically true; you could read it as a definition of ##|\psi_1 + \psi_2\rangle##.
 
  • #12
I think it's usual notation. Also note that kets are not wave functions. A lot of confusion can be avoided if one keeps the concepts straight from the very beginning. It's analogous to ordinary vectors in classical physics. E.g., a position vector is not a set of three numbers but a directed line connecting the origin of your reference frame with the point in question. This we write as ##\boldsymbol{x}##. Now, if you have chosen an arbitrary (for convenience, say, Cartesian) basis ##\boldsymbol{e}_j##, you can decompose any vector uniquely in terms of its components, ##\boldsymbol{x}=x^j \boldsymbol{e}_j##, and then it may be convenient to introduce a notation where
$$\vec{x}=\begin{pmatrix} x^1 \\ x^2 \\ x^3 \end{pmatrix} \in \mathbb{R}^3$$
is a column vector. Of course ##\boldsymbol{x}## and ##\vec{x}## are not the same but there's a one-to-one mapping from the vector space of "arrows" in Euclidean space to the vector space of triples of real numbers.

The same holds for the vectors of the Hilbert space. The kets in Dirac's ingenious notation live in an abstract Hilbert space. Then you construct generalized basis vectors ##|\vec{x} \rangle##, which are "eigenvectors" of the position operator. These are not Hilbert-space vectors but distributions, fulfilling the generalized "orthonormality condition"
$$\langle \vec{x}|\vec{x}' \rangle=\delta^{(3)}(\vec{x}-\vec{x}').$$
However, you can show that they provide a "decomposition of the unit operator" analogously to proper complete orthonormal sets,
$$\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} |\vec{x} \rangle \langle \vec{x}|=\hat{1}.$$
Now, given a proper state ket (normalized to 1 for convenience) you can define the wave function, which is nothing else than the formal "component" of the ket wrt. the generalized position eigenbasis,
$$\psi(\vec{x})=\langle \vec{x}|\psi \rangle.$$
The scalar product in this representation of the Hilbert space is also easily determined from the Dirac formalism by inserting the decomposition of the unit operator (very many calculations in QT consist in practice of a clever choice of such insertions of the unit operator ;-)),
$$\langle \psi_1|\psi_2 \rangle= \int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} \langle \psi_1|\vec{x} \rangle \langle \vec{x}|\psi_2 \rangle = \int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} \psi_1^*(\vec{x}) \psi_2(\vec{x}).$$
This is the realization of the abstract (separable) Hilbert space by the Hilbert space of square-integrable functions ##\mathrm{L}^2(\mathbb{R}^3,\mathbb{C})##. There's a one-to-one connection between the kets in the abstract Hilbert space and its realization as ##\mathrm{L}^2(\mathbb{R}^3,\mathbb{C})##. The Hilbert spaces are equivalent but not the same!
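The last scalar-product formula can be illustrated numerically by replacing the integral with a Riemann sum on a 1-D grid (the Gaussian wave functions below are illustrative choices):

```python
import numpy as np

# Discretized position "basis": the integral <psi1|psi2> = ∫ psi1* psi2 dx
# becomes a Riemann sum over a fine grid.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

# Two normalized unit-width Gaussians, the second shifted by 1
psi1 = np.exp(-x**2 / 2) / np.pi**0.25
psi2 = np.exp(-(x - 1)**2 / 2) / np.pi**0.25

overlap = np.sum(psi1.conj() * psi2) * dx
print(overlap)  # ≈ exp(-1/4) ≈ 0.7788, the exact analytic overlap
```

The sum converges rapidly to the analytic value because the integrand decays fast and the grid is fine; the wave function really is just the "component" of the ket along each grid point's ##|\vec{x}\rangle##.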
 
  • #13
vanhees71 said:
The usual collapse assumption is that, if the system has been prepared in the state ##\hat{\rho}## before the measurement, after the measurement, it's in the state
$$\hat{\rho}'=\frac{1}{Z} \sum_{i,j=1}^2 |v_i \rangle \langle v_i|\hat{\rho}|v_j \rangle \langle v_j|, \quad Z=\sum_{i=1}^2 \langle v_i |\hat{\rho}|v_i \rangle.$$
Note that this holds true only for ideal filter measurements a la von Neumann, and that the collapse hypothesis is at least questionable, but I don't want to start another long discussion on the issue. Here it's just about the math ;-)!

The ##\hat{\rho}'## referred to above looks like an operator to me, not a state? If a measurement returns a degenerate eigenvalue, does that mean the state collapses into a superposition of the eigenvectors of that degenerate eigenvalue, a superposition which we will never be able to determine?
 
  • #14
A quantum state is represented by the statistical operator. For pure states you can equivalently say they are represented by a ray in Hilbert space, but that's so inconvenient ;-). I prefer to use the statistical operator to refer to states, and it's completely general. Of course, for a pure state, if ##|\psi \rangle## is a representative of the state (ray) before the measurement, then after the measurement you update the state to the new pure state represented by
$$|\psi' \rangle=\frac{1}{\sqrt{Z}} \sum_i |v_i \rangle \langle v_i|\psi \rangle, \quad Z=\sum_i |\langle v_i|\psi \rangle|^2.$$
The statistical operators are projectors in this case
$$\hat{\rho}=|\psi \rangle \langle \psi|, \quad \hat{\rho}'=|\psi' \rangle \langle \psi'|.$$
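A sketch of the pure-state version of the update (again with illustrative vectors): project ##|\psi\rangle## onto the measured eigenspace and renormalize by ##\sqrt{Z}##.

```python
import numpy as np

# Illustrative orthonormal eigenvectors of the measured degenerate eigenvalue
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
psi = np.array([0.6, 0.48, 0.64])       # illustrative normalized pre-measurement ket

# |psi'> = (1/sqrt(Z)) sum_i |v_i><v_i|psi>
proj = v1 * (v1.conj() @ psi) + v2 * (v2.conj() @ psi)
Z = abs(v1.conj() @ psi) ** 2 + abs(v2.conj() @ psi) ** 2
psi_prime = proj / np.sqrt(Z)

# Consistency check: |psi'><psi'| equals the density-matrix rule P rho P / Z
rho_prime = np.outer(psi_prime, psi_prime.conj())
print(np.linalg.norm(psi_prime))        # ≈ 1.0: the collapsed state is normalized
```

This makes the earlier point concrete: the collapsed ket is, in general, a superposition within the degenerate eigenspace, with weights inherited from the original ##|\psi\rangle##.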
 

FAQ: Eigenvalues, eigenvectors and the expansion theorem

1. What are eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are mathematical concepts used to describe the behavior of linear transformations. Eigenvalues are the scalar values that represent how much a vector is stretched or compressed by a transformation, while eigenvectors are the corresponding vectors that remain in the same direction after the transformation.

2. How are eigenvalues and eigenvectors related?

Eigenvalues and eigenvectors are related through the eigendecomposition of a matrix. Eigendecomposition expresses a diagonalizable matrix in terms of its eigenvalues and eigenvectors, allowing for easier analysis and computation with the matrix.
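A minimal NumPy sketch of eigendecomposition (the matrix below is an arbitrary illustrative choice): diagonalize a symmetric matrix and rebuild it from its eigenvalues and orthonormal eigenvectors.

```python
import numpy as np

# Illustrative real symmetric matrix
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns eigenvalues in ascending order and orthonormal
# eigenvectors as the columns of V
evals, V = np.linalg.eigh(M)
print(evals)  # [1. 3.]

# Reconstruct M = V diag(evals) V^T from the decomposition
M_rebuilt = V @ np.diag(evals) @ V.T
print(np.allclose(M, M_rebuilt))  # True
```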

3. What is the expansion theorem?

The expansion theorem, closely related to the spectral theorem, states that the eigenvectors of a Hermitian (more generally, normal) operator form an orthonormal basis, so that any vector can be expanded as a linear combination of them. For matrices, this means any Hermitian matrix can be decomposed into its eigenvalues and orthonormal eigenvectors. This allows for the simplification of complex matrix operations and also provides insight into the behavior of the operator.

4. How are eigenvalues and eigenvectors used in real-world applications?

Eigenvalues and eigenvectors are used in a wide range of applications, including image and signal processing, data compression, and quantum mechanics. They are also used in machine learning algorithms, such as principal component analysis, to reduce the dimensionality of data and extract important features.

5. Can there be multiple eigenvalues and eigenvectors for a single matrix?

Yes, a matrix can have multiple eigenvalues and corresponding eigenvectors. However, an n×n matrix has at most n distinct eigenvalues. Over the complex numbers every square matrix has at least one eigenvalue; over the reals, some matrices (such as a 90° rotation) have no real eigenvalues at all.
