Proving Completeness Relations in Orthonormal Bases | Quantum Mechanics

  • #1
McLaren Rulez
Hi,

If we have an orthonormal basis {|x>}, how can we show that the completeness relation

[tex]\sum_x |x\rangle\langle x| = I[/tex]

holds?

I see this in Quantum Mechanics but I'm not sure how to prove it. Thank you.
 
  • #2
This is pretty difficult, so I will only tell you what the steps are. It would take much too long to write out the full proof. In all of these definitions and theorems, H is a Hilbert space. I will assume that H is such that there exists a countable basis. (This assumption is standard when we're dealing with single-particle quantum theories, but I've been told that it's too restrictive for quantum field theory). I'm using the convention that the inner product is linear in the second variable.


Definition: An orthonormal basis of H is an orthonormal set that's not a proper subset of any other orthonormal set.


Theorem: Suppose that K is a closed convex subset of H. For each x in H, there's a unique x0 in K that's at the minimum distance from x. (In other words, this x0 satisfies d(x,x0)=d(x,K)).


Theorem: Suppose that M is a closed linear subspace of H. For each x in H, the following conditions on x0 in M are equivalent:
(a) x0 is the unique vector at the minimum distance from x.
(b) x-x0 is orthogonal to M.


Definition: The map [itex]x\mapsto x_0[/itex] is called the orthogonal projection onto the closed linear subspace M. Orthogonal projections are also called projection operators.


Theorem: If E={e1,...,en} is an orthonormal set, and P is the projection operator for the linear subspace spanned by the members of E, then for all x in H,


[tex]Px=\sum_{k=1}^n\langle e_k,x\rangle e_k.[/tex]

(This is proved by showing that x minus the sum on the right is orthogonal to the subspace, and then appealing to the previous theorem).
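The projection theorem above can be checked numerically. This is a minimal sketch (the vectors e1, e2, x are my own illustrative choices, not from the theorem): Px is built from the formula Px = ∑k ⟨ek,x⟩ek, and x − Px indeed comes out orthogonal to the subspace.

```python
import numpy as np

# Illustrative example: project a vector in R^3 onto the plane
# spanned by two orthonormal vectors e1, e2.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
x = np.array([3.0, -2.0, 5.0])

# Px = sum_k <e_k, x> e_k
Px = np.dot(e1, x) * e1 + np.dot(e2, x) * e2

# x - Px is orthogonal to the subspace, as the theorem requires.
print(Px)                  # [ 3. -2.  0.]
print(np.dot(x - Px, e1))  # 0.0
print(np.dot(x - Px, e2))  # 0.0
```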


Theorem: If E={e1,e2,...} is an orthonormal set, then for all x in H,

[tex]\sum_{k=1}^\infty|\langle e_k,x\rangle|^2\leq\|x\|^2.[/tex]

(The inequality above is called Bessel's inequality).
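As a numerical sanity check of Bessel's inequality (the vector and the choice of orthonormal set are illustrative assumptions, not part of the proof): projecting onto a proper subset of an orthonormal basis can only capture part of the squared norm.

```python
import numpy as np

# Bessel's inequality in R^4: use only two of the four
# standard basis directions, so the inequality is strict in general.
rng = np.random.default_rng(0)
x = rng.normal(size=4)

E = np.eye(4)[:2]                 # two orthonormal vectors

coeffs = E @ x                    # <e_k, x> for k = 1, 2
lhs = np.sum(np.abs(coeffs) ** 2) # sum_k |<e_k, x>|^2
rhs = np.dot(x, x)                # ||x||^2

assert lhs <= rhs                 # Bessel's inequality
```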


Theorem: If E={e1,e2,...} is an orthonormal set, then for each x in H, the sequence of partial sums of the series [itex]\sum_{k=1}^\infty \langle e_k,x\rangle e_k[/itex] is a Cauchy sequence. (By definition of "Hilbert space", this means that the series is convergent).


Theorem: If E={e1,e2,...} is an orthonormal basis, then for each x in H, [itex]x-\sum_{k=1}^\infty \langle e_k,x\rangle e_k[/itex] is orthogonal to E (and therefore =0).

You will need to use other results along the way, like the Pythagorean theorem for Hilbert spaces, and this theorem about series whose terms are real numbers:

Theorem: If [itex]\sum_{k=1}^\infty a_k[/itex] is a convergent series in [itex]\mathbb R[/itex], then [itex]\lim_m\sum_{k=m}^\infty a_k=0[/itex].
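This tail-sum fact can be illustrated numerically. A minimal sketch, using the geometric series ∑ 1/2^k as a hypothetical example (the truncation length is an assumption for the approximation, not part of the theorem):

```python
# Tail sums of the convergent series sum_k 1/2^k shrink to 0 as m grows.
def tail(m, terms=60):
    # sum_{k=m}^{m+terms-1} 1/2^k, a good approximation of the infinite tail
    return sum(2.0 ** -k for k in range(m, m + terms))

print(tail(1))   # ~1.0   (the whole series)
print(tail(10))  # ~1/2^9 (the tail from k=10 on)
```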

This stuff is covered pretty well in Conway, but I don't recommend the rest of the book. It's ridiculously hard to read. Kreyszig would be a much better choice. (That's what people are telling me. I haven't read it myself).
 
  • #3
I suspect that most people who think they want to know the answer to the question you asked will decide that they really don't when they see my reply above. Most people will settle for the corresponding theorem for finite-dimensional Hilbert spaces, which is much easier to prove. Let x be an arbitrary member of H. If {e1,...,en} is an orthonormal basis, then there exist complex numbers {a1,...,an} such that

[tex]x=\sum_{k=1}^n a_k e_k.[/tex]

This implies

[tex]\langle e_i,x\rangle=\sum_{k=1}^n a_k \langle e_i,e_k\rangle=a_i.[/tex]
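A quick numerical check of this finite-dimensional argument (the orthonormal basis of R^2 below is an illustrative choice of mine, not from the post): the coefficients a_i = ⟨e_i,x⟩ reconstruct x exactly.

```python
import numpy as np

# Orthonormal basis of R^2 (illustrative choice) and an arbitrary vector.
e1 = np.array([1.0, 1.0]) / np.sqrt(2)
e2 = np.array([1.0, -1.0]) / np.sqrt(2)
x = np.array([2.0, 3.0])

# a_i = <e_i, x>, exactly as in the computation above.
a1, a2 = np.dot(e1, x), np.dot(e2, x)

# Expanding in the basis with these coefficients recovers x.
assert np.allclose(a1 * e1 + a2 * e2, x)
```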
 
  • #4
Fredrik is right: most physicists will only care about the finite-dimensional version, and will assume that since it describes a physical Hilbert space, the infinite-dimensional version works the same way. Here I basically just rewrite his last post in bra-ket notation:

Having a basis |ei> implies that any vector |x> can be written as

|x> = ∑i ai |ei>

If the basis is orthonormal, then taking the inner product of the above with <ej| is easy (assuming no convergence issues):

<ej|x> = ∑i ai <ej|ei> = aj

Substituting back into the first equation gives

|x> = ∑i |ei><ei|x>

Since this is true for all |x>, it must be that

∑i |ei><ei|

is the identity operator on the Hilbert space.
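In a finite-dimensional space, this sum of projectors can be computed directly as outer products. A minimal sketch, with an orthonormal basis of C^2 chosen purely for illustration (not from the post):

```python
import numpy as np

# Orthonormal basis of C^2 (illustrative choice).
e1 = np.array([1.0, 1.0j]) / np.sqrt(2)
e2 = np.array([1.0, -1.0j]) / np.sqrt(2)

# |e_i><e_i| is the outer product e_i e_i^dagger; sum over the basis.
P = np.outer(e1, e1.conj()) + np.outer(e2, e2.conj())

assert np.allclose(P, np.eye(2))  # the completeness relation
```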
 
  • #5
Thank you Fredrik and Simon.

Yes, I think I probably can't handle the first proof for the infinite dimensional case since I've just started on QM. But thank you for writing it out. Hopefully, I'll come back to it after a while and figure it out.
 

FAQ: Proving Completeness Relations in Orthonormal Bases | Quantum Mechanics

What is a completeness relation?

A completeness relation is a mathematical statement that expresses the fact that a set of vectors or functions is sufficient to represent any other vector or function in a given space. It essentially states that the set is "complete" in the sense that no other vectors or functions are needed to fully describe the space.

How is completeness related to orthogonality?

Completeness and orthogonality are distinct properties that combine in an orthonormal basis. Orthogonality means that distinct elements of the set are perpendicular to each other, so no element can be written as a linear combination of the others; completeness means the set is large enough to represent every vector in the space. A set can be orthonormal without being complete (if it spans only a subspace), and complete without being orthogonal. An orthonormal basis has both properties, which is what makes the expansion coefficients so simple to compute.

What is the importance of completeness relations in quantum mechanics?

In quantum mechanics, completeness relations play a fundamental role in calculations. Inserting the identity operator in the form ∑|e><e| (a "resolution of the identity") lets one switch between representations, such as the position and momentum bases, and evaluate matrix elements of operators. This technique is used throughout atomic and molecular physics, solid state physics, and quantum field theory.

How do completeness relations relate to representations of operators?

A self-adjoint operator has eigenvectors (or eigenfunctions) that form an orthonormal basis of the space. The completeness relation for this basis lets us write the operator in its spectral decomposition, A = ∑k λk |ek><ek|, which makes it much easier to compute functions of the operator and to understand its properties.
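The spectral decomposition can be verified numerically. A minimal sketch (the Hermitian matrix below is an illustrative assumption, not from the thread): rebuild A from its eigenvalues and the projectors onto its eigenvectors.

```python
import numpy as np

# Illustrative Hermitian matrix; its eigenvalues are 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eigh(A)  # columns of V are orthonormal eigenvectors

# A = sum_k lam_k |e_k><e_k|, using the completeness of the eigenbasis.
A_rebuilt = sum(l * np.outer(v, v.conj()) for l, v in zip(lam, V.T))

assert np.allclose(A_rebuilt, A)
```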

Are completeness relations unique for a given space?

No, completeness relations are not unique. There can be many different complete orthonormal sets for a given space, depending on the specific properties or requirements of the problem. However, all orthonormal bases of a given Hilbert space have the same cardinality, known as the "dimension" of the space.
