I have a Hilbert space H; given a closed subspace U of H, let P_U denote the orthogonal projection onto U. I also have a lattice L of closed subspaces of H such that for all U and U' in L, P_U and P_{U'} commute. The problem is to find an orthonormal basis B of H such that for every element b of B and every element U of L, b is an eigenvector of P_U (equivalently, b is in U or in U⊥).
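To spell out the parenthetical equivalence (nothing here beyond the standard facts that an orthogonal projection is idempotent and that its kernel is the orthogonal complement of its range), here is the short check as a display:

```latex
% P_U is idempotent, so an eigenvector b (with b nonzero) forces the eigenvalue to be 0 or 1:
\[
P_U b = \lambda b, \quad P_U^2 = P_U
\quad\Longrightarrow\quad \lambda^2 b = \lambda b
\quad\Longrightarrow\quad \lambda \in \{0,1\}.
\]
% The two eigenvalues correspond exactly to membership in U and in its complement:
\[
\lambda = 1 \iff b = P_U b \in U,
\qquad
\lambda = 0 \iff P_U b = 0 \iff b \in U^{\perp}.
\]
```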
The obvious thing to do is to apply Zorn's lemma to obtain a maximal orthonormal subset B of H satisfying the above condition, and this part works. For some reason, though, I'm having trouble showing that the closed span of B is all of H. If not, then letting W be the closed span of B, I need to find a normalized vector v in W⊥ such that for every U in L, v is an eigenvector of P_U; then B ∪ {v} contradicts the maximality of B. (The following may or may not be helpful: It suffices to consider the case where U contains W.)
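A small verification that seems relevant to that parenthetical remark (it uses only the self-adjointness of P_U and the fact that every b in B is an eigenvector of P_U): each P_U maps W into W, and therefore also maps W⊥ into W⊥, which is what makes it reasonable to look for v inside W⊥.

```latex
% Each b in B is an eigenvector of P_U, so P_U maps span(B) into itself,
% and by continuity P_U W \subseteq W, where W is the closed span of B.
% Self-adjointness then gives invariance of the orthogonal complement:
\[
x \in W^{\perp},\ w \in W
\quad\Longrightarrow\quad
\langle P_U x, w \rangle = \langle x, P_U w \rangle = 0
\quad\Longrightarrow\quad
P_U x \in W^{\perp}.
\]
```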
The idea I have right now is this: suppose I could find a one-dimensional subspace V of W⊥ such that P_V commutes with P_U for all U in L, and let v be a normalized vector in V. Then for every U in L, P_V P_U(v) = P_U P_V(v) = P_U(v), so P_U(v) lies in V. Since V is one-dimensional and spanned by v, P_U(v) is a scalar multiple of v, so v is an eigenvector of P_U, as desired.
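Written out as a display, that computation is (same hypotheses as in the paragraph above: V is a one-dimensional subspace of W⊥ spanned by the unit vector v, and P_V commutes with every P_U):

```latex
% P_V v = v since v spans V, so commutativity pins P_U v inside V:
\[
P_U v = P_U P_V v = P_V (P_U v) \in \mathrm{ran}\, P_V = V = \mathrm{span}\{v\},
\]
% hence P_U v = \lambda v for some scalar \lambda, i.e. v is an eigenvector of P_U.
```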
The problem is that I have no idea how to choose V. I feel like this should be really easy, but for some reason I'm not seeing it.