Eigenvalue degeneracy in real physical systems

In summary, according to quantum mechanics, degeneracies can be associated with symmetries or topological characteristics of the system. If a system has an odd number of electrons, for example, it has at least a two-fold degeneracy. When all the operators in a system are represented by non-degenerate matrices, the eigenvalues are indeed distinguishable. However, this holds only when all the observables in the system are measured. If not, the system is said to be in a superposition of different eigenstates, and the collapse postulate must be taken with a grain of salt.
  • #141
bhobba said:
Yes - that's basically what I said.

Thanks
Bill

Ok. So, the conclusion is, POVMs are not the answer.

This is the best attempt so far:

In order to do physics, we only need to know the eigenvalues to the precision of the measurement apparatus. We don't need to know the multiplicity, since we need to project onto the space of states that are close enough to the measured eigenvalue. If the numerics gives us many eigenspaces for eigenvalues close enough to the measured value, we would project onto their direct sum. If the numerics gives us fewer, degenerate eigenspaces, we would also project onto their direct sum, but we would need fewer projectors. In both cases, the numerics would provide us with a sufficiently good projector, even though we might not know whether it projects onto degenerate or non-degenerate eigenspaces.​
 
  • #142
ErikZorkin said:
Ok. So, the conclusion is, POVMs are not the answer.

This is the best attempt so far:

In order to do physics, we only need to know the eigenvalues to the precision of the measurement apparatus. We don't need to know the multiplicity, since we need to project onto the space of states that are close enough to the measured eigenvalue. If the numerics gives us many eigenspaces for eigenvalues close enough to the measured value, we would project onto their direct sum. If the numerics gives us fewer, degenerate eigenspaces, we would also project onto their direct sum, but we would need fewer projectors. In both cases, the numerics would provide us with a sufficiently good projector, even though we might not know whether it projects onto degenerate or non-degenerate eigenspaces.​
This is correct as far as it goes but to be realistic one needs POVMs rather than projections. If you simulate the universe with projections it will be very far from the real universe.
 
  • Like
Likes bhobba
  • #143
A. Neumaier said:
This is correct as far as it goes but to be realistic one needs POVMs rather than projections. If you simulate the universe with projections it will be very far from the real universe.

Well, but at least in the case of discrete variables (and in a simulation they will in fact all be discrete), POVMs simply coincide with PVMs, don't they?

Also, and that's what I am trying to understand, how is it easier to simulate a world with POVMs if you still have to deal with uncomputable spectral decompositions? I'd understand if you could define POVMs which consistently describe measurement up to a finite precision.
 
  • #144
ErikZorkin said:
Well, but at least in the case of discrete variables (and in a simulation they will in fact all be discrete), POVMs simply coincide with PVMs, don't they?
No. POVMs are far more general than their projection-valued special case.
ErikZorkin said:
Also, and that's what I am trying to understand, how is it easier to simulate a world with POVMs if you still have to deal with uncomputable spectral decompositions?
There are no spectral decompositions in a POVM unless you define the latter in terms of one.

Anyway, you seem not to be interested in simulations by present-day computers - if you were you would ask quite different questions.

But if the simulation is the one done by God to run the actual universe - God is not bound by Turing machines or by our human concept of computability. For example, God could employ hypercomputers in which the halting problem for Turing machines is solvable, or where one can process an infinite number of statements in finite time, etc.
 
  • Like
Likes bhobba
  • #145
A. Neumaier said:
There are no spectral decompositions in a POVM unless you define the latter in terms of one.

What worries me a bit is that there are often answers directly contradicting each other.

For instance, I asked:

My question is, do you use spectral decomposition theorem to derive POVMs or if yes, where?​

bhobba: Of course.

Regarding the simulation, let's assume, as I said in the OP, that computational resources are big enough, but finite. So any realizable process would be computable.

Going back to Stern-Gerlach and too narrow beam splitting: how would you go about that using POVMs, or something else consistent with the fact that the measurement can only be done up to a finite precision?
 
  • #146
ErikZorkin said:
My question is, do you use spectral decomposition theorem to derive POVMs or if yes, where?
bhobba: Of course.
Well, it is in your question. If you derive POVMs from the projection case in a bigger Hilbert space, you need the spectral decomposition there. But if you just take a POVM and use it to simulate, all you need is that the sum condition is satisfied.
 
  • #147
ErikZorkin said:
Stern-Gerlach and too narrow beam splitting.
What does this mean? In Stern-Gerlach you only measure in which half the particle appears - and you can do the measurement only if the experiment creates the two partial beams in directions so that the spots on the screen are sufficiently far apart. Nothing fancy is needed here, and only very low precision.
 
  • Like
Likes vanhees71
  • #148
Thanks. I learned something from this thread.

So do you mean that you can come up with POVMs without using the spectral theorem in the first place? I read in the materials linked here that they are hard to construct.

Regarding the Stern-Gerlach experiment, it's still possible to have a situation where the dots are NOT far enough apart. What do you do in this case? PVMs should tell you which eigenspace to project onto, but you can't decide. Approximation doesn't seem to make much sense, since the eigenspaces are exactly the opposites.
 
  • #149
ErikZorkin said:
a situation where the dots are NOT far enough apart. What do you do in this case?
In this case you get only one spot and you only measure the presence of the particle but not its spin. Thus no factorization is needed.
 
  • #150
ErikZorkin said:
Thanks. I learned something from this thread.

So do you mean that you can come up with POVMs without using the spectral theorem in the first place? I read in the materials linked here that they are hard to construct.

Regarding the Stern-Gerlach experiment, it's still possible to have a situation where the dots are NOT far enough apart. What do you do in this case? PVMs should tell you which eigenspace to project onto, but you can't decide. Approximation doesn't seem to make much sense, since the eigenspaces are exactly the opposites.

The collapse depends on what knowledge you retain of the measurement. Hence if you perform a measurement, but forget the results, the collapse is different from if you retain knowledge of the results. Terminology varies here - some people use the term "collapse" only if you remember the results. But regardless of terminology, the rule for the evolution of the state after a measurement is performed depends on the knowledge of the observer.

In this sense the "computational universe" is irrelevant, because quantum mechanics in the Copenhagen interpretation is not a theory of all of reality. It requires an observer, and concerns what probabilistic predictions the observer can make if he does certain things.
 
  • #151
ErikZorkin said:
Well, but, at least in case of discrete variables (and in a simulation, they will all be discrete in fact), POVMs

Did you read the link I gave on quantum measurement theory? It explains it all there.

Quantum operators give resolutions of the identity via the spectral theorem. However, they are not the most general observation in QM. POVMs are - they are resolutions of the identity with the disjointness requirement removed. However, POVMs can be reduced to resolutions of the identity using the concept of a probe. You have a von Neumann type observation (i.e. a resolution-of-the-identity type) and insert a probe to observe it. You then observe the probe to indirectly observe the first system. It can be shown that the resulting indirect measurement is described by a POVM - not a resolution of the identity. The detail is in the link I gave - please read it.

Thanks
Bill
 
  • #152
ErikZorkin said:
For instance, I asked:

My question is, do you use spectral decomposition theorem to derive POVMs or if yes, where?​

bhobba: Of course.

The resolution of the identity from the spectral theorem is a POVM - that's the "of course".

I gave a link explaining observations in QM. If you were to study it it will likely answer your queries.

Thanks
Bill
 
  • #153
ErikZorkin said:
they are hard to construct.
Not really. Take ##N## matrices ##A_k## without a common null vector. Then the (computable) Cholesky factor ##R## of ##\sum A_k^*A_k## is invertible and the (computable) ##P_k=A_kR^{-1}## form matrices with ##\sum P_k^*P_k=1##, which is all you need. One can use least squares to adapt the matrix entries to real measurements if one wants to use this to simulate a real life transformation behavior. Matching reality is called quantum estimation theory.
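As a sanity check, this construction can be sketched in NumPy (my own sketch, not from the thread; the dimension, the number of matrices, and the random ##A_k## are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Take N matrices A_k; random matrices almost surely have no common null vector.
N, d = 3, 4
A = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)) for _ in range(N)]

# S = sum_k A_k^* A_k is then positive definite, so its Cholesky factor is invertible.
S = sum(a.conj().T @ a for a in A)
R = np.linalg.cholesky(S).conj().T   # upper-triangular R with S = R^* R

# P_k = A_k R^{-1} satisfies sum_k P_k^* P_k = R^{-*} S R^{-1} = 1.
Rinv = np.linalg.inv(R)
P = [a @ Rinv for a in A]
E = [p.conj().T @ p for p in P]       # POVM elements E_k = P_k^* P_k

print(np.allclose(sum(E), np.eye(d)))  # resolution of the identity
```

Each ##E_k## is automatically Hermitian and positive semidefinite, so no spectral decomposition is needed anywhere in the construction, only a (computable) Cholesky factorization and a triangular inverse.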
 
Last edited:
  • Like
Likes ErikZorkin
  • #154
bhobba said:
The resolution of the identity from the spectral theorem is a POVM - that's the "of course".

I gave a link explaining observations in QM. If you were to study it it will likely answer your queries.

Thanks
Bill

I read through that link, but it's of no use for me since it assumes that the density matrix is diagonalizable in the first place. My question is whether you can consistently describe a measurement in an APPROXIMATE FORMAT when you know the spectral decomposition only up to some finite precision. The exact expression for an approximate spectral decomposition I have given above.
 
  • #155
A. Neumaier said:
Not really. Take ##N## matrices ##A_k## without a common null vector. Then the (computable) Cholesky factor ##R## of ##\sum A_k^*A_k## is invertible and the (computable) ##P_k=A_kR^{-1}## form matrices with ##\sum P_k^*P_k=1##, which is all you need. One can use least squares to adapt the matrix entries to real measurements if one wants to use this to simulate a real life transformation behavior. Matching reality is called quantum estimation theory.

That's nice
 
  • #156
ErikZorkin said:
That's nice
You can take the ##A_k## to be approximate projectors. In this case you get something that is close to an ideal Copenhagen measurement.
 
  • Like
Likes ErikZorkin
  • #157
A. Neumaier said:
You can take the ##A_k## to be approximate projectors. In this case you get something that is close to an ideal Copenhagen measurement.

Like in this case?

For any Hermitian operator ##T## and any ##\varepsilon>0##, there exist commuting projections ##P_1,\dots,P_n## with ##P_iP_j=0## for ##i\neq j## and real numbers ##c_1,\dots,c_n## such that ##\|T-\sum_{i=1}^n c_iP_i\|\le\varepsilon##.
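This statement can be illustrated numerically (my sketch; the matrix and the tolerance are invented): round each eigenvalue to the nearest multiple of ##\varepsilon##, and merge the eigenprojectors of eigenvalues that round to the same value.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random real Hermitian T.
X = rng.standard_normal((5, 5))
T = (X + X.T) / 2

eps = 0.5
w, V = np.linalg.eigh(T)      # ascending eigenvalues, orthonormal eigenvectors

# Round eigenvalues to multiples of eps; each w_i moves by at most eps/2.
c = np.round(w / eps) * eps

# Commuting, mutually orthogonal projections P_i onto the grouped eigenspaces.
approx = np.zeros_like(T)
for ci in np.unique(c):
    cols = V[:, c == ci]
    P_i = cols @ cols.T
    approx += ci * P_i

# Same eigenvectors, perturbed eigenvalues, so the operator norm error is small.
print(np.linalg.norm(T - approx, 2) <= eps)
```

Since ##T## and the approximation share eigenvectors, the operator-norm error is just ##\max_i |w_i-c_i|\le\varepsilon/2##, which satisfies the bound in the theorem.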
 
  • #158
ErikZorkin said:
Like in this case?
I was thinking of computing an approximate spectrum, deciding how to group the approximate eigenvalues, then computing approximate projectors onto the corresponding invariant subspaces, and applying the construction to these.
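The grouping step might look as follows (an illustrative sketch; the matrix and the tolerance are invented): cluster the sorted approximate eigenvalues by gaps, then project onto the direct sum of each cluster's eigenspaces. The resulting operator is a valid projector whether or not the numerics resolved a near-degeneracy.

```python
import numpy as np

# Two nearly degenerate eigenvalues that numerics may or may not resolve.
T = np.diag([1.0, 1.0 + 1e-9, 2.0])
w, V = np.linalg.eigh(T)

tol = 1e-6  # measurement precision: eigenvalues closer than this are grouped

# Split the sorted approximate eigenvalues at gaps larger than tol.
groups, start = [], 0
for i in range(1, len(w) + 1):
    if i == len(w) or w[i] - w[i - 1] > tol:
        groups.append(range(start, i))
        start = i

# Projector onto the direct sum of the eigenspaces in the first group.
cols = V[:, list(groups[0])]
P = cols @ cols.T
print(np.allclose(P @ P, P), np.allclose(P, P.T))
```

Here the two eigenvalues near 1 fall into one group, so ##P## is the rank-2 projector onto their direct sum, exactly as described above, without ever deciding whether the eigenvalue is degenerate.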
 
  • Like
Likes ErikZorkin
  • #159
ErikZorkin said:
I read through that link, but it's of no use for me since it assumes that the density matrix is diagonalizable in the first place.

Scratching head. Can't find where it makes that assumption, but maybe I am blind - it's been a while since I went through it.

Thanks
Bill
 
  • #160
bhobba said:
Can't find where it makes that assumption,
A density operator is Hermitian and trace class, hence always self-adjoint, hence diagonalizable. So this is not an assumption but a provable result.
 
  • Like
Likes bhobba and vanhees71
  • #161
A. Neumaier said:
A density operator is Hermitian and trace class, hence always self-adjoint, hence diagonalizable. So this is not an assumption but a provable result.

Kicking self o0)o0)o0)o0)o0)o0)o0)o0)

It follows simply from the spectral theorem, since it's Hermitian and obviously normal.

Thanks
Bill
 
Last edited:
  • #162
bhobba said:
Dirac's elegant formulation is now rigorous since Rigged Hilbert Spaces have been worked out.

Came here to say exactly this! I decided not to read through the whole thread, but to instead search each page for the words "rigged" or "triplet" and came across your post on page 5. Was this post resolved? I don't see how there would be an issue in the extended nuclear space...as Ballentine himself says, "[...] rigged Hilbert space seems to be a more natural mathematical setting for quantum mechanics than is Hilbert space."
 
  • #163
bhobba said:
Kicking self o0)o0)o0)o0)o0)o0)o0)o0)

It follows simply from the spectral theorem, since it's Hermitian and obviously normal.

Thanks
Bill

The question was, what you can do when you can't have an exact diagonalization.
 
  • #164
HeavyMetal said:
Came here to say exactly this! I decided not to read through the whole thread, but to instead search each page for the words "rigged" or "triplet" and came across your post on page 5. Was this post resolved? I don't see how there would be an issue in the extended nuclear space...as Ballentine himself says, "[...] rigged Hilbert space seems to be a more natural mathematical setting for quantum mechanics than is Hilbert space."

The OP's background is math and was thinking in terms of a highly rigorous approach like you find in pure math. QM can be done that way and I gave a link to a book, but he didn't want to pursue it.

Thanks
Bill
 
  • #165
ErikZorkin said:
The question was, what you can do when you can't have an exact diagonalization.

By the definition of a state as a positive operator of unit trace it must. I actually derived it in a link I gave previously:
https://www.physicsforums.com/threads/the-born-rule-in-many-worlds.763139/page-7

Its the modern version of a famous theorem from the mathematician Gleason:
http://www.ams.org/notices/200910/rtx091001236p.pdf

The rock bottom essence is non-contextuality, and it is a much more general result than the equally famous Kochen–Specker theorem (which is a simple corollary of Gleason)
https://en.wikipedia.org/wiki/Kochen–Specker_theorem

Thanks
Bill
 
Last edited:
  • #166
bhobba said:
The OP's background is math and was thinking in terms of a highly rigorous approach like you find in pure math. QM can be done that way and I gave a link to a book, but he didn't want to pursue it.

Thanks
Bill

Not really, thank you for the book, I'll take a look at it later. It's just been a bit off the original question.
 
  • Like
Likes bhobba
  • #167
bhobba said:
By the definition of a state as a positive operator of unit trace it must.

Only classically. But in terms of computable analysis, it doesn't.
 
  • Like
Likes bhobba
  • #168
ErikZorkin said:
Not really, thank you for the book, I'll take a look at that later. It's been just a bit off the original question.

:smile::smile::smile::smile::smile::smile::smile:

Thanks
Bill
 
  • #169
ErikZorkin said:
Only classically. But in terms of computable analysis, it doesn't.

Got it.

Thanks
Bill
 
  • #170
bhobba said:
Got it.

Thanks
Bill

But it shouldn't be a problem in practical physics, and an approximate spectral decomposition should be sufficient. That's the main message that I'm trying to check with this community. Also, you might be interested in this book (I'm giving this in little hope that it gets viewed, though). It's surprising how much of physics can be done in a purely computable framework.
 
  • #171
rubi said:
In order to do physics, we only need to know the eigenvalues to the precision of the measurement apparatus. We don't need to know the multiplicity, since we need to project onto the space of states that are close enough to the measured eigenvalue. If the numerics gives us many eigenspaces for eigenvalues close enough to the measured value, we would project onto their direct sum. If the numerics gives us fewer, degenerate eigenspaces, we would also project onto their direct sum, but we would need fewer projectors. In both cases, the numerics would provide us with a sufficiently good projector, even though we might not know whether it projects onto degenerate or non-degenerate eigenspaces.

I know it's a bit outdated, but can someone give me a reference to such a spectral decomposition in approximate (and possibly computable) format? The theorem that I mentioned above does not cover the question of approximating eigenvectors and -spaces.
 
  • #172
ErikZorkin said:
I know it's a bit outdated, but can someone give me a reference to such a spectral decomposition in approximate (and possibly computable) format? The theorem that I mentioned above does not cover the question of approximating eigenvectors and -spaces.
One discretizes the time-independent Schroedinger equation then solves a matrix eigenvalue problem. There are many excellent solvers on the web that give approximations to the spectrum and the eigenvectors. The grouping and approximate projection-building can be done on this level. If there is a continuous spectrum one also has to do an additional fit to approximately extract the corresponding scattering information, which is a bit more complicated. Details depend very much on the system to be handled and the accuracy required.
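A minimal sketch of that procedure for the 1D harmonic oscillator (my own example, assuming ##\hbar=m=\omega=1##; the grid size and box length are arbitrary choices): a three-point finite difference reduces the time-independent Schroedinger equation to a symmetric tridiagonal eigenvalue problem.

```python
import numpy as np

# Discretize H = -1/2 d^2/dx^2 + x^2/2 on [-L, L] with Dirichlet boundaries.
L, n = 10.0, 1000
x = np.linspace(-L, L, n)
h = x[1] - x[0]

# Three-point Laplacian plus the potential on the diagonal.
main = 1.0 / h**2 + 0.5 * x**2
off = -0.5 / h**2 * np.ones(n - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

# A standard dense solver gives approximate spectrum and eigenvectors.
w = np.linalg.eigvalsh(H)
print(w[:4])  # close to the exact values 0.5, 1.5, 2.5, 3.5
```

The grouping and approximate projector-building described above would then operate on `w` and the corresponding eigenvector columns, entirely at the level of this finite matrix problem.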
 
  • Like
Likes bhobba and ErikZorkin
  • #173
The matrix eigenvalue problem is undecidable: only the roots of ##\det(A - \alpha I)## are computable, but their multiplicity isn't. That is why these solvers suffer from instability when the matrix is degenerate and the cardinality of the spectrum is unknown, as mentioned by Ziegler & Brattka. An "effective" algorithm means it always outputs a correct answer. We may achieve that by allowing eigenvalues, eigenvectors, eigenspaces, and projections to be computed in an approximate format. Honestly, I thought it would be easy to find a reference, but I couldn't so far. Numerical methods are something different.
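A tiny illustration of the multiplicity problem at finite precision (my example, not from Ziegler & Brattka): two matrices whose exact spectra have different cardinality can be indistinguishable in double precision.

```python
import numpy as np

# The perturbation 1e-16 is below the machine epsilon of float64 (~2.2e-16),
# so B is stored as exactly the same matrix as A.
A = np.diag([1.0, 1.0])
B = np.diag([1.0, 1.0 + 1e-16])

wa = np.linalg.eigvalsh(A)
wb = np.linalg.eigvalsh(B)

# Both spectra collapse to a single numerical value: finite precision can
# locate eigenvalues within a tolerance, but cannot decide multiplicity.
print(len(set(wa)), len(set(wb)))
```

This is exactly why an "effective" algorithm can only promise approximate eigenvalues and projectors, never a correct eigenvalue count.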
 
  • #174
ErikZorkin said:
Matrix eigenvalue problem is undecidable
You never did any actual simulation, or else you wouldn't care about the abstract notion of computability. Statistical errors in simulations are typically much larger than all other sources of inaccuracy.

Undecidability doesn't matter, as only an approximate solution is needed. Therefore only numerical methods count. Engineers routinely use the available packages to solve high-dimensional eigenvalue problems for the design of cars, bridges, high-rise buildings, ships, etc.
 
  • #175
A. Neumaier said:
You never did any actual simulation
Funny enough, I do it almost every work day :)

That said, numerical methods often do not meet the specifications. If something doesn't work well, it's simply rerun. So effective algorithms, supported by formalized proofs, are getting more popular.
 