Killtech
From a discussion about the completeness of QM, I picked up on some people holding the weird idea that theories shouldn't model things that are not observable. This is bewildering, as modeling unobservables is very common practice in physics, and avoiding it entirely can be extremely tricky. But it made me contemplate the topic, as this is indeed something I am not entirely happy with.
To give a clear idea of what I mean: the one-way speed of light in relativity isn't observable, yet looking at Maxwell's equations one sees that it is still modeled according to Einstein's synchronization convention (isotropic). Now, it's easy to see that Einstein's synchronization is a canonical choice, and it without doubt hugely simplifies things in many instances. So I think it is clear that having non-observable aspects in our theories comes with big perks.
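The conventionality here can be made concrete with Reichenbach's ε-parametrization (a standard generalization of Einstein's convention, which corresponds to ε = 1/2): the one-way speeds depend on the chosen ε, but the measurable round-trip time does not. A minimal numerical sketch:

```python
# Reichenbach's synchronization parameter epsilon fixes the one-way
# speeds of light over a distance L:
#   forward:  c / (2 * epsilon)
#   backward: c / (2 * (1 - epsilon))
# Einstein's convention is epsilon = 1/2 (isotropic one-way speed).

c = 299_792_458.0  # m/s
L = 1.0            # m

def round_trip_time(epsilon: float) -> float:
    """Out-and-back light travel time under a given epsilon convention."""
    c_fwd = c / (2 * epsilon)
    c_bwd = c / (2 * (1 - epsilon))
    return L / c_fwd + L / c_bwd

# The round trip is what experiments measure, and it is the same 2L/c
# for every admissible choice of epsilon in (0, 1):
for eps in (0.5, 0.3, 0.9):
    assert abs(round_trip_time(eps) - 2 * L / c) < 1e-18
```

This is exactly the sense in which the one-way speed is convention rather than observation: only the ε-independent combination is empirically accessible.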
On the other hand, I read a lot of statements about nature concluded from such theories. The problem is that most of them implicitly rely on such non-observable assumptions/conventions, and they are easily falsified when those are replaced. I was totally surprised by what happened when I tried to use a synchronization scheme that keeps simultaneity invariant, rather than the one implicit in Maxwell's equations, and ended up with Lorentz-Poincaré aether theory, which I hadn't known is empirically equivalent to SR. Mind you, using that synchronization makes discussing twin paradoxes of any kind boringly trivial (a shared simultaneity does that), which made me question why anyone would go through the hardship of doing it all in SR when an equivalent representation gives you the same results for free. Why not use the convention most suited to the problem, as we do with coordinates?
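To illustrate the "boringly trivial" claim: once one frame's simultaneity is adopted as shared, every clock simply ticks slow by the kinematic factor √(1 − v²/c²) relative to that frame's coordinate time, and the twins' ages can be compared directly with no frame-switching. A sketch with assumed numbers (v = 0.8c, 10 units of coordinate time):

```python
import math

c = 1.0  # natural units

def proper_time(v: float, t: float) -> float:
    # In a frame whose simultaneity is treated as shared, a clock moving
    # at speed v accumulates proper time t * sqrt(1 - v^2/c^2) over
    # coordinate time t. No relativity of simultaneity to juggle.
    return t * math.sqrt(1 - (v / c) ** 2)

t_total = 10.0
tau_home = proper_time(0.0, t_total)       # stay-home twin: 10.0
tau_traveler = 2 * proper_time(0.8, 5.0)   # out and back at 0.8c: 6.0

assert tau_home == 10.0
assert math.isclose(tau_traveler, 6.0)
```

The asymmetry between the twins is encoded directly in their speeds relative to the one shared time coordinate, which is why the bookkeeping collapses to a single integral per twin.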
Anyhow, I realized that Maxwell's equations are a mixture of observable facts and conventions, and therefore not uniquely determined by reality. That is a nightmare scenario when it comes to differentiating what a theory says about nature from what is merely an aspect of how we chose to model it.
Now, quantum mechanics isn't that different from relativity in assuming unobservable postulates; if anything, it's far worse. The most problematic is how QM observables are represented via linear operators and how those operators are applied. This postulate is similar to the one-way speed of light in that it is technically required for the Copenhagen interpretation to make predictions in the first place, yet it has severe consequences for any theory built on it. And it comes in an unholy union with the Schrödinger (or other) equations, such that the two seem inseparable in terms of what is experimentally verifiable. On the other hand, one can see that a lot of the strangeness in QM boils down to technical limitations this postulate incorporates, yet none of those limitations can be tied to anything experimentally verifiable; it is merely an interpretation of the data hardcoded into QM.
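For readers less familiar with the postulate in question: an observable is represented by a Hermitian operator, the possible measurement outcomes are its eigenvalues, and the predicted average is ⟨ψ|A|ψ⟩. A minimal sketch for a spin-1/2 observable (the angle θ is an assumed example value):

```python
import numpy as np

# The postulate under discussion: observable = Hermitian operator,
# outcomes = eigenvalues, predicted average = <psi|A|psi>.

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)  # spin-z (units of hbar/2)

# Superposition state cos(theta/2)|up> + sin(theta/2)|down>
theta = np.pi / 3
psi = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

outcomes = np.linalg.eigvalsh(sigma_z)             # possible results: -1, +1
expectation = np.real(psi.conj() @ sigma_z @ psi)  # analytically cos(theta)

assert np.allclose(np.sort(outcomes), [-1.0, 1.0])
assert np.isclose(expectation, np.cos(theta))
```

Note that everything empirical here is a statistic over measurement outcomes; the operator itself, like the one-way speed, is part of the representational machinery.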
And then there is the issue of quantum gravity. You cannot merge two theories that use incompatible assumptions without getting contradictions. But that doesn't actually imply there is a problem with either of them if they merely happen to rely on incompatible implicit conventions. I mean, Dirac's trick is all great, but it comes with quite a bit of technical legacy to make it work, parts of which are not observable. You can't extend the dimension of a theory without implicitly making new assumptions about space-time. And that trick is only needed to linearize an equation so it conforms to the postulate of how observables are extracted from the wave function...
I feel like physics is sometimes too focused on sticking to the conventions the first person came up with to make a theory work, to the point that some of those conventions are mistaken for actual laws of nature. Too little work is done on cleaning up axiom systems and separating implicit conventions from true (generalized) observations, and there is too little concern about how much those conventions actually govern. Conventions are great, but they should be judged purely by how useful they are. And wherever we need conventions, we have options and should discuss the pros and cons of the different choices; usually developing a diversity of choices gives the greatest flexibility to tailor solutions to problems.
Then again, writing this down, I felt quite reminded of the debate/controversy over the axiom of choice. Maybe I am again just stuck in the way of thinking of pure mathematics.