Busting the myth of the observer: the double slit experiment

In summary: it's been pretty much abandoned, as it doesn't really add anything to our understanding of the universe.
  • #36
bhobba said:
That's the beauty of defining an observation via decoherence - it's objective.

An observation has simply occurred once decoherence has occurred.

Thanks
Bill

Decoherence does not remove the need for the cut in the first place, and that is where "common sense" enters.

Secondly, perfect decoherence never occurs, so objective decoherence never occurs.
 
  • #37
atyy said:
However, if we take the view that there are no deeper laws underlying quantum mechanics, then the cut and common sense become fundamental.
Not within the MWI. If the MWI works, classical mechanics and QM appear on an equal footing. In order to use the theories in practice, we make a cut. We can place it in an arbitrary place between the part of the universe we want to investigate on the one side, and the rest of the universe, which includes at least ourselves, on the other side. From such experiments, we extrapolate that a state of the universe exists. This state of the whole is unknowable in principle, because knowledge is obtained by measurements and measurements require something external.
 
  • #38
bhobba said:
But kith, collapse isn't even part of the formalism of QM.

It's simply something SOME interpretations have for filtering type observations.
I know. My point is that in interpretations with collapse, I think the only sensible notion is that consciousness causes it. All other notions contradict the predictions of QM in an ugly way.
 
  • #39
kith said:
Not within the MWI. If the MWI works, classical mechanics and QM appear on an equal footing. In order to use the theories in practice, we make a cut. We can place it in an arbitrary place between the part of the universe we want to investigate on the one side, and the rest of the universe, which includes at least ourselves, on the other side. From such experiments, we extrapolate that a state of the universe exists. This state of the whole is unknowable in principle, because knowledge is obtained by measurements and measurements require something external.

Yes, of course.

BTW, I have seen a claim that an alternative to MWI that works with only unitary evolution is to deny reality (not sure what that means). It's in a section in Wiseman and Milburn's book, and they cite "correlations with correlata" and Mermin's Ithaca interpretation http://arxiv.org/abs/quant-ph/9801057 .

Nowadays Mermin is a wannabe Quantum Bayesian, which does have a cut, so I don't know if that's his current thinking. Anyway, does this different ontology for unitary evolution make any sense to you?
 
  • #40
bhobba said:
There is nothing Earth shattering being proposed here.

It's simply that it's reasonable to put collapse just after decoherence, i.e. when the off-diagonal elements are below some threshold, way below the ability to detect.

That's it - that's all.

Thanks
Bill

I don't think the issue is as simple as you seem to be presenting it. What's your defined threshold for the ability to detect off-diagonal elements of the density matrix? How does one detect elements of the density matrix? Elements of the density matrix are calculated; how do you detect them?
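For concreteness, here is a purely illustrative sketch of the kind of threshold being debated: a toy qubit whose off-diagonal (coherence) element decays exponentially. The decoherence time, initial state, and detection threshold are all assumed numbers of my own choosing, not values from any real experiment.

```python
import math

# Toy qubit in an equal superposition: under environmental decoherence
# the off-diagonal element of its reduced density matrix decays as
# rho01(t) = 0.5 * exp(-t / tau). Both constants below are assumptions.
tau = 1.0e-9         # assumed decoherence time, in seconds
threshold = 1.0e-12  # assumed instrument sensitivity for |rho01|

def off_diagonal(t):
    """Magnitude of the off-diagonal density-matrix element at time t."""
    return 0.5 * math.exp(-t / tau)

# Time at which the coherence falls below the chosen threshold:
t_cut = -tau * math.log(threshold / 0.5)

print(t_cut)                # a few tens of nanoseconds
print(off_diagonal(t_cut))  # ~threshold: tiny, but never exactly zero
```

The point of contention is visible here: `t_cut` moves with the subjectively chosen `threshold`, and the coherence never reaches exactly zero, it only becomes absurdly small.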
 
  • #41
Is a video camera also a conscious observer? We could leave the lab and keep the video camera running, then check the interference pattern, and only after that make the decision whether to watch the tape or not.
 
  • #42
Matterwave said:
I don't think the issue is as simple as you seem to be presenting it. What's your defined threshold for the ability to detect off-diagonal elements of the density matrix? How does one detect elements of the density matrix? Elements of the density matrix are calculated; how do you detect them?

The defined threshold depends on the accuracy of the equipment, i.e. the particular observational setup.

Obviously you can't detect elements of the density matrix - it's a variant of the well-known problem of detecting a particular state. It's also a variant of detecting the probabilities of the sides of a coin - you can't do that either.

But, again considering the coin, it's not a relevant issue in, say, the theory of excess heads or tails in a long sequence of trials.

Thanks
Bill
 
  • #43
Ookke said:
Is a video camera also a conscious observer?

Of course it's not conscious.

It's thought experiments along those lines, especially in modern versions with computers, that leave the consciousness-causes-collapse brigade in a total mess.

Take for example recording a double-slit experiment into computer memory. You disassemble the apparatus, even destroying its parts, and say, 10 years later, you take the RAM to a computer science class.

You then claim that, since it hasn't been observed by a conscious observer, you can't say that the data in the RAM chip is real until someone reads it on a computer screen or similar. You can copy it, disseminate it, do whatever you like to it, but until a conscious observer actually views it, it's not real.

Don't be surprised if they all leave laughing their heads off, and as you leave, dejected, some nice men in white come along.

Thanks
Bill
 
  • #44
bhobba said:
It's thought experiments along those lines, especially in modern versions with computers, that leave the consciousness-causes-collapse brigade in a total mess.

But let's take the statistical mechanics analogy that Nugatory brought up. There is no true equilibrium in the universe, since the universe is expanding. However, for our purposes to some accuracy, we can deem a physical situation as having reached equilibrium. So equilibrium is subjective. How is the subjectivity of equilibrium so different from the subjectivity of exactly how much decoherence is needed to place the cut? Since equilibrium is subjective, is there a big problem if we say consciousness causes equilibrium?

Also, given the Bayesian analogy you like to make, surely the Bayesian updating analogy supports the idea that consciousness causes collapse?
 
  • #45
bhobba said:
You then claim that, since it hasn't been observed by a conscious observer, you can't say that the data in the RAM chip is real until someone reads it on a computer screen or similar. You can copy it, disseminate it, do whatever you like to it, but until a conscious observer actually views it, it's not real.

Isn't it true that you can't copy (clone) quantum systems? In principle the RAM holding the superposed data is a quantum system.
 
  • #46
atyy said:
Also, given the Bayesian analogy you like to make, surely the Bayesian updating analogy supports the idea that consciousness causes collapse?

That doesn't follow.

The view that the state is simply a subjective belief doesn't mean that consciousness is required to cause collapse - it simply means that, like probabilities, it's not real, simply a subjective confidence you have about something.

That said, I find this subjectivism, even encoded by something that makes it amenable to analysis like the Cox axioms, a rather strange thing to have in science, which is why I eschew it in both probability and QM and have a frequentist view.

But that's just me.

My old alma mater has heaps of courses on applied Bayesian probability - and it's undoubtedly called that for a reason. Hell, I even did one undergrad - it does help in viewing some problems in statistical modelling and analysis.

Thanks
Bill
 
  • #47
StevieTNZ said:
Isn't it true that you can't copy (clone) quantum systems? In principle the RAM holding the superposed data is a quantum system.

That's true. And it's easy to see: if you could, you could violate the uncertainty relations. Want to know the momentum and position simultaneously? Clone it and measure both.

But that has nothing to do with information stored in RAM etc - that can be copied endlessly and with virtually 100% accuracy, especially with error-correcting codes.

And having copies makes such a view even more bizarre - make a million copies, view one, and the wavefunction of the original, now destroyed, apparatus suddenly collapses - including all the copies.

To be fair, without a doubt, by philosophical shenanigans the edifice probably could be made logically sound, but you have to ask - exactly for what gain?

Thanks
Bill
 
  • #48
bhobba said:
The view that the state is simply a subjective belief doesn't mean that consciousness is required to cause collapse - it simply means that, like probabilities, it's not real, simply a subjective confidence you have about something.

But if the state is subjective, then the state requires "consciousness" or at least something that can have subjective knowledge.

bhobba said:
You then claim that, since it hasn't been observed by a conscious observer, you can't say that the data in the RAM chip is real until someone reads it on a computer screen or similar. You can copy it, disseminate it, do whatever you like to it, but until a conscious observer actually views it, it's not real.

Don't be surprised if they all leave laughing their heads off, and as you leave, dejected, some nice men in white come along.

OK, putting Nugatory's, kith's and your comments together, I think I can argue that if "collapse" is like "equilibrium", it is at least partially subjective, and in that sense it is not terrible if we use "consciousness" as a synonym for subjectivity.

What we would object to is the claim that reality doesn't exist until collapse occurs, because then we would be saying "consciousness causes reality"! So as long as we don't take a hard Copenhagen position - which I don't think exists any more, since von Neumann's proof was found to be in error and we know of at least one viable ontology (even MWI is acceptable, if it is technically correct) - then we can say that "consciousness causes collapse" without saying that "consciousness causes reality".
 
  • #49
atyy said:
But if the state is subjective, then the state requires "consciousness" or at least something that can have subjective knowledge.

All theories require consciousness to understand. The Bayesian view just extends it a bit further - and QM isn't the only area with that sort of thing.

Thanks
Bill
 
  • #50
Coming back to the OP, I think it states the issue the other way around, in particular when we consider the experiment done not with photons but with massive particles. The revealing thing about the double-slit experiment is then not that "the presence of an observer causes a wave to collapse into a particle" but rather the other way around: what appears as a particle to our observations (a single dot on the recording screen) actually behaves like a wave while not observed. The way the dots gradually build up on the screen, with interference, indicates that what we thought of as a particle with a definite location at all times actually passed through both slits simultaneously, as only a wave can do, interfering with itself.
 
  • #51
Wow, I caused an avalanche of posts. Good thing.

When you guys are saying consciousness interferes with quantum phenomena do you mean that the magnetic field produced by neural activity in our brain causes it? Because there is nothing special about that. Guess what: our consciousness is so wonderful, it interferes with the workings of entire machines. One such machine is called an MRI scanner, found in most hospitals. You just have to be sufficiently close to the machine ("inside" it) and tune it right.

Or when you talk about consciousness collapsing waves, is it the narrator with the hushed voice again, saying hypnotically: "Our consciousness is a mysterious thing which we have no idea about. The way it interferes with quantum phenomena is absolutely unknown. Perhaps it produces a hitherto undiscovered field of yet undiscovered virtual particles. Or, indeed, the terms waves and particles may not apply anymore and our consciousness may be a fundamental metaforce of nature."

So which side are you on? :-)
 
  • #52
^ No, it's nothing of the sort of "consciousness magnetic fields" or anything like that. It's just that whenever there is an attempt to obtain information about which path the quantum entity took, the wave behavior will not manifest.
My favorite demonstration of the effect of observation on a quantum system is the Quantum Zeno effect (experiment by W. Itano et al. in 1990), in which it is confirmed that the beryllium ions in the experiment must really be in a superposition of two different states while they are not observed. I guess you will find the details if you Google it.
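For anyone who wants to see the numbers, here is a sketch of the textbook two-level version of the Zeno effect - an idealisation, not the actual Itano et al. ion-trap protocol: a system driven through a pi-pulse, but interrupted by n equally spaced projective measurements, survives in its initial state with probability [cos^2(pi/(2n))]^n.

```python
import math

# Idealised two-level Zeno calculation: a pi-pulse would transfer the
# system completely from state 1 to state 2. Interrupting it with n
# measurements splits the drive into segments of angle pi/n, and the
# chance of surviving each measurement is cos^2(pi/(2n)).

def survival(n):
    """Probability the system is still found in state 1 at the end."""
    return math.cos(math.pi / (2 * n)) ** (2 * n)

for n in (1, 5, 50, 500):
    print(n, survival(n))
# The survival probability climbs toward 1 as measurements become more
# frequent: frequent observation "freezes" the evolution.
```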
 
  • #53
Gerinski said:
The revealing thing about the double-slit experiment is then not that "the presence of an observer causes a wave to collapse into a particle" but rather the other way around: what appears as a particle to our observations (a single dot on the recording screen) actually behaves like a wave while not observed. The way the dots gradually build up on the screen, with interference, indicates that what we thought of as a particle with a definite location at all times actually passed through both slits simultaneously, as only a wave can do, interfering with itself.

I think one of the issues here is that that's not the best, or even the correct, way to look at the double-slit experiment IMHO:
http://arxiv.org/ftp/quant-ph/papers/0703/0703126.pdf

Basically the wave-particle duality is a crock of the proverbial, well and truly consigned to the dustbin of history when Dirac came up with his transformation theory in 1927 - but it still, amazingly really, hangs about.

Thanks
Bill
 
  • #54
steviereal said:
So which side are you on? :-)

Despite some discussion on the issue here, it's very much a backwater these days.

Forget about it.

Thanks
Bill
 
  • #55
bhobba said:
That said, I find this subjectivism, even encoded by something that makes it amenable to analysis like the Cox axioms, a rather strange thing to have in science, which is why I eschew it in both probability and QM and have a frequentist view.

I think we have had this discussion before, so there's probably no sense in bringing it up again, but I'm going to, anyway.

To me, the frequentist view doesn't seem completely coherent. I mean, to say that a 6-sided die has a 1/6 probability of producing a "1" certainly doesn't mean that that will happen once out of every 6 trials. You can't even say that in the limit as the number of trials goes to infinity, the fraction of trials that produce a "1" approaches 1/6. There is nothing that guarantees this. The most that can be said about it is that the probability of generating a relative frequency other than 1/6 goes to zero as the number of trials goes to infinity. The latter notion of probability certainly is not frequentist (because you can't repeat the experiment of making an infinite number of trials). So it seems to me that the frequentist view of probability is not self-sufficient, in that to make sense of it, you have to also have a notion of probability or likelihood that is not frequentist.
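A quick simulation makes this point visible (the seed and trial counts are arbitrary choices of mine): the relative frequency of a "1" tends to tighten around 1/6 as the number of trials grows, but for any finite run it is almost never exactly 1/6.

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility only

def freq_of_one(n_trials):
    """Relative frequency of rolling a 1 in n_trials fair-die throws."""
    hits = sum(1 for _ in range(n_trials) if random.randint(1, 6) == 1)
    return hits / n_trials

for n in (6, 600, 600_000):
    f = freq_of_one(n)
    print(n, f, abs(f - 1 / 6))
# The deviation from 1/6 tends to shrink as n grows, but nothing
# guarantees it, and for finite n the frequency is almost never 1/6.
```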

I agree that there is something unsatisfying about basing it all on subjective notion of probability, but I don't see a plausible alternative. Karl Popper, who was very concerned about falsifiability of scientific theories, struggled with probabilistic theories because they weren't, strictly speaking, falsifiable in the sense that he wanted. He definitely was opposed to making anything subjective part of science. But his approach to having an objective notion of probability didn't make much sense to me. He introduced the term "propensity" to mean some objective fact about a possible future event. The propensity gave the most accurate probability for the event. So even though probabilities were still somewhat subjective, they were just approximations to propensities, which were objective. But I don't know what in the world a "propensity" could be, other than a probability. I don't think anything was accomplished by introducing another term.
 
  • #56
stevendaryl said:
To me, the frequentist view doesn't seem completely coherent

Let's just say both views have issues.

Your choice which appeals better.

Or you can just base it on the Kolmogorov axioms and eschew applying it - but where's the fun in that? Applying theory is always messy.

Thanks
Bill
 
  • #57
steviereal said:
Wow, I caused an avalanche of posts. Good thing.

When you guys are saying consciousness interferes with quantum phenomena do you mean that the magnetic field produced by neural activity in our brain causes it? Because there is nothing special about that. Guess what: our consciousness is so wonderful, it interferes with the workings of entire machines. One such machine is called an MRI scanner, found in most hospitals. You just have to be sufficiently close to the machine ("inside" it) and tune it right.

Or when you talk about consciousness collapsing waves, is it the narrator with the hushed voice again, saying hypnotically: "Our consciousness is a mysterious thing which we have no idea about. The way it interferes with quantum phenomena is absolutely unknown. Perhaps it produces a hitherto undiscovered field of yet undiscovered virtual particles. Or, indeed, the terms waves and particles may not apply anymore and our consciousness may be a fundamental metaforce of nature."

So which side are you on? :-)

Even if one were to agree that consciousness plays a part in collapsing the wave function, that doesn't mean that consciousness is a fundamental metaforce of nature. Let's take the less controversial subject of thermodynamics. There is no true equilibrium in the universe. However, for some purposes a physical system can be considered to be in equilibrium. For other purposes the same physical system can be considered not to be in equilibrium. So equilibrium depends on "purpose". But we don't say that "purpose" is a fundamental metaforce of nature, because we believe "purpose" is thought up by brains, subject to the same physical laws as everything else.

So the question as to which side one is on could be answered by "neither".
 
  • #58
stevendaryl said:
To me, the frequentist view doesn't seem completely coherent. I mean, to say that a 6-sided die has a 1/6 probability of producing a "1" certainly doesn't mean that that will happen once out of every 6 trials. You can't even say that in the limit as the number of trials goes to infinity, the fraction of trials that produce a "1" approaches 1/6. There is nothing that guarantees this. The most that can be said about it is that the probability of generating a relative frequency other than 1/6 goes to zero as the number of trials goes to infinity. The latter notion of probability certainly is not frequentist (because you can't repeat the experiment of making an infinite number of trials). So it seems to me that the frequentist view of probability is not self-sufficient, in that to make sense of it, you have to also have a notion of probability or likelihood that is not frequentist.

Why can't you repeat making an infinite number of trials? In the first place, there is no infinite number of trials, so any large but finite number is close enough to infinity.

Now, let's throw a die at one location. Then we pick it up and throw it again at the same location, i.e. discrete time at this location corresponds to the number of trials at this location. We can make a large but finite number of trials at this location, close enough to infinity.

Then at another location, we can do the same. At each location we can have effectively an infinite number of trials. So we can repeat an infinite number of trials. For example, we can take the ATLAS and CMS Higgs searches to each be a repetition of an "infinite" FAPP number of trials.

Basically, frequentism is experimentally true, just like classical electromagnetism and Euclidean geometry. There is nothing in Euclidean geometry or the theory of classical electromagnetism that tells us what physical objects are points or electrons. And classical electromagnetism is just as circular as frequentist probability - an electron is something that is affected by an electric field, and an electric field is something that affects an electron. But as long as we can in real life identify physical operations and objects that correspond to our theory, then that particular self-consistent interpretation of the theory is useful. The physical interpretation is not unique. For example, one day we may find other objects that obey the laws of classical electromagnetism. In the case of Euclidean geometry, there is a duality between points and lines, so a physical point may correspond to a theoretical line. So frequentism does not rule out subjectivism, but subjectivism cannot say that frequentism is any more circular than classical electromagnetism.
 
  • #59
atyy said:
Basically, frequentism is experimentally true, just like classical electromagnetism and Euclidean geometry.

I wouldn't say that. As I said, for any finite number of trials, the relative frequency for rolling a die will be different from 1/6. The 1/6 is something that it approaches in a limiting sense. But, as I said, to make sense of the notion of limit here requires (it seems to me) a notion of measure or probability that is not frequentist.

There is nothing in Euclidean geometry or the theory of classical electromagnetism that tells us what physical objects are points or electrons. And classical electromagnetism is just as circular as frequentist probability - an electron is something that is affected by an electric field, and an electric field is something that affects an electron. But as long as we can in real life identify physical operations and objects that correspond to our theory, then that particular self-consistent interpretation of the theory is useful. The physical interpretation is not unique. For example, one day we may find other objects that obey the laws of classical electromagnetism. In the case of Euclidean geometry, there is a duality between points and lines, so a physical point may correspond to a theoretical line. So frequentism does not rule out subjectivism, but subjectivism cannot say that frequentism is any more circular than classical electromagnetism.

My complaint with frequentism wasn't circularity. My complaint that I don't see how you can even state the definition of frequentism without invoking a broader notion of likelihood, measure and probability.

If you just IDENTIFY probabilities with relative frequencies, then there isn't a probability associated with a die. The first 3 rolls will give you one relative frequency, the next 10 will give you a slightly different one. The probability is some kind of abstraction, or idealization, or limiting case of the relative frequencies.
 
  • #60
stevendaryl said:
I wouldn't say that. As I said, for any finite number of trials, the relative frequency for rolling a die will be different from 1/6. The 1/6 is something that it approaches in a limiting sense. But, as I said, to make sense of the notion of limit here requires (it seems to me) a notion of measure or probability that is not frequentist.

Why not just repeat the infinite number of trials? We can have an infinite number of trials in one location, and another infinite number of trials in another location.

stevendaryl said:
My complaint with frequentism wasn't circularity. My complaint that I don't see how you can even state the definition of frequentism without invoking a broader notion of likelihood, measure and probability.

If you just IDENTIFY probabilities with relative frequencies, then there isn't a probability associated with a die. The first 3 rolls will give you one relative frequency, the next 10 will give you a slightly different one. The probability is some kind of abstraction, or idealization, or limiting case of the relative frequencies.

Yes, I don't think there is a probability associated with a die. There is a probability associated with an infinite number of identical preparations of a die.
 
  • #61
atyy said:
Why not just repeat the infinite number of trials? We can have an infinite number of trials in one location, and another infinite number of trials in another location.

Hmm. I'm a little uncomfortable with talking about a completed infinite number of experiments, but it's possible that you could make sense of such a thing. You want to assume that any infinite sequence of trials must have all relative frequencies equal to their theoretical probabilities?

One thing about making your probabilities about infinite runs of trials is this: what does it tell you about a finite trial?
 
  • #62
stevendaryl said:
One thing about making your probabilities about infinite runs of trials is this: what does it tell you about a finite trial?

It says that finite trials can be misleading, but that the more trials we make, the less likely they are to be misleading. So if we are looking for the Higgs, we report "evidence" at 3 sigma and keep on taking data. Then we report "discovery" at 5 sigma. And we are still not certain it really is the Higgs, but as we take more data and the theory is not falsified, we accept it provisionally until it is.

Truth to tell, de Finetti's subjective approach is much prettier here. The only problem is that his approach cannot be applied perfectly in real life, because it requires the prior to be non-zero over all future possibilities (as long as it is non-zero, our beliefs will converge to the truth). But we don't know all future possibilities, so we must be incoherent at some point. Anyway, prettier doesn't mean the ugly method is lacking; it's just ugly.
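For reference, the thresholds mentioned above correspond to tiny Gaussian tail probabilities; a quick standard-library computation, using the one-sided convention that particle physicists use for discovery claims:

```python
import math

# One-sided Gaussian tail probabilities behind the "3 sigma" and
# "5 sigma" conventions used for "evidence" and "discovery".

def p_value(n_sigma):
    """P(Z > n_sigma) for a standard normal Z."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(f"3 sigma ('evidence'):  p = {p_value(3):.2e}")  # about 1.3e-3
print(f"5 sigma ('discovery'): p = {p_value(5):.2e}")  # about 2.9e-7
```

The cutoffs themselves, 3 versus 5, are exactly the arbitrary subjective element being discussed; the tail probabilities they map to are objective consequences of the Gaussian model.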
 
  • #63
atyy said:
It says that finite trials can be misleading, but that the more trials we make, the less likely they are to be misleading. So if we are looking for the Higgs, we report "evidence" at 3 sigma and keep on taking data. Then we report "discovery" at 5 sigma. And we are still not certain it really is the Higgs, but as we take more data and the theory is not falsified, we accept it provisionally until it is.

To me, using a criterion such as "3 sigma" or "5 sigma" is much MORE subjective than using Bayesian reasoning. The cutoff is completely subjective.
 
  • #64
stevendaryl said:
To me, using a criterion such as "3 sigma" or "5 sigma" is much MORE subjective than using Bayesian reasoning. The cutoff is completely subjective.

They are just arbitrary subjective criteria. It's like Maxwell's equations - are they true or not? In science, you cannot prove a theory, only falsify it. So we provisionally accept Maxwell's equations because they've passed an arbitrary subjective number of tests. Similarly, we provisionally accept the Higgs boson because it's passed an arbitrary subjective number of tests. Both theories can be falsified in the future.
 
  • #65
stevendaryl said:
To me, using a criterion such as "3 sigma" or "5 sigma" is much MORE subjective than using Bayesian reasoning. The cutoff is completely subjective.

To add to my reply above. The idea is that if God decided to use frequentist probability and quantum mechanics of the Higgs boson (with appropriate UV completion) were the true theory, there would be no problem. Similarly, if God decided to use Maxwell's equations to make the universe, he'd be ok as long as he didn't make point charges. So these are objective things. The subjectivity lies in our inability to prove that God really used these theories, instead of some other theories that mimicked them.
 
  • #66
stevendaryl said:
Hmm. I'm a little uncomfortable with talking about a completed infinite number of experiments, but it's possible that you could make sense of such a thing.

Same here.

I just think of it as simply something so large that, from the law of large numbers, the probability of anything else is so small it's negligible.

But then you face the issue of what exactly is small enough to neglect. This isn't confined to probability though - in the intuitive application of calculus you think of dt as a quantity so small that you can neglect dt^2. It's wrong of course, and exactly what amount can be neglected is the same problem. But this view will take you a long way without any issues. And if you actually want to be rigorous, then how does one actually measure a limit to get, say, an actual velocity? What you actually do is measure the change in distance over a small time dt such that, for all practical purposes, dt^2 is neglected.
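The dt-versus-dt^2 point can be checked numerically; for x(t) = t^2, a forward-difference velocity estimate carries an error of exactly dt, the neglected dt^2 term divided by dt (a minimal sketch, my own example):

```python
def x(t):
    return t * t  # position as a function of time; exact velocity is 2t

t0 = 1.0  # estimate the velocity at t = 1, where it is exactly 2
for dt in (0.1, 0.01, 0.001):
    v_est = (x(t0 + dt) - x(t0)) / dt  # "change in distance over dt"
    print(dt, v_est, v_est - 2.0)
# The error is dt itself: expanding x(t0 + dt) = x(t0) + 2*t0*dt + dt^2,
# the neglected dt^2 term divided by dt leaves an error of exactly dt.
```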

But it even goes further than that. Think of good old Euclidean geometry. A point has position and no size, a line no thickness. They don't exist out there, so when you apply the theory you decide what to model with a point and a line - even though they will not conform to the definitions.

Like I said - applying theory is always messy.

Thanks
Bill
 
  • #67
Talking about probability...
I also have a problem with that. Again, it suggests that "in any moment, it is impossible to calculate an electron's position, it only has a probability of being in a region".
Isn't it that WE are unable to calculate it? Because as soon as we measure it, we interfere yet again? Sure, it may be a practical problem for all physical creatures, but it does not mean that the electron has no good reason to be wherever it is at a given moment. Sure, we may not be able to calculate it, and we may not even have all the information to do so, but in theory it is possible to calculate it, right?

We wouldn't say that a given air molecule has a probability of being at a certain position in a room at a given moment; we only say that we are too lazy to calculate it because it takes an awful amount of work and data, so let's just say we use probability for convenience.

So I suspect we COULD calculate the next position of an electron if we had all the information of it and its environment (virtual particles and all kinds of yet-undiscovered quantum froth included :-)).
 
  • #68
steviereal said:
Talking about probability...
I also have a problem with that. Again, it suggests that "in any moment, it is impossible to calculate an electron's position, it only has a probability of being in a region".
Isn't it that WE are unable to calculate it? Because as soon as we measure it, we interfere yet again?

That sounds like a "hidden variables" idea, that particles have definite properties, such as position, but we just don't have any way to measure them precisely. But I think that Bell's Theorem suggests that that is not the correct way to think about probabilities in quantum mechanics.
 
  • #69
steviereal said:
I also have a problem with that. Again, it suggests that "in any moment, it is impossible to calculate an electron's position, it only has a probability of being in a region".

That's not quite what QM says - but you are hardly Robinson Crusoe in not completely getting it.

It's silent about anything - being in a region, having a momentum, whatever - when not measured.

The only thing it has is this thing called the state, which aids in calculating the probabilities of the outcomes of observations if you were to measure it.

steviereal said:
Isn't it that WE are unable to calculate it?

No - it's built right into its basic axioms. The theory is about the outcomes of observations - that's it - that's all.

See post 137:
https://www.physicsforums.com/showthread.php?t=763139&page=8

The fundamental axiom from which all else follows is:
An observation/measurement with possible outcomes i = 1, 2, 3 ... is described by a POVM Ei such that the probability of outcome i is determined by Ei, and only by Ei; in particular it does not depend on what POVM it is part of.
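A minimal numerical illustration of that axiom may help; the state and the two POVM elements below are made-up 2x2 examples of my own, picked only so that each effect is positive and the two sum to the identity:

```python
# Outcome probabilities p_i = Tr(E_i rho) for a made-up two-outcome POVM.

def tr_prod(a, b):
    """Tr(a @ b) for 2x2 matrices given as nested lists."""
    return sum(a[i][j] * b[j][i] for i in range(2) for j in range(2))

rho = [[0.5, 0.5],
       [0.5, 0.5]]   # the state |+><+|, an equal superposition

E1 = [[0.8, 0.0],
      [0.0, 0.3]]    # an unsharp ("fuzzy") effect, not a projector
E2 = [[0.2, 0.0],
      [0.0, 0.7]]    # E2 = I - E1, so the POVM is complete

p1 = tr_prod(E1, rho)  # probability of outcome 1: depends only on E1
p2 = tr_prod(E2, rho)
print(p1, p2)   # roughly 0.55 and 0.45
print(p1 + p2)  # roughly 1.0: the outcomes exhaust the possibilities
```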

Thanks
Bill
 
  • #70
steviereal said:
Talking about probability...
I also have a problem with that. Again, it suggests that "in any moment, it is impossible to calculate an electron's position, it only has a probability of being in a region".
Isn't it that WE are unable to calculate it? Because as soon as we measure it, we interfere yet again? Sure, it may be a practical problem for all physical creatures but it does not mean that the electron has no good reason to be wherever it is in a given moment. Sure, we may not be able to calculate it and we may not even have all the information to do so, but in theory it is possible to calculate it, right?

We wouldn't say that a given air molecule has a probability of being in a certain position in a room in a given moment, we only say that we are lazy to calculate it because it takes an awful amount of work and data, so let's just say we use probability for convenience.

So I suspect we COULD calculate the next position of an electron if we had all the information of it and its environment (virtual particles and all kinds of yet-undiscovered quantum froth included :-)).

Within quantum mechanics, a particle does not have a definite position and momentum at all times. In theory the particle does not have a classical trajectory, so it is not even in theory possible to calculate the electron's definite position at all times. It is only when position is measured, that the electron can be assigned a definite position.

However, there are theories beyond quantum mechanics, in which it is possible to assign the particle a definite position at all times. If such theories are true, we could calculate the position in principle. However, we do not yet have any experimental evidence that such theories are true. Since there are many possible such theories beyond quantum mechanics, we have to wait until quantum mechanics is found to fail to match observation, before knowing which, if any, of these theories beyond quantum mechanics we should use.
 
