Interview with Astrophysicist Adam Becker
Adam Becker is an astrophysicist and science writer whose first book “What Is Real?: The Unfinished Quest for the Meaning of Quantum Physics” just hit the bookshelves!
Give us some background on how you got interested in physics and some experiences in youth/school that were formative.
I don’t remember a time when I wasn’t interested in science—some of my earliest memories are of going to the American Museum of Natural History in Manhattan and staring at the dinosaurs. Like a lot of little kids, I was obsessed with dinosaurs, but when I was six years old, a switch flipped. By that time, I’d read most of the dinosaur books in my elementary school library, and the shelf with the space books was right next to the shelf with the dinosaur books, so I tried one of the ones about space, and that was it—dinosaurs were out and space was in. (Though I still think dinosaurs are pretty cool.) My parents and my first grade teacher were very supportive, and helped me find more things to read. (My first grade teacher also set me on the path to become a science communicator by having me do a presentation in front of her class about the solar system — I talked about this on a Story Collider podcast, which you can listen to here: https://www.storycollider.org/stories/2016/12/30/adam-becker-the-solar-system)
I read absolutely everything I could get my hands on about space (a book called From Quarks to Quasars made a big impression, as did Tyson’s Universe Down To Earth). As I learned more about space, I learned more about physics too, and my interests slowly shifted to physics more generally as I got a little older. I taped the old Timothy Ferris PBS special “The Creation of the Universe” and practically wore out the VHS tape from rewatching it so many times. I watched Carl Sagan’s Cosmos, of course, and read the book too; that introduced me to the idea of a history of science, science as a process, ideas that people pieced together over time, rather than a monolithic set of facts. Similarly, Kip Thorne’s excellent book Black Holes and Time Warps brought to life some of the personalities behind the great scientific discoveries of the 20th century. By the time I was in high school, I knew I wanted to do physics—and I wanted to know more about the people who did physics.
Tell us a bit about what readers will find in your new book “What Is Real?”
What is Real? is about the unfinished quest for the meaning of quantum physics. We have this beautiful theory, quantum mechanics, and it’s astonishingly accurate. But it’s not at all clear what that theory is saying about the nature of the world around us. It must be saying something about that world—there must be something in nature that resembles the mathematics of quantum mechanics, otherwise why would the theory work so well? But there’s no clarity or consensus among physicists about what, exactly, quantum physics is saying about reality. This is very strange, especially given that quantum mechanics is over 90 years old.
Even worse than that, there’s a problem at the heart of quantum physics that doesn’t have a generally accepted answer: the measurement problem. The Schrödinger equation does a beautiful job of describing what wave functions do when nobody’s looking, but when we do look, suddenly the Schrödinger equation is suspended and we have to use the Born rule instead. Why? How does that work? And what counts as a “measurement” anyhow? “What exactly qualifies some physical systems to play the role of ‘measurer’?”, John Bell asked in 1989. “Was the wavefunction of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better qualified system…with a PhD? If the theory is to apply to anything but highly idealized laboratory operations, are we not obliged to admit that more or less ‘measurement-like’ processes are going on more or less all the time, more or less everywhere? Do we not have jumping then all the time?”
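For readers who want to see the tension written down, here is the standard textbook form of the two rules Becker contrasts (my notation, not from the interview): the Schrödinger equation governs a system while it evolves undisturbed, and the Born rule takes over at measurement.

```latex
% Between measurements: unitary evolution under the Schrodinger equation
i\hbar \,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle

% At measurement: the Born rule, where |a> is an eigenstate of the
% measured observable; the wave function then "collapses" to |a>
P(a) = \left|\langle a \,|\, \psi \rangle\right|^{2}
```

The measurement problem is precisely the question of when, and why, the second rule supersedes the first.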
The closest thing we have to a consensus about any of this is the Copenhagen interpretation. But the Copenhagen interpretation isn’t really a single coherent set of ideas about quantum mechanics—it’s a family of mutually-contradictory ideas, none of which adequately solve the measurement problem or answer the other questions at the heart of quantum theory. This is all the more strange given that reasonable alternatives to Copenhagen have existed for decades.
What Is Real? is the history behind all of this—the history of quantum foundations. How did we end up with the Copenhagen interpretation? Why were superior alternatives ignored for so long? Why is the Copenhagen interpretation less popular than it once was? My book picks up where other books on the history of quantum physics leave off—it starts with the Bohr-Einstein debates and goes all the way to the present day. What Is Real? also busts a lot of historical myths along the way, revealing the true nature of Einstein’s qualms about quantum physics (they had little to do with indeterminism and much more to do with locality), the real meaning of Bell’s theorem (realism is irrelevant to the theorem, and Bell hated Copenhagen), and more.
I understand that this may be controversial and that not everyone will agree with me. That’s fine. I’m happy to debate people on this subject—if you’d like me to do that at your university, drop me a line.
What was the inspiration and goal for writing “What Is Real?”
Back when I was first learning about physics in all those popular science books I read when I was a kid, I noticed that explanations always got annoyingly vague whenever quantum physics came up. I figured that this would make more sense when I actually learned quantum physics. Once I did learn quantum physics in college, I was surprised to find that the vagueness got worse, not better—it was maddeningly unclear what a measurement was, or what part of the world obeyed the Schrödinger equation at all. And when I asked questions about this, some professors just shrugged, while others were sarcastic and dismissive of my questions. One professor made his disdain for my questions very clear, telling me in a witheringly haughty tone of voice that “if that’s the kind of questions you’re interested in, why don’t you go to the Philosophy Department!” I knew he meant it as an insult, but I did go over to the Philosophy Department, and ended up doing a double major in philosophy and physics at Cornell. At Cornell, and later at Michigan (where I went for my PhD in physics), I found that the philosophers actually cared about these questions, and had been thinking about them for a good long while and had developed some good ideas and arguments that most of the physicists didn’t know about. I also met some physicists (like David Mermin at Cornell) who didn’t think questions about the meaning of quantum physics were silly at all.
As a physicist, it’s nice to be able to explain asymmetries. And this asymmetry I’d found was a doozy: the philosophers of physics were, in general, quite well informed about physics, but the physicists were, by and large, wholly ignorant of philosophy, despite the fact that they were making philosophical claims when they dismissed questions about quantum foundations. As a result, the physicists were generally relying upon faulty philosophy when they answered such questions. (For example, say you ask “hey, what’s the electron doing when we’re not looking?” and you get the answer “that’s unobservable in principle, and it’s meaningless to talk about unobservable things.” That answer is dependent on an outdated and erroneous philosophy of science called “logical positivism,” and the flaws in that kind of reasoning are very well known to philosophers of science.) Where did this asymmetry come from? The answer had to be the history of quantum foundations. So I started digging into this field as a side project while I was in graduate school, and what I found there was this totally astonishing story about the history of physics in the 20th century. The story wasn’t exactly hidden—it’s easy to find out what happened by piecing together various papers and books on the history of physics, and by reading what John Bell and others actually said—but it was scattered, and most physicists didn’t seem to know the story. And it was an interesting story, one that physicists, philosophers, and the scientifically-minded public might find compelling reading. Hence the book.
In “What Is Real?”, how do you balance the technicality of physics with the required accessibility for the general public?
That balancing act is hard, and it’s the central struggle of all science writing. (I’m not totally sure I pulled it off successfully, though I hope I did.) When I was writing What Is Real?, I had two basic rules that I kept in mind to try to keep things accessible.
First, people generally care more about other people than they do about ideas. That doesn’t mean people don’t care about ideas! It just means that people will care more about ideas if you can tie those ideas to a person, and use the story of that person to explain the ideas. So when I was writing the book, I generally tried to use personal stories from the history of quantum foundations to explain ideas that were new to the reader. This was also the animating principle behind the structure of the book as a whole: it’s structured as a history, so I can talk about the people in the story as a way into explanations of the ideas that we grapple with in quantum foundations. And focusing on the people also makes it easier for me to quote those people, and good quotations have a way of bringing a story to life.
Second, only explain the new concepts and jargon that are absolutely essential for telling the story. For example, despite the fact that quantum wave functions live in configuration space (when using the position basis), I don’t introduce the concept of configuration space in my book. That’s not because I think the average reader couldn’t understand the idea; I’m confident that they could, given a clear explanation. But there are already many other unfamiliar concepts that I’m throwing at readers in this book (wave functions and their collapse, entanglement, the measurement problem, decoherence, etc.) and I didn’t want to burden them with one more, especially if it wasn’t truly essential for explaining other things.
Should schools be trying to teach every kid physics or should they instead divert resources into the few that might have the potential to contribute?
I think that this question is based on a faulty premise: we don’t teach kids physics because we think they’re all going to become physicists, any more than we teach kids history because we think everyone’s going to become a historian. Instead, we teach kids history because a knowledge of history is vital to being an informed citizen of a democracy, and makes it possible to have a deeper understanding of other people, other cultures, and current events. We teach kids physics for exactly the same reasons. A basic understanding of physics gives a new and important perspective on the world, one that students will hopefully carry with them for the rest of their lives, whether or not they become physicists.
I’ll also add that any attempt to identify “the few that might have the potential to contribute” would run into insurmountable problems. There’s no good way to tell what a person’s future potential is in physics or in almost any other field. And if we tried to do it anyway, not only would we fail, but we’d most likely fail in ways that reinforce existing societal biases that favor white men, especially in the sciences. So yes, we should be teaching every kid at least some physics. We can’t know in advance where the next Einstein will be found, and finding the next Einstein isn’t the sole purpose of physics education anyhow.
What’s your opinion of still using ‘Apparent Magnitude’ in astronomy?
I don’t feel strongly about this, but then again my background is in statistical cosmology, not observational astronomy.
What is considered an inertial frame in astronomy/cosmology and can you point one out?
I’m not sure what this question is driving at. The rest frame of the CMB is an inertial frame. And the rest frame of the sun is pretty close to an inertial frame; the acceleration it feels due to its orbit around the center of the Milky Way is very small. But the idea of “inertial frame” is an idealization; even if it turns out that it’s hard to define the rest frame of the CMB, it doesn’t mean there’s anything wrong with talking about inertial frames.
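To put a rough number on “very small” (my own back-of-the-envelope figures, using standard values of roughly 220 km/s for the Sun’s orbital speed and roughly 8 kpc for its distance from the galactic center; neither number appears in the interview):

```python
# Rough estimate of the Sun's centripetal acceleration around the Milky Way,
# to check the claim that its rest frame is "pretty close" to inertial.
v = 220e3            # orbital speed, m/s (standard approximate value)
r = 8.0 * 3.086e19   # galactocentric distance, m (~8 kpc)
a = v**2 / r         # centripetal acceleration, m/s^2
print(f"a = {a:.1e} m/s^2")  # ~2e-10 m/s^2, about 2e-11 g: tiny indeed
```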
Is there anything you found particularly interesting about the evolution of the structure of the universe while working on your thesis? What about your thesis’ topic did you find particularly challenging?
One of the things I really liked best about my thesis was that I was trying to understand the inflationary epoch, a period so far back in the history of the universe that there’s no material of any kind left over from it—no atoms, no quarks or electrons, not even any photons. All that we have left from that time in history are the patterns in the distribution of stuff in the universe, and so our only hope of better understanding that period is to tease out statistical features of the cosmic microwave background radiation and large-scale structure. That kind of statistical work is where cosmological theory and observation and simulations all intersect, and that’s a great place to be when you’re doing science.
Do you have an explanation for the Cosmic Axis of Evil and the Spin of Galaxies?
No. It’s unclear what’s going on with the “axis of evil.” To the best of my knowledge, it’s an open problem.
As a science historian, can you generalize your insights about the lines of inquiry that have enjoyed traditional success in approaching big questions tackled by astrophysicists, and how these compare or may apply to big questions surrounding dark matter and dark energy?
From a historical perspective, the modern idea of dark matter is in pretty good company. There are other kinds of “dark matter” that have been suggested in the past to explain different phenomena, and they have often met with success. When astronomers in the early 1800s noticed an anomaly in the motion of Uranus, they invoked “dark matter” in the form of another as-yet-unseen planet, and they were right—that’s how Neptune was discovered. And when beta decay seemed to violate the conservation of energy, Wolfgang Pauli suggested “dark matter” in the form of neutrinos, which weren’t seen for another quarter-century. Of course, this kind of strategy doesn’t always work. In the mid-19th century, “dark matter,” in the form of an unseen planet or asteroids, was suggested as an explanation for the extra precession of the perihelion of the orbit of Mercury. That turned out to be false—that extra precession is a result of general relativity, as Einstein found in 1915. But dark matter is certainly a reasonable idea from a historical perspective. And from a scientific perspective, I don’t really think we can reasonably doubt that dark matter is there. The evidence is truly overwhelming.
Dark energy is a little bit weirder, historically speaking—it’s hard to know what a good analogy is. Certainly there are many historical examples of a postulated thing permeating all of space, some of which we still accept (electromagnetic field) and some of which we don’t (luminiferous aether). As for the idea of dark energy itself, it’s got a long history, longer than dark matter. Einstein famously considered a cosmological constant and then dismissed it once Hubble discovered the distance-redshift relation, implying that the universe was expanding. And Einstein’s usage of a cosmological constant to keep the universe static wouldn’t have worked anyhow—it was an unstable equilibrium. So Einstein’s idea was abandoned for most of the 20th century. But although the first good evidence for dark energy showed up on the scene in the very late 1990s, it had been anticipated well before that. If you look in cosmology textbooks from the early 1980s, they’re already talking about the possibility of a cosmological constant quite seriously. And again, the cosmological evidence for dark energy is very good.
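The instability of Einstein’s static universe that Becker alludes to can be sketched in a few lines; this is a standard textbook argument, not something spelled out in the interview:

```latex
% Friedmann acceleration equation with pressureless matter and a
% cosmological constant:
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,\rho + \frac{\Lambda}{3}
% A static universe (\ddot{a} = 0) requires \Lambda = 4\pi G \rho.
% But \rho \propto a^{-3}, while \Lambda is fixed: a slight expansion
% lowers \rho, making \ddot{a} > 0 (runaway expansion); a slight
% contraction raises \rho, making \ddot{a} < 0 (runaway collapse).
% Hence the equilibrium is unstable.
```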
Do you think a consistent Bohmian formulation of QFT is possible?
That’s an open research question, and it’s not my area of expertise. I have heard that part of the difficulty people have encountered with developing a consistent Bohmian formulation of QFT comes from the fact that our best QFTs are questionably consistent, due to weirdness like Haag’s theorem and renormalization. But take that with a grain of salt—it’s really not my area.
Concerning the different interpretations of quantum mechanics... could one determine if one interpretation is more fundamental or more encompassing than another? An experimental test? A successful quantum theory of gravity or of unified fields?
Fundamentally, I don’t think we’re going to be able to determine which interpretation is closest to the mark until we have a theory that goes beyond quantum mechanics, like a theory of quantum gravity. But there’s a catch-22: I don’t think we’re going to be able to come up with such a theory if we’re stuck thinking about quantum mechanics in a fundamentally misguided way. So, since there’s no way to know which interpretation will lead to the insights that will yield a theory of quantum gravity, I think it’s important for researchers to be familiar with several different interpretations, even if they have a strong feeling about which interpretation is right.
Do you have a view on the ‘reality’ of the wave function?
I think that there must be something in nature that approximately resembles the wave function, or that directly gives rise to something like a wave function. That’s an intentionally broad statement. It could be that there really is a big wave function of the universe out there, constantly splitting in the way that the many-worlds interpretation posits. It could be that there’s something out there like a wave function, but it’s not the whole story, as pilot-wave theory (aka de Broglie-Bohm) posits. Or there’s any number of other possibilities: there’s something out there like the wave function but it doesn’t quite behave the way quantum mechanics dictates (spontaneous collapse); there’s something real out there that isn’t much like the wave function, but it behaves in such a way that our information about it obeys the Schrödinger equation, and thus that information can be modeled with a wave function (information-theoretic interpretations); etc. But in all of these cases, there’s a real thing, out in the world, that guarantees that the Schrödinger equation will hold and that the Born rule applies in the usual way.
Why do I think this? Because quantum physics works phenomenally well. It explains a huge diversity of phenomena to a breathtaking degree of accuracy. How could quantum physics possibly work so well if there weren’t something out in the world that it was accurately describing? Why would the theory be so accurate if it bore no resemblance to nature at all? Remember, this is a theory that was initially developed to explain atomic spectra—that’s all. Now we use it to understand why the sun shines and how to build lasers. A theory that can do that has got to be latching on to some true fact about nature, even if it’s just in an indirect or approximate way.
Say we have a piece of matter with some temperature T, regarding it for now as a classical system. If we view it as a quantum system, does it still have a temperature?
Sure. The statistical mechanics definition of temperature still applies perfectly well to composite quantum systems.
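Concretely, here is the standard definition Becker is invoking (textbook statistical mechanics, not elaborated in the interview): for a quantum system in thermal equilibrium, temperature enters through the Gibbs state, and the entropic definition carries over unchanged from the classical case.

```latex
% Thermal (Gibbs) state of a quantum system with Hamiltonian H at temperature T:
\hat{\rho} = \frac{e^{-\hat{H}/k_B T}}{Z}, \qquad Z = \operatorname{Tr}\, e^{-\hat{H}/k_B T}
% Temperature defined from entropy, exactly as in classical statistical mechanics:
\frac{1}{T} = \frac{\partial S}{\partial E}
```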
Thanks so much for your time, Adam! Now, readers, go out and buy his book!
Read the next interview with physicist Niels Tuning
Quote: "Should we expect that the new ontology can be guessed from within the old theories? Hardy argues a bit against this in noting that it was impossible to discover spacetime curvature as the solution to the conceptual problems of Newtonian gravity (instantaneous action at a distance). Contemporary ontologies for how the action could be transmitted didn't point in the correct direction at all. I recently started a thread on his approach."

The ontology I have for GR and QM (see our book "Beyond the Dynamical Universe") was obtained by resolving mysteries in those theories. So, as you suggest, it doesn't lead to new theories of physics, only new physics within existing theories.
Quote: "I started working on an interpretation of QM so I could have an ontology for all of physics. In other words, I want an ontology that is just as good for GR as it is for QM. I knew that such an ontology would change the way we view reality and consequently lead to new physics."

Should we expect that the new ontology can be guessed from within the old theories? Hardy argues a bit against this in noting that it was impossible to discover spacetime curvature as the solution to the conceptual problems of Newtonian gravity (instantaneous action at a distance). Contemporary ontologies for how the action could be transmitted didn't point in the correct direction at all. I recently started a thread on his approach.
This was interesting and thought-provoking. I hope the effect lasts long enough for me to think about it more clearly.
Quote: "That's because all interpretations make the same predictions for all experimental results; they have to, since they all use the same (or equivalent) mathematical machinery. To make progress, someone needs to come up with a new theory: different mathematical machinery that makes the same predictions for experiments that have already been done, but makes different ones for some experiment that hasn't yet been done. If the new theory also rules out some subset of interpretations of current QM, then running the new experiment might help, if it confirms the new theory (and therefore contradicts current QM)."

As a physicist involved in this program, I agree completely. I started working on an interpretation of QM so I could have an ontology for all of physics. In other words, I want an ontology that is just as good for GR as it is for QM. I knew that such an ontology would change the way we view reality and consequently lead to new physics, e.g., as when we changed from geocentrism to heliocentrism. And that's what excited me about FoP. But I found many participants didn't even care whether their interpretation was compatible with physics other than QM. I can't tell you how many talks I've given with Silberstein (a philosopher of physics) where he told me we had to restrict our talk to applications in QM because that's all the audience was interested in. Given that restriction, I fail to see the advantage of any interpretation over any other. Indeed, my adynamical interpretation of QM is unnecessarily deviant from intuition if all it's good for is interpreting QM. The reason I'm so pleased with it is precisely because I can use it to understand all of physics, even resolving controversies in classical physics, e.g., the paradoxes of CTCs, dark matter, dark energy, the horizon problem, etc. Sorry to prattle on, this is a pet peeve of mine :-)
Quote: "We get new experiments, some of which are even motivated by a particular interpretation, but then everyone brings out their favorite interpretation and explains the experimental result to their own satisfaction."

That's because all interpretations make the same predictions for all experimental results; they have to, since they all use the same (or equivalent) mathematical machinery. To make progress, someone needs to come up with a new theory: different mathematical machinery that makes the same predictions for experiments that have already been done, but makes different ones for some experiment that hasn't yet been done. If the new theory also rules out some subset of interpretations of current QM, then running the new experiment might help, if it confirms the new theory (and therefore contradicts current QM).
Here's another Bub story from our dinner on Wed related to the book. Adam is bemoaning the fact that so many physicists don't bother to articulate their ontological assumptions concerning QM, indeed some even deny having them altogether! After arguing against this attitude, Adam says physics students should at least be shown various interpretative options for QM.
At dinner, I told Jeff I hadn't seen any real progress in the debate over QM interpretations since I began work in the field in 1994. We get new experiments, some of which are even motivated by a particular interpretation, but then everyone brings out their favorite interpretation and explains the experimental result to their own satisfaction. The people I first met in foundations of physics (FoP) in 1994 are still today arguing for what are basically their same interpretations from 1994. Jeff said he sees FoP splitting along two lines — the old line of hackneyed interpretative debate and a new line exploring the deeper mathematical underpinnings of quantum theory, e.g., as with quantum information theory. He thinks the future of FoP lies in this new line.
Quote: "That is a cut, because the 'macroscopic' equipment is not included in the quantum state."

Why is this a cut? If you study a particular system, you can ignore the rest of the universe, or use an approximate description of some other systems if that is good enough. It would be a cut only if you say that all of the rest cannot in principle be described by quantum mechanics and you need, at some point, a classical system.
Quote: "So did Jeff Bub buy Adam's book?"

He was interviewed for the book, so of course he received a complimentary copy :-)
So did Jeff Bub buy Adam's book?
Silberstein and I gave a talk at Univ of Maryland on Wed. Afterwards, we had dinner with Jeff Bub and he had some interesting responses to Adam’s book. He was not happy that the book made it seem like he wasn’t aware of Bohm’s interpretation when he was Bohm’s grad student. In fact, Bohm wasn’t taking any more students when Jeff was picking an advisor, but Bohm took Jeff precisely because Jeff had done an undergrad thesis on Bohm’s interpretation. More stories from Bub to follow :-)
Quote: "I don't use a cut. I use real-world macroscopic equipment to prepare states and perform measurements (well, I let my experimental colleagues do that, because I'd for sure mess up the experiment, being a theorist ;-))."

That is a cut, because the "macroscopic" equipment is not included in the quantum state.
Quote: "I don't claim to solve any 'measurement problem'. I deny that one exists to begin with, for the simple reason that we are able to use QT to successfully predict the outcomes of measurements (in terms of probability and statistics)."

That alone would be OK (not my position, but certainly one that is coherent and attractive), but you often add that the macroscopic equipment can be included in the quantum state by suitable coarse graining (without hidden variables or MWI) – that would not be OK.
Quote: "It is erroneous because it is self-contradictory. vanhees71 uses a cut, yet he says there is no cut. Also, he claims to have a solution to the measurement problem that involves neither hidden variables nor MWI, only coarse graining. This is basically a variant of 'decoherence solves the measurement problem', which is an error. And yes, it is an argument from authority – but there is a reason that the standard texts like Landau and Lifshitz or Weinberg use a Copenhagen-like interpretation."

I don't use a cut. I use real-world macroscopic equipment to prepare states and perform measurements (well, I let my experimental colleagues do that, because I'd for sure mess up the experiment, being a theorist ;-)).

I don't claim to solve any "measurement problem". I deny that one exists to begin with, for the simple reason that we are able to use QT to successfully predict the outcomes of measurements (in terms of probability and statistics).

Landau and Lifshitz do indeed use a Copenhagen-like flavor, but they hardly discuss interpretational issues at all. Weinberg doesn't take any side but says that the interpretational problem is undecided, although I also fail to see where this apparent problem might be, for the reason just given. Weinberg's chapter on interpretation is, however, among the best I've read on the issue (a judgment that holds for the entire content of this and all his other textbooks). Nevertheless, I'm not sharing his opinion on the final dictum on interpretation.
Quote: "Well, if you want a causal account of the experiment shown in the Sci Am article, then either the electron hitting the screen causes the agent to insert or not insert the lens (forward causality) or the agent's decision to insert or not insert the lens causes the electron to hit the screen in the correct place (retrocausality). One might deny that the Sci Am experimental prediction will be seen because a human is making the decision (unlike the Kim et al experiment, where beam splitters 'make the decision'). QM doesn't make different predictions based on conscious versus nonconscious intervention, so if you believe that, you would be claiming QM (and QFT by extension) is wrong. Hardy proposed an experiment to explore this possibility: https://arxiv.org/pdf/1705.04620.pdf"

Of course, I don't claim that. My point was simply that all experiments done so far with entangled photons and other systems, testing Bell's inequality against the violation predicted by QT, are fully understood within relativistic, local, microcausal QFT, and thus by construction exclude both spooky action at a distance and retrocausality. All there is is the state preparation at the very beginning, which implies the correlations described by entanglement, and all experiments agree with the predictions of QT (particularly relativistic QFT). I don't expect this conclusion to change when humans make the switching decision, but of course one has to do the experiment to be really sure. Physics is indeed an empirical science!
Quote: "(basically it is a variant of Ballentine's erroneous interpretation)."

Quote: "It is erroneous because it is self-contradictory. vanhees71 uses a cut, yet he says there is no cut."

I wasn't aware that Ballentine and vanhees71 are the same person!
Quote: "How can an interpretation be erroneous?! Maybe you mean that it is not complete in some sense because it doesn't address some questions?"

It is erroneous because it is self-contradictory. vanhees71 uses a cut, yet he says there is no cut. Also, he claims to have a solution to the measurement problem that involves neither hidden variables nor MWI, only coarse graining. This is basically a variant of "decoherence solves the measurement problem", which is an error. And yes, it is an argument from authority – but there is a reason that the standard texts like Landau and Lifshitz or Weinberg use a Copenhagen-like interpretation.
Quote: "It's your claim that the Minimal Interpretation is erroneous. By repeating this claim, it doesn't become true! The minimal interpretation is all that you need to confront the theory with experiments (at least those realized up to today), and the theory stands all tests. Anything going beyond the minimal interpretation enters the realm of personal world views and thus is not testable by observation, and thus is not part of physics but maybe religion. Not that religious beliefs are unimportant for individuals, but for sure they are not in the realm of science and the part of human experience described by it."

You'll have to read the book. Adam presents many arguments against physics as a whole adopting such an attitude. Once you've read his arguments, get back to us as to why you think they're wrong.
Quote: "… It is simply not valid quantum mechanics (basically it is a variant of Ballentine's erroneous interpretation)."

How can an interpretation be erroneous?! Maybe you mean that it is not complete in some sense because it doesn't address some questions?
Quote: "Well, I've not read the book (I've ordered the paperback edition, arriving end of May), and it might be unfair to the author to discuss what's claimed to be in that book in a forum, but if he claims that standard QT implies retrocausality, he's utterly wrong. By the very construction of local microcausal relativistic QFT there cannot be any retrocausality, and so far nothing ever observed hints in this direction!"

Well, if you want a causal account of the experiment shown in the Sci Am article, then either the electron hitting the screen causes the agent to insert or not insert the lens (forward causality) or the agent's decision to insert or not insert the lens causes the electron to hit the screen in the correct place (retrocausality). One might deny that the Sci Am experimental prediction will be seen because a human is making the decision (unlike the Kim et al experiment, where beam splitters "make the decision"). QM doesn't make different predictions based on conscious versus nonconscious intervention, so if you believe that, you would be claiming QM (and QFT by extension) is wrong. Hardy proposed an experiment to explore this possibility: https://arxiv.org/pdf/1705.04620.pdf
Quote: "It's your claim that the Minimal Interpretation is erroneous. By repeating this claim, it doesn't become true! The minimal interpretation is all that you need to confront the theory with experiments (at least those realized up to today), and the theory stands all tests. Anything going beyond the minimal interpretation enters the realm of personal world views and thus is not testable by observation, and thus is not part of physics but maybe religion. Not that religious beliefs are unimportant for individuals, but for sure they are not in the realm of science and the part of human experience described by it."

I do not agree, and neither do standard texts like Landau & Lifshitz or Weinberg.
Quote: "@RUTA, as you can see from vanhees71's quote, he is indeed not espousing any legitimate variant of Copenhagen, since it has no cut. It is simply not valid quantum mechanics (basically it is a variant of Ballentine's erroneous interpretation)."

It's your claim that the Minimal Interpretation is erroneous. By repeating this claim, it doesn't become true! The minimal interpretation is all that you need to confront the theory with experiments (at least those realized up to today), and the theory stands all tests. Anything going beyond the minimal interpretation enters the realm of personal world views and thus is not testable by observation, and thus is not part of physics but maybe religion. Not that religious beliefs are unimportant for individuals, but for sure they are not in the realm of science and the part of human experience described by it.
Quote: "In Chap 12 Adam does mention retrocausality in passing. He talks about it dynamically, i.e., future outcomes sending information into the past, which is in the early spirit of some adherents, but Aharonov, Price, Wharton, and Cramer have all dismissed this pseudo-time-evolved narrative story at some point (to me personally or in print). As I said earlier, given access to the block universe for explanatory purposes, there's no reason to introduce pseudo-time-evolved explanation; it's superfluous."

Well, I've not read the book (I've ordered the paperback edition, arriving end of May), and it might be unfair to the author to discuss what's claimed to be in that book in a forum, but if he claims that standard QT implies retrocausality, he's utterly wrong. By the very construction of local microcausal relativistic QFT there cannot be any retrocausality, and so far nothing ever observed hints in this direction!
I finished the appendix where Adam showed how dBB, MWI, and GRW explain Wheeler's delayed-choice experiment done with an interferometer. The delayed choice was simply to insert the second beam splitter (BS) or not after the photon has passed through the first BS. The explanation is trivially dynamical for these interpretations in this experiment. In order to challenge these dynamical interpretations, you need an experiment like the one shown in Sci Am (below). In that experiment you can choose to insert a lens into the path of photons scattered off electrons passing through a twin slit, thereby destroying electron which-way info. If you choose not to insert the lens, the scattered photons carry which-way info on the electrons to the photon detector. The electrons make an interference pattern when the lens is inserted and a particle pattern when the lens is not inserted. The lens can be inserted after the electrons have already hit their detector (as in the Kim experiment, below). In the Kim experiment, one could easily say the pilot wave takes info from the first photon (the "electron" counterpart) to the second photon (the "scattered photon" counterpart) to make sure it goes to the correct detector. But if a human agent is deciding whether or not to place a lens in front of the scattered photon, as in the Sci Am experiment, then dBB would have to say either that the pilot wave is influencing the decisions of the human agent, or that the pilot wave is retrocausal from the lens to the electron.
[Attachments: figures for the Sci Am and Kim et al. experiments referenced above.]
Quote: "@RUTA, as you can see from vanhees71's quote, he is indeed not espousing any legitimate variant of Copenhagen, since it has no cut. It is simply not valid quantum mechanics (basically it is a variant of Ballentine's erroneous interpretation)."

Wow, yes, I totally misread his post #108. He's not claiming there is no Copenhagen interpretation, he's simply claiming HIS interpretation isn't Copenhagen and then explaining why. I'm too tired to read critically today :-) I'll delete my last post. Thnx, atyy.
Quote: "You are espousing a variant of the Copenhagen interpretation here. Did you read the book?"

This is no Copenhagen; this is the Minimal Statistical Interpretation, i.e., in my understanding there's no quantum-classical cut (classical theory is a valid approximation to QT due to the sufficiency of coarse-grained observables for macroscopic properties, and decoherence), which seems to be the main point of all flavors of the Copenhagen interpretation. Likewise there's no collapse due to measurement, which is part of some flavors of the Copenhagen interpretation.

Quote: "I've not read the book yet. I've to get it first and then (more difficult) also find the time!"

@RUTA, as you can see from vanhees71's quote, he is indeed not espousing any legitimate variant of Copenhagen, since it has no cut. It is simply not valid quantum mechanics (basically it is a variant of Ballentine's erroneous interpretation).
Quote: "This is no Copenhagen"

Adam references and quotes many physicists making explicit reference to the Copenhagen interpretation. Are you claiming he has fabricated these references and quotes?
In Chap 12 Adam does mention retrocausality in passing. He talks about it dynamically, i.e., future outcomes sending information into the past, which is in the early spirit of some adherents, but Aharonov, Price, Wharton, and Cramer have all dismissed this pseudo-time-evolved narrative story at some point (to me personally or in print). As I said earlier, given access to the block universe for explanatory purposes, there's no reason to introduce pseudo-time-evolved explanation; it's superfluous.
This is no Copenhagen; this is the Minimal Statistical Interpretation, i.e., in my understanding there's no quantum-classical cut (classical theory is a valid approximation to QT due to the sufficiency of coarse-grained observables for macroscopic properties, and decoherence), which seems to be the main point of all flavors of the Copenhagen interpretation. Likewise there's no collapse due to measurement, which is part of some flavors of the Copenhagen interpretation.
I've not read the book yet. I've to get it first and then (more difficult) also find the time!
Quote: "Well, as an experimentalist you should be much less worried about what reality is than the theoreticians, because it's you who defines what reality is! You set up your devices to produce the entangled bi-photon states and the various optical devices and detectors to observe them. What's real is what your detectors show. The theory (in this case QED) simplifies the devices to an effective description which is more or less the same as in classical electrodynamics (quantum optics of optical devices is mostly the semiclassical approximation, i.e., matter treated phenomenologically in terms of response functions/susceptibilities), except for the detection process of photons itself, which is usually some kind of photoelectric effect (and can almost always be treated semiclassically, i.e., assuming classical em fields but quantized electrons). All this is not reality but an (effective) quantum-field-theoretical description of the statistical outcome of your detector clicks, and what's real are the clicks, not the theorists' field operators and state operators!"

You are espousing a variant of the Copenhagen interpretation here. Did you read the book?
Quote: "QT gives a statistical description of entanglement. But the point of Bell's theorem is that there is a testable difference between 'long-ranged correlations' realized by local physical mechanisms and non-local physical mechanisms when you analyze the data on an event-by-event basis. In that sense there is no difference between QT and QED. QED gives its predictions on the statistical level and gives no handle for event-by-event analysis. I suppose this is not so obvious because QFT speaks about 'fields' just like the electromagnetic field that is considered physical. But the 'field' of QFT is not physical. It's statistical."

QT = quantum theory, of which QFT is one realization, describing the electromagnetic interaction in terms of charged particles and the em field (both quantized quantum fields at the first-principles level of description).

According to QT (and thus also, of course, QFT) there is nothing but probabilities. If an observable is not determined through preparation, then its value is indeterminate, and you can only know probabilities for the outcomes of measurements of this observable. To test the theory you have to perform experiments on a sufficiently large ensemble to gain enough statistics for the desired level of statistical significance.
Quote: "The formalism for the experimental outcomes is in the paper. That's not the issue. The question is: what is the nature of reality such that those correlations obtain? Simply saying the formalism maps onto the outcomes in no way tells me WHY those correlations obtain, only that you found a formalism that maps onto them. Again, go to Adam's roulette wheel analogy: the formalism of the paper would map equally well onto those outcomes. How can that be?"

Well, as an experimentalist you should be much less worried about what reality is than the theoreticians, because it's you who defines what reality is! You set up your devices to produce the entangled bi-photon states and the various optical devices and detectors to observe them. What's real is what your detectors show. The theory (in this case QED) simplifies the devices to an effective description which is more or less the same as in classical electrodynamics (quantum optics of optical devices is mostly the semiclassical approximation, i.e., matter treated phenomenologically in terms of response functions/susceptibilities), except for the detection process of photons itself, which is usually some kind of photoelectric effect (and can almost always be treated semiclassically, i.e., assuming classical em fields but quantized electrons). All this is not reality but an (effective) quantum-field-theoretical description of the statistical outcome of your detector clicks, and what's real are the clicks, not the theorists' field operators and state operators!
Quote: "QED, as any QT, allows one to describe entanglement without violating locality by construction, and the linked-cluster theorem also holds true. It is just the careless use of the word 'non-locality' instead of 'long-ranged correlations' that you find very often in the literature."

QT gives a statistical description of entanglement. But the point of Bell's theorem is that there is a testable difference between "long-ranged correlations" realized by local physical mechanisms and non-local physical mechanisms when you analyze the data on an event-by-event basis. In that sense there is no difference between QT and QED. QED gives its predictions on the statistical level and gives no handle for event-by-event analysis. I suppose this is not so obvious because QFT speaks about "fields" just like the electromagnetic field that is considered physical. But the "field" of QFT is not physical. It's statistical.
Quote: "I don't understand what you think is 'not true' in my previous statement. Your nice undergrad-lab experiment described in your paper does not prove quantum nonlocality, or do you claim that its outcome cannot be described by QED? What your experiment indeed demonstrates (as far as I can see from glancing over the paper) are the long-ranged correlations between entangled parts of a single quantum system, which does not contradict locality of the interactions. QED, as any QT, allows one to describe entanglement without violating locality by construction, and the linked-cluster theorem also holds true. It is just the careless use of the word 'non-locality' instead of 'long-ranged correlations' that you find very often in the literature, and that is bound to confuse your students rather than help them understand that the beautiful Bell-test experiments with photons done in the last 2-3 decades demonstrate that entanglement really means what QT predicts, i.e., the incompatibility of the probabilistic predictions of QT about ensembles with any classical-statistical local deterministic hidden-variable model a la Bell."

The formalism for the experimental outcomes is in the paper. That's not the issue. The question is: what is the nature of reality such that those correlations obtain? Simply saying the formalism maps onto the outcomes in no way tells me WHY those correlations obtain, only that you found a formalism that maps onto them. Again, go to Adam's roulette wheel analogy: the formalism of the paper would map equally well onto those outcomes. How can that be?
Quote: "That's not true; the formalism maps beautifully onto the experimental set-ups and data. There are many analyses, but one for undergrads that I use in my QM course is attached. There's nothing in the formalism that resolves this issue."

I don't understand what you think is "not true" in my previous statement. Your nice undergrad-lab experiment described in your paper does not prove quantum nonlocality, or do you claim that its outcome cannot be described by QED? What your experiment indeed demonstrates (as far as I can see from glancing over the paper) are the long-ranged correlations between entangled parts of a single quantum system, which does not contradict locality of the interactions. QED, as any QT, allows one to describe entanglement without violating locality by construction, and the linked-cluster theorem also holds true. It is just the careless use of the word "non-locality" instead of "long-ranged correlations" that you find very often in the literature, and that is bound to confuse your students rather than help them understand that the beautiful Bell-test experiments with photons done in the last 2-3 decades demonstrate that entanglement really means what QT predicts, i.e., the incompatibility of the probabilistic predictions of QT about ensembles with any classical-statistical local deterministic hidden-variable model a la Bell.
I just finished chapter 11, where Adam defends the various many-worlds views (string theory's landscapes, inflation's multiverse, and Everett's Many-Worlds Interpretation, MWI). He admits MWI has a problem with the meaning of probability, but dismisses it as something to be solved in the future. I'm less optimistic, since the idea has been in vogue (in FoP, anyway) for many years and yet the problem persists. For example, probability can't simply be a matter of branches splitting under a frequentist interpretation of probability, as Adam illustrates with a Schrödinger cat that has a 25% dead / 75% alive probability when there are only two possible outcomes. Another problem with a frequentist-splitting interpretation is that many branches would not in fact obtain empirical evidence for the correct splitting probabilities (as seen from a global perspective "outside" all the branches), as Adrian Kent pointed out years ago. So how do we know we're in a branch where our experiments actually reflect the correct probabilities?

Finally, Adam defends these many-worlds views against accusations that they're unscientific because they're unverifiable. He properly points out that, in Popper's sense, no scientific theory is verifiable, and that falsification isn't straightforward either: deviations in Uranus's predicted orbit led to the discovery of Neptune, not the overthrow of Newtonian gravity. Later, deviations in the orbit of Mercury did lead to Newtonian gravity being "falsified," i.e., replaced by a more accurate theory (GR). Here I think Adam's defense is strained at best. There is a huge difference between Newtonian gravity not being falsified by a single apparently discordant measurement (Uranus's orbit) and the fact that EVERY POSSIBLE measurement outcome is compatible with a theory. To claim the former case is equivalent to the latter is an egregious misrepresentation of the objection of unfalsifiability. To paraphrase one opponent of such views, "Does a theory that predicts everything explain anything?" On to chapter 12!
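To make the 25/75 point above concrete, here is a toy sketch (my own illustration; the numbers match RUTA's cat example, but the code is not from the book or the thread). Naive branch counting gives each of the two branches equal weight, so it cannot reproduce unequal Born-rule probabilities:

```python
# Toy illustration of the MWI probability problem described above.
# State: |psi> = sqrt(0.75)|alive> + sqrt(0.25)|dead>  (two branches).
amplitudes = {"alive": 0.75 ** 0.5, "dead": 0.25 ** 0.5}

# Born rule: P(outcome) = |amplitude|^2  ->  ~0.75 alive, ~0.25 dead.
born = {k: abs(a) ** 2 for k, a in amplitudes.items()}

# Naive frequentist branch counting: one split, two branches, each counted
# once -> 0.5/0.5 regardless of the amplitudes. This is the mismatch.
branch_count = {k: 1 / len(amplitudes) for k in amplitudes}

print("Born rule:      ", born)          # ~{'alive': 0.75, 'dead': 0.25}
print("Branch counting:", branch_count)  # {'alive': 0.5, 'dead': 0.5}
```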
Quote: "some of the history has surprised me"

I was surprised to find out how Wheeler treated Everett.
I'm reading Part III and some of the history has surprised me. I got into the game (1994) after the situation in foundations of physics had started to improve, but Aharonov warned me at the time there were perils associated with working in foundations. The hostility of the physics community towards physicists working in foundations was appallingly anti-intellectual. Albert had publications in Phys Rev with Aharonov, yet his university would not let him do this work for his PhD thesis. He was told flat out that if he didn't do the problem in QFT they had given him, then he would be dismissed from their program. Work by Bell and even Clauser's experimental work were deemed "junk science." Another thing I didn't know was that Holt had repeated Clauser's experiment and found the Bell inequality was not violated. At that time, there were just the two contradictory results, so it wasn't clear whether QM was right or not. The guys doing these experiments had to beg for lab space and had to borrow or scrounge for equipment. It took Aspect six years to build, conduct, and publish his first experiment. When he asked Bell about doing the experiment, Bell refused to talk to him until Aspect assured Bell that he had tenure. Zeh has similar horror stories. I already respected the pioneers in this field for their discoveries; now I respect them as well for their perseverance in the face of such adversity.
I cannot tell whether this post is tongue-in-cheek or serious.
Quote: "I didn't know that Niels Bohr developed any wise dogmas. Dogmas, yes."

But don't forget, Bohr had Einstein to discuss things with; nowadays you only have guys whose Most Sacred Hope is just to attain perfect non-existence in the end; so, to you, Bohr's ideas are of no use, of course.
Quote: "The thing is, everyone naturally has his own wishful thinking! To avoid offending someone's sacred hopes (like materialism or many worlds or Divine Choice), it's necessary to keep certain wise dogmas developed by Niels Bohr and company."

I cannot tell whether this post is tongue-in-cheek or serious.
I didn't know that Niels Bohr developed any wise dogmas. Dogmas, yes.
The thing is, everyone naturally has his own wishful thinking! To avoid offending someone's sacred wishes (like materialism or many worlds), it's necessary to keep certain wise dogmas developed by Niels Bohr.
Quote: "He's advocating for dBB and MWI, not because he necessarily believes those are 'right,' but simply because they offer counterexamples to Copenhagen. I didn't realize Copenhagen was so dogmatic; I thought it was merely instrumentalist, which I have always considered 'agnostic.'"

It depends on whose Copenhagen. I go to both churches without any sense of conflict.
Just finished Part II. Chapter 8 is what the Copenhagenists, instrumentalists, operationalists, and positivists among you should read.
He's advocating for dBB and MWI, not because he necessarily believes those are "right," but simply because they offer counterexamples to Copenhagen. I didn't realize Copenhagen was so dogmatic; I thought it was merely instrumentalist, which I have always considered "agnostic." Adam's take on instrumentalism is a la positivism and operationalism, both of which strike me as more dogmatic. Physicists who are just not interested in analyzing various interpretations aren't impeding progress, since their lack of interest means they wouldn't likely contribute anything meaningful anyway. It's those who naively believe they don't even possess an interpretation themselves, and who actively dissuade younger physicists from asking these questions, who impede progress. Part II presents an interesting history explaining how the attitudes of Copenhagen, instrumentalism, positivism, and operationalism became so popular among physicists when philosophers have long since dismissed them on intellectual grounds.
Quote: "The experiment instantiates a QM violation of a Bell inequality. There is nothing more needed to experimentally confirm the mystery a la Adam's roulette wheels or Mermin's device, unless you believe there is something wrong with QM (Adam's third option). Is that what you're implying?"

It's the theoretical gloss in the paper that I find lacking. I'm confident the experiment as given, using off-the-shelf components, violates Bell inequalities, and I'm reading you to be saying that your students have done the experiment dozens of times over the years? I asked Gregor Weihs for his raw data at one time and analyzed it in a way that showed him a new feature, though it's not earth-shattering (arXiv:1207.5775, also on my very irregularly maintained blog, https://quantumclassical.blogspot.com/2010/03/modulation-of-random-signal.html — astonishing, for me, to see that that is 8 years ago).
So I don't doubt the weirdness.
I'm by no means saying that others can't tackle classical chaos in sophisticated ways in an attempt to model quantum-level systems deterministically; it'd be great if someone could give us a toe-hold on that, but I'm certain I'm not a good enough mathematician to tackle it head on. My only hope would be to notice something serendipitously as a result of being so immersed in the relationship between quantum and random fields, although I think that approach has probably already given in to the urge to address chaos with probability.
Quote: "Quantum theory, being probabilistic, only makes predictions about statistics associated with recorded measurements. As a probabilistic theory, it has nothing to say about individual recorded events, only about their statistics. As a statistical theory, it includes the notion of microcausality, that measurements associated with space-like separated regions commute, but this is consistent with us being able to prepare states in which there are correlations at space-like separation."

Thnx for intervening; hopefully this exchange will educate those who are likewise confused :-) The issue isn't with the formalism, and it isn't with the data (I hope that isn't what you're implying). The formalism maps beautifully onto the data, as you can see in the paper. The issue is what you appear to brush aside. The statistical data is collected one event (coincidence) at a time (within the 25-ns coincidence window), just like the roulette balls in Adam's analogy. Therefore, any explanation for the correlation in the statistical data should be based on the nature of reality as it pertains to each trial (and it's not accidental coincidences, as you can see from the last column of Table 1).
I think we have no honest choice but to say "hypotheses non fingo").
Bring your explanation supra to bear on Adam's roulette wheel analogy and you'll see where it's lacking. That is, you'd be attempting to resolve the mystery by saying, "I have a statistical mathematical formalism that maps onto the statistical data." That answer in no way tells me what is causing the two balls to land in the same color every time the two experimentalists choose the same wheel number, but land in the same color only 25% of the time that the two experimentalists choose different wheel numbers. [This is exactly the Mermin analogy, see my QLE explanation and the counting sketch below, where we expect at least 33% agreement for different wheel numbers in order to account for 100% agreement for same wheel numbers.] Giving up on finding the underlying cause of the experimental correlations is your choice, but that in no way resolves the issue for those of us who haven't given up.
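(The bracketed 33% claim can be made precise with Mermin's instruction-set counting; the sketch below is Mermin's standard argument, not anything from the paper under discussion.)

```latex
% Mermin's counting argument for the 33% bound (a sketch).
% Suppose each pair of balls carries predetermined colors for the three
% wheel numbers, e.g. the instruction set RRB. For a mixed set such as
% RRB, of the six ordered pairs of *different* wheel numbers, exactly
% two agree in color:
\[
  P(\text{same color} \mid \text{different wheels, mixed set})
  = \tfrac{2}{6} = \tfrac{1}{3}.
\]
% For RRR or BBB agreement is certain, so for any mixture of
% instruction sets,
\[
  P(\text{same color} \mid \text{different wheels}) \;\ge\; \tfrac{1}{3}
  \approx 33\%,
\]
% whereas the observed (and QM-predicted) value is 25%: the contradiction.
```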
I can see some merits to the paper you attach, but, of course, I'd like something better.
The experiment instantiates a QM violation of a Bell inequality. There is nothing more needed to experimentally confirm the mystery à la Adam's roulette wheels or Mermin's device, unless you believe there is something wrong with QM (Adam's third option). Is that what you're implying?
how QFT micro-causality is supposed to solve the EPR macro stochastic causality behaviors?
QFT "micro-causality" means that spacelike separated measurements must commute (i.e., the results must not depend on the order in which they are performed). Bell-inequality-violating experiments meet this condition. So I don't see what there is to "solve".
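(A concrete way to see what microcausality does and doesn't buy you, in a sketch of my own with illustrative function names: for the singlet state, Alice's marginal statistics are independent of Bob's setting, even though the joint statistics are the Bell-violating ones.)

```python
# Sketch (my own, assumed names): for the spin singlet, Alice's marginal
# statistics don't depend on Bob's setting -- the operational face of
# microcausality -- even though the joint statistics violate Bell inequalities.
import numpy as np

def singlet_joint(a, b):
    # Joint outcome probabilities for analyzer angles a (Alice) and b (Bob).
    same = 0.5 * np.sin((a - b) / 2) ** 2   # P(+,+) = P(-,-)
    diff = 0.5 * np.cos((a - b) / 2) ** 2   # P(+,-) = P(-,+)
    return {(+1, +1): same, (+1, -1): diff, (-1, +1): diff, (-1, -1): same}

a = 0.3                                     # Alice's fixed setting
for b in (0.0, 1.1, 2.7):                   # Bob varies his setting
    p = singlet_joint(a, b)
    p_alice_plus = p[(+1, +1)] + p[(+1, -1)]
    print(f"b = {b}: P(Alice = +1) = {p_alice_plus:.3f}")
# Prints 0.500 for every b: Bob's choice sends no signal to Alice.
```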
That is a nice refresher for those who think that quantum theory is a description of reality, instead of just a description of what would happen to "equally prepared states", that is, "in a laboratory".
What I said, that QM/QFT is a probabilistic theory —which can be understood to model, and hence in appropriate circumstances to predict, statistics of recorded experimental events— seems to me not inconsistent with quantum theory being "a description of reality". I think of QM/QFT, admittedly loosely, as being as much as we can say about "reality", because to predict individual events in a chaotic world would require more information than I think we can plausibly have access to, perhaps even infinite information.
As a layman, do you know of any resource that explains how QFT micro-causality is supposed to solve the EPR macro stochastic causality behaviors?
I have a terrible memory, I'm afraid. I retain concepts more or less, once I've grokked them, but I too often forget where I learned about them and where the good references are. That said, I don't think of microcausality as solving EPR. Microcausality —that measurements are compatible with, and don't change the statistics of, other measurements at space-like separation— is consistent with experiment, while at the same time we can set up states in which there are correlations and Bell inequality violations between space-like separated measurements.
On this topic of timing, isn't Bohmian mechanics supposed to have more predictive power than standard QM?
I think of Bohmian mechanics more as retrodicting a trajectory, given an individual actual event, if we know (or think we know) the quantum dynamics. That is, if the event is caused by a particle, that particle must have come from somewhere, because that's what particles do. We can massage the quantum dynamics to give us an equation that determines a trajectory when it's given just a single point on that trajectory (it's sometimes cited as a conceptual difficulty for Bohmian mechanics that we don't need to know the velocity as well as the position to determine the trajectory, differently from the case of classical mechanics). BUT, at least in those cases where we do not observe more than one point on the trajectory (so not high energy physics, and not a football or anything else large, but most low energy experiments, where the particle is absorbed and doesn't carry on along the same trajectory), that's not a prediction. To claim that de Broglie-Bohm is empirically equivalent to QM, one has to say that de Broglie-Bohm is a probabilistic theory.
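(The parenthetical point, that a single position fixes the whole trajectory, follows from the first-order form of the guidance equation; this is the standard textbook statement, sketched here for reference.)

```latex
% The non-relativistic de Broglie-Bohm guidance equation (single particle):
\[
  \frac{d\mathbf{Q}}{dt}
  = \frac{\hbar}{m}\,\operatorname{Im}\!\left[
      \frac{\nabla \psi(\mathbf{Q},t)}{\psi(\mathbf{Q},t)}
    \right].
\]
% Because it is first order in time, the velocity is fixed by the wave
% function at the particle's position: one point on the trajectory
% determines the whole trajectory, with no independent initial velocity,
% unlike Newton's second-order equations.
```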
Personally, I'm OK with de Broglie-Bohm trajectories for the non-relativistic case, except that, crucially, the math is a mess compared to just using Hilbert spaces. When we use QFT, however, I've not seen de Broglie-Bohm work out well enough. Most physicists just cite the QFT case as a one-line dismissal.
Quantum theory, being probabilistic, only makes predictions about statistics associated with recorded measurements. As a probabilistic theory, it has nothing to say about individual recorded events, only about their statistics.
That is a nice refresher for those who think that quantum theory is a description of reality, instead of just a description of what would happen to "equally prepared states", that is, "in a laboratory".
As a statistical theory, it includes the notion of microcausality, that measurements associated with space-like separated regions commute, but this is consistent with us being able to prepare states in which there are correlations at space-like separation.
As a layman, do you know of any resource that explains how QFT micro-causality is supposed to solve the EPR macro stochastic causality behaviors?
Also RUTA's point on the block-universe "interpretation" seems quite interesting; I'll try to dig into that also…
I see this as resolving the difference between vanhees71 and yourself, that quantum theory is microcausal as a probabilistic theory, whereas a theory that non-stochastically predicts the precise timings of individual recorded events would appear to have to be either nonlocal or superdeterministic (or some combination thereof: any such model might require infinite information to be predictive if there's any chaos, so I can't see how we could determine what a non-stochastic theory would be; I think we have no honest choice but to say "hypotheses non fingo").
Ouch… Latin hurts more than math :wink: On this topic of timing, isn't Bohmian mechanics supposed to have more predictive power than standard QM? If some spin value is observed to be X by Alice, isn't the (entangled?) pilot wave's time dependence supposed to make a more accurate prediction of the entangled value over time (and space) at Bob's end?
I just finished Part I of Adam's book. Did you read it? It speaks precisely against this attitude.
As I haven't read the book yet, I don't know Adam Becker's attitude.
That's not true; the formalism maps beautifully onto the experimental set-ups and data. There are many analyses, but one for undergrads that I use in my QM course is attached. There's nothing in the formalism that resolves this issue.
Quantum theory, being probabilistic, only makes predictions about statistics associated with recorded measurements. As a probabilistic theory, it has nothing to say about individual recorded events, only about their statistics. As a statistical theory, it includes the notion of microcausality, that measurements associated with space-like separated regions commute, but this is consistent with us being able to prepare states in which there are correlations at space-like separation.
I see this as resolving the difference between vanhees71 and yourself, that quantum theory is microcausal as a probabilistic theory, whereas a theory that non-stochastically predicts the precise timings of individual recorded events would appear to have to be either nonlocal or superdeterministic (or some combination thereof: any such model might require infinite information to be predictive if there's any chaos, so I can't see how we could determine what a non-stochastic theory would be; I think we have no honest choice but to say "hypotheses non fingo").
I can see some merits to the paper you attach, but, of course, I'd like something better. In particular, IMO the role played by the incompatibility of the pairs of measurements at each end should be emphasized: if we were to perform only compatible measurements at each end separately, there would be no violation of any Bell inequalities. That there are incompatibilities means that there are time-like dependencies, but between the two measurements at A and between the two measurements at B, not between the ends (that is, if we have two measurements at A and two measurements at B, ##[A_i,B_j]=0##, ##[A_1,A_2]\neq 0##, ##[B_1,B_2]\neq 0##). And I'd prefer "particles" not to be mentioned at all (instead of the word appearing 34 times): to be trite, for the quantized EM field there's just a wave/field duality. But that's a different paper altogether.
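(For concreteness, the commutation structure just described can be checked directly; below is a sketch with observables of my own choosing, not ones taken from the attached paper.)

```python
# Sketch (my own choice of observables): the CHSH incompatibility structure
# [A_i, B_j] = 0, [A_1, A_2] != 0, [B_1, B_2] != 0, checked on two qubits.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

A1, A2 = np.kron(sz, I2), np.kron(sx, I2)     # Alice's two observables
B1 = np.kron(I2, (sz + sx) / np.sqrt(2))      # Bob's two observables
B2 = np.kron(I2, (sz - sx) / np.sqrt(2))

comm = lambda X, Y: X @ Y - Y @ X
for name, c in [("[A1,B1]", comm(A1, B1)), ("[A1,A2]", comm(A1, A2)),
                ("[B1,B2]", comm(B1, B2))]:
    print(name, "zero" if np.allclose(c, 0) else "nonzero")
# [A1,B1] zero; [A1,A2] nonzero; [B1,B2] nonzero
```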
He forgot the third, which is the contemporary solution of this apparent problem: local microcausal relativistic QFT. It's local (i.e., fulfilling the linked-cluster principle) and allows for the long-range correlations described by entanglement of parts of quantum systems that are observed at far-distant points. Of course, you have to give up naive collapse interpretations, which introduce an artificial action at a distance, in clear contradiction to the very foundations the Standard Model rests upon, namely locality and microcausality. Of course, QM is only a non-relativistic approximation of relativistic QFT and thus becomes wrong when applied to situations where the approximation is invalid.
That's not true; the formalism maps beautifully onto the experimental set-ups and data. There are many analyses, but one for undergrads that I use in my QM course is attached. There's nothing in the formalism that resolves this issue.
I just finished Adam's analysis of the Bell inequality via a roulette wheel. I've heard this before in a different context, but it's a very nice way to introduce the Bell inequality to laymen. His claim afterwards is that only one of three logical possibilities exists: nonlocality, superdeterminism, or QM is wrong.
He forgot the third, which is the contemporary solution of this apparent problem: local microcausal relativistic QFT. It's local (i.e., fulfilling the linked-cluster principle) and allows for the long-range correlations described by entanglement of parts of quantum systems that are observed at far-distant points. Of course, you have to give up naive collapse interpretations, which introduce an artificial action at a distance, in clear contradiction to the very foundations the Standard Model rests upon, namely locality and microcausality. Of course, QM is only a non-relativistic approximation of relativistic QFT and thus becomes wrong when applied to situations where the approximation is invalid.
vanhees71, I can't see which comment you're referring to here. I understand if you might not want to use QUOTE, but it would help a lot if you would cite a comment number. TBH, I'm saying this because I've been unsure what or who you've been referring to a number of times, not just because of this one comment. Sorry!:sorry: I won't say this again until I forget that I said it.
I only quote if I refer to a posting not immediately before the one I'm replying to.
I just finished Adam's analysis of the Bell inequality via a roulette wheel. I've heard this before in a different context, but it's a very nice way to introduce the Bell inequality to laymen. His claim afterwards is that only one of three logical possibilities exists: nonlocality, superdeterminism, or QM is wrong. Most people accept the experimental results vindicating QM, so few if any argue for the third option anymore (it was more common when I started working on this in 1994). I'm assuming he believes retrocausality falls into the SD camp? It's semantics, but I'd disagree with that since the "common cause" resides in both the future and past. I wouldn't say that any of the three options applies to the ontology of Relational Blockworld (RBW) where explanation is adynamical and QM is certainly correct. Therein, the fundamental ontological element is 4-dim and QM provides a distribution function for these 4D "spacetimesource elements" in the context of a classical block universe. So, we do have "realism" and there are no superluminal signals required in the explanation of the distribution of these real 4D objects in spacetime. I'm not even sure that the concept of nonlocality is relevant when discussing 4D objects (careful, this nonlocality has to do with superluminal signaling, not the locality assumed in differentiable manifolds). Silberstein and I are giving a talk to the foundations group at the Univ of MD next Wed, so I'll solicit their opinions. But, he and I agree that the standard analyses of Bell inequalities tacitly assume dynamism and are meaningless for adynamical explanation. Continuing, there is certainly no SD in RBW because there is no dynamical causation in adynamical explanation. In other words, when Adam claims to have exhausted all logical possibilities for the implications of Bell's inequality, he has failed to consider adynamical explanation.
Foundations of physics (FoP) doesn't spend much time on this subject. FoP's attitude is that the weird/fun stuff is in QM, the only mysteries about QFT are technical, e.g., Haag's theorem, so FoP deals almost exclusively with QM. In my 24 years of attending FoP conferences and talks, I don't remember even one presentation on QFT issues. I'm very interested in your interpretation of QFT, as you know, because it looks to fill in technical gaps with my interpretation of QFT.
Different circles! I think you're right, although I haven't been to a Foundations of Physics conference in the last ten years. Perhaps it's more the philosophers who have taken up the philosophy of QFT, and there are several mathematicians who have tried to make sense of the mathematics of renormalization/interacting QFT with what seem almost philosophical motivations. I filter out a majority of non-QFT foundations these days, so it seems quite the opposite way round to me. QFT changes the game totally, IMO, and makes everything much easier, partly because there are already fields, so it's a fields/waves duality, which I think is easier to live with; but of course I have yet to convince anyone of that.
I'm in Adam's chapter 6 on Bohm and Everett. I haven't seen anything about QFT mentioned in the other reviews and he has made no mention of it so far in his book, so I doubt he talks about interpretations of QFT. We offer an interpretation of QFT in chapter 5 of our book and that chapter opens with the following:
As for progress in this area, Healey notes, “no consensus has yet emerged, even on how to interpret the theory of a free, quantized, real scalar field” [Healey, 2007, p. 203]. And, “There is no agreement as to what object or objects a quantum field theory purports to describe, let alone what their basic properties would be” [Healey, 2007, p. 221].
Foundations of physics (FoP) doesn't spend much time on this subject. FoP's attitude is that the weird/fun stuff is in QM, the only mysteries about QFT are technical, e.g., Haag's theorem, so FoP deals almost exclusively with QM. In my 24 years of attending FoP conferences and talks, I don't remember even one presentation on QFT issues. I'm very interested in your interpretation of QFT, as you know, because it looks to fill in technical gaps with my interpretation of QFT. With your help, I'll figure it out :-)
In the first 5 chapters, Adam has focused on the history of the Copenhagen interpretation (in its many variations) and why we're stuck with it now. His coverage of interpretational issues of QM has been sparse to this point. Based on reviews I've read, I'm assuming he'll plug those holes in part 3 of the book.
Did you read Adam's book?
I look forward to reading a review from you, RUTA. Having been to the talk Adam gave last night in New York, I'm not very enthusiastic. The last time I remember someone landing hard on a conversation at a foundations of physics conference with "Copenhagen says X, so everything you're saying is nonsense" was in the early 90's, and my sense is that physicists now more often fall back on decoherence (notwithstanding that the last mile from a mixed state to actual events is glossed over), an interpretation which Adam didn't mention in his talk (I suppose because many philosophers would be loath to call decoherence an interpretation at all). Furthermore, I just read that Feyerabend in 1962 said (cited in arXiv:1509.09278, page 43):
. . . many physicists are very practical people and not very fond of philosophy. This being the case, they will take for granted and not further investigate those philosophical ideas which they have learned in their youth and which by now seem to them, and indeed to the whole community of practicing scientists, to be the expression of physical common sense. In most cases these ideas are part of the Copenhagen Interpretation.
A second reason for the persistence of the creed of complementarity in the face of decisive objections is to be found in the vagueness of the main principles of this creed. This vagueness allows the defendants to take care of objections by development rather than a reformulation, a procedure which will of course create the impression that the correct answer has been there all the time and that it was overlooked by the critic. Bohr's followers, and also Bohr himself, have made full use of this possibility even in cases where the necessity of a reformulation was clearly indicated. Their attitude has very often been one of people who have the task to clear up the misunderstandings of their opponents rather than to admit their own mistakes.
which seems a clear statement, 56 years ago, of what seemed to be a large part of Adam's argument for why Copenhagen is still given lip service today.
Adam at one point said that he hopes to give his talk to physics departments, but TBH with nothing at all said about QFT (is there anything about QFT in the book?), and decoherence unmentioned, I can't see physicists taking him seriously. One high point of going to Adam's talk was that I talked to several Masters and PhD students and postdocs, all of whom seemed quite knowledgeable about and willing to talk about the interpretation of QFT.
Sir Arthur Stanley Eddington in "The Nature of the Physical World":
"Scientific instincts warn me that any attempt to answer the question “What is real?” in a broader sense than that adopted for domestic purposes in science, is likely to lead to a floundering among vain words and high-sounding epithets."
Did you read Adam's book?
Indeed. Even the most appealing creative thought has to be confronted with observations and accurate measurements. If you cannot make contact with observables, it's a nice mathematical idea at best or just philosophical gibberish at worst. If your predictions are clearly countered by observation, it's a physical theory that's wrong and needs to be modified (at best) or abandoned (at worst)! Like all the natural sciences, physics is, after all, an empirical science.
vanhees71, I can't see which comment you're referring to here. I understand if you might not want to use QUOTE, but it would help a lot if you would cite a comment number. TBH, I'm saying this because I've been unsure what or who you've been referring to a number of times, not just because of this one comment. Sorry!:sorry: I won't say this again until I forget that I said it.
I just finished Part I of Adam's book. Did you read it? It speaks precisely against this attitude.
Sir Arthur Stanley Eddington in "The Nature of the Physical World":
"Scientific instincts warn me that any attempt to answer the question “What is real?” in a broader sense than that adopted for domestic purposes in science, is likely to lead to a floundering among vain words and high-sounding epithets."