In summary: That confirms my (still superficial) understanding that now I'm allowed to interpret ##\hat{\rho}## and the trace operation as expectation values in the usual statistical sense, and that makes the new approach much more understandable than what you called before the "thermal interpretation". I also think that the entire conception is not much different from the minimal statistical interpretation. The only change to the "traditional" concept seems to be that you use the more general concept of POVMs rather than the von Neumann filter measurements, which are only a special case. The only objection I have is the statement concerning EPR. It cannot be right, because local realistic theories are not consistent with the quantum-theoretical probability theory, which
  • #71
vanhees71 said:
I've started to read the paper, and I first had the impression, it's now much closer to the real use of QT as a physical theory as done by physicists since 1926,
It describes the real use of QT as a physical theory as done by physicists since 1970, when the more comprehensive view of measurement that goes beyond von Neumann's was introduced.
vanhees71 said:
but now it seems again that I'm completely misunderstanding its intended meaning. Obviously I misunderstood what you mean by "source". For me a "source" is just some device which "prepares quantum systems", and this can also be a single "particle" or even a single "photon", not only macroscopic systems.
Yes. You probably need to read the whole paper to see the differences and understand the new point of view.
Arnold Neumaier (p.66) said:
From the perspective of the present considerations, quantum particles appear to be ghosts in the beams. This explains their spooky properties in the quantum physics literature!
Experiments in physics laboratories use controllable sources, filters and detectors to explore the nature of the microscopic world. All these are macroscopic objects, the only things that can be observed. The microscopic aspects are not observed but inferred; they define inferables, not observables. On p.13 of my paper I quote:
Asher Peres said:
If you visit a real laboratory, you will never find there Hermitian operators. All you can see are emitters (lasers, ion guns, synchrotrons and the like) and detectors. The experimenter controls the emission process and observes detection events. [...] Quantum mechanics tells us that whatever comes from the emitter is represented by a state ρ (a positive operator, usually normalized to 1). [...] Traditional concepts such as "measuring Hermitian operators", that were borrowed or adapted from classical physics, are not appropriate in the quantum world. In the latter, as explained above, we have emitters and detectors.
A quantum source is simply a piece of equipment that has some measurable effect on detectors placed at some distance from it. How this effect comes about is not observed but described by theoretical models whose consequences can be checked against experimental results. The techniques and results described in my paper are agnostic about the models (Section 7.1), they are just about the observable (and hence macroscopic) aspects.
Arnold Neumaier (p.66) said:
The present approach works independently of the nature or even the presence of a mediating substance: What is measured are properties of the source, and this has a well-defined macroscopic existence. We never needed to make an assumption on the nature of the medium passed from the source to the detector. Thus the present approach is indifferent to the microscopic cause of detection events. It does not matter at all whether one regards such an event as caused by a quantum field or by the arrival of a particle. In particular, a microscopic interpretation of the single detection events as arrival of particles is not needed, not even an ontological statement about the nature of what arrives. Nor would these serve a constructive purpose.
This is the reason why the state is assigned to the source and not to something postulated as being transmitted. More precisely, the measured property (a quantum expectation) is assigned to the particular location at which the measurement is done, which leads naturally to a quantum field picture (Section 5.4).
Arnold Neumaier (p.50) said:
Suppose that we have a detector that is sensitive only to quantum beams entering a tiny region in space, which we call the detector’s tip. We assume that we can move the detector such that its tip is at an arbitrary point x in the medium, and we consider a fixed source, extended by layers of the medium so that x is at the boundary of the extended source. The measurement performed in this constellation is a property of the source. The results clearly depend only on what happens at x, hence they may count as a measurement of a property of whatever occupies the space at x. Thus we are entitled to consider it as a local property of the world at x at the time during which the measurement was performed.
 
Last edited:
  • Like
Likes gentzen, mattt and vanhees71
  • #72
Fra said:
Given your ambitions, I guess this makes good sense. It's just that it does not solve the mysteries, at least not for me.

I still see your perspective as a limiting case of a general (yet unknown) theory.
It may well be that there is a more comprehensive theory than contemporary quantum theory. Who knows what the solution of the problem of describing the gravitational interaction quantum mechanically will be and what this might imply for the description of spacetime, but there's no hint of such a new theory. Empirically there are no phenomena which are not described by our standard theories, as incomplete as they might be.
Fra said:
As a macroscopic object is essentially just a part of the classical reality, this to me seems quite close to Bohr's angle on the CI (in contrast to Heisenberg's). You still need a CONTEXT, and this context is classical reality. This is of course a good thing from the perspective of human science... and as the context is the good old classical reality, the observer-observer interaction becomes more trivial (except for relativity).
I think the contrary makes more sense. Macroscopic objects as we observe them are only consistently describable with quantum theory, given the atomistic structure. There'd not even be stable atoms as bound states of charged particles, let alone molecules and condensed matter, within classical physics. The "classical reality" is emergent, understood via the classical physical laws as effective descriptions of the dynamical behavior of macroscopic coarse-grained observables.
Fra said:
But it's a bad thing if you think there is explanatory power to be found by considering the logic of interacting observers. And I am not sure how it helps with fine-tuning problems or unification quests. As I understand it, that isn't the ambition either. Then it's fine. I think part of the confusion is different expectations, which I think you wrote yourself in post 15 as well.

/Fredrik
 
  • #73
A. Neumaier said:
It describes the real use of QT as a physical theory as done by physicists since 1970, when the more comprehensive view of measurement that goes beyond von Neumann's was introduced.

Yes. You probably need to read the whole paper to see the differences and understand the new point of view.

Experiments in physics laboratories use controllable sources, filters and detectors to explore the nature of the microscopic world. All these are macroscopic objects, the only things that can be observed. The microscopic aspects are not observed but inferred; they define inferables, not observables. On p.13 of my paper I quote:

A quantum source is simply a piece of equipment that has some measurable effect on detectors placed at some distance from it. How this effect comes about is not observed but described by theoretical models whose consequences can be checked against experimental results. The techniques and results described in my paper are agnostic about the models (Section 7.1), they are just about the observable (and hence macroscopic) aspects.

This is the reason why the state is assigned to the source and not to something postulated as being transmitted. More precisely, the measured property (a quantum expectation) is assigned to the particular location at which the measurement is done, which leads naturally to a quantum field picture (Section 5.4).
In this way it makes a lot of sense again, but as I said before, today experimentalists are able to prepare single-quantum states (particles, atoms, molecules, photons) and observe them. The observations are of course always made via macroscopic measurement devices.
 
  • #74
vanhees71 said:
In this way it makes a lot of sense again, but as I said before, today experimentalists are able to prepare single-quantum states (particles, atoms, molecules, photons) and observe them. The observations are of course always made via macroscopic measurement devices.
In this case the traditional interpretation in terms of Born's rule is vacuous, since probabilities are meaningful only in an ensemble context, but individual quantum systems do not form an ensemble.

Instead, these experiments are traditionally interpreted in the fashion of classical stochastic processes, which have a trajectory interpretation for individual realizations, so that individual systems can be discussed. For example, this explains the observed quantum jumps of atoms in an ion trap subject to external fields; see, e.g., the reference below and the simulation sketch after it.
  • Plenio, M. B., & Knight, P. L. (1998). The quantum-jump approach to dissipative dynamics in quantum optics. Reviews of Modern Physics, 70(1), 101.
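To make the quantum-jump picture concrete, here is a minimal Monte Carlo wave-function sketch of a driven two-level atom with spontaneous decay. This is my own illustration, not code from the paper or from the Plenio-Knight review; the Rabi frequency, decay rate and step size are arbitrary illustrative values.

```python
# Monte Carlo wave-function (quantum-jump) sketch: driven two-level atom.
# Between jumps the state evolves under a non-Hermitian effective
# Hamiltonian; jumps (photon emissions) reset the atom to the ground state.
import numpy as np

rng = np.random.default_rng(0)
Omega, Gamma, dt, steps = 1.0, 0.2, 0.001, 20000   # illustrative parameters

g = np.array([1.0, 0.0], dtype=complex)            # ground state |g>
e = np.array([0.0, 1.0], dtype=complex)            # excited state |e>
H = 0.5 * Omega * np.array([[0, 1], [1, 0]], dtype=complex)   # Rabi drive
H_eff = H - 0.5j * Gamma * np.outer(e, e.conj())   # H - (i/2) Gamma |e><e|

psi = g.copy()
jump_times = []
for n in range(steps):
    p_jump = Gamma * abs(psi[1]) ** 2 * dt         # emission probability in dt
    if rng.random() < p_jump:
        psi = g.copy()                             # quantum jump to |g>
        jump_times.append(n * dt)
    else:
        psi = psi - 1j * dt * (H_eff @ psi)        # no-jump evolution (Euler)
        psi = psi / np.linalg.norm(psi)            # renormalize

print(len(jump_times), "jumps in", steps * dt, "time units")
```

A single trajectory shows the jump record of one system; averaging many such trajectories recovers the dissipative ensemble dynamics.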
In the context of the present paper, one has to distinguish several cases of tiny quantum systems.

(A) A tiny quantum system mounted on a macroscopic object.
  1. If mounted for subsequent viewing in a scanning tunneling microscope, the tiny system acts as a stationary quantum source of the kind discussed in my paper, and its state can be determined by quantum tomography.
  2. If mounted in an ion trap, the tiny quantum system can be manipulated by applying external classical fields. This turns it into a nonstationary quantum system, for which standard quantum tomography is limited to short times within which the system can be regarded as stationary. This means that its state can be measured only to limited accuracy - only very limited information can be reliably collected. This is discussed in Section 4.5 of my paper.
  3. Nonstationary quantum tomography has hardly been studied, so it is too early to tell in detail which kind of limitations this imposes on what can be achieved experimentally.
  4. For example, in the quantum jump experiments, observations interpretable in terms of the system state are restricted to observing a noisy piecewise constant time series which shows that, apart from very short times of transitions (jumps), the system is stationary - being in one of two eigenstates of the appropriate Hamiltonian. Thus, per time step, one only gets one bit of information about the changing density operator.
(B) A tiny quantum system freely moving in a homogeneous macroscopic medium.
  1. Single massive particles in flight can be observed in a bubble chamber or, in the context of modern particle accelerators, in a time projection chamber. The latter case is discussed in Section 3.4 of my paper; the former can be done in a similar way.
  2. Single photons in flight cannot be observed; at best one can observe their death. It is impossible to do quantum tomography on them. Thus it is experimentally impossible to measure their state. Hence assigning a state to them is very questionable. Instead, a highly nonstationary photon state must be assigned to the cavity producing the photon, and case (A) applies.
 
  • #75
This we discussed repeatedly. One can do statistics using a single particle in, e.g., a Penning trap, as described here:

https://doi.org/10.1088/0031-8949/1988/T22/016

but isn't this indeed a paradigmatic example for your formulation?

Also nondestructive photon measurements are done, but also the standard photon detection of course measures properties of single photons like energy, momentum, and polarization, or what else do you think the photon measurements in all the accelerators in HEP and heavy-ion physics provide?
 
  • #76
vanhees71 said:
One can do statistics using a single particle in, e.g., a Penning trap, as described here:
https://doi.org/10.1088/0031-8949/1988/T22/016
but isn't this indeed a paradigmatic example for your formulation?
One can do statistics with any collection of measurement results.
But in the case you mention, where the data come from a single particle, the statistics is not governed by Born's rule. Each data point is obtained at a different time, and at each time the particle is in a different state affected in an unspecified way by the previous measurement. So how could you calculate the statistics from Born's rule?

Instead, the statistics is treated in the way I discussed in case (A).
vanhees71 said:
Also nondestructive photon measurements are done,
If the nondestructive single photon measurements result in a time series, the situation for this photon is the same as for the particle in the Penning trap.
vanhees71 said:
but also the standard photon detection of course measures properties of single photons like energy, momentum, and polarization, or what else do you think the photon measurements in all the accelerators in HEP and heavy-ion physics provide?
I didn't know that accelerators measure momentum and polarization of individual photons. Could you provide me with a reference where I can read details? Then I'll be able to show you how it matches the description in my paper.
 
Last edited:
  • #77
I wouldn't call results of experiments with single electrons, protons, ions etc. in Penning traps, which are among the most precise ever, "of limited precision" ;-). The theoretical description uses standard quantum theory based on Born's rule (see Dehmelt's above quoted review).

Detectors measure particles and photons of course. Real and virtual photons (dileptons) have been among the most interesting signals in pp, pA, and heavy-ion collisions at CERN for some decades.
 
  • #78
A. Neumaier said:
I prefer to frame the known in an optimally rational way, rather than to speculate about the unknown.
This is certainly a respectable position, and I think your exposition is good from this position.

With what "is known" I think you effectively refer to human science. But if we even here consider and obsererver: What is the real difference between what an observers knows, and what it THINKS it knows? And does it make difference to the observes betting strategy? (action)

A. Neumaier said:
It doesn't matter who observes, except that poor observations lead to poor approximations of the state.
Just to contrast: In an interacting-agent view, "poor approximations" should lead to "poor actions", "poor betting", which should be observable by other agents. My take on this is different. As I see it, the inside agent/observer has no access to an external judgement of what is a good or bad approximation. The agent just has to act on available evidence. What is right or wrong, poor or precise, should be irrelevant from the perspective of the agent's betting. So the causal mechanism is independent of whether the information is "right". Information as well as disinformation will provoke a response which depends on the subjective information.

But these ideas are IMO part of the non-equilibrium picture. I.e., an agent that is consistently "wrong" will soon be put out of business in the overall game. So instead of thinking of an ordinary equilibration, I see it as an evolution (as state spaces also evolve, there is no objective entropic flow). During this process, agents have two choices: learn (improve their predictions) or face destruction (deletion from the population pool). In this process lies also the emergence of symmetries. I seek and struggle with the mathematical, or rather algorithmic, description of this. This is admittedly more speculative, though. But I am of the opinion that "speculation" and revision from feedback are at the heart of true learning. And I take this seriously also as applied to the "observer". Even a measurement can be seen as a "speculation", considering the choice of WHICH measurement to perform (in order, say, to maximize information gain).

Can we infer vanhees71's secret speculations about what he hopes to find (which one does not say out loud, as scientists should not be biased ;)) from the way he chooses to construct the next measurement or experiment?

This is why I cannot help viewing current QM as the limiting case of a very massive, dominant agent, which is for all practical purposes classical and provides the background. I see that both the "preparation" and the "detectors" are constructed from the agent itself. The information gained lies between the "action" (which can be seen as a "preparation") and the "backreaction of the environment", which I abstractly see as the correspondent of a general measurement by a real inside observer.

I feel that trying to "close" or "polish" the limiting theory has big value in itself, but it also risks polishing away the open ends that are the clues to progress, and I prefer the open ends as clues forward.

/Fredrik
 
Last edited:
  • #79
Fra said:
With what "is known" I think you effectively refer to human science. But if we even here consider and obsererver: What is the real difference between what an observers knows, and what it THINKS it knows? And does it make difference to the observes betting strategy? (action)
Science has no single betting strategy. Each scientist makes choices of his or her own preference, but only what passes the rules of scientific discourse gets published, which rules out most poor judgment on the individual's side. What science knows is an approximation to what it thinks it knows, and this approximation is quite good, otherwise the resulting technology based on it would not work and not sell.
 
  • Like
Likes Fra, dextercioby and vanhees71
  • #80
vanhees71 said:
I wouldn't call results of experiments with single electrons, protons, ions etc. in Penning traps, which are among the most precise ever, "of limited precision".
I didn't call these results "of limited precision" but said that they determine the state to limited precision only. The state in these experiments is a continuous stochastic function ##\rho(t)## of time with ##d^2-1## independent real components, where ##d## is the dimension of the Hilbert space. Experiments resulting in ##N## measured numbers can determine this function only to limited precision. By the law of large numbers, the error is something like ##O((dN^{1/2})^{-1})##.
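As a toy illustration of the ##1/\sqrt{N}## behavior (my own sketch, not from the paper): a single stationary qubit has ##d^2-1=3## independent real state parameters, the Bloch vector, and estimating each component from ##N## single-shot Pauli measurements gives an error shrinking like ##N^{-1/2}##.

```python
# Toy qubit state tomography: estimate the Bloch vector of a fixed
# (stationary) state from N single-shot measurements per Pauli axis.
import numpy as np

rng = np.random.default_rng(1)
r_true = np.array([0.3, -0.5, 0.6])    # true Bloch vector, |r| <= 1

for N in (100, 10_000, 1_000_000):
    # Each single-shot Pauli measurement yields +-1 with mean <sigma_i> = r_i
    # (ensemble statistics over identically prepared copies).
    r_est = np.array([
        rng.choice([1.0, -1.0], size=N, p=[(1 + ri) / 2, (1 - ri) / 2]).mean()
        for ri in r_true
    ])
    print(N, np.linalg.norm(r_est - r_true))   # error ~ 1/sqrt(N)
```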

What is usually done is to simply assume a Lindblad equation (which ignores the fluctuating part of the noise due to the environment) for a truncated version of ##\rho## with very small ##d##. Then one estimates from it and the experimental results a very few parameters or quantum expectations.

This is very far from an accurate state determination...
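For concreteness, here is a minimal sketch of the kind of Lindblad evolution just described - my own toy code with assumed parameters, for pure decay of a qubit with a single jump operator and a crude Euler integrator:

```python
# Euler integration of a Lindblad equation for a decaying qubit - the
# deterministic, ensemble-averaged counterpart of stochastic trajectories.
import numpy as np

Gamma, dt, steps = 0.2, 0.001, 5000
g = np.array([1.0, 0.0], dtype=complex)
e = np.array([0.0, 1.0], dtype=complex)
L = np.sqrt(Gamma) * np.outer(g, e.conj())   # jump operator |g><e|
rho = np.outer(e, e.conj())                  # initial state |e><e|

for _ in range(steps):
    # Dissipator: L rho L^+ - (1/2){L^+ L, rho}  (no Hamiltonian term here)
    drho = (L @ rho @ L.conj().T
            - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    rho = rho + dt * drho

print(rho[1, 1].real, np.exp(-Gamma * steps * dt))  # both ~ exp(-Gamma t)
```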
vanhees71 said:
The theoretical description uses standard quantum theory based on Born's rule (see Dehmelt's above quoted review).
Since Born's rule is a statement about the probability distribution of results for an ensemble of identically prepared systems, it is logically impossible to obtain from it conclusions about a single one of these systems. A probability distribution almost never determines an individual result.

I'll read the review once I can access it and then comment on your claim that it derives statements about a single particle from Born's rule.
vanhees71 said:
Detectors measure particles and photons of course. Real and virtual photons (dileptons) have been among the most interesting signals in pp, pA, and heavy-ion collisions at CERN for some decades.
Then it should be easy for you to point to a page of a standard reference describing how the measurement of photon momentum and polarization in collision experiments is done, in sufficient detail that one can infer the assumptions and approximations made. I am not an expert on collision experiments and would appreciate your input.
 
Last edited:
  • Like
Likes gentzen and vanhees71
  • #81
That's a nice article. However, I somehow miss an explanation of what actually is meant by "quantum tomography", and one has to resort to the arXiv preprint to get an explanation. Given the title of the Insights article, maybe you could add some words on what is meant by quantum tomography.
 
  • Like
Likes dextercioby
  • #82
A. Neumaier said:
I didn't call these results "of limited precision" but said that they determine the state to limited precision only. The state in these experiments is a continuous stochastic function ##\rho(t)## of time with ##d^2-1## independent real components, where ##d## is the dimension of the Hilbert space. Experiments resulting in ##N## measured numbers can determine this function only to limited precision. By the law of large numbers, the error is something like ##O((dN^{1/2})^{-1})##.

What is usually done is to simply assume a Lindblad equation (which ignores the fluctuating part of the noise due to the environment) for a truncated version of ##\rho## with very small ##d##. Then one estimates from it and the experimental results a very few parameters or quantum expectations.

This is very far from an accurate state determination...

Since Born's rule is a statement about the probability distribution of results for an ensemble of identically prepared systems, it is logically impossible to obtain from it conclusions about a single one of these systems. A probability distribution almost never determines an individual result.

I'll read the review once I can access it and then comment on your claim that it derives statements about a single particle from Born's rule.

Then it should be easy for you to point to a page of a standard reference describing how the measurement of photon momentum and polarization in collision experiments is done, in sufficient detail that one can infer the assumptions and approximations made. I am not an expert on collision experiments and would appreciate your input.
In Dehmelt's paper it is described how various quantities are measured using single electrons/ions in a Penning trap. I still don't understand why you think there cannot be statistics collected using a single quantum. I can also get statistics by throwing a single coin again and again to check whether it's a fair one or not. I just do the "random experiment" again and again using the same quantum and collect statistics and evaluate confidence levels and all that. Another review paper, which may be more to the point because it covers both theory and experiment, is

https://doi.org/10.1103/RevModPhys.58.233

I also think that very rarely does one do full state determinations. What's done are preparations and subsequent measurements of observables of interest.

I'm also not an experimental physicist and am far from knowing any details of how the current CERN experiments (ATLAS, CMS, and ALICE) measure electrons and photons. I use their results to compare to theoretical models, which are based on standard many-body QFT and simulations of the fireball created in heavy-ion collisions. All this is based on standard quantum theory and thus after all on Born's rule. Here you can look at some papers by the ALICE collaboration as one example of what's measured concerning photons created in pp, pA, and AA collisions (pT spectra, elliptic flow, etc.). Concerning polarization measurements (particularly for dileptons), that's a pretty new topic, and of course an even greater challenge than the spectra measured for decades now. After all, these are "rare probes".
 
  • #83
vanhees71 said:
I still don't understand why you think there cannot be statistics collected using a single quantum.
I don't think that, and I explicitly said so. The point is that this statistics is not about an ensemble of identically prepared systems, hence has nothing to do with what Born's rule is about.
vanhees71 said:
I can also get statistics by throwing a single coin again and again to check whether it's a fair one or not.
In this case the system identically prepared is the throw, not the coin. The coin is a system described by a rigid body, with a 12D phase space state ##z(t)##, in contact with an environment that randomizes its motion through its collisions with the table. The throw is what you can read off when the coin is finally at rest.

The state of the coin is complicated and cannot be identically prepared (otherwise it would fall identically and produce identical throws). But the state of the throw is simple - just a binary variable - and the throwing setup prepares its state identically. Each throw is different - only the coin is the same; that's why one gets an ensemble.

This is quite different from a quantum particle in a trap, unless (as in a throw) you reset the state of the particle in the trap before each measurement. But then the observation becomes uninteresting. The interesting thing is to observe the particle's time dependence. Here the state changes continuously, as with the coin and not as with the throw.
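A toy numerical contrast of the two situations (my own illustration, not from the discussion): independent, identically prepared throws give an uncorrelated i.i.d. sample, while repeated readout of one continuously evolving system gives a strongly autocorrelated time series.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# (a) Ensemble of throws: independent, identically prepared realizations.
throws = rng.choice([0, 1], size=n)

# (b) One system read out repeatedly: each outcome is correlated with the
# previous one (modeled as a rarely flipping two-state process).
x, series = 0, np.empty(n, dtype=int)
for i in range(n):
    if rng.random() < 0.02:        # rare transition, cf. quantum jumps
        x = 1 - x
    series[i] = x

def lag1_autocorr(a):
    return np.corrcoef(a[:-1], a[1:])[0, 1]

print(lag1_autocorr(throws), lag1_autocorr(series))   # ~0 vs. ~0.96
```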
 
Last edited:
  • Like
Likes gentzen
  • #84
vanhees71 said:
All this is based on standard quantum theory and thus after all on Born's rule.
The 'thus' is not warranted.

Quantum field theory is completely independent of Born's rule. It is about computing ##N##-point functions of interest.

Weinberg's QFT book (Vol. 1) mentions Born's rule exactly twice - once in its review of quantum mechanics, and once where the probabilistic interpretation of the scattering amplitude is derived. In the latter he assumes an ensemble of identically prepared particles to give a probabilistic meaning in terms of the statistics of collision experiments.

Nothing at all about single systems!
 
  • #85
I don't think that we will reach consensus about this issue. For me Born's rule is one of the fundamental postulates of QT (including QFT). You calculate the correlation functions (Green's functions) in QFT to get statistical information about observables like cross sections. How these correlation functions are related to the statistics of measurement outcomes is derived from the fundamental postulates of QT, including Born's rule. Of course, that's what Weinberg and any other book on QFT does. A cross-section measurement of course always consists of collecting statistics over very many collision events, not using the same particles again and again.

You use yourself Born's rule all the time since everything is based on taking averages of all kinds defined by ##\langle A \rangle=\mathrm{Tr} \hat{\rho} \hat{A}## (if you use normalized ##\hat{\rho}##'s).
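For the record, the cited trace formula in a few lines of numpy (a generic illustration of the formula, not code from either side of the discussion):

```python
import numpy as np

rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])        # a normalized density matrix (Tr = 1)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # observable (Pauli X)
print(np.trace(rho @ A))              # <A> = Tr(rho A) = 0.5
```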
 
Last edited:
  • Like
Likes Lord Jestocost
  • #86
vanhees71 said:
Another review paper, which may be more to the point, because it covers both theory and experiment, is
https://doi.org/10.1103/RevModPhys.58.233
[...] you can look at some papers by the ALICE collaboration as one example of what's measured concerning photons created in pp, pA, and AA collisions (pT spectra, elliptic flow, etc.). Concerning polarization measurements (particularly for dileptons), that's a pretty new topic,
Are the papers where I can read about ALICE measurements and about polarization measurements contained in the above review?
 
  • #87
  • #88
DrDu said:
That's a nice article. However I somehow miss an explanation, what actually is meant with "quantum tomography" and one has to revert to the arxiv preprint to get an explanation. Given the title of the insights article, maybe you could add some words on what is meant with quantum tomography.
Thanks. I added to the Insight article a link to Wikipedia and an explanatory paragraph.
 
  • Like
Likes gentzen
  • #89
vanhees71 said:
I was referring to the measurements on single particles in a trap, not on ALICE photon measurements. There are tons of papers about "direct photons":

https://inspirehep.net/literature?sort=mostrecent&size=25&page=1&q=find title photons and cn alice

Polarization measurements for dileptons or photons are very rare today. There's a polarization measurement by the NA60 collaboration on di-muons:

https://arxiv.org/abs/0812.3100
Thanks for the pointers. I will reply in more detail after having read more. I expect it will turn out that the instances of case (B) are not so different from those of case (A) in my earlier classification of single-particle measurements.
 
  • Like
Likes vanhees71
  • #90
I still do not understand why you say that the review papers by Dehmelt and Brown contain anything denying the validity of Born's rule. For me it's used all the time!
 
  • #91
vanhees71 said:
I still do not understand why you say that the review papers by Dehmelt and Brown contain anything denying the validity of Born's rule. For me it's used all the time!
Because Born's rule assumes identical preparations, which is not the case when a nonstationary system is measured repeatedly. I am not denying the validity but the applicability of the rule!

I need to read the paper before I can go into details.
 
  • #92
I don't understand this argument. You just measure repeatedly some observable. The measurements (or rather the reaction of the measured system to the coupling to the measurement device) themselves of course have to be taken into account as part of the "preparation" too.
 
  • #93
vanhees71 said:
I don't understand this argument. You just measure repeatedly some observable. The measurements (or rather the reaction of the measured system to the coupling to the measurement device) themselves of course have to be taken into account as part of the "preparation" too.
It is a preparation, but not one to which Born's rule applies. Born's rule is valid only if the ensemble consists of independent and identically prepared states. You need independence because, e.g., immediately repeated position measurements of a particle do not respect Born's rule, and you need identical preparation because there is only one state in Born's formula.

In the case under discussion, one may interpret the situation as repeated preparation, as you say. But unless the system is stationary (and hence uninteresting in the context of the experiment under discussion), the state prepared before the ##k##th measurement is different for each ##k##. Moreover, due to the preceding measurement this state is only inaccurately known and correlated with the preceding one. Thus the ensemble prepared consists of nonindependent and nonidentically prepared states, for which Born's rule is silent.
 
  • Like
Likes dextercioby
  • #94
This would imply that you cannot describe the results about a particle in a Penning trap with standard quantum theory, but obviously that's successfully done for decades!
 
  • #95
vanhees71 said:
This would imply that you cannot describe the results about a particle in a Penning trap with standard quantum theory,
This statement is indeed true if you restrict standard quantum theory to mean the formal apparatus plus Born's rule in von Neumann's form. Already the Stern-Gerlach experiment discussed above is a counterexample.
vanhees71 said:
but obviously that's successfully done for decades!
This is because standard quantum theory was never restricted to a particular interpretation of the formalism. Physicists advancing the scope of applicability of quantum theory were always pragmatic and used whatever they found suitable to match the mathematical quantum formalism to particular experimental situations. This - and not what the introductory textbooks tell us - was and is the only relevant criterion for the interpretation of quantum mechanics. The textbook version is only a simplified a posteriori rationalization.

This pragmatic approach worked long ago for the Stern-Gerlach experiment. The same pragmatic stance has also worked for decades for the quantum jump and quantum diffusion approaches to nonstationary individual quantum systems, to the extent of leading to a Nobel prize. They simply need more flexibility in the interpretation than Born's rule offers. What is needed is discussed in Section 4.5 of my paper.
 
  • #96
I don't understand what the content of Sect. 4.5 has to do with our discussion. I don't see how you can come to the conclusion that the "pragmatic use" of the formalism contradicts the Born rule as the foundation. On the contrary, all these pragmatic uses are based on the probabilistic interpretation of the state a la Born. Also, as I said before, I don't understand how you can say that with a nonstationary source no accuracy is reachable, while the quoted Penning-trap experiments lead to results which are among the most accurate measurements of quantities like the gyro-factor of electrons or, just recently reported even in the popular press, the accurate measurement of the charge-to-mass ratio of the antiproton.

Nowhere in your paper can I see that there is anything NOT based on Born's rule. Although you use the generalization to POVMs, I don't see that this extension is in contradiction to Born's rule. Rather, it's based on it.
 
  • #97
A. Neumaier said:
Science has no single betting strategy. Each scientist makes choices of his or her own preference, but only what passes the rules of scientific discourse gets published, which rules out most poor judgment on the individual's side. What science knows is an approximation to what it thinks it knows, and this approximation is quite good, otherwise the resulting technology based on it would not work and not sell.
Yes. By a similar reasoning, I think observers/agents that fail to adapt to their environment will not be ubiquitous. But the fitness is relative to the environment only, just as a learning agent will be "trained" by what it's exposed to. What is true in an absolute sense seems to be about as irrelevant as absolute space is to relative motion.

/Fredrik
 
  • Like
Likes vanhees71
  • #98
vanhees71 said:
Nowhere in your paper can I see that there is anything NOT based on Born's rule. Although you use the generalization to POVMs, I don't see that this extension is in contradiction to Born's rule. Rather, it's based on it.
As I read this again, I think I also may have confused the "issue" with Born's rule. Some objections I have in mind (having to do with the choice of optimal compression) seem to be off topic here, but now it seems that the main point here is the generalized "Born rule" - is it the one relevant for mixed states? But as vanhees71 says, the core essence of the "Born rule" is still there, right?

/Fredrik
 
  • Like
Likes vanhees71
  • #99
vanhees71 said:
I don't understand what the content of Sect. 4.5 has to do with our discussion. I don't see how you can come to the conclusion that the "pragmatic use" of the formalism contradicts the Born rule as the foundation.
I didn't claim a contradiction with Born's rule; I claimed its nonapplicability. These are two very different claims.
vanhees71 said:
all these pragmatic uses are based on the probabilistic interpretation of the state a la Born.
You seem to follow the magic interpretation of quantum mechanics. Whenever you see statistics on measurements done on a quantum system you cast the magic spell "Born's probability interpretation", and whenever you see a calculation involving quantum expectations you wave a magic wand and say "ah, an application of Born's rule". In this way you pave your way through every paper on quantum physics and say with satisfaction at the end, "This paper proves again what I knew for a long time, that the interpretation of quantum mechanics is solely based on the probabilistic interpretation of the state a la Born".

You simply cannot see the difference between the two statements:
  1. If an ensemble of independent and identically prepared quantum systems is measured, then ##p_k=\langle P_k\rangle## is the probability of occurrence of the ##k##th event.
  2. If a quantum system is measured, then ##p_k=\langle P_k\rangle## is the probability of occurrence of the ##k##th event.
The first statement is Born's rule, in the generalized form discussed in my paper.
The second statement (which you repeatedly employed in your argumentation) is an invalid generalization, since the essential hypothesis is missing under which the statement holds. Whenever one invokes Born's rule without having checked that the ensemble involved is actually independent and identically prepared, one commits a serious scientific error.

It is an error of the same kind as concluding from x=2x, through division by x, that 1=2, because the assumption necessary for the argument (here x≠0) was ignored.
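Here is a minimal sketch of statement 1 in the generalized (POVM) form discussed in the paper; the state and the two-outcome POVM below are made up for illustration. The probabilities ##p_k=\langle P_k\rangle=\mathrm{Tr}(\hat\rho E_k)## refer to an ensemble of independent, identically prepared systems in state ##\hat\rho##.

```python
import numpy as np

# Illustrative prepared state (any positive, trace-1 matrix would do).
rho = np.array([[0.7, 0.2],
                [0.2, 0.3]], dtype=complex)

# A two-outcome POVM: here the projectors onto |+> and |->. A POVM only
# needs E_k >= 0 with sum_k E_k = I; projectors are the special case.
E0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
E1 = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)
assert np.allclose(E0 + E1, np.eye(2))    # completeness check

p = [np.trace(rho @ E).real for E in (E0, E1)]
print(p)    # outcome probabilities for the ensemble; they sum to 1
```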
vanhees71 said:
Also, as I said before, I don't understand how you can say that with a nonstationary source no accuracy is reachable, while the quoted Penning-trap experiments lead to results which are among the most accurate measurements of quantities like the gyro-factor of electrons or, just recently reported even in the popular press, the accurate measurement of the charge-to-mass ratio of the antiproton.
This is not a contradiction, since both the gyro-factor of the electron and the charge-to-mass ratio of the antiproton are not observables in the traditional quantum mechanical sense but constants of Nature.

A constant is stationary and can in principle be arbitrarily well measured, while the arbitrarily accurate measurement of the state of a nonstationary system is in principle impossible. This holds already in classical mechanics, and there is no reason why less predictable quantum mechanical systems should behave otherwise.
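A classical toy version of this limitation (my own illustration with assumed numbers): when estimating a drifting quantity from noisy samples, a longer averaging window suppresses the noise like ##1/\sqrt{W}## but increases the bias from the drift, so the achievable accuracy for the instantaneous value is bounded below.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
t = np.arange(n)
signal = 1e-4 * t                         # slowly drifting (nonstationary)
data = signal + rng.normal(0.0, 1.0, n)   # noisy measurements

for W in (10, 1_000, 100_000):            # averaging window lengths
    est = data[-W:].mean()                # estimate of the current value
    print(W, abs(est - signal[-1]))       # small W: noisy; large W: biased
```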
vanhees71 said:
Nowhere in your paper can I see that there is anything NOT based on Born's rule. Although you use the generalization to POVMs, I don't see that this extension is in contradiction to Born's rule. Rather, it's based on it.
This is because of your magic practices, in conjunction with mixing up "contradiction to" and "not applicable". Both prevent you from seeing what everyone else can see.
 
  • Like
Likes dextercioby, Fra and gentzen
  • #100
I think the problem is that I understand something completely different when I read this paper than what the authors intend. In particular, I have no clue why Born's rule should not be behind the entire formalism for describing the outcomes of measurements. For me POVMs are just a description of measurement devices and the corresponding experiments where one does not perform an ideal von Neumann filter measurement. It's of course right that only very few real-world experiments are such ideal von Neumann filter measurements, so a more general description is needed for the experiments that have become possible nowadays (starting roughly with the first Bell tests by Aspect et al.).

My understanding of the paper is that it is very close to the view provided, e.g., by Asher Peres in his book

A. Peres, Quantum Theory: Concepts and Methods, Kluwer Academic Publishers, New York, Boston, Dordrecht, London, Moscow (2002).

What's new is the order of presentation, i.e., it starts from the most general case of "weak measurements" (described by POVMs) and then presents the standard-textbook notion of idealized von Neumann filter measurements as a special case. This makes a lot of sense if one aims at a deductive (or even axiomatic) formulation of QT. The only problem seems to be that this view is not what the author wants to express, and I have no idea what the intended understanding is.

Maybe it would help to discuss a concrete measurement, e.g., the nowadays standard experiment with single ("heralded") photons (e.g., produced via parametric down conversion using a laser and a BBO crystal, using the idler photon as the "herald" and then doing experiments with the signal photon). In my understanding such a "preparation procedure" determines the state, i.e., the statistical operator in the formalism. Then one can do an experiment, e.g., a Mach-Zehnder interferometer with polarizers, phase shifters etc. in the two arms, and then you have photon detectors to do single-photon measurements. It should be possible to describe such a scenario completely with the formalism proposed in the paper and then to point out where, in the view of the author, this contradicts the standard statistical interpretation a la Born.
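As a gesture toward the concrete discussion requested here, a sketch of the textbook single-photon Mach-Zehnder calculation (my own illustration, not from the paper): the two arms span a 2-mode Hilbert space, beam splitters act as 2x2 unitaries, a phase shifter sits in one arm, and the detector statistics show the familiar interference pattern.

```python
import numpy as np

# Mode basis: index 0 = one arm/output port, index 1 = the other.
BS = np.array([[1, 1j],
               [1j, 1]], dtype=complex) / np.sqrt(2)   # 50/50 beam splitter

def phase(phi):
    # Phase shifter acting on arm 0 only.
    return np.array([[np.exp(1j * phi), 0], [0, 1]], dtype=complex)

psi_in = np.array([1, 0], dtype=complex)   # single photon enters port 0
for phi in (0.0, np.pi / 2, np.pi):
    psi_out = BS @ phase(phi) @ BS @ psi_in
    p = np.abs(psi_out) ** 2               # detector click probabilities
    print(round(phi, 3), p)                # p[0]=sin^2(phi/2), p[1]=cos^2(phi/2)
```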
 
  • #101
A. Neumaier said:
the arbitrarily accurate measurement of the state of a nonstationary system is in principle impossible.
This fact is one reason for my own views. What is "stationary or not" is, I think, also relative - i.e., relative to the speed of information processing of the observer. This is why, IMO, what is "stationary enough" is observer dependent.

Any realistic scenario is necessarily about decision making and placing bets under incomplete information, of the sort where we cannot even measure the incompleteness. This is why I seek an intrinsic starting point.

Just as in the case of the impure state, real total limitations of predictability have two sources: one is the classical one, which we can think of as mere ignorance of the agent (or its being misinformed, etc.), and the other has to do with dependence between pieces of information, which is the essence of quantum mechanics. Certainly both issues are important in a real inference, and perhaps also their interplay.

/Fredrik
 
  • #102
How then can it be that, e.g., the measurement of the gyrofactor of the electron using a Penning trap is as precise as it is?
 
  • #103
Perhaps because the precessing electron IS "stationary enough", given that they are able to keep a single electron precessing for a month?

/Fredrik
 
  • #104
Fra said:
Perhaps because the precessing electron IS "stationary enough", given that they are able to keep a single electron precessing for a month?
Electrons in accelerators come in large bunches, not as single electrons...
Fra said:
What is "stationary or not", is I think also relative. Ie. relative to the speed of information processing of the observer.
It is only relative to the speed and accuracy with which reliable measurements can be taken. This is independent of any information processing on the side of the agent.
 
  • #105
A. Neumaier said:
Electrons in accelerators come in large bunches, not as single electrons...
I didn't analyze this in depth, or perhaps I missed something as it's not my focus, but what I was thinking of was, for example, this:

New Measurement of the Electron Magnetic Moment and the Fine Structure Constant
"A measurement using a one-electron quantum cyclotron gives the electron magnetic moment in Bohr magnetons, g/2 = 1.001 159 652 180 73 (28) [0.28 ppt], with an uncertainty 2.7 and 15 times smaller than for previous measurements in 2006 and 1987."
-- https://arxiv.org/abs/0801.1134

A. Neumaier said:
It is only relative to the speed and accuracy with which reliable measurements can be taken. This is independent of any information processing on the side of the agent.
I think this answer makes pretty good sense if we view the "information processing" as classical, conventional processing of "detector data" - for example, the way a physicist processes experimental data. I agree to this extent.

But my point was to add a possible other perspective. From the perspective of my interpretation, "measurement", "decision making" and "information processing" are all part of the agent's general inference (of which we of course don't have a theory, at least not yet). If one takes the agent to be a real part of the interaction, then the agent's "decisions and preparations" for the next interaction (measurement) should be constrained. So for the agent to be able to entertain a theory at least approximately isomorphic to QM, the amount of information and the confidence of the information implied in the agent's capacity to detect, recode and act (a measurement) must be comparatively insignificant. Otherwise the theory will need to evolve or deform before reliable "statistics" are acquired. This makes things insanely complicated and self-referencing indeed. But I think it's how nature is - I want to understand the more robust QM/QFT as the limiting case in such a bigger picture. I think one can appreciate the structure of QM but still entertain other possibilities for the purpose of seeking more explanatory value. There is no contradiction between the success of QM and thinking that there is a better paradigm. It may be problematic only if one does not see QM as an effective theory but as a logical structure that is proven perfect and can never change, only be added upon - as if science were about unravelling pieces of absolute truth, one bit at a time, without ever needing to revise the old pieces of the "truth".

/Fredrik
 
  • Like
Likes vanhees71