Evaluate this paper on the derivation of the Born rule

In summary, the paper discusses the Curie-Weiss model of the quantum measurement process and how it can be used to derive the Born rule.
  • #211
And ##A## determines the basis and so in part determines ##a##.
 
  • #212
Jilang said:
And ##A## determines the basis and so in part determines ##a##.
So? What is your point?
 
  • #213
mikeyork said:
So? What is your point?
Please see post #206.
 
  • #214
mikeyork said:
What kind of macroscopic measurement do you have in mind?
Measuring the center of mass of a macroscopic body, or measuring the mass or the total energy of a brick of iron. In each case the measurement produces a single number, and for the corresponding ##A## the value equals <A> to several significant digits.
mikeyork said:
but my point is that we don't measure <A>.
Whereas my point (and the point of the authors of the papers under discussion here) is that whenever one makes a macroscopic measurement one measures <A>.

Even in a Stern-Gerlach measurement, what one actually measures is a macroscopic spot on the screen. From such measurements one deduces theoretically - using a semiclassical model calculation - the values of the angular momentum of the silver atoms and arrives at an eigenvalue of the microscopic observable ##A##. The papers under discussion show how (in a similar, slightly idealized experiment, to make it tractable theoretically) statistical mechanics (with the alternative interpretation repeatedly discussed by me in this thread) produces the correct predictions, namely those that were postulated (rather than deduced) by Born's rule.
 
  • #215
A. Neumaier said:
Measuring the center of mass of a macroscopic body, or measuring the mass or the total energy of a brick of iron. In each case the measurement produces a single number, and for the corresponding ##A## the value equals <A> to several significant digits.

Whereas my point (and the point of the authors of the papers under discussion here) is that whenever one makes a macroscopic measurement one measures <A>.

Even in a Stern-Gerlach measurement, what one actually measures is a macroscopic spot on the screen. From such measurements one deduces theoretically - using a semiclassical model calculation - the values of the angular momentum of the silver atoms and arrives at an eigenvalue of the microscopic observable ##A##. The papers under discussion show how (in a similar, slightly idealized experiment, to make it tractable theoretically) statistical mechanics (with the alternative interpretation repeatedly discussed by me in this thread) produces the correct predictions, namely those that were postulated (rather than deduced) by Born's rule.
The SG experiment can be understood without any semiclassical approximation (at least if you admit some simple numerics to solve the time-dependent Schrödinger equation). It's of course clear that the final measurement via looking at a CCD screen (or, in the original Frankfurt setup, a photoplate using sulphur-rich cigars for better contrast ;-)) involves macroscopic measurement devices.

Concerning the center of mass of a macroscopic body you measure the one-body observable, represented by the operator
$$\hat{\vec{X}}=\frac{1}{M} \sum_{j=1}^N m_j \hat{\vec{x}}_j.$$
For "typical states" of a macroscopic system (e.g., the equilibrium state), you'll find a value that fluctuates around the expectation value (understood in the usual probabilistic sense), provided you repeat the measurement sufficiently often - since a single measurement is as good as doing no measurement, as you learn in the freshman introductory lab on day 1. The fluctuations are very small, because the standard deviation of this quantity is small.
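The smallness of these fluctuations is easy to check numerically. Here is a minimal sketch (a classical stand-in: the ##N## equal-mass positions are modeled as i.i.d. unit-variance random variables - an illustrative assumption, not a quantum calculation) showing the standard deviation of the center of mass shrinking like ##1/\sqrt{N}##:

```python
import numpy as np

rng = np.random.default_rng(0)

def com_std(n_particles, n_trials=500):
    """Std of the sampled center of mass of n_particles i.i.d.
    unit-variance positions (equal masses for simplicity)."""
    x = rng.normal(0.0, 1.0, size=(n_trials, n_particles))
    return float(np.std(x.mean(axis=1)))

stds = [com_std(n) for n in (100, 10_000)]
# The fluctuation shrinks roughly 10-fold for a 100-fold increase in N.
```

With macroscopic particle numbers (##N \sim 10^{23}##) the same scaling makes the fluctuation far below any realistic measurement accuracy.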

What I still don't understand is why you need an alternative interpretation of (quantum) statistical mechanics. Already the name "statistical" implies for me the use of probability theory (which you as a mathematician can formalize in a rigorous way, if you like). For the practical application of the theory to real-world experiments there is, in any case, no way around probability theory: it underlies the applied statistics any experimenter is supposed to deliver if he wants anything published in a peer-reviewed journal.
 
  • #216
A. Neumaier said:
Measuring the center of mass of a macroscopic body, or measuring the mass or the total energy of a brick of iron. In each case the measurement produces a single number, and for the corresponding ##A## the value equals <A> to several significant digits.
In that case, how is ##<A>## different from ##<a>##? (See my post #210.)

How is the distinction between macroscopic and microscopic relevant? In fact, if your brick were composed of (say) ##n## identical atoms, then we would have ##<a> = n \bar{a}##, where ##\bar{a}## is the atomic mean. Are you simply trying to say that ##\bar{a}## (an ensemble average) is not necessarily the same as ##<a_i>## (the expectation value for any particular atom)? That is a fair enough comment, because ##\bar{a}## should only converge on ##<a_i>## as ##n\rightarrow\infty##. But then you still haven't told me what the expectation value of an operator means physically (if ##<A>## does not mean ##<a>##) - despite my asking several times.
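For what it's worth, the relation ##<a> = n\bar{a}## and the convergence of ##\bar{a}## can be illustrated numerically (treating the atomic values as i.i.d. draws with mean ##\mu## - an illustrative stand-in, not a claim about real atoms):

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sigma = 2.0, 0.5  # illustrative atomic mean and spread

def brick(n):
    """Return (total value, atomic mean) for a 'brick' of n atoms,
    modeled as i.i.d. draws -- an illustrative stand-in."""
    a = rng.normal(mu, sigma, size=n)
    return float(a.sum()), float(a.mean())

total_small, abar_small = brick(100)
total_large, abar_large = brick(1_000_000)
# total = n * abar holds identically; abar approaches mu only as n grows.
```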
 
Last edited:
  • #217
vanhees71 said:
What I still don't understand is, why you need an alternative interpretation of (quantum) statistical mechanics.
Because measurement is a complicated statistical mechanics process that should not enter the foundations of quantum mechanics - just as it doesn't enter the foundations of classical mechanics. Born's rule should be a consequence of good foundations rather than a postulate that is part of (in the opinion of many physicists problematic) foundations.

That measuring a spot on a screen tells us anything about the state of a silver atom is something that needs to be proved from the dynamics of quantum mechanics rather than postulated at the outset.

At least this is the opinion of the authors whose work is discussed in this thread, and it is also my opinion, having myself spent many years trying to understand the foundations before I realized that.
 
  • #218
vanhees71 said:
a single measurement is as good as doing no measurement, as you learn in the freshman introductory lab on day 1
Well, engineers disagree.

Most things in everyday practice (which is the origin of the majority of macroscopic measurements made) are measured only once or twice, with very informative results.

Only measurements that are very noisy need many repetitions - and even then only the final average counts as the real measurement, not the individual instance.
 
  • #219
mikeyork said:
In that case, how is ##<A>## different from ##<a>##? (See my post #210.)
I don't know what ##a## is, hence not what ##<a>## should mean.

mikeyork said:
How is the distinction between macroscopic and microscopic relevant?
Because all measurements are derived by computations or interpretation from macroscopic measurements. So the latter are the basic objects without which the former cannot even be found. Moreover, macroscopic measurements give meaningful results even without repetition. Hence macroscopic objects have a more realistic nature than microscopic ones.
 
  • #220
A. Neumaier said:
Because measurement is a complicated statistical mechanics process that should not enter the foundations of quantum mechanics - just as it doesn't enter the foundations of classical mechanics. Born's rule should be a consequence of good foundations rather than a postulate that is part of (in the opinion of many physicists problematic) foundations.

That measuring a spot on a screen tells us anything about the state of a silver atom is something that needs to be proved from the dynamics of quantum mechanics rather than postulated at the outset.

At least this is the opinion of the authors whose work is discussed in this thread, and it is also my opinion, having myself spent many years trying to understand the foundations before I realized that.
This is a typical misunderstanding of many theoretical physicists. Physics is all about measurements! You cannot even do good old classical Newtonian mechanics without defining observables first, and observables are defined by (equivalence classes of) measurement procedures. For Newtonian mechanics you need to quantitatively define time, length, and mass as the fundamental quantities upon which the entire edifice is built.

In the case of the SG experiment you can indeed quite easily "prove" the meaning of the spot in its standard interpretation by solving the time-dependent Schrödinger equation. In a simplified form, I've done this once in a QM 2 lecture:

http://th.physik.uni-frankfurt.de/~hees/publ/hqm.pdf
Sect. 2.12
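A minimal numerical sketch of this kind of calculation (not taken from the linked notes; the grid, the coupling ##\lambda##, and the idealized linear potential ##V_\pm(x) = \mp\lambda x## are illustrative choices) shows the two spin components of a wave packet separating under the spin-dependent force, via a split-step Fourier solver:

```python
import numpy as np

# Split-step solver for a spin-1/2 packet in a linear (Stern-Gerlach-like)
# potential V_{+/-}(x) = -/+ lam*x; units hbar = m = 1 (illustrative setup).
n, L = 1024, 80.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
dt, steps, lam = 0.01, 500, 0.5

psi0 = np.exp(-x**2).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2))   # normalize on the grid
up = psi0 / np.sqrt(2)                     # state (|up> + |down>)/sqrt(2)
down = psi0 / np.sqrt(2)

kick_up = np.exp(-0.5j * dt * (-lam * x))  # half potential step for |up>
kick_down = np.exp(-0.5j * dt * (+lam * x))
kin = np.exp(-0.5j * dt * k**2)            # full kinetic step in k-space

for _ in range(steps):                     # Strang splitting
    up, down = kick_up * up, kick_down * down
    up = np.fft.ifft(kin * np.fft.fft(up))
    down = np.fft.ifft(kin * np.fft.fft(down))
    up, down = kick_up * up, kick_down * down

def mean_x(psi):
    return float(np.sum(x * np.abs(psi)**2) / np.sum(np.abs(psi)**2))
```

By Ehrenfest's theorem the packet centers follow the classical trajectories ##\pm\lambda t^2/2##, so after ##t = 5## the two components sit near ##x \approx \pm 6.25##: two separated "beams", as in the real experiment.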
 
  • #221
A. Neumaier said:
Well, engineers disagree.

Most things in everyday practice (which is the origin of the majority of macroscopic measurements made) are measured only once or twice, with very informative results.

Only measurements that are very noisy need many repetitions - and even then only the final average counts as the real measurement, not the individual instance.
Using a lot of products made by engineers, I very much hope that they do not disagree; and as far as I can tell from what's taught in engineering faculties around the world, they indeed don't!
 
  • #222
A. Neumaier said:
I don't know what a is, hence not what <a> should mean.
##<a> = <\psi|A\psi>##. It's the Born-rule expectation. A macroscopic system has a state vector just like a particle does. (I have edited my last post to distinguish the macroscopic ##<a>## from the atomic ##<a_i>##.)

A. Neumaier said:
Because all measurements are derived by computations or interpretation from macroscopic measurements. So the latter are the basic objects without which the former cannot even be found. Moreover, macroscopic measurements give meaningful results even without repetition. Hence macroscopic objects have a more realistic nature than microscopic ones.
Fair enough. But we're talking QM here. So the distinction between macroscopic and microscopic is quantitative not qualitative.

Now please tell me what the expectation value of an operator means, and why you think it describes a measurement.
 
  • #223
I hope we all agree that the Born rule is not restricted to pure states. Otherwise, all our debates make even less sense!

I agree with the last statement. I have no clue what all the symbolism of QT as a physical theory (not purely formalistic mathematics) should mean if I'm not allowed to interpret the quantum state (no matter whether pure or mixed) in the usual probabilistic sense. And indeed one takes expectation values of observables (given by Born's rule as ##\langle A \rangle=\mathrm{Tr}(\hat{\rho} \hat{A})##), not of operators. There are no operators in nature but only in our formal description of nature within QT.
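For a pure state the trace formula and the eigenvalue-weighted form of Born's rule can be checked against each other numerically; a small sketch with a random observable and a random state vector (dimension and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

d = 4                                        # illustrative dimension
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (M + M.conj().T) / 2                     # random self-adjoint "observable"

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)                   # normalized pure state
rho = np.outer(psi, psi.conj())              # rho = |psi><psi|

expect_trace = np.trace(rho @ A).real        # <A> = Tr(rho A)

eigvals, eigvecs = np.linalg.eigh(A)
probs = np.abs(eigvecs.conj().T @ psi) ** 2  # Born probabilities |<a_i|psi>|^2
expect_born = float(probs @ eigvals)         # sum_i p_i a_i
```

The two numbers agree to machine precision, and the ##p_i## sum to 1; for a mixed ##\hat\rho## only the trace formula changes its input, not its form.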
 
  • #224
In my post #216 I should have written ##a_{\text{macro}}## (meaning an eigenvalue of the macroscopic system), not ##<a>##. Sorry for the confusion. I get confused because I really do not understand what you mean by the expectation value of an operator and why you claim it to be the result of a measurement. It is the most annoying thing about this thread that you will not explain it to me.
 
  • #225
vanhees71 said:
This is a typical misunderstanding of many theoretical physicists. Physics is all about measurements!
No. This is your misunderstanding!

Physics is about understanding nature in terms of mathematics. (Galilei: The book of nature is written in the language of mathematics.)

We cannot measure anything in the past or future but still believe that physics draws a reasonably correct picture of dynamics, no matter what is measured.
Technology based on physics works, although nothing or very little is measured. Thus theoretical physics without measurement has lots of healthy uses, whereas measurement without underlying theory does not even get off the ground since a lot of theory is needed to even design and calibrate the devices that create measurements. This shows that theory is the foundation!

Measurements are only used to check the quality of predictions and theories, and to collect data that may lead to better or new theories.
 
  • #226
I have no clue what you mean by ##a_{\text{macro}}##. We discuss QT, and there expectation values are given by ##\langle A \rangle=\mathrm{Tr} (\hat{\rho} \hat{A})##, where ##\hat{\rho}## is the statistical operator of the system (no matter whether it's "microscopic" or "macroscopic") and ##\hat{A}## is the (usually self-adjoint) operator representing the observable ##A##.
 
  • #227
vanhees71 said:
There are no operators in nature but only in our formal description of nature within QT.
There are also no measurements in nature but only in our formal descriptions of nature within scientists' logbooks.
 
  • #228
A. Neumaier said:
No. This is your misunderstanding!

Physics is about understanding nature in terms of mathematics. (Galilei: The book of nature is written in the language of mathematics.)

We cannot measure anything in the past or future but still believe that physics draws a reasonably correct picture of dynamics, no matter what is measured.
Technology based on physics works, although nothing or very little is measured. Thus theoretical physics without measurement has lots of healthy uses, whereas measurement without underlying theory does not even get off the ground since a lot of theory is needed to even design and calibrate the devices that create measurements. This shows that theory is the foundation!

Measurements are only used to check the quality of predictions and theories, and to collect data that may lead to better or new theories.
All our success in technology is indeed based on both sides of physics, theoretical and experimental, and thus particularly on the ability to precisely quantify observations of nature; and this quantification is possible only by defining measurement procedures, which itself involves both theory and experiment/engineering. Even to define as simple a quantity as the length of my table, I need both theory (basically the assumption of the validity of some geometry of space, in this case Euclidean geometry) and engineering to build a measurement device (in this most simple case, simply a meter stick).

Of course I agree with you that mathematics is the only adequate language in which to do theoretical physics, but it's still physics and refers to well-defined quantities. How well defined your quantities are is, in the end, also a question of the progress of technology. That's why in the not too distant future we'll have a redefinition of some of the SI units (mass, mole, ampere).
 
  • #229
A. Neumaier said:
There are also no measurements in nature but only in our formal descriptions of nature within scientists' logbooks.
No, a measurement is a very "real" activity and not merely a formal description.
 
  • #230
vanhees71 said:
I have no clue what you mean by ##a_{\text{macro}}##.
I use ##a## as the value of an observable. ##a_{\text{macro}}## is the result of measuring it for a macroscopic system and is an eigenvalue of that system.

vanhees71 said:
We discuss QT, and there expectation values are given by ##\langle A \rangle=\mathrm{Tr} (\hat{\rho} \hat{A})##, where ##\hat{\rho}## is the statistical operator of the system (no matter whether it's "microscopic" or "macroscopic") and ##\hat{A}## is the (usually self-adjoint) operator representing the observable ##A##.
You use ##A## here as my ##a## and ##\hat{A}## for the operator. But Neumaier uses ##A## for the operator! (Which is why I use ##a## for the variable.)

But it seems (and he neither denies nor explains why, which I find very frustrating) that he uses ##<A>## to be the expectation value of an operator. Confusing? Yes, very!
 
  • #231
vanhees71 said:
No, a measurement is a very "real" activity and not merely a formal description.

I would say that observations are real. But interpreting an observation as a measurement of something is theory-dependent.
 
  • #232
A measurement is a quantified observation. Concerning the confusion with the notation: it is clear that observables are themselves not operators on a Hilbert space but are defined as equivalence classes of measurement procedures in the real world. That's why I use ##\hat{A}## for the operator and ##A## for the observable; and the average is either an average over many measurement results on an ensemble of equally prepared systems (that's the case, e.g., for standard scattering experiments with single particles, nuclei, atoms, etc.) or a temporal or spatial average by a measurement apparatus (e.g., if you measure the effective value of an AC current or voltage, or the intensity of light).

I still don't know what you mean by "measuring a macroscopic system". Macroscopic systems are quantum systems too. I guess what you mean are the usual "bulk observables" of a macroscopic system (i.e., a system consisting of very many particles), like single-particle densities/phase-space distributions, the total energy and momentum, the center-of-mass position, etc. These behave under usual conditions (e.g., close to thermal equilibrium at finite temperature) classically, because they are averaged over many microscopic degrees of freedom, and quantum as well as thermal fluctuations (quantified by standard deviations of the macroscopic observables) are small compared to the typical relevant order of magnitude of changes of these variables, thanks to the "law of large numbers".
 
  • #233
vanhees71 said:
A measurement is a quantified observation.

Okay, you can define it that way, but my point was that people normally assume that a measurement implies that you are measuring something. But what it is that is measured by an observation is theory-dependent.
 
  • #234
vanhees71 said:
A measurement is a quantified observation. Concerning the confusion with the notation: it is clear that observables are themselves not operators on a Hilbert space but are defined as equivalence classes of measurement procedures in the real world. That's why I use ##\hat{A}## for the operator and ##A## for the observable; and the average is either an average over many measurement results on an ensemble of equally prepared systems (that's the case, e.g., for standard scattering experiments with single particles, nuclei, atoms, etc.) or a temporal or spatial average by a measurement apparatus (e.g., if you measure the effective value of an AC current or voltage, or the intensity of light).

I still don't know what you mean by "measuring a macroscopic system". Macroscopic systems are quantum systems too. I guess what you mean are the usual "bulk observables" of a macroscopic system (i.e., a system consisting of very many particles), like single-particle densities/phase-space distributions, the total energy and momentum, the center-of-mass position, etc. These behave under usual conditions (e.g., close to thermal equilibrium at finite temperature) classically, because they are averaged over many microscopic degrees of freedom, and quantum as well as thermal fluctuations (quantified by standard deviations of the macroscopic observables) are small compared to the typical relevant order of magnitude of changes of these variables, thanks to the "law of large numbers".
An average is an empirical number obtained from a sample; an expectation is a theoretical quantity derived from a theoretical distribution and applied to a single measurement. We expect them to become the same only with an infinitely large sample. Do we agree on that distinction?
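The distinction is easy to make concrete. In the sketch below (a hypothetical two-outcome measurement with eigenvalues ##\pm 1## and an assumed Born probability of 0.3 for the ##+1## outcome), the empirical sample average approaches the theoretical expectation only as the sample grows:

```python
import numpy as np

rng = np.random.default_rng(3)

p_up = 0.3                                   # assumed Born probability for +1
expectation = p_up * 1 + (1 - p_up) * (-1)   # theoretical <A> = -0.4

def sample_mean(n):
    """Empirical average of n simulated measurement outcomes."""
    return float(rng.choice([1, -1], size=n, p=[p_up, 1 - p_up]).mean())

small = sample_mean(10)         # fluctuates strongly from run to run
large = sample_mean(1_000_000)  # close to the theoretical expectation
```

The expectation applies to every single measurement as a theoretical quantity; the average is a property of the sample, and only agrees with it in the large-sample limit.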

One can treat a macroscopic object as a single quantum entity with an expectation or one can treat it as an ensemble of microscopic quantum objects.
 
  • #235
vanhees71 said:
No, a measurement is a very "real" activity and not merely a formal description.
Then maybe it is an informal description.

It is a social concept invented by physicists to help them make correct statements about Nature. The latter are supposed to hold without any measurement; otherwise we wouldn't get any insight into unmeasured systems.

There were no measurements in nature before 4000 BC, say, but physics still applies to everything before that time.
 
  • #236
vanhees71 said:
, the total energy and momentum, the center-of-mass position etc. These behave under usual conditions (e.g., close to thermal equilibrium at finite temperature) classically, because they are averaged over many microscopic degrees of freedom
The total energy is not an average over many microscopic degrees of freedom, neither is the total mass.

Even for position, which may be viewed as such an average, the microscopic degrees of freedom are never measured, so Born's rule (which is exclusively about measurement results) cannot apply even in principle!
 
  • #237
vanhees71 said:
A measurement is a quantified observation. Concerning the confusion with the notation: it is clear that observables are themselves not operators on a Hilbert space but are defined as equivalence classes of measurement procedures in the real world.
There is no such notion of ''equivalence class of measurement procedures in the real world''; it is your invention!

The collection of measurement procedures for a particular quantity (let us say mass) in the real world strongly depends on time, but still we believe that Newton had the same notions of length, force, or mass in mind that we have today. Moreover, the form and accuracy of measurement procedures varies wildly depending on the size of the object and the details of the procedure, and is always limited. So how can they define a concept in a way that it could subsequently be the subject of theoretical physics?

One needs theory (including a theoretical definition of the quantity) to even determine whether a proposed measuring protocol is in fact measuring the desired quantity. A famous quote of Callen (p. 15 in the second edition of his even more famous book on thermodynamics) says:
Callen said:
Operationally, a system is in an equilibrium state if its properties are consistently described by thermodynamic theory.
The context in which this quote - in the original emphasized by setting it in italics! - appears shows that he clearly means this and understands its implications.

Thus the theory is always the primary thing, defining everything conceptually, and measurement is the way to check its consistency with the real world.
 
Last edited:
  • #238
A. Neumaier said:
The total energy is not an average over many microscopic degrees of freedom, neither is the total mass.

Even for position, which may be viewed as such an average, the microscopic degrees of freedom are never measured, so Born's rule (which is exclusively about measurement results) cannot apply even in principle!
It's really very difficult to discuss if we don't try to understand each other. Born's rule for me applies to both pure and mixed states. For a macroscopic system, of course, we don't measure microscopic degrees of freedom (e.g., the positions of all particles within the system), because we are not able to get this information; it's too complex (if you have 1 mol of a gas, you cannot measure ##3N_{\text{A}}## position components, because it's too much information to store). What you can, however, measure is the center of mass, and it's described by the operator
$$\hat{\vec{R}}=\frac{1}{M} \sum_{j} m_j \hat{\vec{x}}_j.$$
You also cannot know the microscopic pure state of the system, but can only guess a statistical operator, given the information about the system (e.g., the total energy, momentum, and angular momentum of the system), and then use the maximum-entropy principle, which you may take as a fundamental principle of statistical physics (a very reasonable one, given the meaning of entropy in the information-theoretical approach). You are led to the (generalized) equilibrium distribution (most simply stated in the grand-canonical approach, where only the averages are specified),
$$\hat{\rho}=\frac{1}{Z} \exp[-\beta (\hat{H}-\vec{v} \cdot \hat{\vec{P}})],$$
here for simplicity assuming a non-rotating system, i.e., with total angular momentum 0. Then it's easy to see that all the macroscopic properties, defined by expectation values, are as expected (e.g., you have ##\hat{\vec{P}}=M \dot{\hat{\vec{R}}}## in the Heisenberg picture, which is most convenient for this discussion, and thus ##\langle \vec{R} \rangle=\langle \vec{R} \rangle_{0}+\vec{v} t##). Then, if the system is very large, the standard deviations of the macroscopic variables are also small compared to their values and to the relevant accuracy with which these macroscopic observables are measured, so that you get classical behavior, and the fluctuations are hard to observe (although it's of course possible, and it led to Einstein's work on Brownian motion and related subjects, finally proving the existence of atoms, molecules, etc.).
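A stripped-down numerical illustration of the ##\langle A\rangle=\mathrm{Tr}(\hat\rho\hat A)## machinery (a single harmonic-oscillator mode in the canonical ensemble, with ##\hbar\omega = k_B = 1## and the drift term ##\vec{v}\cdot\hat{\vec{P}}## dropped for simplicity - illustrative choices, not the general case):

```python
import numpy as np

beta = 0.7    # inverse temperature (illustrative)
n_max = 200   # truncation of the oscillator Hilbert space

E = np.arange(n_max) + 0.5   # H|n> = (n + 1/2)|n>
w = np.exp(-beta * E)
Z = w.sum()                  # partition function
rho_diag = w / Z             # canonical rho = exp(-beta H)/Z, diagonal in |n>

E_mean = float(rho_diag @ E)                   # <H> = Tr(rho H)
E_std = float(np.sqrt(rho_diag @ E**2 - E_mean**2))

E_exact = 0.5 + 1.0 / np.expm1(beta)           # analytic <H> for comparison
```

For ##N## independent such modes the total energy has mean ##N\langle H\rangle## and standard deviation ##\sqrt{N}\,\Delta H##, so the relative fluctuation falls like ##1/\sqrt{N}## - the quantitative content of the classicality argument above.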

If you know more about the system than the mere values of the additive conserved quantities, you can refine your state by working the corresponding constraints into the maximum-entropy principle, which leads to off-equilibrium statistical mechanics. As in any situation where you lack full information about the system (i.e., short of preparing a state for which one complete set of compatible observables takes given values, which leads to a pure state described by the then uniquely defined common eigenvector of the corresponding operators), you can only make educated guesses about the right statistical description, and the maximum-entropy principle is one way to make such an educated guess. Whether or not this guess leads to a good description of the situation considered is subject to empirical confirmation and may lead to refinements of the description. This is not specific to quantum theory but applies to any statistical approach to a coarse-grained description, which always needs the specification of the relevant observables and the accuracy with which their determination is necessary for the corresponding "macroscopic" description.
 
  • #239
A. Neumaier said:
There is no such notion of ''equivalence class of measurement procedures in the real world''; it is your invention!

The collection of measurement procedures for a particular quantity (let us say mass) in the real world strongly depends on time, but still we believe that Newton had the same notions of length, force, or mass in mind that we have today. Moreover, the form and accuracy of measurement procedures varies wildly depending on the size of the object and the details of the procedure, and is always limited. So how can they define a concept in a way that it could subsequently be the subject of theoretical physics?

One needs theory (including a theoretical definition of the quantity) to even determine whether a proposed measuring protocol is in fact measuring the desired quantity. A famous quote of Callen (p. 15 in the second edition of his even more famous book on thermodynamics) says:

The context in which this quote - in the original emphasized by putting it in italic! - appears shows that he clearly means this and understands its implications.

Thus the theory is always the primary thing, defining everything conceptually, and measurement is the way to check its consistency with the real world.
Yes, and to define what is meant by mass, length, force, etc., you have to give measurement procedures to enable their quantitative observation; and there are many different ways to operationally define the quantities, which, as you state yourself, also change with time due to the development of new technical possibilities for measuring these quantities. That's why I summarized this as an "equivalence class of measurement procedures". Of course, I assumed (obviously falsely) what every physics student learns in the first experimental-course lectures, namely that physical observables are defined by appropriate measurement procedures, i.e., operationally in the lab, and not by abstract mathematical definitions within some theory.

The theoretical physicist of course aims at descriptions like Newtonian (analytical) mechanics, where you can define observables in a rather abstract way. Nevertheless, to make this more than a pure mathematical exercise, i.e., to make it a physical theory, you need to define the quantities described by the formalism operationally, via measurement procedures. Even the famous Callen cannot "check its [the theory's] consistency with the real world" without having measurement procedures defined to measure the quantities described by the theory!
 
  • #240
vanhees71 said:
What you can, however, measure is the center of mass
So you measure a single operator once, and do not take an average of many measurements. But Born's rule only applies to an ensemble of measurements, not to a single one. Your argument about means has weight only if your averages are averages of measurements (to which Born's rule applies), not if your averages are averages of operators, about which Born's rule is silent.

vanhees71 said:
Yes, and to define what is meant by mass, length, force, etc., you have to give measurement procedures to enable their quantitative observation,
To explain what it means, one gives sample procedures that result in approximate measurements - not equivalence classes of procedures. One explains that length is what you measure with a ruler, force what you measure with a scale, and time what you measure with a clock. This is enough to create a preliminary correspondence of the theoretical concepts with reality. But it is only a very approximate correspondence, since rulers, scales, and clocks have limited accuracy.

Once you need more accuracy, it is the theory that tells whether a measurement device is accurate enough, since only the theory is able to give precise definitions of the concepts. You cannot define the mass of a star by a measurement procedure for it. Instead, the mass of a star is defined theoretically as a parameter in a stellar model, and the measurement procedure is derived purely from the stellar model!

Thus it is theory that defines the precise meaning of any observable, and whatever preliminary explanation is given in terms of a simple measurement procedure is only a heuristic illustration, not its foundation.

vanhees71 said:
Even the famous Callen cannot "check its [the theory's] consistency with the real world" without having measurement procedures defined to measure the quantities described by the theory!
This agrees with my claim that measurements are only needed to check a theory's consistency with the real world.
 
  • #241
I never ever have seen an operator in a physics lab, and my experimental colleagues measure observables, defined by appropriate measurement procedures. I don't know why you resist this simple fact of how physics is done.

I really described very clearly that macroscopic observables, like any observables, are described by self-adjoint operators in Hilbert space; and I further used your own rule for predicting measurements of these observables in the typical case of macroscopically determined states, taking the expectation value and arguing why the fluctuations around this mean value are, under these circumstances, expected to be small compared to the macroscopically necessary accuracy. I don't understand why you argue against your own interpretation. Is it only because you are, for some incomprehensible reason, against the statistical interpretation of the state, i.e., Born's rule? The problem with this is that you are not willing to give a clear physical interpretation of the state. The formalism you give in your book is not at all clear for application in the physics lab!
 
  • #242
vanhees71 said:
I never ever have seen an operator in a physics lab, and my experimental colleagues measure observables
So what? I never ever have seen an observable, though I have done lots of measurements. Mass, distance, momentum, charge, etc. are all invisible.

But there are operators called position, momentum, distance, angular momentum, mass, spin, energy, charge, electric field in a region of space, etc., and these are measured in the lab.

How is the mass of the Earth (or of a distant star) defined in terms of lab measurements? I have never seen it explained anywhere in terms of your "equivalence class of measurement procedures". But every physicist understands the term as a theoretical quantity figuring as a parameter in the gravitational law. Based on that, a number of ways were found to measure it under appropriate conditions.
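As a concrete instance of a mass figuring as a theoretical parameter in the gravitational law, one can back out the Earth's mass from its measured surface gravity and radius via Newton's law g = GM/R². A minimal sketch (the rounded constants are standard textbook values, not from the thread):

```python
# Sketch: Earth's mass inferred as a parameter of the gravitational law,
# g = G*M/R**2  =>  M = g*R**2/G  (no lab scale involved).
G = 6.674e-11   # m^3 kg^-1 s^-2, Newtonian gravitational constant
g = 9.81        # m/s^2, measured surface acceleration
R = 6.371e6     # m, mean Earth radius

M = g * R**2 / G
print(f"M ≈ {M:.3e} kg")  # roughly 5.97e24 kg
```

The measured inputs (g, R) only acquire their meaning as a mass determination through the theoretical model, which is the point being made above.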

vanhees71 said:
The formalism you give in your book is not clear at all for application in the physics lab!
What does not apply?

It just needs to be augmented by a dictionary relating the notions in the book to the notions in the lab. This is easily done by telling which instruments prepare and measure what. Such a dictionary is necessary for the application of any language to anything, hence not the fault of my description. Even a book on experimental physics needs this dictionary to be applicable to the lab, unless you assume that the common language is already known. But then I am allowed to assume this as well!
 
  • #243
I don't think that we are able to communicate about the first part in an adequate way.

It is, e.g., very clear how to determine the mass of astronomical bodies from their motion, making use of (post-)Newtonian theory. A very amazing example of this accuracy is pulsar timing:

http://th.physik.uni-frankfurt.de/~hees/cosmo-SS17/pulsar-timing-theorie.pdf

And I'd never say ##\hat{\vec{x}}## "is the position" but "it's the operator representing position" or, shorter, "it's the position operator" (the same holds for any observable).

I think the problem is your last paragraph:
You just need to augment it by a dictionary relating the notions in the book to the notions in the lab. But this is necessary for the application of any language to anything, hence not the fault of my description. Even a book on experimental physics needs this dictionary to be applicable to the lab, unless you assume that the common language is already known. But then I am allowed to assume this as well!
It is not my task to "provide the dictionary relating the notions in the book to the notions in the lab", because I take the standard way physicists have done this for over 90 years now as sufficient, and the relation you ask for is simply the probabilistic meaning of the quantum state according to Born's rule, no more and no less. There is no distinction in principle between macroscopic and microscopic observables, only a difference in the systems considered and in the degree of coarse-graining taken as satisfactory accuracy for determining the "relevant" observables.

You deny the probabilistic meaning of the state and define "expectation values" with all the properties of the standard probabilistic meaning, but on the other hand you reject this standard way of relating the formalism to the physics in the lab. To convince any physicist of your alternative interpretation, you must give the physical meaning of your mathematics by doing precisely what you formulated in the quoted paragraph, i.e., you have to "provide the dictionary relating the notions in the book to the notions in the lab".
 
  • #244
vanhees71 said:
It is not my task to "provide the dictionary relating the notion in the book to the notions in the lab" [...] you have to "provide the dictionary relating the notion in the book to the notions in the lab".
This is easily done by telling which instruments prepare and measure what, nothing more. I actually know this dictionary; I have more physics education than you may assume. Once this dictionary is set up, one can check to which extent theory and experiment agree.

I know that with my thermal interpretation, quantum mechanics and experiment fully agree on the level of thermodynamic measurements. Because (as shown in my book) the probability interpretation can be derived from the thermal interpretation under the appropriate conditions, I also know that with my thermal interpretation, quantum mechanics and experiment fully agree for the Stern-Gerlach experiment or for quantum optical experiments.

Thus your claim is wrong that I deny the probabilistic meaning of the state in the cases where such a meaning is appropriate.
 
  • #245
That's great progress! So finally what you get is the standard probabilistic/statistical connection between theory and experiment. So what are we debating after all?
 