When Quantum Mechanics is thrashed by non-physicists #1

In summary: The same state can be described using different finite-dimensional vector spaces, each corresponding to a different frame of reference. So the same state can be said to exist in different ways, and the different interpretations of the state might be considered "correct" or "incorrect", depending on your perspective. I haven't read the paper, so I can't say much more about it.
  • #176
stevendaryl said:
I think that there is a sense in which Popperian falsifiability can be seen as a way to manage the complexity of a full-blown Bayesian analysis. If there are a number of possible theories, you just pick one, work out the consequences, and compare with experiment. If it's contradicted by experiment, you discard that theory and pick a different one.
Yes. But one problem of the Popperian approach was handling statistical theories, and statistical experiments, appropriately.
When does a statistical observation falsify a theory? This is where one needs Bayesian reasoning, which can handle a few theories and some statistical observations with unclear outcomes.
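As a toy sketch of that kind of reasoning (all numbers invented for illustration, not taken from the discussion): two rival theories predict different success rates for the same experiment, and an ambiguous result shifts our confidence without falsifying either.

```python
# Toy Bayesian comparison of two statistical theories (invented numbers).
from math import comb

def binomial_likelihood(k, n, p):
    """P(k successes in n trials | success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Two rival theories predict different success rates for the same experiment.
rates = {"T1 (p = 0.5)": 0.5, "T2 (p = 0.6)": 0.6}
prior = {name: 0.5 for name in rates}   # no initial preference

# An ambiguous observation: 57 successes in 100 trials falsifies neither theory.
k, n = 57, 100
unnorm = {name: prior[name] * binomial_likelihood(k, n, p)
          for name, p in rates.items()}
z = sum(unnorm.values())
for name, w in unnorm.items():
    print(name, round(w / z, 3))   # posterior weights shift; nothing is "falsified"
```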
 
  • #177
I don't know how this ended up discussing Jaynes' view, but here it is, unedited (from his book, Probability Theory: The Logic of Science):

The “new” perception amounts to the recognition that the mathematical rules of probability theory are not merely rules for calculating frequencies of “random variables”; they are also the unique consistent rules for conducting inference (i.e. plausible reasoning) of any kind, and we shall apply them in full generality to that end.

It is true that all “Bayesian” calculations are included automatically as particular cases of our rules; but so are all “frequentist” calculations. Nevertheless, our basic rules are broader than either of these, and in many applications our calculations do not fit into either category. To explain the situation as we see it presently: The traditional “frequentist” methods which use only sampling distributions are usable and useful in many particularly simple, idealized problems; but they represent the most proscribed special cases of probability theory, because they presuppose conditions (independent repetitions of a “random experiment” but no relevant prior information) that are hardly ever met in real problems. This approach is quite inadequate for the current needs of science.

In addition, frequentist methods provide no technical means to eliminate nuisance parameters or to take prior information into account, no way even to use all the information in the data when sufficient or ancillary statistics do not exist. Lacking the necessary theoretical principles, they force one to “choose a statistic” from intuition rather than from probability theory, and then to invent ad hoc devices (such as unbiased estimators, confidence intervals, tail-area significance tests) not contained in the rules of probability theory. Each of these is usable within a small domain for which it was invented but, as Cox’s theorems guarantee, such arbitrary devices always generate inconsistencies or absurd results when applied to extreme cases; we shall see dozens of examples.
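To make the nuisance-parameter point concrete, here is a toy sketch (invented numbers, not from Jaynes): in Bayesian terms a nuisance parameter is simply summed (or integrated) out of the joint posterior.

```python
# Toy marginalization: a nuisance parameter nu is summed out of the
# (unnormalized) joint posterior over (theta, nu). Numbers are invented.
joint = {
    ("theta=0.4", "nu=a"): 0.10, ("theta=0.4", "nu=b"): 0.15,
    ("theta=0.6", "nu=a"): 0.30, ("theta=0.6", "nu=b"): 0.45,
}

marginal = {}
for (theta, _nu), weight in joint.items():
    marginal[theta] = marginal.get(theta, 0.0) + weight   # sum over nu

z = sum(marginal.values())
print({theta: round(w / z, 3) for theta, w in marginal.items()})
# -> theta=0.4 gets 0.25, theta=0.6 gets 0.75
```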
 
  • #178
Ilja said:
But in Jaynes' variant there is more than only the axioms which define probability. There are also rules for the choice of prior probabilities.

And that's part of how a particular view of something affects how you solve a problem - which is what I said right from the start.

It in no way changes the fact that the two are mathematically exactly the same.

Ilja said:
If we have no information which makes a difference between the six possible outcomes of throwing a die, we have to assign equal probability to them, that means 1/6.

It's simply confirming what I said - how you view a problem affects how you approach it. It's adding something beyond the Kolmogorov axioms, which are exactly equivalent to the Cox axioms Bayesians use.

Thanks
Bill
 
  • #179
billschnieder said:
I don't know how this ended up discussing Jaynes' view, but here it is, unedited (from his book, Probability Theory: The Logic of Science):

And, as the link I gave detailed, his views are not universally accepted. I certainly do not accept them. It is simply a particular philosophical view that is useful in some circumstances. So is the frequentist view. As one poster in the link, IMHO correctly, said:
'Whether frequentist or Bayesian methods are appropriate depends on the question you want to pose, and at the end of the day it is the difference in philosophies that decides the answer (provided that the computational and analytic effort required is not a consideration).'

The Bayesian view of probability vs. the frequentist view is not going to be resolved here.

Thanks
Bill
 
  • #180
stevendaryl said:
I haven't read Jaynes, but I don't see how the choice 1/6 is essential to a Bayesian account of probability.

It isn't. It's simply a reasonable assumption: as a rational agent you wouldn't, without evidence one way or the other, prefer one face over another, so you assign an initial confidence level of 1/6. The frequentist view has no reason to do that, but in practice a frequentist would do the same based on the symmetry of the situation - over a long number of trials you wouldn't expect any face to occur more often than another.
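A minimal sketch of that reasoning (hypothetical code, simulated roll data): start from the uniform 1/6 assignment as a prior and let the evidence move it.

```python
# The 1/6 assignment as a prior that evidence can revise (illustrative only).
import random
random.seed(0)

alpha = [1] * 6            # uniform Dirichlet prior: no face preferred
counts = [0] * 6           # observed rolls per face

for _ in range(600):
    counts[random.randrange(6)] += 1   # simulate rolls of a fair die

# Posterior mean for each face starts at 1/6 and tracks the observed data.
total = sum(alpha) + sum(counts)
posterior_mean = [(a + c) / total for a, c in zip(alpha, counts)]
print([round(p, 3) for p in posterior_mean])   # all close to 1/6 ≈ 0.167
```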

Thanks
Bill
 
  • #181
bhobba said:
It isn't. It's simply a reasonable assumption: as a rational agent you wouldn't, without evidence one way or the other, prefer one face over another, so you assign an initial confidence level of 1/6. The frequentist view has no reason to do that, but in practice a frequentist would do the same based on the symmetry of the situation - over a long number of trials you wouldn't expect any face to occur more often than another.

Well, the thing that's interesting to me about a symmetry argument for probability is that, unlike subjective Bayesian probability, and unlike frequentist probability (which is really a property of an ensemble rather than of an individual event), symmetry-based probability seems like an intrinsic property of the entities involved in the random event. So it seems like a candidate for an "objective" notion of probability for a single event.
 
  • #182
stevendaryl said:
Well, the thing that's interesting to me about a symmetry argument for probability is that, unlike subjective Bayesian probability, and unlike frequentist probability (which is really a property of an ensemble rather than of an individual event), symmetry-based probability seems like an intrinsic property of the entities involved in the random event. So it seems like a candidate for an "objective" notion of probability for a single event.

It's really the same thing in disguise: if you relabel the faces it shouldn't make any difference, with one proviso - that there is no intrinsic difference between the faces. That is basically the kind of symmetry argument used to make physical problems easier to solve.

Like I said - it's simply a different philosophy suggesting a different approach.

Thanks
Bill
 
  • #183
Ilja said:
But in Jaynes' variant there is more than only the axioms which define probability.
Regarding probability, many people confuse the axiomatics (pure mathematics, which says nothing about semantics and is independent of any application, like all pure maths), the methodology of statistical analysis (like http://en.wikipedia.org/wiki/Bayesian_inference, or more generally a methodology for reasoning about uncertain or incomplete data, as in E. T. Jaynes), and the philosophy of the interpretation of probability.

Patrick
 
  • #184
bhobba said:
It's really the same thing in disguise: if you relabel the faces it shouldn't make any difference, with one proviso - that there is no intrinsic difference between the faces. That is basically the kind of symmetry argument used to make physical problems easier to solve.

Like I said - it's simply a different philosophy suggesting a different approach.

Karl Popper suggested a "propensity" interpretation of probability, where the fact that a coin has a 50/50 chance of landing heads or tails is an objective fact about the coin. I couldn't really see how that made much sense, except possibly as a symmetry argument.
 
  • #185
stevendaryl said:
Karl Popper suggested a "propensity" interpretation of probability, where the fact that a coin has a 50/50 chance of landing heads or tails is an objective fact about the coin. I couldn't really see how that made much sense, except possibly as a symmetry argument.

There are all sorts of different attitudes, philosophies, views, etc. - call them what you will - towards probability.

As you have probably guessed, for me the "truth" lies in the Kolmogorov axioms - one chooses the view best suited to the problem. For me that's the frequentist view. That doesn't make it right, or better than other views; it's simply what I prefer.

Thanks
Bill
 
  • #186
microsansfil said:
Regarding probability, many people confuse the axiomatics (pure mathematics, which says nothing about semantics and is independent of any application, like all pure maths),

See page 2 of Feller, An Introduction to Probability Theory and Its Applications:

In applications the abstract mathematical models serve as tools and different models can describe the same empirical situation. The manner in which mathematical theories are applied does not depend on pre-conceived ideas, it is a purposeful technique depending on and changing with experience. A philosophical analysis of such techniques is a legitimate study, but is not in the realm of mathematics, physics or statistics. The philosophy of the foundations of probability must be divorced from mathematics and statistics exactly as the discussion of our intuitive space concept is now divorced from geometry.

The axioms, in this case the Kolmogorov axioms, and how they are applied, are what applied math and physics are concerned with. Philosophy, experience, etc. guide us in how to apply the axioms - but it's the axioms themselves that are the essential thing.

Thanks
Bill
 
  • #187
bhobba said:
The axioms, in this case the Kolmogorov axioms, and how they are applied, are what applied math and physics are concerned with.
The axioms do not tell you how to determine the probability of an event.

Bayesian inference and frequentist inference are useful methodologies for doing this job in many scientific domains.

Up to here I don't need to speak about philosophy to use statistical methodology.

Patrick
 
  • #188
microsansfil said:
The axioms do not tell you how to determine the probability of an event.

I think you need to become acquainted with the strong law of large numbers.
https://terrytao.wordpress.com/2008/06/18/the-strong-law-of-large-numbers/
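Not a proof, but a quick numerical illustration of what the theorem says operationally (a sketch; the event and trial counts are arbitrary): relative frequencies settle down to the probability as the number of trials grows.

```python
# Numerical illustration of the strong law of large numbers (not a proof):
# the relative frequency of rolling a six approaches 1/6 as trials grow.
import random
random.seed(1)

p = 1 / 6
for n in (100, 10_000, 1_000_000):
    hits = sum(1 for _ in range(n) if random.randrange(6) == 5)
    print(n, round(abs(hits / n - p), 5))   # deviation from 1/6 shrinks
```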

microsansfil said:
Bayesian inference and frequentist inference are useful methodologies for doing this job in many scientific domains.

That I definitely agree with.

Thanks
Bill
 
  • #189
bhobba said:
It's simply confirming what I said - how you view a problem affects how you approach it. It's adding something beyond the Kolmogorov axioms, which are exactly equivalent to the Cox axioms Bayesians use.

How can there be an equivalence if the domain of applicability is completely different, and the meaning is completely different?

Bayesian probability is about the logic of reasoning - what we can conclude given some information. Frequentist probability is about physical laws of nature, which define how often the outcome x will be observed in repeated experiments, given the preparation procedure.

So if, for example, we do not have all the information about the preparation procedure, frequentist probability tells us nothing (given our information). Bayesian probability would give me something - something different from what it would give me if I had the full information.

And frequentism simply gives nothing for deciding which of two theories I should prefer given the data. OK, what to do in this case you can call "how to view a problem". But, following Bayesian probability, you have rules of logical consistency which you have to follow. The orthodox statistician is, instead, free to violate these rules and call this "his view of the problem". Essentially we can only hope that his "view of the problem" is consistent, or, if it is inconsistent, that his "view" does not give a different result from the consistent one.

This is the very problem you don't seem to see: the Bayesian is required to apply the Kolmogorov axioms in his plausible reasoning. The orthodox statistician is not, because for him plausible reasoning is not about frequencies; thus no probabilities are involved, it does not even make sense to say "GR is false with probability 0.07549", and so it makes no sense to apply the Kolmogorov axioms to plausible reasoning, just as it makes no sense to apply them to electromagnetic field strengths.

"There is no place in our system for speculations concerning the probability that the sun will rise tomorrow," writes Feller. But this is what statistics has to do in its everyday applications. Statisticians have to tell us the probability that a theory is wrong given the experimental evidence; this is their job. So in fact they have to apply plausible reasoning, and they apply it intuitively - but without the educated knowledge that they have to apply the rules of Kolmogorov's probability theory to that plausible reasoning, which is what they reject as meaningless.
 
  • #190
Ilja said:
How can there be an equivalence if the domain of applicability is completely different, and the meaning is completely different?

Axioms can model different things. So? In the Bayesian view they model a confidence level. In the modern frequentist version they are simply abstract, and you show via the strong law of large numbers that, for a large number of trials, the relative frequency is FAPP equal to the probability. In a sense that's more fundamental than the Bayesian view - but that doesn't make it better or worse.

Thanks
Bill
 
  • #191
Ilja said:
How can there be an equivalence if the domain of applicability is completely different, and the meaning is completely different?
You use a method according to the problem you have to analyze.

For example, how would an axiomatics help to solve this problem?

Every morning around 8 am I park my car in a spot where payment is required from 9 am. Several times a week I forget to move my car to a car park (which opens at 8:30) until 10 am.

I would like to calculate the probability of getting a ticket when I wake up at 10 am to move my car.

Patrick
 
  • #192
microsansfil said:
I would like to calculate the probability of getting a ticket when I wake up at 10 am to move my car.

There is not enough information to calculate the probability. You need to know, for example, the hours parking inspectors work in your area - or at least the probability of them working at that time. Is it a Sunday? Do they work Sundays? And so on.

Added Later
In practice, to solve a problem like that, an applied mathematician would model it on a computer using something like Simula, incorporating and adjusting factors obtained from observation until the model agrees with reality to the level of accuracy required - if that level of accuracy is achievable; it may not be.
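As a sketch of that kind of simulation (Python standing in for Simula; the inspector rate below is invented and would in practice be calibrated against observation):

```python
# Monte Carlo sketch of the parking problem (Python standing in for Simula).
# The inspector rate is invented; a real model would calibrate it from data.
import math
import random
random.seed(2)

def ticketed_one_morning(inspector_rate_per_hour=0.4):
    """True if an inspector passes between 9 am and 10 am.

    Visits are modelled as a Poisson process, so the chance of at least
    one visit in the exposed hour is 1 - exp(-rate).
    """
    return random.random() < 1 - math.exp(-inspector_rate_per_hour)

trials = 100_000
tickets = sum(ticketed_one_morning() for _ in range(trials))
print(tickets / trials)   # estimated probability of a ticket on one morning
```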

Thanks
Bill
 
  • #193
bhobba said:
Axioms can model different things. So? In the Bayesian view they model a confidence level. In the modern frequentist version they are simply abstract, and you show via the strong law of large numbers that, for a large number of trials, the relative frequency is FAPP equal to the probability. In a sense that's more fundamental than the Bayesian view - but that doesn't make it better or worse.
If the domain of applicability of approach 1 is much greater than that of approach 2, this makes approach 1 not only different but better.

Whenever you have real physical frequencies, you can also apply plausible reasoning considering them. Thus, you can apply Bayesian probability where you have frequencies. But you cannot apply frequentism in plausible reasoning about things which do not have frequencies. This makes no sense.

This is like applying the Maxwell equations only to static electric fields. This would be stupid, and not simply a "different thing".
 
  • #194
Ilja said:
If the domain of applicability of approach 1 is much greater than that of approach 2, this makes approach 1 not only different but better.

To cut to the chase, the claim is that the Bayesian domain is better. This is precisely the claim that the people in the link, as well as myself, doubt. It is not better, for example, at calculating the distribution of offspring in a survival model. Nor is a frequentist view the best way to model confidence levels in decision theory. You mentioned the probability of GR being true; obviously probability in that instance is modelling a confidence level.

We seem to be losing sight, however, of the fact that this is a thread on QM - not on Bayesian vs frequentist probability. We already have a section in this forum for that.

The point I was making is that shut-up-and-calculate is compatible with either view.

Thanks
Bill
 
  • #195
bhobba said:
There is not enough information to calculate the probability. You need to know,
You need to have a methodology, which is not given by the axiomatics:

1/ I look at the statistics (number of cars in default of payment, number of cars actually penalized in one hour, etc.),
or
2/ I look at the instructions given to the police (length of sidewalk inspected in one hour, number of personnel assigned to tickets, the tolerance, etc.) to build a prior.

Patrick
 
  • #196
microsansfil said:
You need to have a methodology, which is not given by the axiomatics:

1/ I look at the statistics (number of cars in default of payment, number of cars actually penalized in one hour, etc.),
or
2/ I look at the instructions given to the police (length of sidewalk inspected in one hour, number of personnel assigned to tickets, the tolerance, etc.) to build a prior.

That would be a start. Whether it would be a good enough model depends purely on how accurate you want its predictions to be.

But I can't follow your point - in such a case it wouldn't matter one bit which view of probability you took; it's finding a good model that's relevant.

Thanks
Bill
 
  • #197
bhobba said:
it's finding a good model that's relevant.
What do you call a model in this context? What is a good, relevant model?

The formulation of a statistical model using Bayesian statistics requires the specification of a prior distribution for any unknown parameters. Statistical models are also part of the foundation of Bayesian inference (starting with a prior distribution, getting data, and moving to the posterior distribution).

Posterior ∝ Likelihood × Prior
P(θ|y) ∝ P(y|θ) P(θ)

The most we can hope to do is to make the best inference based on the experimental data and any prior knowledge that we have available, reserving the right to revise our position if new information comes to light.
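A minimal sketch of that recipe on a discrete grid (hypothetical data: ticketed on 7 of 20 late mornings; the flat prior is just one possible choice):

```python
# Posterior ∝ likelihood × prior on a grid of candidate ticket probabilities.
# Hypothetical data: ticketed on 7 of 20 late mornings; flat prior over theta.
from math import comb

thetas = [i / 100 for i in range(1, 100)]    # candidate values of theta
prior = [1 / len(thetas)] * len(thetas)      # flat prior

k, n = 7, 20                                  # invented observations
likelihood = [comb(n, k) * t**k * (1 - t)**(n - k) for t in thetas]

unnorm = [l * p for l, p in zip(likelihood, prior)]
z = sum(unnorm)
posterior = [w / z for w in unnorm]

# Posterior mean estimate of the ticket probability (about 0.36 here):
print(round(sum(t * p for t, p in zip(thetas, posterior)), 3))
```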

Patrick
 
  • #198
I think the original issue has been addressed. Time to close this thread.

Thanks everyone!
 
