On the myth that probability depends on knowledge

In summary, the conversation discusses the concept of objective probabilities and how they relate to knowledge. It is mentioned that objective probabilities are properties of an ensemble, not of single cases, and that they can be understood in frequentist terms as the frequency of an event occurring in the limit of infinite trials. The idea of forgetting knowledge and its effect on probabilities is also discussed, with one participant strongly disagreeing and another questioning the definition of "objective probabilities."
  • #1
A. Neumaier
Demystifier said:
A. Neumaier said:
It is a myth believed (only) by the Bayesian school that probability is dependent on knowledge.

You cannot change the objective probabilities of a mechanism by forgetting about the knowledge you have.

Lack of knowledge results in lack of predictivity, not in different probabilities.
I strongly disagree, but elaboration would be an off topic.
Please elaborate here!
 
  • #2
Someone will have to explain what "objective probabilities" are. If you begin with the assumption that there are probabilities that would be agreed upon by every observer, I suppose you automatically make them independent of knowledge by postulating that all those observers have the same knowledge.
 
  • #3
This thread title made me laugh, so I'll bite.

What is the objective probability that the gas molecules in a box of air are in configuration x? Given that the gas molecules were in a definite state in the past, can the "objective" answer be anything other than [tex] \delta(x - x_{\mbox{actual}}(t)) [/tex] (schematically) ?

I'm genuinely curious what people think.
 
  • #4
Objective probabilities in an experiment can be understood in frequentist terms, as the frequency with which some event would occur in the limit as the number of trials of the experiment went to infinity, with the factors that you wish to control (because you want the probability of A given some facts B about the conditions) being the same in each trial but others allowed to vary. For example, on each trial of Physics Monkey's experiment involving a box of air we might make sure that all the macroscopic conditions such as temperature, pressure and volume are identical; then in the limit as the number of trials goes to infinity, we can look at the fraction of trials where the molecules were in configuration x. This would define an objective probability that a box of air with a given temperature, pressure, volume, etc. has its molecules in configuration x.
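This limiting-frequency definition is easy to illustrate with a short simulation (my own sketch, not from the thread; the probability p = 0.3 and the trial counts are arbitrary choices):

```python
import random

def empirical_frequency(p, n_trials, seed=0):
    """Fraction of n_trials independent Bernoulli(p) trials that succeed."""
    rng = random.Random(seed)
    hits = sum(rng.random() < p for _ in range(n_trials))
    return hits / n_trials

# On the frequentist reading, these measured frequencies should
# approach the underlying probability as the number of trials grows.
for n in (100, 10_000, 1_000_000):
    print(n, empirical_frequency(0.3, n))
```

The "objective probability" here is the fixed parameter p of the mechanism; the measured frequencies merely converge to it.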
 
  • #5
I would be interested in the forum's comments on the following scenario.

Consider the PC screen you are looking at.

It has M pixels, which can take on N colours.

This limits us to a finite number of states of the screen.

Some of these states offer information, some do not.

The first question of interest is: what is the entropy change in passing from one screen state to another, given that zero energy change is involved?

The second question is more subtle.

For any pixel the presence of any colour (except one) implies a signal, which implies information. It is possible to draw up an error-correcting scheme to obtain the 'correct' pixel colour for any colour except one.
A black colour implies either that the signal specifies no colour or that the signal is absent for some reason (i.e. no connection). It is not possible to distinguish the two cases.
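The counting behind the scenario can be sketched directly (my own illustration; the tiny 4-pixel, 2-colour screen is a hypothetical example): with M pixels and N colours there are N**M screen states, and a uniform ensemble over them carries M·log2(N) bits of Shannon entropy.

```python
import math

def num_states(m_pixels, n_colours):
    """Number of distinct screen states: each of M pixels takes one of N colours."""
    return n_colours ** m_pixels

def ensemble_entropy_bits(m_pixels, n_colours):
    """Shannon entropy (in bits) of a uniform ensemble over all screen states.
    This is a property of the ensemble, not of any single screen."""
    return m_pixels * math.log2(n_colours)

print(num_states(4, 2))             # a toy 4-pixel, 2-colour screen: 16 states
print(ensemble_entropy_bits(4, 2))  # 4.0 bits
```

Whether any entropy can be assigned to a single realized screen, rather than to the ensemble, is exactly the point debated below.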
 
  • #6
Physics Monkey said:
This thread title made me laugh, so I'll bite.

What is the objective probability that the gas molecules in a box of air are in configuration x? Given that the gas molecules were in a definite state in the past, can the "objective" answer be anything other than [tex] \delta(x - x_{\mbox{actual}}(t)) [/tex] (schematically) ?
Probabilities are properties of an ensemble, not of single cases. The probability of throwing a 1 with a given die is an objective property of the particular die, not one of a single throw of it.

Thus in your case, there is no x_actual, since there are many boxes of air, and what is actual depends on the box, but the probability does not.
 
  • #7
Stephen Tashi said:
Someone will have to explain what "objective probabilities" are. If you begin with the assumption that there are probabilities that would be agreed upon by every observer, I suppose you automatically make them independent of knowledge by postulating that all those observers have the same knowledge.

Agreements are part of science, not of the knowledge of a particular observer.

The probability of decay of any particular radioactive isotope is a well-defined, measurable quantity,
independent of what observers know about this isotope.
 
  • #8
Studiot said:
The first question of interest is: what is the entropy change in passing from one screen state to another, given that zero energy change is involved?
The entropy change is zero, since both states have zero entropy. One cannot assign entropy to a particular realization, one can assign it only to the ensemble of all screens likely to be encountered.
 
  • #9
JesseM said:
Objective probabilities in an experiment can be understood in frequentist terms, as the frequency with which some event would occur in the limit as the number of trials of the experiment went to infinity, with the factors that you wish to control (because you want the probability of A given some facts B about the conditions) being the same in each trial but others allowed to vary.

In addition to making an assumption about nature (that the factors you wish to control and others that are "allowed to vary" combine to produce a definite probability) the frequentist definition also puts all observers (or at least all those whose opinion we value) in the same state of knowledge. The factors that they wish to control and those that they allow to vary are "givens". Using terms borrowed from other physical theories, these observers are in a privileged frame of reference.

As to the mathematics, I compare it to the following very ordinary situation: Let ABC be a right triangle with right angle BCA. Let BC = 3. Does the length of the hypotenuse depend on our knowledge of side CA, or does it have some "objective" length no matter what we know or don't know? On the one hand, you can argue that the statement "Let ABC be a right triangle..." specifies that we have a specific right triangle and that its hypotenuse must therefore have an objective length regardless of our state of knowledge. On the other hand, you can argue that the length of the hypotenuse is a function of what else is known about the triangle.

As to dealing with any problem of forgetting information, the situation with Bayesian probability is no worse than the situation with triangles. In the above situation, suppose that we are given that CA = 4 and then you "forget" that fact. Does the hypotenuse go from being 5 to being unknown? A reasonable practical answer could be yes. For example, if someone read you a homework problem that included the information that CA = 4 and then said, "No, wait. I told you wrong. Forget that. The side CA wasn't given," would you keep thinking that the hypotenuse must be 5?
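The "forgetting" point can be made concrete with a small sketch (my own, not from the thread; the uniform prior on CA is a hypothetical choice): with CA known, the hypotenuse is a definite number; with CA forgotten, it becomes a distribution over whatever prior replaces the lost information.

```python
import math
import random

def hypotenuse_given(bc, ca):
    """Both legs known: the hypotenuse is a definite number."""
    return math.hypot(bc, ca)

def hypotenuse_forgotten(bc, ca_range, n=100_000, seed=0):
    """CA 'forgotten': the hypotenuse becomes a distribution. Here we
    return its mean under a uniform prior on ca_range (a hypothetical prior)."""
    rng = random.Random(seed)
    lo, hi = ca_range
    total = sum(math.hypot(bc, rng.uniform(lo, hi)) for _ in range(n))
    return total / n

print(hypotenuse_given(3, 4))           # 5.0
print(hypotenuse_forgotten(3, (0, 8)))  # no longer 5: it reflects the prior
```

In the Bayesian reading, "forgetting" CA just replaces a point value with a prior; the objective triangle, if there is one, is unchanged.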
 
  • #10
JesseM said:
Objective probabilities in an experiment can be understood in frequentist terms, as the frequency with which some event would occur in the limit as the number of trials of the experiment went to infinity, with the factors that you wish to control (because you want the probability of A given some facts B about the conditions) being the same in each trial but others allowed to vary. For example, on each trial of Physics Monkey's experiment involving a box of air we might make sure that all the macroscopic conditions such as temperature, pressure and volume are identical; then in the limit as the number of trials goes to infinity, we can look at the fraction of trials where the molecules were in configuration x. This would define an objective probability that a box of air with a given temperature, pressure, volume, etc. has its molecules in configuration x.

An infinite number of experiments never seemed to me like a very reasonable way to build a physical theory. And sometimes we don't get more than one experiment!
 
  • #11
Physics Monkey said:
An infinite number of experiments never seemed to me like a very reasonable way to build a physical theory. And sometimes we don't get more than one experiment!
But the hypothetical infinite number of trials is just meant to define the "true value" of probability that our measurements are supposed to approach--by the law of large numbers, the more actual trials you do, the more unlikely it is that the measured frequency differs from the "true" probability by more than some small amount ε. Similarly, the "true value" of a particle's mass would be its precise mass to an infinite number of decimal places; our experiments can never give that, but we nevertheless need to assume that such a true mass exists in order to talk about "error" in actual measured values.
Physics Monkey said:
An infinite number of experiments never seemed to me like a very reasonable way to build a physical theory. And sometimes we don't get more than one experiment!
Even a Bayesian can't say anything very useful about probability based on only one experiment; in that case the "probability" depends greatly on your choice of prior probability distribution, and the choice of what prior to use is a pretty subjective one.
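The law-of-large-numbers claim above can be checked numerically (a sketch under arbitrary choices of p = 0.5 and ε = 0.1): repeat the whole n-trial experiment many times and estimate how often the measured frequency misses the true probability by more than ε.

```python
import random

def tail_probability(p, n_trials, eps, n_experiments=1000, seed=0):
    """Estimate P(|measured frequency - p| > eps) by repeating
    the whole n_trials experiment n_experiments times."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(n_experiments):
        hits = sum(rng.random() < p for _ in range(n_trials))
        if abs(hits / n_trials - p) > eps:
            misses += 1
    return misses / n_experiments

# Larger n makes a deviation of more than eps increasingly unlikely:
for n in (10, 100, 1000):
    print(n, tail_probability(0.5, n, eps=0.1))
```

The shrinking tail probability is exactly the sense in which finite measurements "approach" the hypothetical infinite-trial value.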
 
  • #12
A. Neumaier said:
Probabilities are properties of an ensemble, not of single cases. The probability of throwing a 1 with a given die is an objective property of the particular die, not one of a single throw of it.

Thus in your case, there is no x_actual, since there are many boxes of air, and what is actual depends on the box, but the probability does not.

So introduce an ensemble. How about letting the ensemble be a set of boxes with the same fixed initial condition and perfectly elastic walls. Is what I wrote now what you would call the objective probability?

And besides, who are you to say that I cannot think about probabilities for a single case? You are just declaring that the Bayesian school is wrong by fiat. But what would you say to the standard sort of gambling example? Imagine I offer you the following game. I'll roll one die, but I don't tell you anything more about the die except that it is 6-sided. You can pick either {1} or {2,...,6}, and if your set comes up then you get a big wad of money. Assuming you like money, which set would you choose? The choice to go with {2,...,6} in the absence of other information is a form of probabilistic reasoning with only a single event.
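The reasoning behind choosing {2,...,6} can be written as an expected-value computation (a sketch; the prize w = 600 is a hypothetical figure, and the uniform distribution is precisely the indifference assumption the gamble relies on):

```python
from fractions import Fraction

def expected_value(probabilities, payoffs):
    """Expected payoff of a bet, given an assumed distribution over outcomes."""
    return sum(p * v for p, v in zip(probabilities, payoffs))

# Indifference assumption: each face has probability 1/6.
uniform = [Fraction(1, 6)] * 6
w = 600  # hypothetical prize

bet_on_1 = expected_value(uniform, [w, 0, 0, 0, 0, 0])
bet_on_2_to_6 = expected_value(uniform, [0, w, w, w, w, w])
print(bet_on_1, bet_on_2_to_6)  # 100 500
```

Note the computation is only as good as the assumed distribution, which is the point Neumaier presses in his reply below about lopsided dice.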
 
  • #13
A. Neumaier said:
one can assign it only to the ensemble of all screens likely to be encountered.

What would that be?
 
  • #14
A. Neumaier said:
Probabilities are properties of an ensemble, not of single cases.
I think this depends on which physical or information-theoretic interpretation of "probability" you subscribe to. It sounds very biased to me.

One should ask what is the whole point of the probability measure in the first place?

Either you just define some measures, decide some axioms and you've got just some measure theoretic definition - some mathematics, but then what?

Or you see it as a way to determine the odds of a possible future, in the context of inductive inference. As a guide for further action. In this case, the ensemble makes no sense. The ensemble is a descriptive view, it is completely sterile as a tool for placing bets on the future.

I think we can all agree that the question isn't to discuss axioms of probability theory. The question is what value they have in realistic situations, where we need to make decisions based upon incomplete information. The main value of probability is not just statistics or bookkeeping. Not in my book.

I haven't had time to read up on anything yet, but I noticed Neumaier referring to someone (Whittaker something?) who derived the probability axioms starting from expectations. In that context I'll also note that Cox, Jaynes and others derived probability as essentially unique rules of rational inference. This does tie probability to inductive inference.

A. Neumaier said:
The probability of throwing a 1 with a given die is an objective property of the particular die

But this idea only works for classical dice, i.e. where all observers agree on the die in the first place. It's an idealisation.

/Fredrik
 
  • #15
A. Neumaier said:
Agreements are part of science, not of the knowledge of a particular observer.

Science is nothing but a group of interacting and negotiating special observers called scientists. As we know, established science is not necessarily always right, or eternally true. Science is always evolving and REnegotiated among the active group of scientists.

So scientific knowledge is really nothing but the negotiated agreement of a group of observers. But the point is that this consensus is still not objective; it can only be judged from a particular observer, or another competing observer group. There IS no outside or external perspective from which scientific agreements are judged.

This is why, technically, it is still knowledge of a particular observer (or just the agreement of a GROUP of observers).

/Fredrik
 
  • #16
For me the whole purpose of probability is that it is a measure of the odds, or propensity, conditional on the given situation. The question to which probability theory is the answer (in the inductive-inference view) is: what is the mathematical framework for rationally rating degrees of belief, and thus for the rational constraints on any rational action in a game-theoretic scenario?

This renders the measure completely observer-dependent, where the observer IS the "player": the one placing bets and taking risks.

The only problem is of course that the above well-known view is only classical, i.e. it only works for commuting sets of information, which are combined with classical logic.

We need the corresponding generalisation to rational actions based upon the corresponding "measure" that is constructed from "adding" non-commuting information sets. All this needs no ensembles or imaginary "repeats". Instead the EXPECTATIONS of the future are inferred from some rational measure of the futures based on the present.

In the classical case it's just classical statistics and logic.

The quantum case is confused, but it's some quantum-logic form of the same. There is no coherent understanding of it yet, and I think this is the root of a lot of the confusion.

/Fredrik
 
  • #17
I've got my own view and don't claim to be a pure Bayesian, but I'll throw in my two cents.

JesseM said:
Even a Bayesian can't say anything very useful about probability based on only one experiment; in that case the "probability" depends greatly on your choice of prior probability distribution, and the choice of what prior to use is a pretty subjective one.

As I see it, the choice of prior is connected to the individual interaction history. The prior has evolved. However, for any given window, the remote history is clearly erased.

If there is NO history at all, I'd say not even the probability space makes sense. In this sense even the probability SPACE can fade out and be erased. This concerns what happens to all points in state space that are rarely or never visited in the lifespan of a system - are they still real, or physical?

/Fredrik
 
  • #18
Physics Monkey said:
An infinite number of experiments never seemed to me like a very reasonable way to build a physical theory. And sometimes we don't get more than one experiment!

Agreed. This is why any reasonable theory must produce an expectation of the future given the PRESENT, without infinite imaginary experiments and ensembles.

Then one asks: what is the purpose of this expectation? Is the PURPOSE just to compare frequencies of historical events, in retrospect? No. That has no survival value. I think the purpose is as an action guide.

This means it does not in fact matter whether the expectations are met or not. They still constrain the actions of the individual system holding them. Just look at how a poker game works. Expectations rule rational actions. It doesn't matter if the expectations are "right" in retrospect, because then there are new decisions to make. You always look forward, not back.

/Fredrik
 
  • #19
People may want to read a professional physics philosopher's attempt to analyse this:

What is Probability?

I think he has it wrong that the Everett solution is a good one; my personal view is that these 80+ years of fumbling the understanding and acceptance of an ontological probability in QM have prevented what will be seen, in retrospect, as quite simple scientific progress. But the paper is at least an honest and deeply thought-out argument.
 
  • #20
Physics Monkey said:
An infinite number of experiments never seemed to me like a very reasonable way to build a physical theory.

Infinity is well approximated in practice by sufficiently large numbers.

With fewer observations one simply gets less accurate results - as always in physics.
Physics Monkey said:
And sometimes we don't get more than one experiment!

Applying probability theory to single instances is foolish.
 
  • #21
Physics Monkey said:
So introduce an ensemble. How about letting the ensemble be a set of boxes with the same fixed initial condition and perfectly elastic walls. Is what I wrote now what you would call the objective probability?
Yes, if the gas is deterministic, and hence determined by the initial condition.
Physics Monkey said:
And besides, who are you to say that I cannot think about probabilities for a single case? You are just declaring that the Bayesian school is wrong by fiat. But what would you say to the standard sort of gambling example? Imagine I offer you the following game. I'll roll one die, but I don't tell you anything more about the die except that it is 6-sided. You can pick either {1} or {2,...,6}, and if your set comes up then you get a big wad of money. Assuming you like money, which set would you choose? The choice to go with {2,...,6} in the absence of other information is a form of probabilistic reasoning with only a single event.
I am not interested enough in money to accept your hypotheses. You may think of probabilities of single cases - these are very subjective, though. They have nothing to do with the probabilities used in physics.

In any case, since I don't know the properties of your 6-sided die, assigning probabilities is completely arbitrary. Unless I assume that the die is just like one of the many I have seen before, in which case I assign equal probabilities to each outcome, because I substitute ensemble probabilities for ignorance.
But if your die had a 1 painted on each side, my choice of {2,...,6} based on that assumption would be 100% wrong.

Thus probabilities are based on _assumptions_, not on _knowledge_.
 
  • #22
Studiot said:
What would that be?

Different people will probably make different assumptions, and hence get different objective probabilities. Thus the choice of assumption introduces a degree of subjectiveness - the _only_ subjectiveness in the whole setting.

But in physics, the assumptions are part of the scientific consensus, and hence there is no choice. To describe a thermodynamic equilibrium state of a chemical system, say, you _have_ to use the grand canonical ensemble, otherwise you don't get an equilibrium state.

Therefore in physics, probabilities are objective while in gambling they aren't.
 
  • #23
I don't follow the relevance of assumptions to my examples.

In one of them (black state) knowledge is zero but the state is as valid as any other.
 
  • #24
Fra said:
I think this depends on which physical or information-theoretic interpretation of "probability" you subscribe to. It sounds very biased to me.
But it isn't. It follows directly from the mathematical definitions. Probabilities are never assigned to a single event but always to the sigma-algebra of all events - in physics language: to the ensemble.
Fra said:
One should ask what is the whole point of the probability measure in the first place?
It specifies the relevant properties of the ensemble.
Fra said:
Either you just define some measures, decide some axioms and you've got just some measure theoretic definition - some mathematics, but then what?
Then you have specified an ensemble in a mathematically fully satisfying way.
Fra said:
Or you see it as a way to determine the odds of a possible future, in the context of inductive inference. As a guide for further action. In this case, the ensemble makes no sense. The ensemble is a descriptive view, it is completely sterile as a tool for placing bets on the future.
The ensemble is the set of items you want to use as background assumption for predicting the future.
The statistical tools then allow you to estimate properties of the ensemble from a limited number of instances you have seen, assuming these are representative of the complete ensemble.
Fra said:
I think we can all agree that the question isn't to discuss axioms of probability theory. The question is what value they have in realistic situations, where we need to make decisions based upon incomplete information. The main value of probability is not just statistics or bookkeeping. Not in my book.
Probability theory provides a rational language for reasoning about uncertainty. But to apply it in gambling, one needs to make lots of assumptions. These are justified in cases where the ensemble defining the gamble is either very well determined by physical or legal constraints, or where one has seen so many realizations that one can be confident that what was seen is representative of what is to come. In these cases the probabilities are reliable. In all other cases they are not.
Fra said:
I haven't had time to read up on anything yet, but I noticed Neumaier referring to someone (Whittaker something?) who derived the probability axioms starting from expectations.
Peter Whittle, Probability via Expectation. A very nice book with many editions.
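The point that probabilities are assigned to the whole sigma-algebra of events, not to a single event, can be sketched for a finite sample space (my own illustration; for a finite space the natural sigma-algebra is the power set, and the uniform measure is an assumed choice):

```python
from fractions import Fraction
from itertools import chain, combinations

OMEGA = frozenset(range(1, 7))  # sample space of one die throw

def power_set(space):
    """For a finite sample space, the natural sigma-algebra is the power set."""
    items = list(space)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

def prob(event):
    """Uniform measure: a probability is assigned to every event
    in the algebra, not just to singletons."""
    return Fraction(len(event), len(OMEGA))

events = power_set(OMEGA)
print(len(events))                       # 64 events (2**6)
print(prob(frozenset({2, 3, 4, 5, 6})))  # 5/6
```

In physics language the measure on this algebra is exactly what specifies the ensemble.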
 
  • #25
Fra said:
Science is nothing but a group of interacting and negotiating special observers called scientists. As we know, established science is not necessarily always right, or eternally true. Science is always evolving and REnegotiated among the active group of scientists.

So scientific knowledge is really nothing but the negotiated agreement of a group of observers. But the point is that this consensus is still not objective; it can only be judged from a particular observer, or another competing observer group. There IS no outside or external perspective from which scientific agreements are judged.

This is why, technically, it is still knowledge of a particular observer (or just the agreement of a GROUP of observers).

/Fredrik
With such a usage of the terms, the terms themselves become meaningless.
 
  • #26
Studiot said:
I don't follow the relevance of assumptions to my examples.

In one of them (black state) knowledge is zero but the state is as valid as any other.

You had asked what the ensemble of all screens likely to be encountered would be, and I answered that.

The probability of encountering the black state is objectively determined by whatever ensemble you assume. It has nothing to do with knowledge.
 
  • #27
A. Neumaier said:
With such a usage of the terms, the terms themselves become meaningless.

I disagree. This is how science works, and it works fine, even if not perfectly.

/Fredrik
 
  • #28
Fra said:
I disagree. This is how science works, and it works fine, even if not perfectly.

/Fredrik

Science has a claim of objectivity (and is valued precisely because of that), while with your terminology, nothing is objective.
 
  • #29
The question is what the ensemble IS (ie. how the mathematical abstractions map to physical states)
A. Neumaier said:
The ensemble is the set of items you want to use as background assumption for predicting the future.
This makes sense. I'd prefer to use this as the starting point of the ensemble.

I.e. the background of information from which we infer the future is called the ensemble. Its properties remain to be found out, though, by adding constraints of rationality rather than postulating axioms. Even if the result is the same or similar, the understanding is different.
A. Neumaier said:
The statistical tools then allow you to estimate properties of the ensemble from a limited number of instances you have seen, assuming these are representative of the complete ensemble.
I think there is no other choice but to assume that they are representative, as you say. The limited instances we have are IMO more fundamental than the ensemble, since the true ensemble is never available for decision making anyway. This is why I dislike the ensemble in the sense it's usually used, i.e. ensembles of identically prepared systems.

I skimmed some of your writings and it seems you also objected to this. But I haven't been able to read more of your thermal view.

In my view, the gaming perspective - decision making upon incomplete information - is the most realistic representation of the problem I address.

Sometimes science is described as descriptive. This is a special case. Even descriptive scientific knowledge determines human behaviour. You can infer from the actions and behaviour of anything or anyone what they think they know.

/Fredrik
 
  • #30
A. Neumaier said:
Science has a claim of objectivity (and is valued precisely because of that), while with your terminology, nothing is objective.

I agree.

Strictly speaking, nothing can be known by an incomplete observer, like you and me, to be objective. What someone else (some superobserver) knows is irrelevant to our decision making.

The idea of "Science as objective eternal truth", and laws of nature as eternally true, is something one can certainly debate. I think such a view is very much an illusion, and belongs to the past. It is a modern form of realism (structural realism) that persists even into QM and GR.

However I am very much against this attitude.

"People who appeal to fixed conceptions of necessity, contingency and possibility are simply confused"
-- Charles Sanders Peirce

Still, I must admit I understand your position. As you are a mathematician, your perspective is not unexpected. But I respectfully disagree with you there.

/Fredrik
 
  • #31
Fra said:
I agree.
And since, according to you, nothing is objective, the word has lost its descriptive meaning. It can be applied nowhere. This shows that your notion of objectivity is not the standard one. Mine is.
 
  • #32
Fra said:
The question is what the ensemble IS (ie. how the mathematical abstractions map to physical states)
They are mapped in the usual informal way.
Fra said:
I think there is no other choice but to assume that they are representative, as you say. The limited instances we have are IMO more fundamental than the ensemble, since the true ensemble is never available for decision making anyway.
In practice, the mathematical ensemble is fundamental, once the situation is a bit complex. We make simplifying modeling assumptions all the time, and these determine the ensemble, whereas the data we have to fit the parameters of the ensemble change in amount and value, and hence cannot be taken as fundamental.

At least that's how science proceeds.
 
  • #33
A. Neumaier said:
The probability of encountering the black state is objectively determined by whatever ensemble you assume. It has nothing to do with knowledge.

You are making flat assertions without substantiation, mathematical or otherwise, sir!

One could 'assume' anything one liked.

But making assumptions casts serious doubt on the validity of any (derived?) result.

There were no assumptions in my scenario.
 
  • #34
A. Neumaier said:
And since, according to you, nothing is objective, the word has lost its descriptive meaning. It can be applied nowhere. This shows that your notion of objectivity is not the standard one. Mine is.

We can still communicate, right? So there IS indeed an "effective objectivity". But the difference is that in my perspective, this is emergent and evolving. In particular, it's the result of negotiating interactions between subjective views.

Objectivity is the result of interactions. So there sort of is effective objectivity.

I think the difference is that you think of it as a forcing, timeless constraint, in which all subjective views are related as per some relativity principle. I instead think that the symmetry transformations are emergent.

Put differently: the difference is whether you see observer invariance as a "forcing constraint", or whether you see it in terms of observer democracy, where the constraints are instead emergent.

All we admittedly DO need is some form of FAPP-type objectivity; it does not NEED to be rigid. This objectivity I never denied.

I also agree that your notion of objectivity is indeed more common than mine.

/Fredrik
 
  • #35
Studiot said:
You are making flat assertions without substantiation, mathematical or otherwise, sir!

One could 'assume' anything one liked.

But making assumptions casts serious doubt on the validity of any (derived?) result.

There were no assumptions in my scenario.

Indeed. It is impossible to derive anything without making assumptions. But the quality of one's predictions depends crucially on which assumptions are made.

If you have a die in which the six is replaced by another copy of 1, the ignorance assumption of equal probabilities for 1:6 will simply be faulty, although it is generally considered to be the right choice.
Independent of our knowledge, in this particular case, only the probability distribution 2:1:1:1:1:0 is correct, and hence objectively true.

In general, only correct assumptions lead to good predictions.

The purpose of scientific theories is therefore precisely to discover and inform about the assumptions that lead to the best predictions, and are in this sense objective. In particular, we learn which ensembles are most appropriate for which kind of physical situation.
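The lopsided die above is easy to simulate (a sketch; the sample size of 60,000 rolls is an arbitrary choice): the measured frequencies approach the objective distribution 2:1:1:1:1:0, not the assumed 1:1:1:1:1:1.

```python
import random

# Faces of the die with the six replaced by a second copy of 1:
FACES = [1, 2, 3, 4, 5, 1]

def face_frequencies(n_rolls, seed=0):
    """Measured frequency of each face value over n_rolls throws."""
    rng = random.Random(seed)
    rolls = [rng.choice(FACES) for _ in range(n_rolls)]
    return {k: rolls.count(k) / n_rolls for k in range(1, 7)}

freqs = face_frequencies(60_000)
# The ignorance assumption predicts 1/6 per face; the measured
# frequencies instead approach 2/6 for the 1 and zero for the 6.
print(freqs)
```

Whatever any observer knows or assumes, the simulation tracks the distribution built into the mechanism, which is the sense of "objective" argued for here.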
 
