On the myth that probability depends on knowledge

In summary, the conversation discusses the concept of objective probabilities and how they relate to knowledge. It is mentioned that objective probabilities are properties of an ensemble, not of single cases, and that they can be understood in frequentist terms as the frequency of an event occurring in the limit of infinite trials. The idea of forgetting knowledge and its effect on probabilities is also discussed, with one participant strongly disagreeing and another questioning the definition of "objective probabilities."
  • #71
skippy1729 said:
PPS I would appreciate any reference to "objective" Bayesian probability theory.
I had given a link to Wikipedia where both the subjective and the objective variant are mentioned.
 
  • #72
DaleSpam said:
Yes, Bayesian statistics can be applied to an ensemble, but they can also be applied to other situations. It is more general. From the wikipedia link and your comments I still can't tell exactly what you are referring to specifically when you say _subjective_ probability and why you think it is not relevant in physics. Are you just concerned about making bad subjective assessments in the prior probability?

Objective = independent of any particular observer, verifiable by anyone with the appropriate understanding and equipment.

Subjective = degree of belief, and such things, which cannot be checked objectively.

Bayesian statistics with an unspecified prior to be chosen by the user according to his knowledge is subjective statistics. It doesn't make user-independent predictions.

Bayesian statistics with a fully specified model, including the prior, is objective statistics.
One can check its predictions on any sufficiently large sample. Of this kind is the statistics in physics. The ensemble is always completely specified (apart from the parameters to be estimated).
 
  • #73
A. Neumaier said:
If there is only a single event, it depends on what is actually the case whether switching is a better option, and no risk analysis will help you if your choice was wrong.

A risk analysis is based upon the assumption that the distribution of the prize is uniform, so that you gain something from the disclosed information. This assumes an ensemble of multiple repetitions of the situation.

For a correct probability estimation beforehand, no "multiple" (infinite?!) repetitions of the situation are required. The subject can make an objective analysis based on the given information, even though for the quiz master the chance is 0 or 1 because he already knows the result.
As a matter of fact, the "probability" of what actually is, is always 1 - That's not really "probability". :-p
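As a minimal illustration of that point (a sketch assuming the standard three-door setup, with the prize placed uniformly at random and a host who always opens a non-chosen losing door), a short simulation shows switching winning about 2/3 of the time over many repetitions, even though any single game resolves to 0 or 1:

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Simulate the standard three-door game and return the win frequency."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)       # prize placed uniformly at random
        pick = random.randrange(3)        # contestant's initial choice
        # host opens a door that is neither the contestant's pick nor the prize
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print("switch:", monty_hall(switch=True))    # about 0.667
print("stay:  ", monty_hall(switch=False))   # about 0.333
```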
 
  • #74
harrylin said:
For a correct probability estimation beforehand, no "multiple" (infinite?!) repetitions of the situation are required. The subject can make an objective analysis based on the given information, even though for the quiz master the chance is 0 or 1 because he already knows the result.
If the probabilities depend on the person it is a subjective probability.

For the person doing the analysis, though the interest may be in predicting a single case, the objective probability refers to the probability in the ensemble analyzed, and not to the single unknown case. For in the latter case, the probability of a future event would depend on the particular past data set used, which (a) is strange and (b) would make it again a subjective probability.
harrylin said:
As a matter of fact, the "probability" of what actually is, is always 1 - That's not really "probability". :-p
I disagree. The Kolmogorov axioms for a probability space are satisfied.
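As a minimal illustration (my own sketch, with an assumed three-outcome space), the degenerate measure that puts all its mass on what actually happened still satisfies the axioms:

```python
from itertools import chain, combinations

omega = ("a", "b", "c")      # assumed outcome space; "a" is what actually happened

def P(event):
    """Point mass: probability 1 for any event containing the actual outcome."""
    return 1.0 if "a" in event else 0.0

events = [set(s) for s in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))]

assert all(P(e) >= 0 for e in events)              # non-negativity
assert P(set(omega)) == 1.0                        # normalization
assert all(P(e | f) == P(e) + P(f)                 # additivity on disjoint events
           for e in events for f in events if not (e & f))
print("Kolmogorov axioms hold for the 0/1 measure")
```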
 
  • #75
A. Neumaier said:
If the probabilities depend on the person it is a subjective probability.
Any subjective estimations by that person don't play a role; only the available information. It's objective (although not "invariant") in the sense that the calculation is according to standard rules of probability calculus and everyone (except you?) agrees about that calculation.
For the person doing the analysis, though the interest may be in predicting a single case, the objective probability refers to the probability in the ensemble analyzed, and not to the single unknown case. For in the latter case, the probability of a future event would depend on the particular past data set used, which (a) is strange and (b) would make it again a subjective probability. [...]

I'm afraid that I can't follow that... this is like any other "take a marble without putting it back and then take another one" probability calculation. Future probabilities can depend on past actions, according to standard and objective rules of calculation.
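For instance, a minimal sketch with assumed numbers (an urn with 3 red and 2 blue marbles, drawn without replacement) shows how the standard rules objectively update the second draw's probability after the first draw:

```python
from fractions import Fraction

red, blue = 3, 2                      # assumed urn contents
total = red + blue

# probability the second marble is red, given the first drawn was red
p_red_given_red = Fraction(red - 1, total - 1)                        # 1/2

# unconditional probability the second marble is red (law of total probability)
p_red_second = (Fraction(red, total) * Fraction(red - 1, total - 1)
                + Fraction(blue, total) * Fraction(red, total - 1))   # 3/5

print(p_red_given_red, p_red_second)  # the past draw objectively shifts the odds
```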

Now, is that objective or subjective? That isn't the topic of this thread, but a quick sample from dictionary.com of the common meaning of words tells me that such calculations are definitely objective and not subjective:

- Objective: not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased: an objective opinion.

- Subjective: belonging to the thinking subject rather than to the object of thought; pertaining to or characteristic of an individual; personal; individual: a subjective evaluation.

I omitted "existing in the mind" as objective opinions and evaluations also exist in the mind - that isn't helpful. :smile:

Harald

PS I now see that you posted similar definitions; necessarily we cannot but agree on that point.
 
  • #76
A. Neumaier said:
Objective = independent of any particular observer, verifiable by anyone with the appropriate understanding and equipment.

Subjective = degree of belief, and such things, which cannot be checked objectively.

Bayesian statistics with an unspecified prior to be chosen by the user according to his knowledge is subjective statistics. It doesn't make user-independent predictions.

Bayesian statistics with a fully specified model, including the prior, is objective statistics.
Thanks, now I clearly understand what you mean by subjective. You are correct that specifying a good prior can be a tricky business and that different users will often make different choices in priors which makes it subjective in your terminology.

Frequentist statistical tests often reduce to a Bayesian test with an ignorance prior. In your definition Bayesian statistics with an ignorance prior would be objective since it is user-independent.

However, what if we are not completely ignorant at the beginning? What if we have some knowledge that is not shared with other users? Why should the user-dependent (subjective) state of knowledge not lead to user-dependent priors and therefore user-dependent predictions about the outcome of some physical experiment?

A. Neumaier said:
One can check its predictions on any sufficiently large sample. Of this kind is the statistics in physics. The ensemble is always completely specified (apart from the parameters to be estimated).
On any sufficiently large sample the prior is irrelevant and only the data matters. So over an ensemble, even with subjective priors, the Bayesian approach gets user-independent (objective) posteriors.
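A minimal numerical sketch of that point (assuming a Beta-Binomial model and simulated data; the specific numbers are illustrative only): two very different priors yield practically the same posterior once the sample is large.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_p = 0.3
flips = rng.random(10_000) < true_p        # large simulated sample of Bernoulli trials
heads, n = int(flips.sum()), flips.size

# two deliberately different Beta priors for the same unknown probability
priors = {"flat Beta(1, 1)": (1, 1), "opinionated Beta(50, 5)": (50, 5)}

for name, (a, b) in priors.items():
    posterior = stats.beta(a + heads, b + n - heads)   # conjugate update
    print(f"{name}: posterior mean = {posterior.mean():.4f}")
# the two posterior means now differ only in the third decimal place
```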
 
Last edited:
  • #77
DaleSpam said:
Thanks, now I clearly understand what you mean by subjective. You are correct that specifying a good prior can be a tricky business and that different users will often make different choices in priors which makes it subjective in your terminology.

Frequentist statistical tests often reduce to a Bayesian test with an ignorance prior. In your definition Bayesian statistics with an ignorance prior would be objective since it is user-independent.

However, what if we are not completely ignorant at the beginning? What if we have some knowledge that is not shared with other users? Why should the user-dependent (subjective) state of knowledge not lead to user-dependent priors and therefore user-dependent predictions about the outcome of some physical experiment?
Specifying the prior defines the ensemble and hence makes the probabilities objective - no matter whether the prior is good or poor. The quality of the prior is a measure not of objectivity but of matching reality.

In most cases, one has two different ensembles: the model ensemble and the ensemble to which the model is supposed to apply. The second ensemble is usually unknown since part of it lies in the future, and often the future uses of a model are not even precisely known. Quality measures the gap between these two ensembles.

If the model is silent about the prior then the probabilities are subjective since different users may choose different priors and then get different predictions.

If the application is silent about precisely which events it should be applied to then the probabilities are subjective since different users may apply it to different scenarios and then get different results.

If the application is a single instance then the probabilities are 0 or 1, and only someone who knows the answer or guesses it correctly can have a correct model of the situation.

In physics (which is my concern in this thread), the physical description of a system completely specifies the ensemble, both of the model (the governing equations and boundary conditions) and of the application (the experimental setting). Thus both the predicted and the observable probabilities are objective. Whether one or both of them may be unknown at particular times to particular people is completely irrelevant.

This objectivity is the strength of scientific practice in general, and of physics in particular. It allows anyone with access to the necessary information and equipment to check the quality of any particular model with respect to the application it is supposed to describe.
DaleSpam said:
On any sufficiently large sample the prior is irrelevant and only the data matters. So over an ensemble, even with subjective priors, the Bayesian approach gets user-independent (objective) posteriors.
But your ''sufficiently large'' may have to be far larger than mine.
 
Last edited:
  • #78
harrylin said:
Any subjective estimations by that person don't play a role; only the available information. It's objective (although not "invariant") in the sense that the calculation is according to standard rules of probability calculus and everyone (except you?) agrees about that calculation.
Bayesian techniques need both available information _and_ a prior. If the prior is not specified, it may depend on the person's subjective estimate, and calculations need not agree.

Thus if one gives strict rules for how to determine the prior from prior information (this is the case in the Bayesian applications to animal breeding I had cited before), the calculated Bayesian estimates are objective.

In all other cases, the calculated Bayesian probabilities are subjective.
 
  • #79
A. Neumaier said:
Specifying the prior defines the ensemble and hence makes the probabilities objective - no matter whether the prior is good or poor. ... This objectivity is the strength of scientific practice in general, and of physics in particular. It allows anyone with access to the necessary information and equipment to check the quality of any particular model with respect to the application it is supposed to describe.
OK, I am fine with all of this. Your stance is even more acceptable to me than I had thought previously since you allow specified non-ignorance priors to encode available knowledge.

I don't see how it supports your claim that probability (in physics) does not depend on knowledge, but I agree with what you are saying.
 
  • #80
DaleSpam said:
I don't see how it supports your claim that probability (in physics) does not depend on knowledge, but I agree with what you are saying.
The model probabilities depend on the model, not on knowledge. Given the definition of an ideal gas (say) and specified values of P, V, T, everything is determined - independent of the knowledge of anyone.

The application probabilities depend on the application, not on knowledge. Given the definition of the experimental arrangement specifying the application, everything is determined - independent of the knowledge of anyone.

So all probabilities encountered in physics are objective and knowledge independent.

What depends on knowledge is the assessment of how well a model fits an application, and hence the choice of a particular model to predict in a particular application. But this has nothing to do with probability, since it holds as well for deterministic models.
 
  • #81
A. Neumaier said:
The model probabilities depend on the model, not on knowledge. Given the definition of an ideal gas (say) and specified values of P, V, T, everything is determined - independent of the knowledge of anyone.

The application probabilities depend on the application, not on knowledge. Given the definition of the experimental arrangement specifying the application, everything is determined - independent of the knowledge of anyone.

So all probabilities encountered in physics are objective and knowledge independent.

What depends on knowledge is the assessment of how well a model fits an application, and hence the choice of a particular model to predict in a particular application. But this has nothing to do with probability, since it holds as well for deterministic models.
Sorry about this, I wasn't clear in my point above. My point is that the prior contains the knowledge, so if you are specifying the prior you are fixing the knowledge.

Suppose you have some quantity x and you want to determine if x depends on y or not. If you do not let y vary then you cannot claim that you have shown that x does not depend on y.

You claim that probability does not depend on knowledge, but knowledge is contained in the prior, and you require a specified prior. Similarly, when you said "anyone with access to the necessary information and equipment" you are fixing the knowledge. Since you are not allowing knowledge to vary you cannot make any conclusions about the dependence of probability on knowledge.

If you want to examine the dependence of physical probabilities on knowledge you must allow the priors and the information to vary across users.
 
Last edited:
  • #82
So all probabilities encountered in physics are objective and knowledge independent.

I have already said that I agree with much of what you posted.

However I maintain that your statements are too narrow.

Your response to my structural engineering examples clearly indicates you have no idea what a bridge assessment or limit state design theory involves.

Both are part of applied physics and properly represented in PF.

Since this is a Quantum section, how about these questions:

What is the probability that the Higgs will be discovered before the end of 2011?

Suppose I had asked a similar question in 1933

What is the probability that the positron will be discovered before the end of 1933?
 
  • #83
DaleSpam said:
Sorry about this, I wasn't clear in my point above. My point is that the prior contains the knowledge, so if you are specifying the prior you are fixing the knowledge.

You claim that probability does not depend on knowledge, but knowledge is contained in the prior, and you require a specified prior. Similarly, when you said "anyone with access to the necessary information and equipment" you are fixing the knowledge.

By the same argument, deterministic models would depend on knowledge. So if you insist on the correctness of your argument, why emphasize it in the probabilistic case but not in the deterministic case?

Moreover, a model may have a very unrealistic prior. In this case, probabilities depend - according to your view - on arbitrary assumptions or on misinformation rather than knowledge.

On the other hand, with my usage of the terms, everything is clear and unambiguous.
 
  • #84
Studiot said:
Your response to my structural engineering examples clearly indicates you have no idea what a bridge assessment or limit state design theory involves.
I have worked with structural engineers and am familiar with FORM and SORM techniques for limit state analysis, and with variations and alternatives for the assessment of reliability. This has no bearing on the theme.

Engineers calculate probabilities based on models applying to a large ensemble of cases parameterized by some parameters, and then specialize for a particular case by fitting the observed properties of a bridge to the model. The resulting parameter defines a subensemble of all conceivable bridges with characteristics matching the concrete bridge in question, and the safety probability refers to this ensemble, not to the specific bridge.
Studiot said:
What is the probability that the Higgs will be discovered before the end of 2011?

Suppose I had asked a similar question in 1933

What is the probability that the positron will be discovered before the end of 1933?
In both cases, the answer is 0 or 1, and can be known only after the fact.
 
  • #85
and can be known only after the fact

This is the whole crux of my point.

You still have no idea what bridge assessment involves.

You are faced with the following scenario:-

You are presented with a specific bridge over a ravine. Not

a subensemble of all conceivable bridges with characteristics matching the concrete bridge in question,

As the Engineer you are asked

Will the bridge collapse if I drive my lorry over it?

This represents a one-off, unique situation and you have to make an assessment, i.e. a subjective decision, to allow for the fact that all the facts are not (and probably cannot be) known.

You did not read my post correctly either.

Studiot-
limit state design

A.Neumaier-
limit state analysis

Are you not familiar with the difference between analysis and the more difficult process of synthesis (or design)?
 
  • #86
In both cases, the answer is 0 or 1, and can be known only after the fact.

One of the direct consequences of this statement, if true, has deep philosophical implications because it implies determinism.
That is, at any point in time the future is completely determined, with a probability of either 1 or 0.
 
  • #87
Studiot said:
One of the direct consequences of this statement, if true, has deep philosophical implications because it implies determinism.
That is, at any point in time the future is completely determined, with a probability of either 1 or 0.

I would go farther, and say that such statements *assume* determinism, in the sense that it is taken as a postulate, and thus cannot be proven or disproven.
 
  • #88
A. Neumaier said:
By the same argument, deterministic models would depend on knowledge. So if you insist on the correctness of your argument, why emphasize it in the probabilistic case but not in the deterministic case?
No reason, except that the deterministic case is off topic and obvious.

A. Neumaier said:
Moreover, a model may have a very unrealistic prior. In this case, probabilities depend - according to your view - on arbitrary assumptions or on misinformation rather than knowledge.
Certainly, you could also make arithmetic errors or typographical errors, or you could misapply a formula, or you could use wrong formulas. Any time you use misinformation or misuse information in physics you will get nonsense. I don't think that is terribly interesting other than pedagogically.

A. Neumaier said:
On the other hand, with my usage of the terms, everything is clear and unambiguous.
Yes, but your definition is not the only valid and accepted definition of probability. Your claim is only true if you require probabilities to be defined only over ensembles. In that case I agree that the posterior probability does not depend on the prior, so in that case you are indeed correct that probability does not depend on knowledge. Under the more general definition of probability, the posterior can depend on the prior whenever you do not have a sufficiently large number of observations.
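A minimal numerical sketch of that dependence (assuming two Beta priors and a single observation; the numbers are illustrative only):

```python
from scipy import stats

heads, n = 1, 1                                        # a single observed success
for name, (a, b) in {"Beta(1, 1)": (1, 1), "Beta(50, 5)": (50, 5)}.items():
    posterior = stats.beta(a + heads, b + n - heads)   # conjugate update
    print(f"prior {name}: posterior mean = {posterior.mean():.3f}")
# about 0.667 vs 0.911: with one observation, the prior dominates the answer
```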
 
Last edited:
  • #89
A. Neumaier said:
Bayesian techniques need both available information _and_ a prior. If the prior is not specified, it may depend on the person's subjective estimate, and calculations need not agree.

Thus if one gives strict rules for how to determine the prior from prior information (this is the case in the Bayesian applications to animal breeding I had cited before), the calculated Bayesian estimates are objective.

In all other cases, the calculated Bayesian probabilities are subjective.

The case example I gave is objective since it has no subjective estimate as input. And what (nearly?) everyone calls "the probability" in that case depends on knowledge - take it or leave it. :smile:
 
  • #90
A. Neumaier said:
Thus if one gives strict rules for how to determine the prior from prior information (this is the case in the Bayesian applications to animal breeding I had cited before), the calculated Bayesian estimates are objective.
This is different from the fixed-prior case. Here, instead of having a fixed prior you have a family of priors with some hyper-parameters which are uniquely specified by available information. Note that in this case the probabilities are objective (user independent), but they do depend on knowledge.
 
  • #91
SpectraCat said:
You are the one who started telling Varon (on the interpretations poll thread I think) about how the position of a particle does exist, but is not well-defined (you used the term fuzzy) until a measurement is made. What do you use to describe the existence of the particle position prior to the measurement if you don't use |psi|^2?
You misunderstood what I said. Saying that a particle has a fuzzy position means that it actually _has_ this position independent of any measurement, but that its value is meaningful only up to an accuracy determined by the uncertainty relation. The position is given not by |psi|^2 but by xbar = psi^* x psi, with an absolute uncertainty of sqrt(psi^* (x - xbar)^2 psi).

Measuring the position gives a value statistically consistent with this and the measuring accuracy, but does not change the fact that the position remains fuzzy. You cannot read from your meter that the position is at exactly x.
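As a minimal numerical sketch (assuming a normalized Gaussian wave packet discretized on a one-dimensional grid), the fuzzy position and its intrinsic width follow directly from psi:

```python
import numpy as np

# assumed example: Gaussian wave packet centered at x0 with width sigma
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
x0, sigma = 2.0, 1.5
psi = np.exp(-((x - x0) ** 2) / (4 * sigma ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)          # normalize on the grid

xbar = np.sum(np.conj(psi) * x * psi).real * dx                        # <x>
spread = np.sqrt(np.sum(np.conj(psi) * (x - xbar) ** 2 * psi).real * dx)

print(xbar, spread)   # about 2.0 and 1.5: a definite but fuzzy position
```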
 
  • #92
Studiot said:
You are presented with a specific bridge over a ravine. [...]
As the Engineer you are asked
Will the bridge collapse if I drive my lorry over it?
Whether you answer ''with 75% probability'' or ''with 10% probability'', nobody can verify whether your answer was correct when the bridge collapsed, or didn't collapse, upon driving the lorry over it.
And if you answer ''with 99% probability'' and you conclude that you better not drive, the answer can again not be checked.

This makes it clear that your answer is not about this bridge collapsing when you drive over it now, but about the ensemble of all possible lorries and bridges matching the characteristics of your model as derived from your input data.
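A minimal sketch of why a single crossing cannot settle it (using the two rival probability claims quoted above as assumptions): either outcome has nonzero likelihood under both claims, so neither is verified or refuted.

```python
# two rival single-case claims about the same bridge and lorry
claims = {"collapses with 75% probability": 0.75,
          "collapses with 10% probability": 0.10}

for outcome, collapsed in (("collapsed", True), ("did not collapse", False)):
    for claim, p in claims.items():
        likelihood = p if collapsed else 1 - p
        print(f"bridge {outcome}: likelihood under '{claim}' = {likelihood:.2f}")
# whatever happens, both claims assign the outcome nonzero likelihood,
# so one trial can neither verify nor falsify either probability statement
```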
Studiot said:
This represents a one-off, unique situation and you have to make an assessment, i.e. a subjective decision, to allow for the fact that all the facts are not (and probably cannot be) known.
As far as it is applied to a particular situation, you always have a subjective probability, which is not verifiable by checking against reality.
Studiot said:
You did not read my post correctly either.
Are you not familiar with the difference between analysis and the more difficult process of synthesis (or design)?
I am familiar with it. But the bridge example is one of analysis, not of design. And though I know about limit state design, I was not directly involved in that. Thus I deliberately changed the wording. However, it is not _so_ different from limit state analysis, as it involves the latter as a constraining design condition. So it is part of the total optimization problem to be solved. I have been involved in the design of devices facing uncertainty by other methods; see, e.g., p.81ff of my slides http://arnold-neumaier.at/ms/robslides.pdf
 
  • #93
Studiot said:
One of the direct consequences of this statement, if true, has deep philosophical implications because it implies determinism.
That is, at any point in time the future is completely determined, with a probability of either 1 or 0.
It doesn't imply determinism, since no dynamical law is involved in it. It only implies (or assumes, depending on what you regard as given) that after something happened, it is a fact, independent of the future.
 
  • #94
DaleSpam said:
No reason, except that the deterministic case is off topic and obvious.
It is not off-topic since it serves to clarify the issue, and it is as obvious in the probabilistic case as in the deterministic case, hence there is no reason to emphasize it in the probabilistic case either. It doesn't add any useful insight into the nature of probability.
DaleSpam said:
Yes, but your definition is not the only valid and accepted definition of probability. Your claim is only true if you require probabilities to be defined only over ensembles. In that case I agree that the posterior probability does not depend on the prior so in that case you are indeed correct that probability does not depend on knowledge. Under the more general definition of probability the posterior can depend on the prior in any case where you do not have a sufficiently large number of observations.
But in that case, the probability is subjective, and not checkable by anyone.

Thus according to the customary criteria, it is not part of science.
 
  • #95
DaleSpam said:
This is different from the fixed-prior case. Here, instead of having a fixed prior you have a family of priors with some hyper-parameters which are uniquely specified by available information. Note that in this case the probabilities are objective (user independent), but they do depend on knowledge.

They do depend on the selected parameters, which are part of the specification of the ensemble.

Of course, the model reflects knowledge, prejudice, assumptions, the authorities trusted, assessment errors, and all that, but that's the same as in _all_ modeling. Hence it is not a special characteristic of probability.
 
  • #96
A. Neumaier said:
As far as it is applied to a particular situation, you always have a subjective probability,

Loud applause all round.

That is the point everyone has been trying to make to you. Subjective probability has a place in physical science.

Further, there exists a range of probabilities, useful in science, between the values 0 and 1.

A. Neumaier said:
which is not verifiable by checking against reality.

You test your assessment by driving over the bridge.

My specific examples separately addressed two different points: (1) uncertainty and (2) objective vs subjective.

Limit State theory (analysis or design) is a real-world example of applied science's attempt to allow for inevitable uncertainty in an objective way. There is no subjectivism whatsoever in this theory. It has been highly successful in increasing design efficiency.

Bridge assessment contains a specific subjective component as a formal part of the process. An extra factor is introduced called the condition factor. This is a subjective derating factor, not present in normal limit state or other analysis methods. (Assessment does not necessarily use limit state theory.)
 
  • #97
Studiot said:
Subjective probability has a place in physical science.
No, since it is not testable.
Studiot said:
You test your assessment by driving over the bridge.

Whether the assessment was ''with 75% probability'' or ''with 10% probability'', nobody can verify whether the statement was correct after you tried to drive over the bridge. Thus it cannot be regarded as a test.
 
  • #98
OK, so we have laid one ghost.

You have not disagreed that there is room, even a necessity, for a subjective component to probability in applied science.


Now for the second one.

You mentioned several times that a probability value exists for something whether the observer knows this value or not.

I agree.

Similarly a probability value exists whether the observer tests, or can test or not.
 
  • #99
Studiot said:
You test your assessment by driving over the bridge.

Yes, exactly.

This is also the gaming analogy. When driving over the bridge, you are placing bets, you are taking risks. But this is how nature works. All you ever do is place your bets and play the game. Along the way you then learn and revise your expectations as feedback arrives.

However, sometimes fatal things happen. Driving over the bridge can be fatal. But this is also part of the game.

The prediction from this game is that only the players that are rational and good guessers and gamers will survive. So the systems we observe in nature are then likely to comply with these rationality constraints. But they are not FORCED to comply. In fact evolution depends on mistakes and variation.

So subjective probabilities are not tested in the descriptive sense. But they don't need to be. Their sole purpose is to evaluate the most rational action (think of some action principle). But these "inference systems" that are somewhat subjective are subject to evolution and selection, and anywhere near equilibrium conditions this may yield predictions of the expected behaviour (actions) of subsystems in nature, just assuming rationality in their way of placing bets based upon subjective probabilities.

I think if you take the "rationality constraints" to be exact, and forcing, then the difference between this view and Neumaier's "objective constraints" is almost nil.

But the problem is that even the effectively objective constraints are observer dependent and in particular scale dependent. So the only consistent stance, as far as I am concerned, is to allow for evolution and selection here and to understand that the subjective perspective is what is needed to understand how the effective objectivity has emerged. Without that, it just is what it is: an ad hoc choice for no particular reason.

The evolutionary picture has a power the deductive way hasn't - to provide a mechanism to understand effective objectivity from a democratic system of subjective views as they interact (equilibrate).

/Fredrik
 
  • #100
Studiot said:
You have not disagreed that there is room, even a necessity, for a subjective component to probability in applied science.
In the art of using science, not in science itself. Subjective probability is a guide to action in single instances, but not a scientific (testable) concept.

Studiot said:
You mentioned several times that a probability value exists for something whether the observer knows this value or not.

Similarly a probability value exists whether the observer tests, or can test or not.

The latter sort of existence is meaningless. In the same sense, ghosts exist (subjectively) no matter whether it can be tested.
 
  • #101
Fra said:
Umm... I'd say physics (and natural science in general) is ALL about us learning ABOUT nature, what we can say about nature.

''us learning'' is the subject of psychology, not of physics. The subject of physics is the objective description of the kinematics and dynamics of systems of Nature.
 
  • #102
A. Neumaier said:
''us learning'' is the subject of psychology, not of physics.

In the case of an observer = human scientist, that's of course correct. I agree.

But like I've argued, the subjective interpretation would make no sense if it was all about human observers. Science is FAPP objective in terms of human-human comparisons.

All human scientists will agree upon the description of nature in the sense physicists talk about. We agree there.

But THE physics is about how one subsystem of the universe "learns" about the states and behaviour of the other subsystems. It's about how the state of a proton encodes and infers expectations of its environment (fellow observers, such as other neutrons, electrons etc), and how the action of the proton follows from rationality constraints in this game.

This will have testable predictions for human science, and it may help us understand how interactions are scaled as the observer scales down from a human laboratory device to a proton, which is then a proper inside observer (except that WE humans observe this inside observer from the outside (the lab)).

So the physics analogy is that the action of a proton is similarly a game. The action of the proton is based upon its own subjective expectations of its environment. It tests this by acting ("driving over the bridge"). A stable proton in equilibrium will have a holographically encoded picture corresponding to external reality. But a system not in equilibrium or in agreement will heavily evolve and change its state; sometimes it even decomposes and is destroyed.

This is the "learning" I'm talking about. But it's actually analogous to how science works. So the analogies is still good, but the real thing is one subsystem of the universe makes inferences about it's physical environment. We humans are like very MASSIVE observing system that observes these inside observers interacting. So human science IS like a DESCRIPTION of the inside game. BUT as we also consider cosmological models, this assymmetry does not hold, and we are forced to consider that human scientists are indeed also inside observers playing a game not JUST descriptive scientists. Except of course on a cosmo scale clearly all EARTHBASED human scientists will still indeed agree upon science.

So nothing of what I say threatens the integrity and soundness of science. On the contrary, it deepens it.

/Fredrik
 
  • #103
A. Neumaier said:
Originally Posted by Studiot
Subjective probability has a place in physical science.

No, since it is not testable.

It is testable: humans are testable!
 
  • #104
lalbatros said:
It is testable: humans are testable!

There is a difference between testing a human and testing the assertion that a particular bridge will collapse with 75% probability when a particular truck crosses it at a particular time. The latter is impossible and proves that the statement has no scientific content.
 
  • #105
Fra said:
In the case of an observer = human scientist, that's of course correct.
In the case of a machine, it is a matter of artificial intelligence, not of physics.

Physics is about interpreting experiments in an observer-independent way.
 
