Einstein simultaneity: just a convention?

In summary: in SR, the 2nd postulate is the one that is physically real. According to the two postulates, the speed of light is constant and the electromagnetic and mechanical laws are isotropic, while time dilation is physically real and length contraction is an interpretation.
  • #106
granpa said:
But on what basis do you say that it is equally likely?

On the basis that there are 2 sides. If you knew nothing else about it, you would still expect to be able to predict the outcome 50% of the time just by choosing at random.
I can either win the lottery, or I can not win. There is also "two sides" to that issue. If I have no idea what my chances of winning are, shall I use that as a reason to conclude my chances are 50-50 in lieu of new information? Or shall I simply say I have no idea what the chances are, and the fact that there are only two possibilities gives me no help whatsoever, because I have no reasonable basis for using that information?
So it's called Bayesian probability.
Call it what you like, but it still doesn't mean much of anything.
 
  • #107
If you predict whether you will win the lottery by choosing yes or no at random, then you will be right 50% of the time.

What part of 'the probability is unknown' did you not understand?
 
  • #108
granpa said:
If you predict whether you will win the lottery by choosing yes or no at random, then you will be right 50% of the time.
That is true in all situations and is unrelated to the issue of the likelihood of the two events. You said "on what basis do you say that it is equally likely? on the basis that there are 2 sides." I assumed the "it" was which side will occur, not what is the frequency that you can be right. Even if there are 99 sides, I can be right 50% of the time by randomly selecting between 1 or 2-99 to bet on. Or I can predict a coin flip 33% of the time by choosing randomly between heads, tails, and that it will end up on its side. None of that has anything to do with "the number of sides" that can occur, it is just a way to manipulate a winning chance via a betting strategy. What's the relevance?
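The point being argued here can be checked with a short simulation (a toy sketch; the function names and numbers are my own, not from the thread): a blind yes/no guess is right about half the time no matter how likely the event itself is, so the 50% says nothing about the event's own probability.

```python
import random

def guess_accuracy(p_event, trials=100_000, seed=0):
    """Fraction of trials where a uniformly random yes/no guess
    matches an event that occurs with probability p_event."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        event = rng.random() < p_event   # e.g. winning the lottery
        guess = rng.random() < 0.5       # blind coin-flip guess
        hits += (guess == event)
    return hits / trials

# Both print a value very close to 0.5, whether the event is
# rare (p = 0.001) or common (p = 0.9):
print(guess_accuracy(0.001))
print(guess_accuracy(0.9))
```

This is just the identity P(guess = event) = 0.5 when the guess is independent of the event, which is why the betting strategy carries no information about the event.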
 
  • #109
It is true in all situations; that's why, when you have nothing else to go on, you fall back on it.

It's called Bayesian probability. Look it up.
 
  • #110
Ken G said:
theories are nothing but a way to unify the information in the existing observations. When we forget that, we land in all kinds of hot water, throughout history.

A theory does indeed unify existing observations, but it is not merely that. In doing so, it predicts the results of other experiments, be it in the currently well tested regime or beyond it. If it did not, then it wouldn't be a theory. It would be only knowledge. To unify a set of observations is to find a pattern in them. Take the Fibonacci series for example. We can look at the first hundred terms and guess that [tex] a_n = a_{n-1} + a_{n-2}[/tex]. That would be our theory. Using our theory, we can predict what the 101st term is. But we do not claim that we know what it will be. Say after looking at a billion terms, our theory still works. Then we will expect with some confidence that the (billion + 1)th term will be according to our theory. I'm sure you also would expect it to work for the (billion + 1)th term. One would indeed be surprised if it didn't. Surprise is just the act of learning something that is contrary to your expectations. Again, I don't see any value at all in not expecting anything at all. Is it only to avoid being surprised? Imagine that you lived in the 19th century, and someone told you that you should not expect position to be a meaningful concept in untested regimes and they gave you no further reason than the fact that those regimes have not been tested. The revolution of quantum mechanics was understanding why it is not a meaningful concept, i.e. by analyzing the process of observation, and thereby also discovering exactly where it is a valid approximation and where it is not. Just doing the double slit experiment and being surprised is not a revolution.
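dx's Fibonacci analogy can be made concrete with a toy script (my own illustration, not from the thread): treat the recurrence as a "theory" fitted to the observed terms, verify it holds in the tested regime, then use it to predict the next, unobserved term.

```python
def check_recurrence(terms):
    """True if every term from index 2 on satisfies a_n = a_{n-1} + a_{n-2}."""
    return all(terms[n] == terms[n - 1] + terms[n - 2]
               for n in range(2, len(terms)))

# "Observed data": the first 20 Fibonacci terms.
observed = [1, 1]
for _ in range(18):
    observed.append(observed[-1] + observed[-2])

assert check_recurrence(observed)   # the pattern holds in the tested regime

# The "theory" then predicts the next, unobserved term:
prediction = observed[-1] + observed[-2]
print(prediction)   # 10946
```

Of course, in this toy the data were generated by the rule itself, which is exactly Ken G's later objection about "getting out what you put in"; the sketch only illustrates the prediction step.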
Ken G said:
... I think it is that former approach that has led to all these "revolutions" in scientific thought, that were never really revolutions at all-- just comeuppances when we made assumptions we had no business making in the first place. Revolution is not a natural part of science, it is an indication of something pathological in how we are going about the process.

Yes! they were revolutions! The fact that relativity was a revolution has nothing to do with whether we believed Newtonian physics to hold at arbitrary speeds or not before relativity. Whatever we may have believed or expected before 1905, relativity would still have been a revolution. It was not a "comeuppance", it was a deep and radical analysis of our concepts of length, simultaneity etc.

Ken G said:
But on what basis do you say that it is equally likely? On the basis of experience, of prior observations of symmetric objects. If you hand someone a shoe, should they also expect it is equally likely to end up on the sole or the top, or would it be natural to adopt no expectation at all until some experience was built up around objects of that shape? Aren't you simply using what you know about symmetries to build your expectation, not any kind of "null hypothesis"? Why not? What if it was a shoe instead of a coin?

Say I have never seen a coin before. I know nothing about tossing coins other than the fact that it will either land heads or tails. Then I am allowed to take a superficial look at the coin, and then asked to predict the chances of it landing heads. Because I do not have any information that would allow me to choose one side over the other, it follows logically that the best prediction I can make is to assign equal chances to heads and tails. If I did anything else, it would either be arbitrary, or I would have to assume things I did not know.

If the coin was a shoe, a superficial examination of its shape would tell you that the top and bottom are distinguishable. But, if you knew nothing else, nothing about gravity, no previous experience about the general mass distribution in shoes, absolutely nothing else, then of course that information would be useless to your predictions, because its relevance is not known to you. It would be as useless as knowing that the president of America is George Bush, as far as your predictions for the toss are concerned. So you wouldn't expect it to land on its sole any more than the other side. And, believe it or not, that is the most logical expectation based on the information you have. Any other expectation would go against logic. This expectation is not claiming anything absolute about the shoe. The more information you have, the closer your expectations will be to what will actually happen.

The only case in which "I have no expectation at all" would be a valid position is if you did not even know what the possible outcomes of the toss are, i.e. you don't have any information at all. In that case, the question of what the outcome of tossing the shoe will be is equivalent to "what is the outcome of experiment A?". It is a meaningless question.

"Why does the Universe exist?", I have no expectations of what the answer to that question will be, since I don't even know what the question means, and I don't know what an answer to the question could be. It is meaningless to me.

Ken G said:
Even in the case of gravity, we can say that "the physics of gravity on Earth" does not work on Mars! However, since we have experience already with gravity in various different situations in the solar system, we have already equipped it with a capacity to be applied on Earth or on Mars. We already put that into our theory, based on observation, it was never something that we just knew had to be right.

I didn't say we knew it had to be right. In fact, in the very next line I said, "both could be wrong".
 
  • #111
dx said:
So you wouldn't expect it to land on its sole any more than the other side. And, believe it or not, that is the most logical expectation based on the information you have. Any other expectation would go against logic.
Actually, it's not that simple; the problem of a priori probabilities is a significant philosophical issue -- this is a common and often useful convention, but it's far from clear that it would be the "most logical expectation". And it's not strictly necessary anyways -- one can view the scientific process as merely determining which theories have the stronger Bayes factors, rather than trying to determine which theories are most probable. i.e. science seeks to accumulate evidence, not to uncover truth.
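For readers unfamiliar with the term, a Bayes factor is just a ratio of likelihoods between two rival hypotheses; a minimal sketch (the coin bias values and data are hypothetical, chosen only for illustration):

```python
from math import comb

def binomial_likelihood(p, heads, tosses):
    """Probability of seeing exactly `heads` heads in `tosses` tosses
    if the coin's bias is p."""
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

# Two rival "theories" about a coin, and some hypothetical data:
heads, tosses = 62, 100
bayes_factor = (binomial_likelihood(0.5, heads, tosses)
                / binomial_likelihood(0.7, heads, tosses))

# A Bayes factor below 1 means the data favor the p = 0.7 theory,
# above 1 the p = 0.5 theory -- no claim about which is "true" is needed.
print(bayes_factor)
```

This is the sense in which science can be viewed as accumulating evidence: the factor quantifies how strongly the data discriminate between the theories without assigning either an absolute probability.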
 
  • #112
Ken G said:
Then you can answer, is the centrifugal force something real?
Are any forces 'real'? What about resolving forces into components, or decomposing it into contributions from different sources?



There is no such paradox in the twin "paradox"
That's why it's more properly called the twin pseudoparadox -- there is a flaw in the reasoning that leads to the contradiction. If you are talking about anything other than the (fallacious) lines of reasoning that lead to the conclusion that each twin sees the other one age less, then you are not talking about the twin paradox; you're talking about something else.


Of course. Yet it falls to us to make that call anyway, constantly, both as teachers and as we ourselves try to obtain the most facile understanding of how reality works.
I can't decipher any content in this. Anyways, the thing I've learned both from experts and my own experience is that for any subject, it is best to understand it from many different points of view. This way, we can select a point of view most suited to the problem of interest -- and even better, we can transfer between different points of view, so as to apply a wider variety of methods to the problem.

But they are not very different ideas, expressly because we are dealing entirely with coordinate homeomorphisms here.
I'm not.

We must have a way to connect observations to coordinates.
That's called a "coordinate chart". And anyone can use whatever coordinate chart they want, consider things relative to more than one chart, or even not use a coordinate chart at all.

but can be proven to need modification by any single significant failure.
Such a thing cannot be proven, for exactly the same reason that a theory cannot be proven true. And even a single significant failure usually isn't enough to yield convincing evidence that a theory needs modification. For example, equipment failure or improper experimental procedure are usually more likely 'explanations' of a significant failure.
 
  • #113
granpa said:
It is true in all situations; that's why, when you have nothing else to go on, you fall back on it.

It's called Bayesian probability. Look it up.

There's still no relevance to the probability of a theory working.
 
  • #114
dx said:
A theory does indeed unify existing observations, but it is not merely that. In doing so, it predicts the results of other experiments, be it in the currently well tested regime or beyond it. If it did not, then it wouldn't be a theory.
Which does it need to do, predict within the well tested regime, or beyond it? If I have a theory that predicted in the tested regime, then I can build bridges with it. Exactly why do I need to be able to extend my predictions beyond that regime? I would call that nothing but the formation of a hypothesis, and we don't need a theory to do that, we can try anything we like.

To unify a set of observations is to find a pattern in them.
Exactly my point-- the pattern is in the data, so it lives in the tested regime. Extensions outside that regime are either part of that pattern, in which case they are not really outside the regime, or they are not, in which case they are. We don't know in advance what the regime is, but we still have no need to make any guesses about what it is. There's simply no need for it; you are never going to use the theory to do anything in a regime in which it has not been tested (other than to form a hypothesis, but then there is no need to "expect" anything, you are just deciding what experiment you think should be done. Indeed, "expecting" results tends to lead you to not bother with the experiment.)
Take the Fibonacci series for example. We can look at the first hundred terms and guess that [tex] a_n = a_{n-1} + a_{n-2}[/tex].
But that's the definition of the Fibonacci series. It's only a theory if you don't know where that series comes from (this is the problem we have in physics). If the series is coming from observations of some kind where you are not just getting out what you put in, then you have no reason to expect the form will continue indefinitely. If it worked for a million terms, it seems probable it will work for many thousands more, because why should they be special, but what about a billion more? No reason to expect that, unless you are just getting out what you put in (as in the Fibonacci series itself).

Again, I don't see any value at all in not expecting anything at all. Is it only to avoid being surprised?
It is to avoid fooling oneself. Feynman has a great quote that science is about learning how to not fool yourself, given that you are the easiest person you can fool. I see science falling into that same trap, it is not taking its own principles far enough if it keeps causing us to fool ourselves into false expectations, and then having "revolutions" later on.
Imagine that you lived in the 19th century, and someone told you that you should not expect position to be a meaningful concept in untested regimes and they gave you no further reason than the fact that those regimes have not been tested.
I would say that is exactly the way science should be done, yes. Granted, it is no simple matter to define what is meant by an "untested regime", but for now we'll simply allow there is such a concept even if we can't be terribly precise about what it is.
The revolution of quantum mechanics was understanding why it is not a meaningful concept, i.e. by analyzing the process of observation, and thereby also discovering exactly where it is a valid approximation and where it is not. Just doing the double slit experiment and being surprised is not a revolution.
I agree, except I would simply call that the great discovery of quantum mechanics. That it was also a revolution was all our own fault. Columbus "discovered" the New World for Europe; its existence was not a "revolution". The ancient Greeks knew there was some 15,000 miles of ocean out there, so for anyone to "expect" it to be empty would just be guessing.
Whatever we may have believed or expected before 1905, relativity would still have been a revolution. It was not a "comeuppance", it was a deep and radical analysis of our concepts of length, simultaneity etc.
But it was very much a comeuppance, indeed that is the main reason Poincare did not discover it himself. He saw it as some kind of mathematical trick, he couldn't believe that it was a description of how reality actually worked. That's what I mean about putting the "cart" of expectations before the "horse" of observing reality.

But I understand what you mean that we can use the word "revolution" to simply mean "very important discovery that gave us a very new tool for understanding reality", that is just not the sense of the word I'm using-- I mean "revolution" as "a throwing off of the previous power structure, an unseating of what was expected to hold"-- a connotation of "the King is dead, long live the King". Normally, when we encounter the error of holding preconceived expectations that we are loath to part with, we expect to be dealing with some religious authority-- not scientific authority. Yet here we see the only difference is in how tightly the preconceived notions are held, versus how willing we are to part with them when observations warrant it-- the basic attitude is still the same.
Say I have never seen a coin before. I know nothing about tossing coins other than the fact that it will either land heads or tails. Then I am allowed to take a superficial look at the coin, and then asked to predict the chances of it landing heads. Because I do not have any information that would allow me to choose one side over the other, it follows logically that the best prediction I can make is to assign equal chances to heads and tails. If I did anything else, it would either be arbitrary, or I would have to assume things I did not know.
I claim your analysis is using the symmetry of the coin, and that's why it seems "arbitrary" to do anything else. But if you know the coin has a symmetry, you are indeed using knowledge of the coin. Write that same argument but for a conical hat.
So you wouldn't expect it to land on its sole any more than the other side.
True, but that would not lead me to expect a 50-50 chance, it would lead me to simply say I have no meaningful way to assess the probability. Probability requires a great deal of knowledge about what variables are outside your control-- if you don't even know that, it is a meaningless concept.
The only case in which "I have no expectation at all" would be a valid position is if you did not even know what the possible outcomes of the toss are, i.e. you don't have any information at all. In that case, the question of what the outcome of tossing the shoe will be is equivalent to "what is the outcome of experiment A?". It is a meaningless question.
I don't agree, all experiments can have two possible outcomes-- a particular one, and anything else. Shall we start with the assumption, then, that any outcome you can name has a 50-50 chance of happening, on the grounds that we have no other information about the probabilities of "all other outcomes"? We always have to group outcomes, there's no absolute sense of "the possible outcomes of an experiment". Even if you are flipping a coin, there is the location of every other particle involved in that experiment. You can say you don't care about them, so you are grouping outcomes. So am I in the above.
I didn't say we knew it had to be right. In fact, in the very next line I said, "both could be wrong".
The point there is that at first glance, we may think we are saying something fundamental about the theory of gravity: that it works on Earth and on Mars. But we are not; we are saying something fundamental about the observations we already had, which we used the theory of gravity to unify. We observed what aspects of a planet control its gravity, and built a theory that reflected that. So when we look at other planets and find the theory works, it is because we put "other planets" right into the theory. If the "other planet" is a neutron star, we get a breakdown, and if it's like the planets we built the theory for, we don't.
 
  • #115
True, but that would not lead me to expect a 50-50 chance,

Who said the probability was 50%? I would say the expectation is 50%. The probability is unknown. Bayesian probability is not the same as probability, though after enough trials it will approach it.
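The claim that the Bayesian expectation starts at the indifference value and approaches the observed frequency can be sketched with a standard Beta-Binomial update (my own example, assuming a uniform prior; the numbers are hypothetical):

```python
def posterior_mean(heads, tosses, a=1, b=1):
    """Mean of the Beta(a + heads, b + tails) posterior for a coin's bias,
    starting from a uniform Beta(1, 1) prior (the 50-50 indifference guess)."""
    return (a + heads) / (a + b + tosses)

# Before any toss, the Bayesian expectation is 0.5 regardless of the true bias:
print(posterior_mean(0, 0))        # 0.5

# Suppose the coin's true bias is 0.8. As data accumulate, the
# expectation moves from the indifference value toward the frequency:
print(posterior_mean(8, 10))       # (1 + 8) / (2 + 10) = 0.75
print(posterior_mean(800, 1000))   # (1 + 800) / (2 + 1000), close to 0.8
```

The 0.5 here is an expectation derived from ignorance, not a claim about the coin's physical frequency, which is exactly the distinction being drawn in this exchange.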
 
  • #116
phyti said:
Using the 2nd postulate, c is constant..., you can derive the same results in SR, with one exception. Time dilation is physically real, length contraction is an interpretation.
The 1st postulate was a philosophical preference.

The 1st postulate is not only a philosophical preference!

I spent my last 2 years demonstrating the isotropy of the one-way speed of light, and now I have to publish my theoretical results.
It may seem absurd, but I had to use the theory of tachyons to show that the first postulate is a real thing!
 
  • #117
Hurkyl said:
Are any forces 'real'?
Exactly. My question to you came in response to your claim that centrifugal-force confusions arise from understanding how to do coordinates but not understanding what they represent. What do they represent? They are means of manipulating quantitative information; how can you do the manipulation correctly and still be "missing something", as you appear to suggest?

If you are talking about anything other than the (fallacious) lines of reasoning that lead to the conclusion that each twin sees the other one age less, then you are not talking about the twin paradox; you're talking about something else.
Correct, I'm talking about something else-- something that remains an issue even after you know how to do relativity (what else would be interesting?). That "paradox" is that the two observers can do everything right and still come up with a very different answer as to the "cause" of the age difference. That's normally an accepted aspect of relativity, but my point is, it doesn't need to be so.
I can't decipher any content in this. Anyways, the thing I've learned both from experts and my own experience is that for any subject, it is best to understand it from many different points of view.
Quite so. Then from that perspective, you may interpret this entire thread as asking the question, "what is the formulation of relativity that uses only postulates that we have really established observationally, i.e., postulates that are not subject to being overturned with new observations unless we somehow did the existing observations wrong."
This way, we can select a point of view most suited to the problem of interest -- and even better, we can transfer between different points of view, so as to apply a wider variety of methods to the problem.
Right-- the "most suited" aspect of this approach is that it is most suited to not making claims on reality that we haven't the vaguest idea are true (nor have any "Bayes factors" to apply).

I'm not.
So you are contesting my mathematical proof that coordinate automorphisms on R^n extend trivially to topological automorphisms on X, and therefore the "distinction" you draw does not really exist?

That's called a "coordinate chart". And anyone can use whatever coordinate chart they want, consider things relative to more than one chart, or even not use a coordinate chart at all.
All you are doing is naming the action, but any such naming doesn't change the point that a coordinate chart means nothing in physics unless you can make a connection with an observable. So no, anyone cannot use any coordinate chart they want-- they must be able to describe its connection with clocks and rulers, or other measuring devices, or it just isn't meaningful physics. That's why a coordinatization is a homeomorphism on the topological space that comes complete with automorphisms onto other coordinatizations, all of which extend trivially to automorphisms on the topological space.

Such a thing cannot be proven for exactly the same reason that a theory cannot be proven true. And even a single significant failure usually isn't enough to yield convincing evidence that a modification of a theory. For example, equipment failure or improper experimental procedure are usually more likely 'explanations' of a significant failure.
Although that's all true in principle, in practice that isn't the way we conceptualize our art. Although it can be argued that the Michelson-Morley experiment was of no significance until it was reproduced, for just those reasons, the way we describe the progress of science is quite different. Take it up with the legacy of Einstein, as it includes his famous quote "No amount of experimentation can ever prove me right; a single experiment can prove me wrong." The literal truth of this is not really the point; I would say the significance of the remark is that physics lives in little boxes called "appropriate regimes", so you can have a hundred experiments in one regime and learn nothing about some other one, until a single experiment is done in that other regime. My goal is to recognize this right up front in how we postulate our science, basically so that we can really keep better track of what we are actually doing-- thereby eliminating the problem of "revolutions" (in the sense I'm using it, not dx's more general meaning of any significantly new discovery).
 
  • #118
granpa said:
Who said the probability was 50%? I would say the expectation is 50%. The probability is unknown. Bayesian probability is not the same as probability, though after enough trials it will approach it.
You may be correct that probability is different from expectation (I only know the latter as a result-weighted version of the former), but what point are you making about observations in physics?
 
  • #119
Chatman said:
It may seem absurd, but I had to use the theory of tachyons to show that the first postulate is a real thing!
How does a theory show something is real? I thought observations were the only things capable of that.
 
  • #120
Obviously it's impossible to measure the one-way speed of light experimentally, but I demonstrated its constancy and isotropy using proof by contradiction, and I discovered various inconsistencies with the empirical evidence given by the simple concepts of cause and effect, and of action and reaction in the third law of dynamics.

That is, inconsistencies that would arise only if the one-way speed of light c were anisotropic.
 
  • #121
Ken G said:
Which does it need to do, predict within the well tested regime, or beyond it? If I have a theory that predicted in the tested regime, then I can build bridges with it.

A theory cannot set arbitrary limits on itself. That can only be done when it is understood as an approximation to another more general theory. Consider my example of the Fibonacci series. We notice a pattern that seems to hold for all the terms we have looked at. So the formula [tex] a_n = a_{n-1} + a_{n-2} [/tex] will be our theory which unifies all the observations we have made. We might have looked at the terms from 200 to 300. Then we would expect that it will hold for the term 301, because that's what our information is suggesting to us. By the same logic we will expect it to hold for the terms 5000, 5001 etc too. We have no information that tells us that it is less likely to hold in the ~5000 regime than in the ~400 regime.

If we've tested the theory in the 10s and the 30s, then we believe it will hold in the 20s as strongly as we believe that it will hold in the 500s. That's because we have no information that distinguishes them. There are various things that could make the 20s more likely than the 500s. For example, we may believe from other experience that theories only become inaccurate gradually. We may believe in continuity in some general sense, but it is important to realize that that is extra information, which, if used, is part of your theory.

Ken G said:
Exactly why do I need to be able to extend my predictions beyond that regime? I would call that nothing but the formation of a hypothesis, and we don't need a theory to do that, we can try anything we like.

Let's say that what we expect has consequences. Maybe we want to send some machine to another galaxy. Then we will construct it according to what our theories tell us, although we're not sure if everything will work the way we expect. Just because we don't know doesn't mean that trying some arbitrary hypothesis is as good as going with the theories we currently have.


Ken G said:
But that's the definition of the Fibonacci series. It's only a theory if you don't know where that series comes from (this is the problem we have in physics).

That's what I meant. I was considering a situation where the terms of the Fibonacci series are outcomes of an experiment.

Ken G said:
If the series is coming from observations of some kind where you are not just getting out what you put in, then you have no reason to expect the form will continue indefinitely. If it worked for a million terms, it seems probable it will work for many thousands more, because why should they be special, but what about a billion more?

What's different about a thousand more and a billion more? There's no difference. Both are outside the tested regime and you have no information about them. If your expectations had consequences, then the best thing is to stick to the pattern you noticed, i.e. the theory. Again, like I said before, you may have some general ideas about how things become inaccurate only gradually, but that idea would also be part of your theory.

Ken G said:
It is to avoid fooling oneself. Feynman has a great quote that science is about learning how to not fool yourself, given that you are the easiest person you can fool. I see science falling into that same trap, it is not taking its own principles far enough if it keeps causing us to fool ourselves into false expectations, and then having "revolutions" later on.
I would say that is exactly the way science should be done, yes. Granted, it is no simple matter to define what is meant by an "untested regime", but for now we'll simply allow there is such a concept even if we can't be terribly precise about what it is.

I think what you're trying to say is that we should be aware that the concepts and theories we use are not absolute truths, and also be aware of where they have been tested and where they have not been etc. Is that right?

Ken G said:
I claim your analysis is using the symmetry of the coin, and that's why it seems "arbitrary" to do anything else. But if you know the coin has a symmetry, you are indeed using knowledge of the coin. Write that same argument but for a conical hat.

No, my analysis did not use the symmetry of the coin because the relevance of the symmetry is not known to me. For that I would have needed to know the details of Newtonian mechanics, gravity, center of mass etc. That is all information that I did not have. All I knew was that there were two possible outcomes, heads and tails. With no other information, I would have to expect both equally.

Ken G said:
True, but that would not lead me to expect a 50-50 chance, it would lead me to simply say I have no meaningful way to assess the probability. Probability requires a great deal of knowledge about what variables are outside your control-- if you don't even know that, it is a meaningless concept.

I was talking about expectation, not probability. I don't know if you think probability is something absolute about the system or not, but we don't need to go into that. There is always a meaningful way to form expectation. The way is this - "take all the information you have into account, and nothing else".

Ken G said:
I don't agree, all experiments can have two possible outcomes-- a particular one, and anything else. Shall we start with the assumption, then, that any outcome you can name has a 50-50 chance of happening, on the grounds that we have no other information about the probabilities of "all other outcomes"? We always have to group outcomes, there's no absolute sense of "the possible outcomes of an experiment".

The fact that you can group all possible outcomes into "this" and "everything else" has nothing to do with what you should logically expect. If that's all the information you had, then yes, you should assign an equal expectation to both. But if you knew that there were six possible outcomes, then you cannot ignore that information. If I told you that there were some number of elephants from 1 to 10 in the cage, then you would assign equal expectation to each number from 1 to 10, because you don't have any other information. You cannot group it into 1 and {2,..,10} and then assign equal expectation to those sets, because that would mean you're ignoring some of the information that you were given. The best possible expectation given a certain amount of information is the expectation that takes all the information into account, and nothing else.
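The elephant example can be written out explicitly (a toy sketch of the bookkeeping, not from the thread): indifference applies at the level of the outcomes actually given, so grouping afterwards cannot change the totals.

```python
from fractions import Fraction

# Information given: the count is one of 1..10, nothing more.
# Indifference over the actual outcomes assigns 1/10 to each.
p = {n: Fraction(1, 10) for n in range(1, 11)}

# Grouping into {1} and {2..10} and then splitting 50-50 between
# the groups would contradict the information we were given;
# the indifference assignment forces these group weights instead:
p_one = p[1]                              # 1/10
p_rest = sum(p[n] for n in range(2, 11))  # 9/10
print(p_one, p_rest)
```

The point is that the partition inherits its weights from the outcome-level assignment; indifference between the two groups would be a different, information-discarding prior.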

Ken G said:
Even if you are flipping a coin, there is the location of every other particle involved in that experiment. You can say you don't care about them, so you are grouping outcomes.

It's not that I don't care about them, it's that I don't know about them. If I did know about them, then my expectation would take them into account. I think your familiarity with coins, and the immense experience you have that could be relevant, is preventing you from thinking clearly. Remember, all I know is that there are two possibilities. That information may not be true. But given the information, the logical expectation is to assign equal expectation to both.

Ken G said:
If the "other planet" is a neutron star, we get a breakdown, and if it's like the planets we built the theory for, we don't.

Which has nothing to do with what you should expect. When you say you expect something you are not saying that it is true. If you didn't know the relevant difference between a neutron star and a normal planet, then you should expect the same for both. If you had to send some kind of machine to do experiments to a neutron star and a planet, and if you didn't know what the difference was, you would build it according to your current expectations (which are based on your current knowledge). Any random hypothesis would not be just as good, because it would be ignoring information that is available to you.
 
  • #122
Hurkyl said:
Actually, it's not that simple; the problem of a priori probabilities is a significant philosophical issue -- this is a common and often useful convention, but it's far from clear that it would be the "most logical expectation"

In this case it is that simple. This is all the information you have :

1. The experiment has two possible outcomes.

Let's leave aside the question of whether you should expect anything at all. If your life depended on it, then the best thing is to expect them equally. Any other expectation would not be justified logically. There is no way to logically assign unequal expectations to the two outcomes without using some information that is not contained in (1).

Hurkyl said:
And it's not strictly necessary anyways -- one can view the scientific process as merely determining which theories have the stronger Bayes factors, rather than trying to determine which theories are most probable. i.e. science seeks to accumulate evidence, not to uncover truth

Science seeks to unify experience, and to answer meaningful questions about things one has not experienced yet. The second depends on the first.

Say a theory A unifies a certain domain of your experience. Then that theory will have something to say about questions about that domain which can be meaningfully asked within the structure of that theory. Given that you believe in the theory, you should logically "expect" what it tells you. You may not believe that the theory applies in that case. And if you have any reasons for that belief, then it is also part of your theory. Your expectations are ideally products of the sum of your knowledge.

I don't know what you mean by "which theories are most probable". If you mean which theories have the most evidence, then yes that is exactly what we are trying to find. We are interested in the ideas that have the most support.
 
  • #123
dx said:
If we've tested the theory in the 10s and the 30s, then we believe it will hold in the 20s as strongly as we believe that it will hold in the 500s.
Why would we do that? It simply isn't logical. If you have some information about the series that tells you the 500s are no different from the 10s, then you have something more than a physical theory actually gets to have. That's why a mathematical series is not actually a good example of what we are talking about-- in math, you only get out what you put in, though the trick is to figure out everything you put in without realizing it.
Maybe we want to send some machine to another galaxy. Then we will construct it according to what our theories tell us, although we're not sure if everything will work the way we expect. Just because we don't know doesn't mean that trying some arbitrary hypothesis is as good as going with the theories we currently have.
I agree that an "arbitrary" hypothesis would not be a good guide-- our theories do give us a guide for making hypotheses that are likely to be of use. That's quite a bit different from the expectation that our theories will work, however. In your example, our "expectation" would be that our theories will provide no more than a basis to conduct an experiment. What we expect is to be surprised, and to require modifications. But the key point is, we will try to apply theories that are actually within their regime of applicability, or we are just kidding ourselves.

Take the Wright brothers. Yes they did rely on theory, and yes they did have to do a lot of trial-and-error modifications of that theory, but the theory they were using was a theory based on fluid mechanics quite similar to air passing over a wing. They were not using equations about high-viscosity fluids and applying them to air flows, for example, simply for lack of anything better. Had they been doing the latter, their "expectations" wouldn't be worth a plugged nickel. They had to know something experimental about air to make the expectations useful.

That's what I meant. I was considering a situation where the terms of the Fibonacci series are outcomes of an experiment.
In any such situation, knowledge of the first 30 numbers affords very little information about the 500th. We could do the same thing in the Earth's atmosphere, say. We could do an experiment where we travel upwards by 100 feet, measure the density, write it down, then go up another 100 feet. If we know nothing about the atmosphere from other experiences with it, then we'd have nothing to go on except a very obvious if approximate pattern that would govern about the first 50 numbers in that sequence. By your logic, it is then natural to expect it to apply also to the next 500 as well. Of course it will actually break down completely in this example, but my point is that this is what we should actually expect-- the example is typical.
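A toy version of the atmosphere experiment (an isothermal barometric model with assumed round-number constants, added for illustration) shows how a locally excellent pattern extrapolates into nonsense:

```python
import math

H = 8500.0  # assumed atmospheric scale height in meters

def density(h):
    """Isothermal barometric model: rho(h) = rho0 * exp(-h/H), in kg/m^3."""
    return 1.225 * math.exp(-h / H)

# A linear trend fitted through the first 50 readings (every 100 ft,
# about 30.5 m) looks fine locally...
h1, h2 = 0.0, 49 * 30.5
slope = (density(h2) - density(h1)) / (h2 - h1)

# ...but extrapolated out to reading 500 it predicts a negative density,
# while the actual model density is still comfortably positive.
extrapolated = density(h1) + slope * (499 * 30.5)
print(extrapolated, density(499 * 30.5))
```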

Similarly, you could write down the energies of the first 30 levels in hydrogen, and see the pattern. Will that apply to the 500th? Probably not, you'd need very low densities and weak fields. Again, breakdown of the pattern is the wiser expectation in general.
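A rough Bohr-model estimate makes the hydrogen point concrete (a sketch I'm adding with textbook constants, not from the thread): by n = 500 the electron orbit is enormous, so ordinary gas densities or stray fields would wreck the pattern long before then.

```python
A0 = 5.29e-11  # Bohr radius in meters

def level_energy_eV(n):
    """Bohr-model level energy: E_n = -13.6 eV / n^2."""
    return -13.6 / n**2

def orbit_radius_m(n):
    """Bohr-model orbit radius: r_n = n^2 * a0."""
    return n**2 * A0

print(level_energy_eV(30), orbit_radius_m(30))    # about -0.015 eV, ~48 nm
print(level_energy_eV(500), orbit_radius_m(500))  # about -5.4e-5 eV, ~13 microns
```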

What's different about a thousand more and a billion more? There's no difference.
See above.

I think what you're trying to say is that we should be aware that the concepts and theories we use are not absolute truths, and also be aware of where they have been tested and where they have not been etc. Is that right?
Yes, the main idea would be to equip every theory with a sense of what regime it has been tested in, by experiment. That way, no theory would ever be considered independent from the experimental data it unifies. Just think of how much wild and useless speculation over the years, and even today, we could avoid with that approach. No "determinism", no "landscape", no "quantum suicide"-- just the experiments and the theories that explain them.
No, my analysis did not use the symmetry of the coin because the relevance of the symmetry is not known to me.
Then your analysis can be applied to a conical hat as well as to a coin. Is it any good for a conical hat?

For that I would have needed to know the details of Newtonian mechanics, gravity, center of mass etc. That is all information that I did not have.
No, you'd just need a fairly straightforward understanding of symmetry.
The fact that you can group all possible outcomes into "this" and "everything else" has nothing to do with what you should logically expect.
No, we'll always have to make such groupings. You may think "heads" plus the location of every person in the room is an obviously different grouping than tails and such, but that's just your grouping. I can just as easily group heads and me in any position but one with all the tails, and leave the other possibility as heads and me in one position. Why is that a "wrong" way to divide the outcomes? Your logic applies to that grouping as well as the other-- why would it not?
 
  • #124
Ken G said:
Why would we do that? It simply isn't logical. If you have some information about the series that tells you the 500s are no different from the 10s, then you have something more than a physical theory actually gets to have.

No, we don't have any information that tells us that the 500s are different from the 10s. That's the point. The only information you have is what you have observed. And that information is not telling us that the 500s are different from the 10s. We're considering an idealized experiment where the observations in that experiment are all we know. The question is, what can we expect (and to what degree) about further observations, based only on that information?

To make it concrete,

1. You start out with no knowledge at all (about anything whatsoever).
2. You make a series of observations (physical observations, not mathematical; the Fibonacci series is ubiquitous in nature, and many conceivable observations could result in it).

1 -> 1
2 -> 1
3 -> 2
4 -> 3
etc. say up to a million.

Then there's a meaningful question that we may ask, i.e. million + 1 -> ?

The pattern we noticed could be extended to make a prediction of what it would be. Now, the question is, to what degree do we believe that prediction? "I don't care" is not an option. We must choose on a scale from "I don't believe in it at all" to "I believe it completely". What do you think? (the only information we have is the set of observations up to a million. Physical observations, not mathematical!)
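The extrapolation dx describes can be sketched in code (a toy illustration I'm adding; the names are mine). The only "theory" it uses is the observed pattern itself:

```python
def predict_next(observed):
    """Naive pattern extrapolation: if every term so far is the sum of the
    two before it, predict the next term the same way."""
    if all(observed[i] == observed[i - 1] + observed[i - 2]
           for i in range(2, len(observed))):
        return observed[-1] + observed[-2]
    return None  # pattern broken: no prediction offered

obs = [1, 1, 2, 3, 5, 8, 13]
print(predict_next(obs))  # 21
```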


Ken G said:
I agree that an "arbitrary" hypothesis would not be a good guide-- our theories do give us a guide for making hypotheses that are likely to be of use. That's quite a bit different from the expectation that our theories will work, however.

You seem to still be confusing "expecting our theory will work" with "I know the theory will work". That is not the sense in which I'm using the word expect. Expectation can have degree. You can expect strongly, weakly, completely, not at all, or anything in between.

Ken G said:
In your example, our "expectation" would be that our theories will provide no more than a basis to conduct an experiment. What we expect is to be surprised, and to require modifications. But the key point is, we will try to apply theories that are actually within their regime of applicability, or we are just kidding ourselves.

You should expect to be surprised in all cases where you don't have perfect knowledge, i.e. all the time. But that does not mean you can't expect certain things to certain degrees. They are complementary. If your expectation that the plane will fly on a scale of 0-1 is 0.9, then your expectation that you will be surprised is 0.1.

Ken G said:
They were not using equations about high-viscosity fluids and applying them to air flows, for example, simply for lack of anything better. Had they been doing the latter, their "expectations" wouldn't be worth a plugged nickel. They had to know something experimental about air to make the expectations useful.

But to know that your expectations are not useful, you have to try them out first. When you're creating a machine to send into some unknown regime, you don't have any information about that regime. This is the situation where what you expect becomes crucial. You may not have the chance to do it again, so what should you do based on what you already know?


Ken G said:
By your logic, it is then natural to expect it to apply also to the next 500 as well. Of course it will actually break down completely in this example, but my point is that this is what we should actually expect-- the example is typical.

Yes, without further information that's what you should expect.


Ken G said:
Then your analysis can be applied to a conical hat as well as to a coin. Is it any good for a conical hat?

Again, you're confusing reality with information. Whether the analysis is any good in the sense of whether it is an accurate picture or not is not the question. You have no way of knowing that other than obtaining more information. The most logical expectation does not necessarily match reality. The point is to make use of all information you have. There are even cases where a logical analysis of the information you have will lead you to expect something that is completely false. But it is still the best you can do with that information.

Ken G said:
No, you'd just need a fairly straightforward understanding of symmetry.

Which is not information I possess. I've said this many times. You can use only that information that you have. In my example, the only information you have is that there are two possible outcomes, and also the general shape of the coin. You don't understand symmetry, i.e. you don't know the relevance of the shape for the experiment you are performing. You don't even know what "tossing" means. That's why the symmetry cannot tell you anything at all about the outcomes.

Ken G said:
I can just as easily group heads and me in any position but one with all the tails, and leave the other possibility as heads and me in one position. Why is that a "wrong" way to divide the outcomes? Your logic applies to that grouping as well as the other-- why would it not?

You can divide the outcomes in that way as long as you don't forget what each group consists of, since that would be throwing away information. Please read the following carefully.

1. You have a box with some balls in it.
2. There are 10 balls, each with a number from 1-10 on it.

This is the information you have. Now if you pick one without looking (i.e. you're not collecting any more information while you're picking), then you should assign the same expectation to each of the balls. You cannot group them into {1} and {2,...,10} and then assign equal expectation to these sets, because this grouping does not represent the same information you had before. You would have to forget some information, i.e. the fact that the second group has more than one ball in it.
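A quick simulation (mine, assuming per-ball indifference as dx describes) makes the point about the two groupings concrete:

```python
import random

random.seed(0)
trials = 100_000
group1 = 0  # draws landing in the group {1}

for _ in range(trials):
    ball = random.randint(1, 10)  # equal expectation for each ball
    if ball == 1:
        group1 += 1

# Under per-ball indifference, {1} occurs about 10% of the time and
# {2,...,10} about 90% -- assigning 50-50 to the two groups would
# contradict the very information used to build the simulation.
print(group1 / trials)
```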
 
  • #125
dx said:
Then there's a meaningful question that we may ask, i.e. million + 1 -> ?
But the issue we've been talking about is not the million+1 case, that's clearly a case where you have no reason to think you crossed a regime boundary. I'm saying that you should have no expectation at all about the billionth term in the series, unless you also have some reason to expect it will continue to follow the trend (as in the case of a mathematical example, which is why they are not appropriate to physics analogies). Consider instead the levels of a real hydrogen atom, for example.
We must choose on a scale from "I don't believe in it at all" to "I believe it completely". What do you think? (the only information we have is the set of observations up to a million. Physical observations, not mathematical!)
I already gave two examples of that, and why my expectation would be "I don't believe in it at all", but here we're talking about the billionth entry, not the million+1.

You can expect strongly, weakly, completely, not at all, or anything in between.
I'm fine with the degrees, I'm saying the appropriate degree is determined not by any element of the theory itself, but only by the degree to which the regime you are considering connects with regimes you have already experimented on. Gravity is a good example of this. We know pretty well how it works on scales of boulders and planets, but we already got one surprise extending it to the density of a neutron star, and another surprise extending it to the scale between galaxy clusters. We have no idea how it works on scales smaller than boulders, say for atoms, but my money says we should have "no confidence at all" that our current theory can treat the gravity in an atom-- should we ever be able to observe that.
Yes, without further information that's what you should expect.
My examples already show why that expectation would be baseless, as they are quite generic examples. You can add the third example of gravity, as above. Granted, gravity worked well going from a solar system to a galaxy cluster, and that's a significant increase over orders of magnitude. But the theory was built to unify observations where M/R never went above what you find in the solar system. If you exceed that by about 8 orders of magnitude, you come to the M/R for a black hole, or for the whole observable universe, both of which appear to give significant deviations from Newton's theories. We should have expected that, since M/R exceeds our experimental experience by 8 orders of magnitude.
Whether the analysis is any good in the sense of whether it is an accurate picture or not is not the question.
It certainly is the question, it's precisely the question.
The most logical expectation does not necessarily match reality.
Then on what basis do you claim it is the "most logical expectation"? What do you think that phrase means?

The point is to make use of all information you have. There are even cases where a logical analysis of the information you have will lead you to expect something that is completely false. But it is still the best you can do with that information.
Not if the best you can do with the information is recognize that it is insufficient to draw any conclusions whatsoever. That is the actual logical thing to do.
You can use only that information that you have.
Obviously, the point is, one way you can use that information is to say you cannot say anything useful.
In my example, the only information you have is that there are two possible outcomes, and also the general shape of the coin. You don't understand symmetry, i.e. you don't know the relevance of the shape for the experiment you are performing.
Then you cannot use shape information at all, and your argument has to apply just as well for a conical hat as for a coin. Does it?
You don't even know what "tossing" means. That's why the symmetry cannot tell you anything at all about the outcomes.
If that is true, you should have no expectation at all.
You can divide the outcomes in that way as long as you don't forget what each group consists of, since that would be throwing away information. Please read the following carefully.

1. You have a box with some balls in it.
2. There are 10 balls, each with a number from 1-10 on it.

This is the information you have. Now if you pick one without looking (i.e. you're not collecting any more information while you're picking), then you should assign the same expectation to each of the balls. You cannot group them into {1} and {2,...,10} and then assign equal expectation to these sets, because this grouping does not represent the same information you had before.
Sure it does, the grouping loses no information, it merely groups it. You have already done such a grouping, when you assumed I can distinguish a "1" from a "2" without knowing the orientation of the ball. You group all "1" results together, regardless of orientation. So how do you know there "really are" 10 possibilities here? You are the one who has imposed that on the experiment, the actual experiment will generate a virtual infinity of distinguishable outcomes. I'm saying that the way you do the grouping does not appear anywhere in your argument, so I am free to group all 1's with the 2's except for 1's in a single precise orientation when I remove it from the box, for example. What about your logic precludes that?

You would have to forget some information, i.e. the fact that the second group has more than one ball in it.
I don't need to forget that information, I am well aware of it. So what, what about your logic requires there be an equal number of balls in each group? To say that, you must be assuming that each ball is equally likely, then using that assumption to reason that we should expect each ball to be equally likely. That is precisely the argument you are giving.
 
  • #126
Ken G said:
But the issue we've been talking about is not the million+1 case, that's clearly a case where you have no reason to think you crossed a regime boundary. I'm saying that you should have no expectation at all about the billionth term.

Why the billionth? Why not the trillionth? Give me precisely the piece of information from your set of observations that tells you where exactly you start "not believing at all". How do you determine that boundary?

Ken G said:
It certainly is the question, it's precisely the question.

No, it's not. You will not know whether your expectation matches reality satisfactorily until you make an observation to test it. But the question was, "what should you expect before you make the observation?". Once you make the observation, you will have more information, which you use to update your expectations. But before you make the observation, there is no way to tell if your expectation will match reality. That in no way prevents you from using the information you have to make a guess. Even when you're predicting within the regime, you're still guessing. It may be a well supported guess, but it's still a guess.
Ken G said:
Then on what basis do you claim it is the "most logical expectation"? What do you think that phrase means?

The phrase means the expectation that follows logically from the information you have.

Ken G said:
Not if the best you can do with the information is recognize that it is insufficient to draw any conclusions whatsoever. That is the actual logical thing to do.

We are not drawing conclusions! We are assigning degrees of belief to possibilities. If the information was sufficient to make a conclusive deduction, then you will "know", not "expect". We are considering what we should do when we don't have enough information to solve the problem deductively. We can realize that the information is not sufficient to tell for sure, but that doesn't mean we cannot use the information we have to make a guess. And the best guess would be the one that takes into account all the information we have.

Ken G said:
Obviously, the point is, one way you can use that information is to say you cannot say anything useful.Then you cannot use shape information at all, and your argument has to apply just as well for a conical hat as for a coin. Does it?

You don't know whether you can say anything useful until you do the experiment. The idea is to make the best of what you have. It turns out that in the case of the coin it's accurate, and in the case of a conical hat it's not. But you cannot tell before you do the experiment.

Ken G said:
If that is true, you should have no expectation at all.

Why not? How do you know whether the information you have is enough for an accurate picture or not until you do the experiment? You cannot decide before the experiment that "I don't have enough information to say anything useful, so I won't say anything". You cannot know how accurate your picture of reality is before the experiment.

Ken G said:
Sure it does, the grouping loses no information, it merely groups it.

The grouping by itself doesn't lose information. But when you assign equal probability to the two groups you are ignoring the fact that you have some information about the difference between the two groups. Assume that I know that the coin will land heads or tails. Then I will expect them equally because I don't have information that tells me that I should expect one or the other more. But if you group the balls as {1} and {2,..,10}, then you do have information that allows you to distinguish between them, i.e. the fact that the second group has more than one possibility. If you assign equal probability to the groups now, you would be ignoring the fact that you have relevant information that distinguishes between the two groups.

Ken G said:
You have already done such a grouping, when you assumed I can distinguish a "1" from a "2" without knowing the orientation of the ball. You group all "1" results together, regardless of orientation. So how do you know there "really are" 10 possibilities here?

No, the orientation of the ball etc. are not information I have, nor am I collecting it. I am only concerned with the number of the ball. And I know that there are 10 balls. That is the information I have. I'm not imposing that on the experiment, I just know it. Maybe someone told me. I am concerned with using that information to form beliefs. Not belief as in "I know". But belief as in "I believe this to a certain degree, given this information".
Ken G said:
I don't need to forget that information, I am well aware of it. So what, what about your logic requires there be an equal number of balls in each group? To say that, you must be assuming that each ball is equally likely, then using that assumption to reason that we should expect each ball to be equally likely. That is precisely the argument you are giving.

No it's not. I don't need to group anything at all. I have been told that there are 10 possible types of balls, and nothing else. That is information that I may know from a previous experiment, or whatever. And I also know that when I perform a particular experiment, the possible outcomes are {1,2...,10}. The experiment does not measure the orientation of the balls. I only look at the number on the ball. So {1,2..,10} is the set of outcomes. There can be no other possible outcome that is not in this set. I am only looking at the number. What I look at is precisely the outcome of the experiment.
 
  • #127
dx said:
Why the billionth? Why not the trillionth? Give me precisely the piece of information from your set of observations that tells you where exactly you start "not believing at all". How do you determine that boundary?
It is a difficult boundary to determine, most likely we need a theory about that too. I'd draw on a combination of the dynamical range of previous successes and the precision of those successes to build expectation about the appropriate regime for sustaining precision going forward. I never said it was easy, but I did say that we have to do precisely this all the time. If you drive a car, for example, you know you could kill yourself, and you know the chances get worse at higher speed and in poorer conditions. Every time we drive, the conditions are a little different than they have ever been before. If it gets extremely foggy, those are very different conditions indeed-- so at what point do you judge the conditions are unsafe, on the grounds that your past safety record is not applicable in the new conditions? We are forced to make such determinations all the time, I am merely applying the same principle to physical theories.

But before you make the observation, there is no way to tell if your expectation will match reality. That in no way prevents you from using the information you have to make a guess.
Certainly, but what you are failing to do is to assess the validity of your guess. Some things really are pure guess, and have no more value than a guess. We need to recognize when that is the case, as it avoids the "revolution" problem. Every physics "revolution" was actually just a case of someone guessing wrong who had no business expecting to be right, pure and simple-- no revolution there when you see it in that light.

Even when you're predicting within the regime, you're still guessing. It may be a well supported guess, but it's still a guess.
Labelling everything a guess is of no value; calling something a guess means you have a low opinion of its likelihood of being correct. That's how most people use the term, and how I am using it too.
We can realize that the information is not sufficient to tell for sure, but that doesn't mean we cannot use the information we have to make a guess. And the best guess would be the one that takes into account all the information we have.
Yes, but I'm saying, what if the "best guess" really still ends up being just a guess? What good is it to identify it as the "best guess", if the best guess is still completely worthless? That's what I'm saying, I'm not disputing what the best guess is, only what the meaning of a best guess is. We always need to supplement the best guess with some concept of confidence in that guess, or it is a truly useless concept. No one is forcing us to guess at all-- the sensible approach is to test, not to guess. Ergo, we should set up tests that best narrow down the possibilities, completely irrespective of what any meaningless version of an "expectation" would say.

Let's return to the coin and/or the conical hat problem. I think I now see what you and granpa were saying. If there are two outcomes that you have chosen to distinguish, then your "best guess" is that if you pick one, you have a 50% chance of being right, if you have no other information. That's true, but that's very different from saying that the object is expected to demonstrate a 50/50 distribution of outcomes over many trials. The latter would not be the logical expectation, barring some other information (such as the symmetry of a coin versus a hat).

Furthermore, the 50% chance of being right only works if you make your choice at random. But if you follow some theory to arrive at your expectation, it is no longer a random choice, and therefore you can no longer say that you have a 50% chance of guessing correctly in the absence of any other information beyond the theory you used. I'm not sure why the whole Bayesian argument was brought up as relevant in the first place, but these are two important limitations to bear in mind whatever that purpose was.

Let me give another example to underscore this. Let's say there are three coins, two pennies and a quarter, in a jar, and someone is going to shake that jar until one coin comes out. You know nothing about the coins except that two are worth 1 cent and one is worth 25 cents. You are to receive the coin that shakes out-- how much money do you expect to receive, say over the course of 100 repeated trials? Now, if you pick randomly between "I get a penny" and "I get a quarter", you will be right half the time, and nothing else about the experiment makes any difference. If you can distinguish the pennies into penny #1 and penny #2, then you may instead decide to choose randomly among receiving each of the three coins, and now you'll be right 1/3 of the time, again independent of any other information you may have about the experiment. However, nothing that has been said so far can be used to form a meaningful expectation value for the amount you'll receive in 100 trials. There is simply no expectation that is not a pure guess, and logically you should not expect any such guess to converge on something correct over any number of trials. It simply doesn't mean a thing.
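The 50% hit rate in the jar example really is a property of the guessing strategy alone; a toy simulation (names and bias values are my own) shows it holds however biased the jar actually is:

```python
import random

def guess_accuracy(p_quarter, trials=100_000, seed=1):
    """Guess 'quarter'/'penny' uniformly at random against outcomes that
    are actually biased, with P(quarter) = p_quarter."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        outcome = 'quarter' if rng.random() < p_quarter else 'penny'
        guess = rng.choice(['quarter', 'penny'])
        hits += (guess == outcome)
    return hits / trials

# Accuracy stays near 1/2 no matter how biased the jar is, because
# 0.5*p + 0.5*(1-p) = 0.5 for every p:
for p in (0.1, 1 / 3, 0.9):
    print(round(guess_accuracy(p), 3))
```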
You don't know whether you can say anything useful until you do the experiment. The idea is to make the best of what you have.
But why is forming an expectation making the best of what you have? What possible benefit are you deriving from that? If the expectation is of no usefulness, then it being "best" among all the useless ways to form an expectation isn't saying much.
It turns out that in the case of the coin it's accurate, and in the case of a conical hat its not. But you cannot tell before you do the experiment.
Right, so it is an expectation that means nothing at all, there's no point in even forming it prior to the experiment.
Why not? How do you know whether the information you have is enough for an accurate picture or not until you do the experiment? You cannot decide before the experiment that "I don't have enough information to say anything useful, so I won't say anything". You cannot know how accurate your picture of reality is before the experiment.
Yet, as I pointed out in the driving in a fog example, we are called upon to do precisely that, all the time. We always need to have a concept of the confidence we can place in our expectations. What I'm saying is, we've been doing a particularly bad job of that in science when we treat all predictions on an equal footing, and that has led to all the "revolutions"-- and all the misunderstandings about the fallibility of science.

Assume that I know that the coin will land heads or tails. Then I will expect them equally because I don't have information that tells me that I should expect one or the other more.
That is not the logical expectation, you are asserting that we should expect a flat distribution after many trials. That will generally only be true in situations that exhibit a symmetry of some kind, and if you have no knowledge of any such symmetry, you should reject that expectation on the grounds that it is not generic. Instead, you should expect an unequal distribution after many trials, but you don't know which outcome will be the favored one, or by how much-- you simply have no expectation there at all. You still have a 50% chance of picking the right outcome if you do so randomly, but that tells you something about your guessing strategy, and nothing about the experiment.
But If you group the balls as {1} and {2,..,10}, then you do have information that allows you to distinguish between them, i.e. the fact that the second group has more than one possibility. If you assign equal probability to the groups now, you would be ignoring the fact that you have relevant information that distinguishes between the two groups.
Yet I can still be right 50% of the time by choosing randomly between those groups. That much is no different than if I randomly choose between 1-5 and 6-10, my chance of being right is 50% either way, completely independent of anything about the experiment other than that I can associate any outcome with one or the other of those groups. And I still can say nothing about what I expect the distributions after many trials to look like over the {1, 2-10} grouping versus the {1-5,6-10} grouping, other than the latter will have at least as high a left/right ratio, and likely higher, than the former. That's all that can be said, it is simply wrong to expect 1-5 to come up 5 times more often than 1, and it will generally not hold in the absence of a symmetry that leads us to expect it.

So {1,2..,10} is the set of outcomes.
If you choose it to be so. I can just as easily say there are two outcomes, {1} and {2-10}. That is the "information" I can go on, and apply all the same logic as your coin situation. If you doubt that, imagine a 20-sided die with two indistinguishable 1's and eighteen indistinguishable 2's on it, but I don't know the die has 20 sides. How many outcomes shall I count in that situation? That's all I've done with my grouping, and I will apply all your logic to that situation. Why would you expect that logic to work any better in your situation than in mine? Is yours more generic somehow?
 
Last edited:
  • #128
Ken G said:
Every physics "revolution" was actually just a case of someone guessing wrong who had no business expecting to be right
Prove it.
 
  • #129
Hurkyl said:
Prove it.
Fair enough, I'll be happy to. Of course it depends on what any individual would label a "revolution", so I'll just restrict to some of the more uncontroversial ones and we can always extend the list. My charge is to cite why someone was guessing wrong about something they had no reason to expect to be right about:

1) The geocentric universe: This was based primarily on concepts of gravity that would make a stationary point at the center special in some way, along with the absence of stellar parallax, indicating a stationary Earth if the stars are not too far away. So the guess was made that gravity really did pick out a special point at the center, and the stars really were not that far away. Neither of those guesses had the slightest shred of supporting evidence; no one had any business expecting them to be right, they merely served the purpose of unifying the existing data at the time. As I've said, one must try a little harder when the goal is not just to unify the data, but also to do it in a way that does not introduce unnecessary and unwarranted guesses.

2) Determinism: this is an element of Newton's formulation of physics, and worked extremely well in a wide array of situations ranging from gas beakers to the cosmos. However, it was never more than a useful model, and of course there is no way to test if determinism is "real", because no observations have suitable precision to be able to make that claim. It was just a guess that the universe "really is" deterministic-- and some would say not a very good guess at all. All we can really say is that application of deterministic models works well in situations in which they have been shown to work well, depending on the goal of the model and the details of the application. Extending it to a philosophical truth about reality, as by Descartes and others, was a pure guess and it is no surprise it has produced nothing of value in our understanding of our place in things. We had no business making that leap, and still don't, as we cannot support it.

3) Special Relativity: the subject of this thread. I maintain that we had no idea that Newton's laws would be extendable to arbitrary relative velocity, so it should come as no surprise that they cannot be. It was pure assumption, with zero observational backing, that reality did not embed a characteristic velocity scale that would be reflected in its dynamics. One does not assume that the absence of evidence is evidence of absence, and we had no business making the guess that we could.

4) General Relativity: it was pure guess that "action at a distance" was really a physically plausible thing. All we could really say is that whatever was mediating that action, it was happening very fast, and with some ability to accommodate motions in the future that were similar to motions in the past (both constant velocity and constant acceleration are in effect "accommodated" by gravity to mimic instantaneous response). It was pure guess that such an accommodation would extend to all types of dynamics, as required by action-at-a-distance. (As I recall, even Newton was bothered by that assumption--he did not expect it to hold true!) We had no business thinking action-at-a-distance was a fundamentally real property of the universe, we only knew that it worked in the situations tested, much like low-speed Newtonian mechanics worked without requiring a characteristic speed be embedded into reality.

5) Quantum mechanics: it was pure guess that the dynamics that ruled the cosmos could also rule an atom in a similar way. We had no idea what the scales of the forces would be, or if new forces would appear, or even completely new physics (like wave mechanics). It was a totally new regime on a vastly different scale and nothing similar was used to constrain any of Newton's mechanics, so there's no reason it should obey Newton's mechanics.

6) Wave/particle duality and the quantization of light: it was a pure guess, one we had no business making, that just because we observed a clear difference in the behavior of macroscopic particles and waves, that clear distinction would survive at all scales. Indeed, it is quite common for physicists today to take the opposite default assumption-- that everything that appears different on one scale can be unified at a deeper or higher energy scale. Neither of these assumptions has any basis, they are just guesses, and it was pure guess that waves and particles were fundamentally unconnected, just as it is pure guess today that the strong force and gravity are fundamentally connected. (The search for such a unification is good science-- the expectation that it is there is not.)

7) Spin: classical mechanics does not allow for particles to have internal degrees of freedom that store angular momentum, so it may be viewed as a "revolution" when it was discovered that they do. Again, classical mechanics never said anything about the internal degrees of freedom of fundamental particles, it was totally uncharted territory, and we had no business expecting the absence of strange new properties like spin because it was only our minds that separated particles from the rest of the universe. We now see the connections between the properties of particles and the symmetries of the universe, it was pure guess that there would be no such connection.

8) Dark matter: the easiest of all. If we use light to track matter, it is pure guesswork that we won't miss anything important. We never had any reason to expect the universe would not contain dark matter, the only real surprise is why the amount is not completely different from the amount of baryonic matter.

9) Dark energy: it was a complete guess that the gravity that works in galaxies would also work over the vast scales of the whole universe. We had no business guessing that gravity could only come from matter, simply because the only gravity we had seen came from matter. Again this is the difference between including what you have seen in your theory, and expecting that you have not left something out. What a silly thing for science to do.

10) Evolution: let's get a non-physics topic in here. It was pure guess that species had to be created by a supreme power, we had zero evidence for that scenario. So the discovery that natural processes could lead to speciation was just that-- a discovery. Seeing it as a "revolution" that threw off the old power structure is simply a recognition that the Emperor had no clothes, a fact I am asserting we should simply build into our understanding of how science works, until there is no need to see everything as a "revolution" instead of what it really is-- another piece of the puzzle.
 
Last edited:
  • #130
Ken G said:
Certainly, but what you are failing to do is to assess the validity of your guess. Some things really are pure guess, and have no more value than a guess.

Again, you have no way of telling whether your guess has any value or not before experiment. The validity of the guess can only be assessed with the information you receive from the experiment. If you use all the information you have to form the best possible opinion that the information allows (in cases when you must form an opinion), then you are doing the most logical thing. You may find after your experiment that your guess was not valid to an acceptable degree, and the new information will be used to update your guess. Of course, in all this the guesses also have degrees of belief that are determined by your current information.

Ken G said:
Labelling everything a guess is of no value, calling something a guess means you have a low opinion of its likelihood of being correct. That's how most people use the term, and how I am using it too.

I've used the words "expectation", "probability", "guess" and "opinion". If you don't like any of those words, then suggest a new one. The idea is "the opinion/guess/expectation/conclusion that is most supported by the information you have."

Ken G said:
Yes, but I'm saying, what if the "best guess" really still ends up being just a guess? What good is it to identify it as the "best guess", if the best guess is still completely worthless?

How do you know that the best guess is worthless? If guesses are worthless why do you make them? Guesses are not random fantasies. They are, ideally, logically drawn opinions that use all the information that is available to you. When you guess something, do you know whether it is a useless guess? No. You only know that after experiment, i.e. testing the guess. You can guess that a girl likes you, based on conversations/gestures/expressions etc. You are using the information available to make that guess. It could be wrong, but you don't know that until you ask her. You cannot decide that the guess is worthless. You can definitely believe to a certain degree that a particular guess has a low likelihood based on your information, but you cannot decide for certain that it is "wrong" or "useless" or "worthless".

Ken G said:
That's what I'm saying, I'm not disputing what the best guess is, only what the meaning of a best guess is. We always need to supplement the best guess with some concept of confidence in that guess, or it is a truly useless concept.

That's exactly what I've been trying to say.

Ken G said:
No one is forcing us to guess at all-- the sensible approach is to test, not to guess.

There are countless cases where a guess is crucial. In fact, every time you apply physics to predict something, you are guessing. Even in the tested regime. The guesses in the tested regime are very well supported, so your confidence in them is high, but they're still guesses. And by guess I don't mean "they have a low likelihood of being correct" as you seem to think. There are many guesses you make everyday that you believe almost as fact, and have good reason to do so.

Ken G said:
Let me give another example to underscore this. Let's say there are three coins, two pennies and a quarter, in a jar, and someone is going to shake that jar until one coin comes out. You know nothing about the coins except that two are worth 1 cent and one is worth 25 cents. You are to receive the coin that shakes out-- how much money do you expect to receive, say over the course of 100 repeated trials? Now, if you pick randomly between "I get a penny" and "I get a quarter", you will be right half the time, and nothing else about the experiment makes the slightest difference.

The problem with examples like this is that our familiarity with them misleads us. Let us analyze this one closely. Here's the information you have.

1. The jar has 3 coins.
2. two are worth 1 cent and one is worth 25 cents.

The experiment is to choose one coin, or shake out one coin. You have no information about this process of choosing, other than the fact that it must result in any one of 3 coins. So you must expect each equally. So even though you cannot distinguish between the two pennies, you must still expect to get a penny 2/3 of the time.
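To make the arithmetic of that assignment explicit, here is the principle-of-indifference bookkeeping described above (a sketch of the reasoning, not a claim about the actual shaking process):

```python
from fractions import Fraction

# Principle-of-indifference assignment: with no information about the
# shaking process, each of the 3 coins is assigned probability 1/3.
values = [1, 1, 25]          # two pennies and a quarter, in cents
p_coin = Fraction(1, len(values))

p_penny = sum(p_coin for v in values if v == 1)       # 2/3
expected_per_trial = sum(p_coin * v for v in values)  # (1 + 1 + 25)/3 = 9 cents
expected_100_trials = 100 * expected_per_trial        # 900 cents

print(p_penny, expected_per_trial, expected_100_trials)
```

Whether that 9 cents per trial means anything before the experiment is done is, of course, exactly the point under dispute.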
Ken G said:
If you can distinguish the pennies into penny #1 and penny #2, then you may instead decide to randomly choose between receiving each of the three coins, and now you'll be right 1/3 of the time, again independent of any other information you may have about the experiment.

See above.

Ken G said:
However, nothing that has been said so far may be used to form a meaningful expectation value for the amount you'll receive in 100 trials. There is simply no expectation that is not a pure guess, and logically you should not expect any such guess to converge on something correct over any number of trials. It simply doesn't mean a thing.

Yet a meaningful expectation can be formed. Before you do the 100 trials you have a certain amount of information. That information can be used to form a meaningful guess (not an arbitrary guess, a guess that uses the information you have). You are likely to obtain information during the 100 trials that could considerably change your guess, but before you do the trials you have to go by the information you have. The detailed analysis of problems like this, to determine the guess that uses all the relevant information you have and nothing more and also how strongly you must believe in this guess, is difficult generally, but it can be done.

Ken G said:
But why is forming an expectation making the best of what you have? What possible benefit are you deriving from that?

You are deriving whatever benefits that the information you have can give you. It may not be much, or it may be a lot. You don't know until you try.

Ken G said:
If the expectation is of no usefulness, then it being "best" among all the useless ways to form an expectation isn't saying much.

Right, so it is an expectation that means nothing at all, there's no point in even forming it prior to the experiment.

Again, you don't know if the expectation is "of no usefulness". You have made it as useful as it can be by using the information you have. What more can you ask? If you don't want to form any expectation at all before the experiment, then you are saying you don't care about the information that you currently have. You want to have certain answers to everything, and you will only believe something once it has been verified exactly, which of course can never be done. So, in effect, you are saying physics is useless, since physics itself is just a system of belief that is based on experimental information; various beliefs that you believe to varying degrees due to varying degrees of informational support.

Ken G said:
We always need to have a concept of the confidence we can place in our expectations. What I'm saying is, we've been doing a particularly bad job of that in science when we treat all predictions on an equal footing, and that has led to all the "revolutions"-- and all the misunderstandings about the fallibility of science.

You may be right. I don't know what scientists in the 19th century thought about the validity of their theories.
Ken G said:
If you choose it to be so. I can just as easily say there are two outcomes, {1} and {2-10}. That is the "information" I can go on

You cannot choose what information you have. The information is given to you, or has been collected by you before. If you know that there are 10 numbered balls, you cannot just say, "I choose to go by the information that there are only two outcomes".

Ken G said:
Imagine a 20-sided die with two indistinguishable 1's and eighteen indistinguishable 2's on it, but I don't know the die has 20 sides. How many outcomes shall I count in that situation? That's all I've done with my grouping, and I will apply all your logic to that situation.

If I don't know that there are 20 sides, then I cannot use that information to form my beliefs. If all I knew was that if I pick up your die and look at it, I will see either 1 or 2, then I would assign equal likelihood to both. You must realize that until after you've done this experiment, you don't know to what degree these beliefs are accurate.
 
Last edited:
  • #131
Not if the best you can do with the information is recognize that it is insufficient to draw any conclusions whatsoever. That is the actual logical thing to do.

you can't draw any conclusion about the probability but you can draw a conclusion about the expectation.

the expectation is not a guess. bayesian probability has been precisely mathematically defined.
 
  • #132
Ken G said:
Fair enough, I'll be happy to.
You're missing something very important -- justifications for your assertions. What criteria are you applying to judge, for example, that we had no business making inferences based on all of the empirical support for the wave nature of light?

I notice that you're also making presumptions even in your critiques -- for example
it was pure guess that "action at a distance" was really a physically plausible thing. All we could really say is that whatever was mediating that action,​
you seem to have presumed that all actions are 'mediated'. Why is that justified?
 
  • #133
granpa said:
you can't draw any conclusion about the probability but you can draw a conclusion about the expectation.
The mean is something 'about the probability'.

bayesian probability has been precisely mathematically defined.
Yes -- the a posteriori values are defined in terms of the a priori values. Without assigning a priori probabilities, you cannot have a posteriori values.

Of course, the Bayes factor generally wouldn't require such a choice. Is that to what you're referring?
 
  • #134
The mean is something 'about the probability'.

the expectation is not the same as the mean. that's where you are going wrong.

if someone offers me a bag with an unknown number of black and white marbles in it and asks me to predict what color marble i would draw from it at random then my expectation is that i can predict it 50% of the time by choosing black or white at random. the probability is unknown but if enough random people make me this offer with enough random bags of marbles then i expect the probability to average 50%. it's the probable probability. that is totally different from the probability.

but as i keep drawing marbles from any one bag i gain experience. suppose i keep drawing black marbles. then with each draw the bayesian probability (the expectation) that i will draw another black marble goes up. with an infinite number of trials the bayesian probability = the probability.
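One standard way to make that updating precise is Laplace's rule of succession. The sketch below assumes a uniform prior over the bag's black fraction and draws with replacement, neither of which is stated in the post:

```python
from fractions import Fraction

def next_black_probability(blacks_seen, total_drawn):
    """Laplace's rule of succession: with a uniform prior over the
    unknown fraction of black marbles, the posterior probability that
    the next draw is black is (blacks_seen + 1) / (total_drawn + 2)."""
    return Fraction(blacks_seen + 1, total_drawn + 2)

# before any draws, the bayesian probability is 1/2
print(next_black_probability(0, 0))      # 1/2
# each successive black draw pushes it up
for n in range(1, 6):
    print(next_black_probability(n, n))  # 2/3, 3/4, 4/5, 5/6, 6/7
# with ever more all-black draws it approaches 1, the true
# probability for an all-black bag
```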
 
Last edited:
  • #135
granpa said:
The mean is something 'about the probability'.

the expectation is not the same as the mean.
I'm having a mental blank then -- the mean is the only technical meaning for the word 'expectation' that I can imagine at the moment. Could you spell out for me what you mean?
 
  • #136
the mean is the only technical meaning for the word 'expectation' that I can imagine at the moment.

https://www.physicsforums.com/showthread.php?p=1738079&highlight=expectation#post1738079

"But on what basis do you say that it is equally likely?

on the basis that there are 2 sides. if you knew nothing else about it you would still expect to be able to predict the outcome 50% of the time just by choosing at random. but notice that 50% is not the probability. the probability is unknown. the probability could be 100% or 0% or anywhere in between. if its not the probability what is it? the logical thing to call it would be the expectation but that word is already taken. so its called the bayesian probability."
 
  • #137
dx said:
Again, you have no way of telling whether your guess has any value or not before experiment.
Of course we do, and we do it all the time. Every single moment of our lives is a unique experiment, it has never happened before and never will again. Yet we are not forced to live our lives with "no way of telling" the value of the predictions we use to function.

The validity of the guess can only be assessed with the information you receive from the experiment.
I'm talking about justifiable confidence in a "guess" (your usage), not "validity".

I've used the words "expectation", "probability", "guess" and "opinion".
As I said, "guess" carries the connotation that the prediction is not at all reliable. That is the common usage, if you mean something else by it, it is you who need the new word. Further, I have noted that the concept of "expectation" is completely useless without an associated concept of "justifiable confidence in the expectation". That seems clear enough in how we function daily.

How do you know that the best guess is worthless?
Again, we are called upon to make such assessments constantly all the time. Why object now? If I ask you, what is your best guess for the team that will win the World Series in 2020, what is your answer? What is the value of that "expectation", and what odds would you take? How can you decide that, if you claim there's no way to assess the usefulness of the expectation?

If guesses are worthless why do you make them?
That's the point. We should not-- unless we are also willing to assess the degree of reliability of the guess. Otherwise, in betting we would lose our money, and in science, we would engender misconceptions about what science can tell us.

When you guess something, do you know whether it is a useless guess? No.
I can certainly evaluate the usefulness of the guess, and am called on to do so all the time.
There are countless cases where a guess is crucial. In fact, every time you apply physics to predict something, you are guessing.
Again, you are using the term "guess" as if it had no different meaning for predicting the World Series winner in 2020 versus predicting that a ball will fall if I release it. I am not using "guess" that way, I am using the standard meaning of the term.

And by guess I don't mean "they have a low likelihood of being correct" as you seem to think.
I am well aware you are not using the standard meaning of the word. The real question is, why do you deny that the reliability of a prediction can be assessed? Why, when the reliability can be assessed to be low, do you still think it is important to form an "expectation"? That makes no sense, at some point the reliability is so low that there is simply no use for the prediction in the first place, except as a kind of "benchmark" to know when a theory has broken down. That, for example, is how one debunks astrological predictions.

There are many guesses you make everyday that you believe almost as fact, and have good reason to do so.
And none of them count as "guesses" in the standard usage.
You have no information about this process of choosing, other than the fact that it must result in anyone of 3 coins. So you must expect each equally.
No, there is no such requirement on your expectations. Indeed, it is far more reasonable to expect different frequencies of occurrence, but different in an unknown way. For example, if I say I will do 3 million trials, with Poisson noise of something like 1,700 outcomes, and I allow you to win the quarter if it either appears within 5,000 outcomes of 1 million occurrences, or if it does not, on what basis can you say it is logical to choose the former? There is no basis for that expectation, if you know nothing about how the coin is chosen in each trial.
So even though you cannot distinguish between the two pennies, you must still expect to get a penny 2/3 of the time.
But if all I tell you is that the outcome is either a penny or a quarter, you would apply your logic to expect to get a penny 1/2 the time. I, on the other hand, claim that is a meaningless expectation.

Before you do the 100 trials you have a certain amount of information. That information can be used to form a meaningful guess (not an arbitrary guess, a guess that uses the information you have).
The problem is, calling a guess "meaningful" any time it is not "arbitrary" is an extremely low standard for meaning, and in practice will be a worthless standard. It's not so much a problem of how little information you possess, or even how much information you don't possess, it's more an issue of how little information you possess about the information you don't possess. A probability is meaningful when you have clear information about what you don't know, but when you don't even have that, there is no meaning to a probability estimate. When there is no meaning to a probability estimate, there is also no usefulness to making predictions-- other than as benchmarks to tell when a prediction failed (again as an assessment of a predictive scheme, like astrology).

Again, you don't know if the expectation is "of no usefulness".
I don't see why you hold to that position, in contradiction to our almost daily application of its inverse.
You have made it as useful as it can be by using the information you have.
Of course, and that usefulness might well be squat, as in the case of predictions overturned by scientific "revolutions".

What more can you ask?
I can ask to restrict to expectations that actually have some merit behind them, like the ones made by a person building a bridge, or by the Wright brothers as they tried to make an airplane-- and unlike the expectation that reality is fundamentally deterministic based on the success of deterministic models in various situations, or the expectation that reality actually fragments into "many worlds" based on the success of models involving unitary time evolution of closed systems in between couplings to an experimental apparatus. These are classic examples of expectations that are useless by virtue of their unreliability.

If you don't want to form any expectation at all before the experiment, then you are saying you don't care about the information that you currently have.
It means that I do not think the information I have is useful enough to form a useful expectation. If that's your meaning of "don't care", then yes.

So, in effect, you are saying physics is useless, since physics itself is just a system of belief that is based on experimental information; various beliefs that you believe to varying degrees due to varying degrees of informational support.
I'm certainly not saying physics is useless, on the contrary I'm pointing out a key requirement for it to be useful-- the ability to gauge the reliability of an expectation. It is your version that would be useless, wherein all we can ever do is form the best guess we can from the information we have, and know nothing about the reliability of that expectation until we do the experiment.

You may be right. I don't know what scientists in the 19th century thought about the validity of their theories.
A science historian might see it differently, but it is my general impression that scientists throughout history, from the Greeks right up to today, have tended to perceive the "body of scientific knowledge" with a considerable degree of certainty. That was valid then, as now, only insofar as we are keeping track of the experimental regimes we actually have direct knowledge of. In other words, we must always recognize the difference between the statements "to be wrong, that theory would require some experiment that hasn't been done to come out different from X", versus saying, "to be wrong, that theory would require some experiment that has been done to come out differently from X, where X is the result we got."
 
  • #138
Hurkyl said:
You're missing something very important -- justifications for your assertions. What criteria are you applying to judge, for example, that we had no business making an inferences based on all of the empirical support for the wave nature of light.
My criterion for judging that is the absence of a justification for concluding the inverse. In other words, who needs such a criterion more, the person who imagines that successfully modeling some dynamics with wave mechanics implies that light is fundamentally a wave and not a particle (which is what I presume you mean by "wave nature", because it was obvious that light exhibits wave properties), or the person who notes that such success demonstrates no such thing? It is not my position that requires such a criterion-- I am pointing to the absence of a justification for the competing idea. Ergo, it is actually your claim that I have insufficient justification that is the claim with insufficient justification here-- my stance is simply the skeptical one.

I notice that you're also making presumptions even in your critiques -- for example
it was pure guess that "action at a distance" was really a physically plausible thing. All we could really say is that whatever was mediating that action,​
you seem to have presumed that all actions are 'mediated'. Why is that justified?
Pick any word you like-- all I mean by "mediated" is that the action "acts" in some way, which has to be instantaneous in the case of action at a distance. It hardly seems a "presumption" that an action must act.
 
Last edited:
  • #139
granpa said:
on the basis that there are 2 sides. if you knew nothing else about it you would still expect to be able to predict the outcome 50% of the time just by choosing at random. but notice that 50% is not the probability. the probability is unknown.
Right, the 50% tells you everything about your guessing strategy (split all the possible results into two distinct classes and guess one of the two at random), and nothing at all about either the experiment or even the possible outcomes of the experiment, other than that it is possible to distinguish them into two exclusive all-encompassing groups and choose the correct group 50% of the time with that strategy.
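A quick simulation illustrates the point: guessing uniformly at random succeeds half the time no matter how biased the actual outcome is (a sketch; the bias values chosen below are arbitrary):

```python
import random

def random_guess_success(p_heads, trials=100_000, seed=0):
    """Guess heads/tails uniformly at random and count how often the
    guess matches a coin whose true heads probability is p_heads."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        outcome = rng.random() < p_heads   # the experiment, bias p_heads
        guess = rng.random() < 0.5         # the strategy, always 50/50
        hits += (outcome == guess)
    return hits / trials

# the success rate hovers near 1/2 for any bias, even a two-headed coin
for p in (0.1, 0.5, 0.9, 1.0):
    print(p, random_guess_success(p))
```

Algebraically, P(match) = p(1/2) + (1-p)(1/2) = 1/2, so the 50% reflects only the guessing strategy and carries no information about the experiment.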
if its not the probability what is it?
It's just the result of a choosing strategy, call it a game theory. It doesn't connect to the experiment under study in any useful way.
 
  • #140
i never said otherwise.
 