"Classical Physics Is Wrong" Fallacy - Comments

In summary: As we learn more about the universe, our theories will need to keep up. This is something that new theories always need to address. The new theory must explain the old results, but that does not mean it will "converge" at some point to the old one. There is no connection between the two things.
  • #71
Ziang said:
Physicists say that SR expressions have been verified with accelerators. So, if there is any theory that gives a different expression, then it will not agree with the results from the accelerators. This means that there is no way a more accurate theory could exist.
A more accurate theory must reduce to the SR expressions under the conditions covered by the accelerator experiments. It may diverge from SR in other conditions.
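Schematically (a purely illustrative form, not any specific proposed theory), such a more accurate theory might, for example, modify the energy-momentum relation by a small correction suppressed by some enormous energy scale E*:

$$E^2 = p^2 c^2 + m^2 c^4 + \xi\,\frac{E^3}{E_*}$$

Here ξ and E* are hypothetical parameters. At accelerator energies E ≪ E* the extra term is unmeasurably small and the familiar SR expression is recovered, so the theory agrees with the accelerator results while remaining free to diverge from SR far outside that regime.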
 
  • #72
Ziang said:
I agree with the author. However, he wrote, "...if there are more general and more accurate theories beyond QM, SR/GR, ...".
Physicists say that SR expressions have been verified with accelerators. So, if there is any theory that gives a different expression, then it will not agree with the results from the accelerators. This means that there is no way a more accurate theory could exist.
These theories give you answers to questions SR cannot answer.
SR cannot tell you what happens with gravity. GR can. In the absence of gravity, GR is identical to SR.
SR cannot tell you what happens if fields are quantized. QFT (quantum field theory) can. If the quantization is irrelevant, QFT is identical to SR.
 
  • #73
Dale said:
By working the math. A lot of things that we may verbally or philosophically consider different are mathematically equivalent. You may think that you are drawing an important distinction between Newtonian and Relativistic mechanics, but in the v<<c limit this distinction is only in your mind and does not appear in either the math or experiment.
That the distinction does not appear in either the math or experiment may not be sufficient to conclude that they are the same theory, or share the same fundamental concepts or assumptions.
 
  • #74
Ziang said:
That the distinction does not appear in either the math or experiment may not be sufficient to conclude that they are the same theory, or share the same fundamental concepts or assumptions.

This is a very puzzling statement.

The argument isn't about "fundamental concepts" or "same assumptions". They are not the same! The postulates of SR are distinctly different from the classical Galilean transformation. That isn't the issue!

But when you have derived, under a certain limit, the same mathematical form, then that theory can be logically shown to reproduce all the results that came out of that mathematical form!

I'm a bit surprised that this is even an issue. This is done in mathematics all the time! We manipulate our differential equations, for example, so that they can be put into one of the known forms that results in one of the special functions. As soon as the mathematical form matches a known form, the work of showing what the solutions and the behavior of the solutions should be is done, because it has already been solved!

Zz.
 
  • #75
Ziang said:
That the distinction does not appear in either the math or experiment may not be sufficient to conclude that they are the same theory, or share the same fundamental concepts or assumptions.
It is sufficient to conclude that the differences between the concepts and assumptions are scientifically, experimentally, and physically irrelevant. The remaining differences are only in our mind, not in nature; the assumptions do not matter to nature regardless of how important we perceive them to be. SR with v<<c is physically the same as classical physics in every way which can be tested.

This is related to the difference between a theory and an interpretation. If you have general questions about the difference between a theory and an interpretation, please start a new thread rather than detract too much from this thread.
 
  • #76
Thank you for the Insight article.

I apologise to everyone if this is a dumb question, but would you say Newton's laws would further reduce to Kepler's laws under some limiting case?
 
  • #77
Hypercube said:
Thank you for the Insight article.

I apologise to everyone if this is a dumb question, but would you say Newton's laws would further reduce to Kepler's laws under some limiting case?

No. Kepler's laws are an "application" of Newton's laws, the same way the kinematics of a mass sliding down an inclined plane is an application of Newton's laws.

Zz.
 
  • #78
Classical theory: wrong, but by so very, very little in everyday situations that it is not worth the extra trouble of using the correct theory.
And can we say the current theories are correct? As far as we know they are, but as far as they knew at the end of the 19th century, the classical theories were too. Let us hope not, as where would be the fun in that?
 
  • #79
Dr Whom said:
Classical theory: wrong, but by so very, very little in everyday situations that it is not worth the extra trouble of using the correct theory.
And can we say the current theories are correct? As far as we know they are, but as far as they knew at the end of the 19th century, the classical theories were too. Let us hope not, as where would be the fun in that?

Since when is the limit of an approximation considered to be "wrong" when it is used in that limit? That's like saying there is no point in even doing linear algebra or many-body physics, because these are all approximations (even valid approximations). If that's the case, then every single thing we use now is wrong, because I can guarantee you that no one has ever managed to completely solve any of the many-body equations that describe the behavior of your semiconductors, the behavior of current in a conductor, etc... etc.

The problem here is that among the general public, the word "wrong" has a very strong and distinct connotation. When something is wrong, you don't use it, or you don't do it. It is very black-and-white. Yet, this is not what is meant here, and in that sense, classical physics is definitely not wrong. So in that context, claiming that classical physics is wrong is, in fact, wrong!

Zz.
 
  • #80
Classical theories are approximations, not limits. Ergo, saying classical physics is wrong is wrong is wrong!
 
  • #81
Dr Whom said:
Classical theories are approximations, not limits.
If you have an "approximation", this suggests that it is an approximation of something. The aim of physical theories is to describe observations. If they do, they are good theories. In certain limits, classical theories work fine and they are good theories in those limits. What is meant by a theory being recovered as a limit of a different theory is that predictions agree to leading order as some model parameter is taken to zero. This is standard nomenclature. When the parameter is not exactly zero, but relatively close, yes, you can use the classical theory as an approximation of the full theory, but this is conceptually different.
Ergo, saying classical physics is wrong is wrong is wrong!
This statement just makes it clear that you have missed the entire point of the discussion, which is that what laymen consider the word "wrong" to mean is fundamentally different from how it is being used here.
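To make "predictions agree to leading order as some model parameter is taken to zero" concrete, here is the standard textbook example: expanding the relativistic kinetic energy in the small parameter v/c gives

$$E_k = (\gamma - 1)mc^2 = mc^2\left(\frac{1}{\sqrt{1 - v^2/c^2}} - 1\right) = \frac{1}{2}mv^2 + \frac{3}{8}\,\frac{mv^4}{c^2} + \mathcal{O}\!\left(\frac{v^6}{c^4}\right)$$

As v/c goes to zero, only the Newtonian term ½mv² survives, and the first correction is suppressed by a factor of order (v/c)².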
 
  • #82
Dr Whom said:
Classical theories are approximations, not limits
Yes, they are. Did you not read the article?

Also, approximations are not inherently wrong in science.
 
  • #83
Closed temporarily for moderation. Hope to reopen soon.

Edit: the thread is reopened after major cleanup. There will be no further discussion of flat Earth theory, which is a conspiracy theory, not a scientific theory, and thus forbidden by the rules. Everyone must find a different way of making their points than dragging this forum into a discussion of conspiracy theories. The scientific approximation of small sections of the Earth as flat was already addressed fully in post #2.
 
  • #84
Ibix said:
If you are traveling at 30mph down the road and a car passes you going 30mph faster, you can calculate its road speed using Newtonian or Einsteinian relativity. You will get the same answer to the precision you can plausibly measure. You need to take into account the velocity variation from flies crashing into the front of the car long before you need to care about Einstein. So simplifying the maths and using Newton isn't wrong.
Simplifying the math by using Newton is not wrong. But the justification you have given depends on Newton being wrong. Because you estimate the error you make by using Newton instead of Einstein. Without acknowledging that this is an error (and this can be an error only if Newton is wrong) you cannot estimate it; thus it makes no sense to claim that it is small, even compared with the flies.

Arguing that a given approximation is correct presupposes that one acknowledges it is only an approximation, and that there is something else which gives better results. And once another theory gives better results, it is clear that the approximation itself has to be wrong.
 
  • #85
Maximilian said:
and that there is something else which gives better results.
This is a fallacy based on an idealisation. When the uncertainties in the incoming variables are much larger than any difference between the two theories, they are for all purposes equivalent in the given limit and there is no possibility to say that one is ”better” in that limit.

Maximilian said:
And once another theory gives better results, it is clear that the approximation itself has to be wrong.
This is also a bit naive and idealised. First of all, in the limiting case the other theory is not ”better”. Both theories agree perfectly up to experimental precision. Second, if you are using ”wrong” in the global sense, that is not what the article is about.
 
  • #86
Orodruin said:
This is a fallacy based on an idealisation. When the uncertainties in the incoming variables are much larger than any difference between the two theories, they are for all purposes equivalent in the given limit and there is no possibility to say that one is ”better” in that limit.
This is also a bit naive and idealised. First of all, in the limiting case the other theory is not ”better”. Both theories agree perfectly up to experimental precision. Second, if you are using ”wrong” in the global sense, that is not what the article is about.
True and false, right and wrong are by their nature global. It is about the existence of an error, not about the size of the error.

And even if there is some equivalence in the limit, it does not change anything. We do not live in the limit, only quite close to this limit, and at every distance from the limit, however small, we can say which is better. Maybe we cannot measure the difference, but it exists, and we know, from regions far away from the limit, which theory is better.

One should not change the language without necessity. There are enough words to explain that in some circumstances an approximation is viable, appropriate, acceptable, accurate enough, and so on. To use, instead, words which have a different, global meaning, like true and false, right and wrong, distorts the language, makes it less precise.

Classical mechanics is wrong. It is falsified by a lot of observations and experiments. Period. That it remains nonetheless useful, as an approximation, is fine, but in no way changes the fact that it is wrong.

And it is the recognition that it is wrong which motivates scientists to find better theories, theories which will hopefully be closer to the truth. If we did not accept that not only classical mechanics, but even GR and QFT are wrong (which follows from the infinities and singularities they have), there would be no point in searching for a theory of quantum gravity.
 
  • #87
Maximilian said:
Because you estimate the error you make by using Newton instead of Einstein.
Do it the other way round then. In either case the difference is one part in ten to the fifteen. At that level of precision the car isn't a rigid system with a single velocity. Ok, we could be talking about the velocity of the centre of mass of the car, but that is neither constant nor even a straight line. And it may or may not be the relevant velocity depending on why we want to know the relative velocity.

The whole conceptual basis of the question falls apart long, long before I need to worry about whether Einstein or Newton gives a more precise prediction. So both give the same answer to any precision it makes sense to ask. Because we are well within the domain of applicability of Newtonian theory.
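A quick back-of-the-envelope check of that figure, sketched in Python (the numbers and variable names are just illustrative):

```python
# Rough sketch comparing Galilean and special-relativistic velocity composition
# for the 30 mph example above (values and variable names are illustrative).
c = 299_792_458.0        # speed of light in m/s
mph = 0.44704            # one mile per hour in m/s

u = 30 * mph             # your speed relative to the road
v = 30 * mph             # the overtaking car's speed relative to you

galilean = u + v                            # Newtonian answer
einsteinian = (u + v) / (1 + u * v / c**2)  # SR velocity-addition formula

print(f"Galilean:      {galilean!r} m/s")
print(f"Relativistic:  {einsteinian!r} m/s")
print(f"Fractional difference: {(galilean - einsteinian) / galilean:.2e}")
# the fractional difference comes out around 2e-15
```

The fractional difference is of order 10⁻¹⁵, far below anything a speedometer (or the flies) could possibly resolve.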
 
  • #88
Ibix said:
Do it the other way round then. In either case the difference is one part in ten to the fifteen.
Yes, but this is not the question.

If I say that SR can be applied even to a car, given that the error we make by using SR would be only one part in ten to the fifteen, in comparison with classical mechanics, what would this be? Obviously nonsense.
Ibix said:
The whole conceptual basis of the question falls apart long, long before I need to worry about whether Einstein or Newton gives a more precise prediction. So both give the same answer to any precision it makes sense to ask. Because we are well within the domain of applicability of Newtonian theory.
But you have no chance of finding a domain of applicability of Newtonian theory without acknowledging that Newtonian theory is wrong. Before 1905 there was no such animal as a domain of applicability of Newtonian theory, because it was not known that Newtonian theory is wrong. It appeared only after it was clarified that Newtonian theory is wrong, which means that there exist regions outside the domain of applicability.
 
  • #89
Maximilian said:
Without acknowledging that this is an error (and this can be an error only if Newton is wrong)

The word "error" has a precise technical meaning when speaking of approximations. That meaning does not have the implications you are claiming here.

Maximilian said:
Arguing that a given approximation is correct

Nobody is arguing that the Newtonian approximation is "correct". Nor is anyone arguing that it is "wrong". The whole point is that "correct" and "wrong", binary categories, are not useful categories to use when talking about scientific theories and approximations. Much more useful are "less accurate" and "more accurate", which again have precise technical meanings in terms of how much error (using the technical meaning of that term, as above) there is in your predictions vs. the actual data.

If you have not read the Asimov essay that @Nugatory linked to in post #2, I strongly suggest that you do so, because it does a great job of discussing exactly this point and dispelling the kind of common confusion you are displaying here.
 
  • #90
Maximilian said:
Newtonian theory is wrong, which means that there exist regions outside the domain of applicability.

If this is what you mean by "wrong", then you are using the word in a very different sense from its usual sense.
 
  • #91
Maximilian said:
True and false, right and wrong are by their nature global. It is about the existence of an error, not about the size of the error.
No, this can only be correct in a philosophical sense and that has absolutely nothing to do with what is being discussed. Unless you are afraid that a bridge will break down because the engineers building it did not account for relativistic corrections, you really have no case here, because that is what the article is about. You are building a strawman argument. Empirical science is not about being right or wrong, it is about finding as good a description of how nature behaves as possible.

Maximilian said:
If we did not accept that not only classical mechanics, but even GR and QFT are wrong (which follows from the infinities and singularities they have), there would be no point in searching for a theory of quantum gravity.
This statement is just absurd. You are essentially saying that if we had accepted Newtonian mechanics as false there would have been no point in developing relativity.
Maximilian said:
But you have no chance of finding a domain of applicability of Newtonian theory without acknowledging that Newtonian theory is wrong. Before 1905 there was no such animal as a domain of applicability of Newtonian theory, because it was not known that Newtonian theory is wrong. It appeared only after it was clarified that Newtonian theory is wrong, which means that there exist regions outside the domain of applicability.
Again. Strawman and a failure to understand what the article is about.
 
  • #92
PeterDonis said:
If this is what you mean by "wrong", then you are using the word in a very different sense from its usual sense.
Given that I'm not a native speaker, this is conceivable. I know that there is also a moral meaning, right or wrong, which is not present for true and false, but in a physics discussion, and in particular in the article, this plays no role. Google Translate gives "falsch" as the main translation, which back-translates into "wrong, false, incorrect, counterfeit, mistaken, erroneous". This does not look like my use is "a very different sense".
PeterDonis said:
The word "error" has a precise technical meaning when speaking of approximations. That meaning does not have the implications you are claiming here.
Explain the difference. If I use an approximation instead of the correct theory, the consequence is a difference between my computation and the value the correct theory would give. This difference is part of the error I make, is it not? There are, of course, also other sources of error, but this error is the one relevant if one discusses an approximation.
PeterDonis said:
Nobody is arguing that the Newtonian approximation is "correct". Nor is anyone arguing that it is "wrong". The whole point is that "correct" and "wrong", binary categories, are not useful categories to use when talking about scientific theories and approximations. Much more useful are "less accurate" and "more accurate", which again have precise technical meanings in terms of how much error (using the technical meaning of that term, as above) there is in your predictions vs. the actual data.
I disagree. I think these binary notions, which distinguish the theories as a whole, are very important.
PeterDonis said:
If you have not read the Asimov essay that @Nugatory linked to in post #2, I strongly suggest that you do so, because it does a great job of discussing exactly this point and dispelling the kind of common confusion you are displaying here.
I have read it, and even quoted it in one of the many deleted postings.
To quote again: "John, when people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together."

This statement is using "wrong" in the same sense I have used it. What I argue against is to name "When people thought the Earth was spherical, they were wrong" a fallacy.
Orodruin said:
No, this can only be correct in a philosophical sense and that has absolutely nothing to do with what is being discussed. Unless you are afraid that a bridge will break down because the engineers building it did not account for relativistic corrections, you really have no case here, because that is what the article is about. You are building a strawman argument. Empirical science is not about being right or wrong, it is about finding as good a description of how nature behaves as possible.
First, I completely disagree philosophically. Science is about right and wrong (better: true or false, to avoid the moral aspects, which are not present here). Empirically falsified theories are rejected because they are false. That they may nonetheless be used for approximate computations is fine and useful, but the scientific problem of finding a theory which is not falsified remains.
Orodruin said:
This statement is just absurd. You are essentially saying that if we had accepted Newtonian mechanics as false there would have been no point in developing relativity.
What is absurd is your interpretation, because I'm saying exactly the opposite. If we had accepted Newtonian mechanics as true, and therefore ignored the open problems which had the potential to cause doubt (like MMX, Mercury perihelion), there would have been no point in developing relativity.
Orodruin said:
Again. Strawman and a failure to understand what the article is about.
No. The argument of the article simply fails to prove what is claimed to be proven, namely that "classical mechanics is wrong" is a fallacy.

All it shows is that "There is somehow a notion that SR, GR, and QM have shown that classical physics is wrong, and so, it shouldn’t be used." But this is very different from "classical physics is wrong" being a fallacy. Classical physics is wrong, as any approximation, but it can be used as an approximation whenever the approximation error is sufficiently small.
 
  • #93
Maximilian said:
it is clear that the approximation itself has to be wrong.
I disagree completely. What justification do you have for calling an approximation “wrong” when many experiments clearly show that the approximation is valid? You are not the judge of right and wrong in science: experiment is. In the domain where the approximation matches experiment it is scientifically validated. It is demonstrably not wrong.
 
  • #94
Maximilian said:
Classical physics is wrong, as any approximation, but it can be used as an approximation whenever the approximation error is sufficiently small.
Special relativity is only valid where gravitational effects are negligible. So it is wrong by your definition.

General relativity breaks down somewhere on the way to the inside of a black hole. So it is wrong by your definition.

Quantum theory breaks down somewhere on the way to cosmological scales because the cosmological constant is tiny. So it is wrong by your definition.

The same will be true of a successor theory. Either it will predict its own breakdown (like relativity) or it won't (like Newton). And there will always be a regime we've never tested it in. So it will be wrong by your definition. Or so we will have to suppose.

In fact, all of scientific theory is wrong by your definition. This does not seem like a helpful definition to me.
 
  • #95
Dale said:
I disagree completely. What justification do you have for calling an approximation “wrong” when many experiments clearly show that the approximation is valid? You are not the judge of right and wrong in science: experiment is. In the domain where the approximation matches experiment it is scientifically validated. It is demonstrably not wrong.
It is demonstrably wrong, because otherwise it would not be named an approximation but a viable physical theory. If it is not, but is only useful as an approximation of some other viable physical theory, that means that it has been falsified by some empirical observation.
That it may be used, under some circumstances, as an approximation does not make a wrong theory true. And even in cases where the approximation is good as an approximation, one can compute (at least in principle) the difference between the approximation and the viable theory. This is, according to the viable theory, the error made by the approximation. It is non-zero, otherwise it would not be named an approximation but an exact solution. So we can be sure that we have a non-zero error.
We use the approximation because we know (or have estimated) that the error is below the accuracy we need in the application in question.
I judge the use of language. The words used have a meaning, and the meaning clearly gives some information about what experiments have told us. Namely, that a theory which is used only as an approximation, and which is not named a viable theory, has been falsified by observation (or is not viable for other reasons, like internal inconsistency, as with QFT on a curved background, or because of infinities, as with GR).

Ibix said:
Special relativity is only valid where gravitational effects are negligible. So it is wrong by your definition.
General relativity breaks down somewhere on the way to the inside of a black hole. So it is wrong by your definition.
Correct.
Ibix said:
Quantum theory breaks down somewhere on the way to cosmological scales because the cosmological constant is tiny. So it is wrong by your definition.
In my interpretation, it does not have to extend to cosmological scales, but breaks down for other reasons; however, this is off-topic here, so correct too.
Ibix said:
The same will be true of a successor theory. Either it will predict its own breakdown (like relativity) or it won't (like Newton). And there will always be a regime we've never tested it in. So it will be wrong by your definition. Or so we will have to suppose.
No. The existence of regimes not tested is irrelevant, because this does not make a theory wrong. In any case we cannot prove by observation that it is true. But it is quite probable that theories like the QG or TOE theories people think about developing now will be theories which can easily be seen to be false.
Ibix said:
In fact, all of scientific theory is wrong by your definition. This does not seem like a helpful definition to me.
No. Only our current scientific theories are wrong. That's why we have to search for better theories. Which is what many physicists are doing.

If one, instead, cares only about making sufficiently accurate predictions for observable things, there is certainly no need for quantum gravity or a GUT or TOE, and all that research is simply throwing away money.
 
  • #96
Maximilian said:
If I use an approximation instead of the correct theory, the consequence is a difference between my computation and the value the correct theory would give. This difference is part of the error I make, is it not?

No. You have two mathematical machines that generate predictions. You compare each prediction with the actual data. The difference between the prediction and the actual data is the error. That difference will never be zero; but if one theory is more accurate than another (such as General Relativity being more accurate than Newtonian gravity), then the more accurate theory will have a smaller error. No theory ever has a zero error.

Maximilian said:
these binary notions, which distinguish the theories as a whole

But they don't. No theory makes predictions which exactly match the data. So there is no way to sort them into binary categories. The best you can do is rank them along a continuum of how accurate their predictions are.

Maximilian said:
This statement is using "wrong" in the same sense I have used it

No, it isn't, because you are using "wrong" as a binary category. Asimov is using "wrong" as a continuum; he is saying some people are more wrong or less wrong than others. A binary category doesn't work like that; the only two possibilities are "wrong" and "not wrong". That's not what Asimov is describing.

Maximilian said:
Empirically falsified theories are rejected because they are false.

Some theories are rejected because all of their predictions are so different from the actual data that they are not useful at all. But Newtonian gravity and Newtonian mechanics are not like that. Once again, there is no sharp boundary where a theory becomes "false". There are no binary categories here.

Maximilian said:
If we had accepted Newtonian mechanics as true, and therefore ignored the open problems which had the potential to cause doubt (like MMX, Mercury perihelion) there would have been no point in developing relativity.

You are correct that Newtonian mechanics was not accepted as "true". But it also was not deemed "false" when relativity was discovered. You are mistakenly assuming that those are the only two possibilities. They're not.

Maximilian said:
otherwise it would not be named an approximation but a viable physical theory

Then in your terminology, all theories are approximations. General Relativity is an approximation. Quantum field theory is an approximation. All of these theories make predictions which do not exactly match the data. They just make predictions which are closer to the data (smaller error). The only reason we don't commonly refer to GR and QFT as approximations is that we don't have any other theories that are more accurate than they are. But that is not expected to be true forever.

Maximilian said:
one can compute (at least in principle) the difference between the approximation and the viable theory. This is, according to the viable theory, the error made by the approximation.

No, it isn't; "error" means something else. See above.
 
  • #97
PeterDonis said:
No. You have two mathematical machines that generate predictions. You compare each prediction with the actual data. The difference between the prediction and the actual data is the error.
That means you use the word "error" simply for something very different, and irrelevant here. This is what is done if we want to find out which theory is better. That was done (for the theories discussed here) long ago, during the last millennium. What is discussed here is the question of whether a theory which is known to be wrong in many different domains can nonetheless be used as an approximation in some other domains. To find out the answer it is reasonable to compare its predictions with those of a theory known to be better everywhere.
PeterDonis said:
But they don't. No theory makes predictions which exactly match the data.
Of course, there is a measurement error. But you cannot use the measurement error to compute whether a theory which is already falsified can nonetheless be used in some domain as an approximation. See the article itself. There was no list of experimental data. There was a computation of the error made by classical physics using not experiment, but SR.
PeterDonis said:
So there is no way to sort them into binary categories. The best you can do is rank them along a continuum of how accurate their predictions are.
Sorry, you can. For the best theory, the one which is not yet falsified, you have no information about how accurate it is. All you have is information about the accuracy of particular experiments, or particular measurement devices. But it is not yet falsified; that means it is as true as possible for a physical theory, which in any case remains hypothetical forever. Information about how accurate the predictions of a theory are is available only for theories which have already been falsified. And then the accuracy of the theory is defined by comparison with a theory not yet falsified.

Ok, if you are not sure which theories certain experiments have falsified, you can also assign degrees of plausibility to theories, using Bayesian probability. But these probabilities are only probabilities of whether the theory is true or not. So, this gives only a continuous degree of our knowledge of whether the theory is true or not. The basic subdivision remains binary.
PeterDonis said:
No, it isn't, because you are using "wrong" as a binary category. Asimov is using "wrong" as a continuum; he is saying some people are more wrong or less wrong than others. A binary category doesn't work like that; the only two possibilities are "wrong" and "not wrong". That's not what Asimov is describing.
But this is nice wordplay, which is what one can expect from a writer. So, I agree with the part where he uses "wrong". And the use of "wronger" is a nice word game, nothing more, but it is about something different, namely the error made by the approximation. If there is an error, one is wrong. But of course it matters a lot how big this error is. And in this part I also agree with him.
PeterDonis said:
Some theories are rejected because all of their predictions are so different from the actual data that they are not useful at all. But Newtonian gravity and Newtonian mechanics are not like that.
Indeed. You have to add to the reasons for rejection that using them instead of the true theory would not give any advantage in computation. And this is, essentially, the main reason the old theories are still used - they provide mathematically and computationally much simpler ways to compute the results.
PeterDonis said:
You are correct that Newtonian mechanics was not accepted as "true". But it also was not deemed "false" when relativity was discovered. You are mistakenly assuming that those are the only two possibilities. They're not.
No. A theory is either true or false. It may be unknown whether it is true, but usually it is known to be false. And if it is false, the question arises whether it is nonetheless useful for approximations. It is that simple.
PeterDonis said:
Then in your terminology, all theories are approximations. General Relativity is an approximation. Quantum field theory is an approximation. All of these theories make predictions which do not exactly match the data. They just make predictions which are closer to the data (smaller error). The only reason we don't commonly refer to GR and QFT as approximations is that we don't have any other theories that are more accurate than they are. But that is not expected to be true forever.
Not all theories are approximations, but those we have today are. This is not expected to be true forever. There may be, in principle, theories where we don't know that they are false. But I think the TOE - that hypothetical quantum theory which unifies GR and the SM - will not yet be of this type, simply because a continuous field theory has no chance. So, during the rest of my life it will remain so.

To find true theories is the aim of science.
 
  • #98
Maximilian said:
Explain the difference. If I use an approximation instead of the correct theory, the consequence is a difference between my computation and the value the correct theory would give. This difference is part of the error I make, is it not? There are, of course, also other sources of error, but this error is the one relevant if one discusses an approximation.

I disagree. I think these binary notions, which distinguish the theories as a whole, are very important.
I don't understand how you can hold these two positions simultaneously. You seem to get that an error is quantifiable and "wrong" as you are using it is binary, so they can't be the same thing, can they? Unless your usage is that all non-zero "error" is "wrong", in which case the word "wrong" has no value because it is covered by the much more useful word "error".

I suppose definitions are conventions, so people can agree or agree to disagree. For all this arguing it is tough to see why this matters; why you couldn't just say "I understand how the words are being used but prefer a different way" and leave it at that?

Maybe your issue is a philosophical issue with the goal of science? The search for an ultimate Truth? [edit: per your previous post, it appears so] Perhaps what you may be missing is that even if scientists believe they are searching for an absolute Truth, that belief is of no relevance. Why? Because it is inherently impossible to know if they've found it. So it doesn't alter the practical assumption that all theories are wrong. Which - again - means you may as well use "wrong" in a relative sense so that the word is useful. Otherwise a statement like "that theory is wrong" is pointless/redundant.
 
  • #99
Maximilian said:
It is demonstrably wrong, because otherwise it would not be named an approximation
Nonsense. The naming convention doesn’t demonstrate anything. The experimental results are the relevant demonstration, and in the classical domain the approximation is experimentally valid. You can call it an approximation, a limit, or a flubnubitz, and what you call it does not change the experimental facts that validate it.

Maximilian said:
That it may be used, under some circumstances, as an approximation does not make a wrong theory true.
What makes a theory valid is whether or not it matches the result of experiments. Not whether or not it approximates some other theory.

Maximilian said:
And even in cases where the approximation is good as an approximation, one can compute (at least in principle) the difference between the approximation and the viable theory.
You have the purpose of this calculation backwards. The purpose of computing the difference between Newtonian mechanics and relativity in the classical domain is to establish the validity of relativity. Newtonian mechanics is already validated by experiments in that domain, and therefore relativity must show that any disagreements between it and Newtonian mechanics are less than the experimental precision.
 
  • #100
Dale said:
Yes, they are. Did you not read the article?

Also, approximations are not inherently wrong in science.

PeterDonis said:
If this is what you mean by "wrong", then you are using the word in a very different sense from its usual sense.

I disagree. And this is just a question of semantics. It is completely standard to say that Newtonian physics is demonstrably wrong.
 
  • #101
russ_watters said:
Maybe your issue is a philosophical issue with the goal of science? The search for an ultimate Truth? [edit: per your previous post, it appears so] Perhaps what you may be missing is that even if scientists believe they are searching for an absolute Truth, that belief is of no relevance. Why? Because it is inherently impossible to know if they've found it. So it doesn't alter the practical assumption that all theories are wrong. Which - again - means you may as well use "wrong" in a relative sense so that the word is useful. Otherwise a statement like "that theory is wrong" is pointless/redundant.

So if one finds a conceptually complete and coherent theory in complete agreement with all experimental evidence, is it wrong? If you claim it is wrong, how can you prove that it is wrong?
 
  • #102
atyy said:
So if one finds a conceptually complete and coherent theory in complete agreement with all experimental evidence, is it wrong? If you claim it is wrong, how can you prove that it is wrong?
Other side of the coin: there is no way to know if it is Ultimate Truth or not, so scientists will assume it is not and keep looking.
 
  • #104
russ_watters said:
Other side of the coin: there is no way to know if it is Ultimate Truth or not, so scientists will assume it is not and keep looking.

But it is quite different from the present situation when the motivation for new theories is driven by known deficiencies in currently accepted theories.
 
  • #105
atyy said:
But it is quite different from the present situation when the motivation for new theories is driven by known deficiencies in currently accepted theories.
Yes, I suppose if some scientists believe they have found the Absolute Truth, they might decide there is no need to keep looking. Others might decide to keep chasing that next order of magnitude of precision.
Here is an example from Rindler: https://www.amazon.com/dp/0198567324/?tag=pfamazon01-20 (p108)

"Applied to such situations, Newtonian mechanics is not just slightly wrong: it is totally wrong."
Was that directed at me? I'm not sure what the purpose of that is.
 
