[I wasn't sure where to put this thread; it could just as well have gone in philosophy of science, but I chose to put it here because it seems more targeted.]
I am curious how different people view the necessity of doing a risk analysis when trying to make a fundamental model of nature, and of physics in particular.
Just to illustrate what I mean, here are some examples.
Trying to model reality is by its nature a risky business - we are often wrong, and what we "knew" was right was LATER proved wrong and needed revision. But consider that our life depends on it - we need an accurate model simply to survive. Then it would seem unwise to have no risk analysis at all. It also seems crucial to test the most promising ideas first and leave the less likely ones for later.
Being wrong is not fatal, as long as we have the capability to respond and revise our foundations promptly. Experience from biology shows that flexibility and adaptation is power: when, for reasons beyond your control, you are thrown into a new environment, there are two outcomes. Either you adapt and survive, or you die because you failed to adapt and your old behaviour was in conflict with the new environment.
Throughout history, foundations have often been revised or refined, so flexibility still seems important.
So, when building new models that apparently take many, many years and a great deal of both financial and intellectual investment... should we worry about what happens IF our models prove wrong? Obviously, when they are wrong they are discarded, but then what? Does the falsification show us HOW the old model should be modified, or does it just tell us it's wrong and leave us without a clue?
The basic question is: should a new fundamental theory include a mechanism for self-correction that is not pre-determined, but rather guided by input? So that when it's wrong (because we are wrong all the time), we not only know so, we also have a mechanism to induce a new correction?
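To make the kind of mechanism I have in mind a bit more concrete, here is a minimal toy sketch (purely my own illustration, not a proposal for an actual physical theory): a Bayesian learner that revises its belief about a coin's bias as new data arrives. The point is only that the correction is not pre-programmed; it is induced by the input itself.

```python
# Toy illustration (not a physical theory): a model that corrects itself
# as evidence arrives, with the correction driven by the data rather than
# by a pre-determined schedule. Here the "model" is a belief about a
# coin's probability of coming up heads.

import random

# Discrete set of hypotheses for the coin's bias.
hypotheses = [i / 20 for i in range(1, 20)]
# Start with a flat prior: no hypothesis is favoured.
belief = {h: 1 / len(hypotheses) for h in hypotheses}

def update(belief, outcome):
    """Bayesian update: reweight each hypothesis by how well it predicted the outcome."""
    likelihood = {h: (h if outcome == "heads" else 1 - h) for h in belief}
    posterior = {h: belief[h] * likelihood[h] for h in belief}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Simulate data from a coin the model initially knows nothing about.
true_bias = 0.7
random.seed(0)
for flip in range(200):
    outcome = "heads" if random.random() < true_bias else "tails"
    belief = update(belief, outcome)

best = max(belief, key=belief.get)
print(f"most credible bias after 200 flips: {best}")  # drifts toward 0.7
```

If the environment changes (say the coin is swapped), the same update rule pushes the belief toward the new bias; the revision is driven entirely by the mismatch between prediction and observation, which is roughly the kind of built-in, input-guided self-correction I am asking about.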
/Fredrik