Instrumentalism and consistency

  • #71
Fra said:
2) Popper's simplistic abstraction of the scientific process is not adequate, because it puts all the focus on the falsification event and does not elaborate on the method behind hypothesis generation - in the CONTEXT of evolving scientific knowledge.

I don't think there need to be any constraints on hypothesis generation beyond falsifiability.

It may be helpful to articulate approaches that have a track record of fruitfulness, BUT falsifiability (or lack thereof) is really the only thing that can be used to arbitrate between competing hypotheses when hard data is lacking to test them. Things like Occam's Razor (and other razors) are heuristic preferences that may be useful in many contexts, but can't really be applied consistently across different scientific disciplines, because a general and rigorous definition of "simplicity" is lacking. (Is positing multiple universes simpler than one?)
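One way to see why a rigorous definition of "simplicity" is elusive is to try to operationalize it. Below is a minimal sketch (my own illustration, not anything from the literature) that uses compressed description length as a crude stand-in for simplicity; the ranking it produces depends entirely on the arbitrary choice of encoding and compressor, which is exactly the problem.

Code:
import zlib

def description_length(hypothesis: str) -> int:
    """Crude proxy for 'simplicity': bytes needed to compress the statement.
    The answer depends on the chosen encoding and compressor, which is
    precisely why this doesn't yield a discipline-independent razor."""
    return len(zlib.compress(hypothesis.encode("utf-8")))

h1 = "There is exactly one universe."
h2 = "Every quantum branch is realized in its own universe."

# The 'simpler' hypothesis by this byte count may not match physical intuition:
print(description_length(h1), description_length(h2))

The byte counts track sentence length rather than ontological commitment, so the "simpler" verdict reflects the encoding, not any principled metaphysical economy.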

The human mind is a powerfully creative and beautiful thing. There should be no constraints in the scientific method on what it does in the process of generating hypotheses. There only need be constraints on how the method arbitrates between competing ideas.
 
  • #72
Demystifier said:
One approach to dealing with interpretations of quantum mechanics is instrumentalism, also known as logical positivism. According to this doctrine, only measurable things are meaningful and therefore physical. All other concepts, such as reality, ontology, hidden variables or many worlds - i.e. things which are supposed to be there even if we don't measure them - are not measurable and hence are meaningless.

There are many physicists who claim to think that way, but is there any living physicist who really thinks that way? In other words, is there any physicist who is consistent in such a way of thinking, without ever thinking in terms of concepts which are supposed to be there even when we don't measure them? In my experience, there is no such physicist.

I take care to make a distinction between how someone operates when doing science and how someone operates when arriving at conclusions and making decisions in other areas of life. If leftovers in the fridge smell bad, a logical positivist may well toss them out as likely spoiled without proof positive that the leftovers really are spoiled, because the risks of eating them exceed the benefits.

There are other areas where conclusive proof (at the level of positivism) may never be available. This is my view on the Kennedy assassination and whether Oswald acted alone (as a ballistics expert who has studied the matter carefully and peer-reviewed scholarly papers on the subject). Because there is no convincing evidence of co-conspirators, a preference for Occam's razor tends to suggest a lone actor. However, since negatives are hard to prove in historical events (no conspirators), I can't say all the conspiracy theories are convincingly disproven. But there is enough unexplained evidence that I'm not sure I could criticize a logical positivist as inconsistent if they favored a conspiracy theory, because I regard logical positivism as applying to science (natural law) rather than history (what happened in the past).

Finally, even within the bounds of science, my view is that logical positivism only requires consistency with regard to what is considered "true" according to the scientific method. There is no need to reject constructs which may simply be regarded as computational conveniences while maintaining neutrality about whether those constructs represent physical reality. Thinking about ideas as computational conveniences does not violate logical positivism; only regarding these ideas as representing physical reality raises the bar on the evidence needed.
 
  • #73
When I read or hear such subjects, I see not so much opposing views as opposing clans. Perhaps physics needs all of them.

Fra said:
The insight is that reductionism works only up to a certain complexity limit, where a new way of thinking is needed AND, as I conjecture, NATURE itself needs a new way of interacting, in order not to see chaos.
The reductionists postulate that any study of chaos consists of splitting. Implicitly, one of the resulting parts will be a measuring device, and expectations about its states will give a new theory linking the other data to the measurements - not the converse. This approach is universal. Is it trivial? Not at all; there is solid mathematics behind it, starting e.g. from category theory. Is it universally efficient? Probably not. But it is another tool for diggers. We can imagine a theory of splitting into parts which become dependent, in the sense that each one may, under conditions, measure the other ... It seems that I already know a famous one ...
 
  • #74
We might be misunderstanding each other here, I am not sure, as this is subtle. I am also not just talking about human imagination; I am suggesting this also reflects how nature works.
Dr. Courtney said:
I don't think there need to be any constraints on hypothesis generation beyond falsifiability.
In one sense I agree, but in another sense I do not.

The constraints I envision are not fundamental constraints, they are emergent constraints. I see them as observer dependent; they emerge with the scale of self-organisation. You can also see the constraints this way: the investment in testing each hypothesis must be made according to the expected benefit. If this is not done, one can easily see that as complexity increases, the random walker will simply get lost and never find its way back home. So what happened? Well, the random walker is using a MAP that is not scaled properly. The map is too large!

Just to make an analogy here: suppose we want to understand the logic and origin of life. What you say is that the only necessary trait of an organism is its mortality, i.e. all we require from a lifeform is its ability to be vulnerable and die.

Well, to understand evolution we need a little more than that. Organisms must be able to mutate, and to do so in a controlled manner that preserves stability. Mutations do take place randomly, but not at the lowest reductionist layer - rather, relative to the organism's own prior state. This is why we have stability. An evolutionary mechanism that fails to explain the stability of development is not of much use.
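A toy illustration of this stability point (my own sketch, not a model from any of the works discussed): compare mutation as a small step relative to the current state with mutation that resamples the state from scratch at the lowest layer.

Code:
import random

random.seed(0)
STEPS = 1000

# Mutation relative to the prior state: a small perturbation of the current value.
relative = [0.0]
for _ in range(STEPS):
    relative.append(relative[-1] + random.gauss(0, 0.01))

# Mutation "at the lowest layer": each step resamples the state from scratch.
resampled = [random.uniform(-1, 1) for _ in range(STEPS)]

# The relative walker develops gradually (stability with variation);
# the resampled one has no memory at all - successive states are uncorrelated.
print("relative spread: ", max(relative) - min(relative))
print("resampled spread:", max(resampled) - min(resampled))

The first process retains its history and can accumulate adaptations; the second cannot develop anything, which is the sense in which unconstrained randomness fails to explain stable development.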

There is a similar view in the inferential perspective that I apply to physics and the evolution of physical law (which I envision as dual to the evolution of physical species - i.e. standard model particles, for example).

So falsifiability alone is not "wrong", it's just inefficient, and I am sure it does not reflect how nature organises at low energy. I think we need to take into account the additional emergent structures that, in my view, encode the hypothesis generators.

/Fredrik
 
  • #75
Fra said:
The constraints I envision are not fundamental constraints, they are emergent constraints. I see them as observer dependent; they emerge with the scale of self-organisation. You can also see the constraints this way: the investment in testing each hypothesis must be made according to the expected benefit. If this is not done, one can easily see that as complexity increases, the random walker will simply get lost and never find its way back home. So what happened? Well, the random walker is using a MAP that is not scaled properly. The map is too large!

Aah, I see. You want a method to discern which hypotheses are most worthy of being tested.

I doubt the possibility of constructing a truly objective method of picking more likely candidates for winning hypotheses prior to testing them. Right now, it's a hodgepodge approach of experimenter discretion and funding processes. This approach has served science well for hundreds of years.

A lot of my success as an experimentalist has been in reading the menu right to left - testing the hypotheses not on the basis of their worthiness by some objective criteria, but more based on my ability as an experimenter to test them with available resources.

For example, of all the fish in the sea (literally) in which one could test the hypothesis of magnetoreception, colleagues and I picked four species. Whether or not they were likely to demonstrate magnetoreception was a secondary consideration. The primary consideration was that we had easy access to large numbers of these species, so that a statistically significant experiment could be completed in a reasonable time with available resources. The hypothesis of magnetoreception was supported in 3 of the 4 species. Other, more interesting hypotheses relating to the specific physical mechanisms of magnetoreception in each species are harder to test. Those hypotheses are probably more worthy of being tested, but being considerably more difficult experimentally, no one has done so yet.
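To give a feel for the resource arithmetic behind "a statistically significant experiment in a reasonable time" (the numbers here are hypothetical, not from the actual study): a rough normal-approximation sample-size estimate for detecting a directional preference against a 50/50 null.

Code:
import math

def min_sample_size(p_alt: float) -> int:
    """Rough sample size for a one-sided binomial test of H0: p = 0.5
    against an alternative preference rate p_alt, at 5% significance
    and 80% power (normal approximation)."""
    z_alpha, z_beta = 1.645, 0.842
    p0 = 0.5
    num = z_alpha * math.sqrt(p0 * (1 - p0)) + z_beta * math.sqrt(p_alt * (1 - p_alt))
    return math.ceil((num / (p_alt - p0)) ** 2)

# A strong effect needs far fewer trials than a subtle one:
print(min_sample_size(0.70))  # about 37 trials
print(min_sample_size(0.55))  # about 617 trials

A subtle effect can demand an order of magnitude more animals and trials, which is why easy access to large numbers of a species can dominate the choice of what to test.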

So even if there were an objective criterion for the relative worthiness of a large group of hypotheses, real experimenters will always exercise subjective judgment to balance relative worthiness against relative cost. My colleagues and I have learned to keep our eyes open for interesting hypotheses that fall into our laps or appear as low-hanging fruit. But the low-hanging fruit to one experimenter may not be low-hanging fruit to another.
 
  • #76
Fra said:
I myself do not lack data. But whether you can breed on accessible data depends on your understanding and interpretations. HEP itself is the paradigm where the inferential system is always a classical macroscopic laboratory, and where we also tend to consider perturbations only. This abstraction fails badly if you picture an inside observer or cosmological observations. So no matter how successful QM and QFT are, you can't consistently apply that paradigm to the general case. This was my understanding of the OT.

/Fredrik
Sure, you need theory to know what (and also how!) to measure to find interesting things. It's always an interrelation between theory and experiment that is much more complicated than the oversimplified view that scientists have a hypothesis which is then falsified by experiments. It's, however, also important to remember to build theories that make predictions which can in principle be falsified. If the experiments confirm your theory, that's of course also great ;-)).

It's also true that in HEP it's not always ever-higher energies that promise the most interesting progress, but also ever-higher precision!

What I don't understand is the above quoted paragraph. Of course, you need a scientific (not philosophical!) interpretation of the theories you want to apply to describe a given observational situation. In the case of QT, for me that's the minimal interpretation, i.e., after having formulated the mathematical structure, it's Born's rule, and only Born's rule. In my opinion there is no other "meaning" of the quantum-theoretical state (represented by a statistical operator in the formalism) than the probabilistic one formulated in Born's rule. There's no scientific content in answering the question "what's behind it". You may build some metaphysics or even religious beliefs in the sense of a worldview, but that's not science but personal belief.
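For reference, the single link between formalism and observation that the minimal interpretation postulates is Born's rule: for a state represented by the statistical operator ##\hat{\rho}## and an observable outcome ##a## with projector ##\hat{P}_a## onto the corresponding eigenspace,

$$P(a) = \mathrm{Tr}\left(\hat{\rho}\,\hat{P}_a\right),$$

which for a pure state ##\hat{\rho} = |\psi\rangle\langle\psi|## reduces to ##P(a) = \langle\psi|\hat{P}_a|\psi\rangle##.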

Not only in HEP but in all of physics, after all, we use macroscopic apparatuses to observe much tinier (quantum) but also much larger (astro/cosmology) objects, because that's what we finally can observe. Nevertheless, the macroscopic apparatuses also obey the rules of quantum theory. Classical behavior is an emergent phenomenon of many-body systems with very many degrees of freedom, and it is the classic aim of condensed-matter physics to understand this behavior from the microscopic theory; this program has been very successful over the last 80 years or so. Nowadays the observational tools are so refined that one can also study quantum behavior in macroscopic systems. There's no sharp "micro/macro" cut depending on system size.

It's also not clear to me what you mean by "inside observer". The observer doesn't play much of a role in an experiment. Nowadays it's usually a physicist (or rather a large collaboration of very many physicists) who analyzes data stored by the various devices, which have more or less automatically taken the data. That's true for HEP as well as astronomy and cosmology. E.g., the LIGO/VIRGO gravitational waves were found by analyzing data in a huge data store, not by direct interaction of the 1000 or so physicists who perform these experiments. So there is no direct interference whatsoever between these physicists and the signal itself, nor with the detectors while the data are taken, and speculation about the influence of the consciousness of any human (or Bell's famous amoeba) on the meaning of the data is at best a good science-fiction story, but not science!
 
  • #77
My hunch is that part of the disagreement is simply misunderstandings.

One of the problems here is that it's hard to describe this in words, and that problem is indeed mine, as I am the one trying to convey some strange multilayer thinking, and indeed it's easily misunderstood. (And at this point I have no published work on this - I wish I had! The ambition is that it will come, but I decided a long time ago that I would not even consider publishing something immature enough to be misunderstood; this is why, at minimum, I would like to have explicit results before anyone would care to read it. I do not have the problem of having to publish papers continuously, so I will publish if and only if it's up to my own standards.)

But there is plenty to discuss still, as there is a lot of research by others that touches on this from different angles. But it's not the inconsistencies between these ideas we should focus on, but the abstractions that are their common denominator.
Dr. Courtney said:
Aah, I see. You want a method to discern which hypotheses are most worthy of being tested.
That's part of the story, yes. But WHY do I want to do this? That is the question!

It is not because I want to change research politics; it is because I see a much deeper point here. It is the key to the explanatory power of internal interactions. We can apply this at different scales of complexity - I see this as an illustration, but it can easily be mixed up.

Note that sometimes we talk about inferences at the level of human scientists or social interaction, and sometimes (at least I do) about inferences at the physical (subatomic) level; sometimes at the intermediate level of complex physical systems without humans.

I see now that this is confusing. But my ambition here is to highlight that we can find a scale-invariant soft constructing principle here, which is also a source of intuition and insight. The common abstraction is the SAME in all cases. And if you see this as inference, the inferences are the same regardless of the underlying system, and subject to the same evolutionary and self-organising mechanisms.

Dr. Courtney said:
I doubt the possibility of constructing a truly objective method of picking more likely candidates for winning hypotheses prior to testing them.
I fully agree. My point was not to find an OBJECTIVE explicit method. Objectivity is emergent only. I even claim the opposite: an objective method is not inferable and therefore has no place in the arguments.

Dr. Courtney said:
But the low-hanging fruit to one experimenter may not be low-hanging fruit to another.

Now we are talking about the human scientist level:

This is exactly my point! No real disagreement here! The important and rational behavior is that each researcher will typically act according to this emergent constraint. The constraint is simply the betting rule: if the cost of reaching the high-hanging fruit exceeds the probable benefit, then going for the low-hanging fruit is the rational choice.

And this - from another point of view - EXPLAINS why a particular researcher acts irrationally as seen from another perspective. In the rational player model there exists no objective measure. Instead each player has its own measure, and the INTERACTION between them is what creates objective consensus.
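A minimal sketch of this betting rule as I read it (my own toy formalization, not an established method): each agent scores hypotheses by its own subjective expected net benefit and tests the best one it can afford, so two rational agents with different resources and priors pick different experiments.

Code:
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    cost: float      # resources needed to test it
    benefit: float   # payoff if the test pans out
    p_payoff: float  # the agent's SUBJECTIVE probability of that payoff

def best_bet(hypotheses, budget):
    """Pick the affordable hypothesis with the highest expected net benefit.
    Each agent supplies its own costs and probabilities, so 'rational'
    choices differ between agents; consensus can only emerge from their
    interaction, not from a shared objective measure."""
    affordable = [h for h in hypotheses if h.cost <= budget]
    scored = [(h.p_payoff * h.benefit - h.cost, h) for h in affordable]
    return max(scored, key=lambda t: t[0])[1] if scored else None

menu = [
    Hypothesis("low-hanging fruit", cost=1.0, benefit=3.0, p_payoff=0.8),
    Hypothesis("high-hanging fruit", cost=10.0, benefit=50.0, p_payoff=0.3),
]
print(best_bet(menu, budget=5.0).name)  # -> low-hanging fruit

With a bigger budget the same rule picks the high-hanging fruit (expected net benefit 5.0 versus 1.4), which is the sense in which another researcher's "irrationality" may just be a different budget and a different subjective measure.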

Now I switch to talking about physical-level inferences:

See what I am getting at? The subjectively emergent constraints, encoded in a subsystem, have the potential (I say potential here just to be humble ;)) to EXPLAIN the exact nature of interactions as seen from an external observer!

An example, to get back to HEP: the external observer is the classical laboratory with a bunch of scientists with no hair ;) The subsystems are subatomic entities "interacting". Thus, for the laboratory observer to "understand/explain/infer" the interactions (read: the standard model), he needs to probe the system by perturbing it with prepared, say, "test observers" and see how they interact. From the interactions, mechanisms are inferred. But to connect back to the OT: I suggest that by taking the instrumentalist approach to its extreme, and attaching it to an arbitrary part of the universe and not JUST to humans, we see an amazing possibility to understand unification at a deeper level. This also addresses many things, such as the emergence of inertia and mass. In the inferential perspective, "stability" must be explained by the inertia of underlying structures; this is also why the randomness is modulated by inertia, and why the mutations that are NECESSARY for development do NOT destabilise the process.

To reply to just one more expected objection: do I think atoms compute probabilities and choose their actions consciously? NO. Computations are just random permutations of their own internal structure, but the key is to understand that structure in such a way that we can see it is in fact a very rational and clever betting device. All this also suggests that the calculus needs to be scaled down to these computers. I have not completed this thinking yet, but I think the elements here must be constructed from distinguishable states, and that will yield a discrete space. So continuum models have no place here, and for this reason there will also be no divergences. I always felt that, in terms of information, the continuum embedding is confusing. Continuum physics will have to be explained as a large-complexity limit. So in a way, time evolution is like a random computational process that self-organises, in such a way that there exists no immutable external view.
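To make "computations are just random permutations of internal structure" concrete, here is a minimal toy of only the combinatorial idea (my own illustration, nothing more): a finite set of distinguishable states evolved by randomly drawn permutations, with no continuum anywhere.

Code:
import random

random.seed(1)
N_STATES = 6  # a finite set of distinguishable internal states

def random_permutation(n):
    """One 'computation step': a randomly drawn bijection of the n states."""
    p = list(range(n))
    random.shuffle(p)
    return p

state = 0
trajectory = [state]
for _ in range(10):
    step = random_permutation(N_STATES)
    state = step[state]  # the evolution never leaves the discrete state space
    trajectory.append(state)

print(trajectory)

Because each step is a bijection, no state is ever created or destroyed; the dynamics stay reversible and strictly discrete, which is the flavor of the "no continuum embedding" claim above.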

Do you see the vision, and the essence of my "interpretation"?

If not, I think I have said more than enough already. I am also learning how differently we all think, which is a good thing, but this is yet another argument against premature publication. When a line of reasoning gets complicated enough, it is very hard for anyone to follow; this is why only the results matter. Only once the result is clear will interest in the how and why come. This is also natural; I work the same way. This is why I am only moderately interested in string theory, for example.

/Fredrik
 
  • #78
In addition to the explanations in my previous post...
vanhees71 said:
What I don't understand is the above quoted paragraph. Of course, you need a scientific (not philosophical!) interpretation of the theories you want to apply to describe a given observational situation.
Agreed! :)

But we aren't doing the actual science in this thread; we are just discussing things like ways of reasoning, interpretations and approaches. And as I wrote to Dr. Courtney, the fact that I had several complexity scales in mind at once causes confusion. My inability to explain this clearly is my fault. But the ambition was to phrase it so that it applies both to the scientific process and to the physical interaction process, as my conjecture is that they are the same - except that there are a couple of orders of magnitude of difference in complexity. The common abstraction is acting upon uncertain information. This leads to an information-processing and gambling-game picture that I conjecture applies both to science and (more importantly) to particle physics.
vanhees71 said:
There's no scientific content in answering the question "what's behind it". You may build some metaphysics or even religious beliefs in the sense of a worldview
You mean like other metaphysical geometrical interpretations, such as the geometrical interpretation of GR and other gauge interactions?

I do not agree, except in one sense: it is not established science today, so in that way you are right. Your argument sounds like that of the archetypal engineer, who uses established scientific knowledge as a set of tools. My perspective and focus are quite different. I am more interested in the tools themselves, and in the process of creating scientific consensus from an inferential perspective.

What is your opinion of the scientific content of the geometrical models of physics? Imagine that Einstein had conjectured that the force of gravity is to be explained by the curvature of spacetime, in the context of a "geometric interpretation", but couldn't at first figure out HOW to "couple" geometry to matter. This is not unlike what we are discussing here. Indeed, it is not "established" science until the strange idea proves to agree with predictions. But by your argument, there is likewise no scientific meaning in interpreting things as geometry. Nevertheless, this has been a successful guide to understanding more interactions.

By analogy, I am conjecturing that ALL forces can be explained by interaction between subjective processing agents (these are the inside observers), and these processing agents can be particles. What encodes a particular interaction is the WAY the agents process and act upon information.

The reason I talk about different complexity levels is that the same abstraction also applies to human interactions. This is the beauty of the "interpretation" - just as the beauty of geometrical interpretations is that totally different systems can be described by the SAME geometry, and THUS their interaction properties can be learned from studying OTHER systems with the same abstractions.

But again, we are not doing the real science in this thread. Yes, there are underlying mathematical models for this, and yes, predictions should come out of them. But they aren't likely to be posted in this thread.
vanhees71 said:
It's also not clear to me what you mean by "inside observer". The observer doesn't play much of a role in an experiment.
See my other post. There are "inside observers" at many scales.

An Earth-based lab/detector doing cosmological observations is an inside observer.
An electron interacting with the nucleus is an inside observer of the atom.
A quark interacting with other quarks is an inside observer.

A human-based laboratory observing all the apparatus in an accelerator is, for all practical purposes, NOT an inside observer. In a way one can think of the accelerated particles as template inside observers fired into the target, but the process is still observed from a dominant environment. This is why we can, with certainty, objectively measure any scattering process. This is also WHY you rightly say that the observer does not play a role - because the description is incomplete. You cannot have an outside description, but observers can describe other observers from ANOTHER inside view.

See the asymmetry? You might not be a fan of Lee Smolin, but check out his "cosmological fallacy" and the failure of the "Newtonian paradigm", and the arguments that explain the "unreasonable effectiveness of mathematics". Note that, despite the name, even QFT fits into the Newtonian paradigm.

All this talk is just in order to get clues to the open questions in physics. If we are happy with understanding atomic physics, then this is moot. But I am motivated to understand the workings of the universe at ALL complexity levels. And some of my insight into the failure of reductionism comes from trying to understand biological systems as well. Some explanatory models immediately get lost in chaotic dynamical systems, so either no further progress is possible, or a new explanatory method is needed. If you want to model the chemical processes in a single cell, you end up realizing that you need to model the whole yeast population, as the environment of the cells is MADE up of OTHER cells.

The analogy to physics is IMO striking.

Just to stay on topic: I feel I have typed too much! I sincerely think that this interpretation of mine is like an extreme form of scientific instrumentalism - on steroids, maybe - and this is why I started to mention it. From my perspective, MY interpretation is the most minimal one, as I insist on a consistent inferential perspective. And a classical observer actually immediately breaks this.

/Fredrik
 
