Ultimate question: Why anything at all?

  • Thread starter bohm2
In summary, the conversation discusses the question of why there is something rather than nothing in the universe. The speaker argues that the probability of nothing existing is essentially zero, which explains why the universe exists. However, this argument is not entirely convincing and other perspectives, such as the Taoist belief that the concepts of something and nothing are relative and contextual, are also considered. Overall, the question remains a philosophical one with no definite answer.
  • #211
bohm2 said:
I find this topic both interesting and confusing. A reductionist can always argue that the reason full reduction (really unification) is not possible at present is that the "foundational" branch (e.g. physics) is not yet complete, or that it is due to our own cognitive limitations (a limitation of the observer).
Again I suggest a third road. It is a matter of complementarity of observables. E.g. the biological definition of "alive" for Schrödinger's cat is complementary to observing superpositions of "cat states" (which would entail statistical experiments on large numbers of identically prepared cats, which in turn would need to be cooled to near absolute zero).

The physics can be complete (q-complete i.e. maximal) but micro-scale complementarity can also manifest on the macro-scale. I think this may be a fundamental aspect of emergence.
Thus, a reductionist can claim that because we still lack these micro-dispositions (e.g. physics is not completed) strong emergence with its dualist flavour is really an illusion.
But others suspect that the non-locality and non-separability/contextuality implied by Bell's theorem, the Kochen–Specker (KS) theorem, etc. can be interpreted as a good argument for strong emergence and bi-directional causality, as argued here:
Don't get me started on Bell, EPR and non-locality. Non-separability yes, but locality in the premise of Bell's derivation is simply a means to assure a reasonable assumption of non-causal interaction between measurement processes. Focusing too much on this causation business (typically due to improper reification of the system representation) distracts from the real implications of QM entanglement.

It is again an issue of complementarity. Here it is between the observed q-correlation we define as "entanglement" and the separation of the composite system into a specific pair of component systems.

This discussion has me thinking then... of the possibility of putting some rigor into some definitions in the emergence camp by invoking complementarity. Hmmm...
 
  • #212
jambaugh said:
I stray from the point here but only as a demonstration of the use of "accidental" in a context.

The issue of spontaneous vs purposeful action is in fact very on point when it comes to "why anything" cosmogenesis.

The world seems full of spontaneous happenings. Quantum fluctuations for a start.

The reductionist model of causality finds them difficult to explain. If every event must have a prior cause, then nothing can be chance, everything must be determined. Spontaneity would have to be an illusion due to our lack of knowledge of the "hidden variables".

But a systems model of causality puts it the other way round. The difficulty lies more in preventing spontaneity. :smile:

The presumption is that reality begins with a potential - a set of degrees of freedom. And then constraints are imposed on this generalised freedom so as to limit its actions to more determinate paths. Dynamism begins unbound, going off in all directions (and so failing to show any particular direction). Constraints then organise this world so that the intrinsic spontaneity is channeled. It is still fundamentally there, but starts to behave in predictable fashion.

It is rather like the way a car engine explodes a gas vapour in all directions, but all the energy gets constrained to have a definite direction.

In QM language, you have indeterminacy and then the addition of constraints that "collapse" that indeterminacy - or more properly, restrict it below some chosen epistemic threshold.

We can then take this basic model of spontaneous action - uncaused fluctuations in a state of unformed potential - as the vague ground out of which a universe could develop.

Here is Peirce outlining his own philosophy on this back in 1891...
http://www.cspeirce.com/menu/library/aboutcsp/brier/mysticism.pdf

I may mention that my chief avocation in the last ten years has been to develop my cosmology. This theory is that the evolution of the world is hyperbolic, that is, proceeds from one state of things in the infinite past, to a different state of things in the infinite future.
The state of things in the infinite past is chaos, tohu bohu, the nothingness of which consists in the total absence of regularity. The state of things in the infinite future is death, the nothingness of which consists in the complete triumph of law and absence of all spontaneity.
Between these, we have on our side a state of things in which there is some absolute spontaneity counter to all law, and some degree of conformity to law, which is constantly on the increase owing to the growth of habit.(Vol. 8, p. 317)
 
  • #213
jambaugh said:
Don't get me started on Bell, EPR and non-locality. Non-separability yes, but locality in the premise of Bell's derivation is simply a means to assure a reasonable assumption of non-causal interaction between measurement processes. Focusing too much on this causation business (typically due to improper reification of the system representation) distracts from the real implications of QM entanglement.
I'm not sure I understand, but non-separability arguably has the same consequences: the predictions of QM depend on the 3N-dimensional configuration space, and that dependence gets lost in the 3-dimensional representation (e.g. information about correlations among different parts of the system, which are experimentally observed, is left out):
Not all the relevant information about the sub-system is contained in its density operator (obtained by partial tracing). A fraction of this information is missing, and it is contained only in the state vector of the overall system. Any separate description of parts, and then any dual description of parts and whole, then looks artificial. As a consequence, the concept of inter-level causation looks nonsensical in the highly holistic domain described by quantum mechanics. Since there is no way to separate the states of the parts from the state of the whole, it sounds absurd to call one the cause and the other the effect, as if they were two different things.
Downward Causation without Foundations
http://michel.bitbol.pagesperso-orange.fr/DownwardCausationDraft.pdf
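As a toy illustration of the non-separability point in the quoted passage (my own sketch, not from Bitbol's paper): for a two-qubit Bell state, the reduced density operator of either qubit obtained by partial tracing is just the maximally mixed state, so it carries none of the perfect-correlation information that the full state vector contains.

Code (Python):
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a vector in the 4-dimensional composite space
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

rho = np.outer(phi, phi)  # density operator of the whole two-qubit system

# Partial trace over qubit B: reshape to indices [a, b, a', b'] and trace over b = b'
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)  # 0.5 * identity: maximally mixed, no correlation information survives

# Yet the full state vector predicts perfect correlations between the two qubits:
print(np.abs(phi).reshape(2, 2) ** 2)  # P(00) = P(11) = 0.5, P(01) = P(10) = 0

The two reduced density operators look exactly the same whether the pair is entangled or merely uncorrelated noise; the difference lives only in the state of the whole, which is the sense in which "not all the relevant information about the sub-system is contained in its density operator".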
 
  • #214
jambaugh said:
This discussion has me thinking then... of the possibility of putting some rigor into some definitions in the emergence camp by invoking complementarity. Hmmm...

Indeed this is fundamental in my view. It is the absolute key. And there is never a post where I'm not saying this. :-p
 
  • #215
jambaugh said:
Not circular and not about known or unknown cause. It is used to qualify absence of purposeful intent. A raindrop may cause a pebble to fall instigating an avalanche. Or I may decide to set one off just as my enemy is passing the road beneath. One event is accidental the other intentional. It isn't an issue of cause or lack of cause but intent or lack of intent.

Ok - let's go with 'intent or lack of intent', and extrapolate your examples.

A raindrop may cause a pebble to fall instigating an avalanche .. (lack of intent) then the rescue services swing into gear to save lives (intent).

I may decide to set one (avalanche) off just as my enemy is passing the road beneath .. (intent) because an accidental fire (lack of intent) is burning my food supply and I will most likely starve otherwise.

Both cases are legitimate. Intent from lack of intent. Lack of intent from intent.

If you accept the scientific position, can you really ascribe any special quality to your intentions ? Are they not a result of natural forces - merely an extension of the same principles that govern the raindrop causing the pebble to fall, causing the avalanche ?

Is there something special about our intentions ? Intelligence perhaps ? Caused by an unintentional, unintelligent Big Bang (no God) ? Or was it intelligent and intentional (God) ? Or don't we know, are not sure ? Bringing it back to the ultimate question - why anything at all.

A believer in an omnipotent, omniscient God would reject the possibility of accidental events altogether (every leaf that falls, etc.). But they cannot then, after the fact, reverse the implication, saying the impossibility of accidental events proves God's existence. That is indeed circular, the two assertions being equivalent.

Yes, I don't disagree. 'God / not God' is not the intent (lol) of my involvement here.

If however you begin with the possibility (as in lack of asserted impossibility) of both accidental and purposeful causes and ask the question, then it may be valid to inductively argue the existence of God from the existence of life, if one can show it is too improbable, even on the scale of the size and age of the universe, to be accidental. Valid in form but not, I believe, valid under analysis. I've seen such arguments but they typically misrepresent physical assumptions (most often misapplying thermodynamic principles).

I stray from the point here but only as a demonstration of the use of "accidental" in a context.

I personally think that the word 'accidental', and its fluid use, goes to the heart of the context, and the point (the OP) of this thread.
 
  • #216
bohm2 said:

Thanks for the pointer to that paper. Another good exposition of the systems perspective.

I agree with pretty much all Bitbol's approach except that I think he needs to add the notion of the epistemic cut to extend causality to semiotically-organised systems.

His focus is on instances where there is downward effective cause (rather than just downward constraint). For a system to actually choose its state in this fashion, it needs some kind of internal machinery of control.

But otherwise, it is nice to see a systems take on QM, and then the generalisation of that view of causality.
 
  • #217
apeiron said:
The issue of spontaneous vs purposeful action is in fact very on point when it comes to "why anything" cosmogenesis.

The world seems full of spontaneous happenings. Quantum fluctuations for a start.

The reductionist model of causality finds them difficult to explain.
Doesn't the word spontaneous refer to unpredictable events? Aren't they called spontaneous because we can't predict or precisely explain them? Nevertheless, even so-called spontaneous events are differentiated, and predictable within certain bounds, wrt certain antecedent conditions, even in quantum experiments.

apeiron said:
If every event must have a prior cause, then nothing can be chance, everything must be determined. Spontaneity would have to be an illusion due to our lack of knowledge of the "hidden variables".
Determinism is an assumption. If one assumes it, then lack of explanatory or predictive ability is due to ignorance.

The assumption of indeterminacy doesn't seem to fit with our observations of the basically orderly, and predictable, evolution of the world, the universe.

apeiron said:
But a systems model of causality puts it the other way round. The difficulty lies more in preventing spontaneity. :smile:

The presumption is that reality begins with a potential - a set of degrees of freedom.

And then constraints are imposed on this generalised freedom so as to limit its actions to more determinate paths.
Degrees of freedom wrt what? Where do the constraints come from?

In a certain view, it (everything, anything) starts with (is ultimately attributable to) a fundamental wave dynamic ... something that the behavior of any and all disturbances in any and all media at any scale have in common.

This fundamental wave dynamic is what constrains the evolution of the whole and determines the emergence of discernibly bounded systems (ie., particles and objects).

apeiron said:
Dynamism begins unbound, going off in all directions (and so failing to show any particular direction). Constraints then organise this world so that the intrinsic spontaneity is channeled. It is still fundamentally there, but starts to behave in predictable fashion.
In a certain view, a fundamental dynamics determines the bounds and possible evolution. Spontaneity isn't, in this view, intrinsic or fundamental. It just refers to our ignorance.

Why not assume a fundamental wave dynamic and see where it leads?

And by the way thanks for your and others' replies to my previous questions/comments.
 
  • #218
bohm2 said:
I'm not sure I understand, but non-separability arguably has the same consequences: the predictions of QM depend on the 3N-dimensional configuration space, and that dependence gets lost in the 3-dimensional representation (e.g. information about correlations among different parts of the system, which are experimentally observed, is left out):

Downward Causation without Foundations
http://michel.bitbol.pagesperso-orange.fr/DownwardCausationDraft.pdf
Hi Bohm. I glanced through Bitbol's paper and it seems like another of his I'd read, entitled "Ontology, Matter and Emergence", where he seems to try to blur the concepts of upward versus downward causation. This attempt to blur upward versus downward causation has always bothered me about his work. So since you seem familiar with it, I'd like your opinion of what he's saying.

His discussion on the non-separability of quantum systems that you picked out is interesting though. Does Bitbol recognize and acknowledge the separability of classical systems? By separability I'm referring to what I would consider the very mainstream view, as described by Karakostas in "Forms of Quantum Nonseparability and Related Philosophical Consequences", for example:
The foregoing concise analysis delimits the fact, upon which the whole of classical physics is founded, that any compound physical system of a classical universe can be conceived of as consisting of separable, distinct parts interacting by means of forces, which are encoded in the Hamiltonian function of the overall system, and that, if the full Hamiltonian is known, maximal knowledge of the values of the physical quantities pertaining to each of these parts yields an exhaustive knowledge of the whole compound system. In other words, classical physics obeys a separability principle that can be expressed schematically as follows:

Separability Principle: The states of any spatio-temporally separated subsystems S1, S2, ... SN of a compound system S are individually well defined and the states of the compound system are wholly and completely determined by them and their physical interactions including their spatio-temporal relations...
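In symbols (a standard schematic form of the same principle, not part of the Karakostas quotation), the claim is that the classical Hamiltonian of a compound system decomposes into part-states plus interactions:

$$H(q_1,p_1,\dots,q_N,p_N)\;=\;\sum_{i=1}^{N} H_i(q_i,p_i)\;+\;\sum_{i<j} V_{ij}(q_i,q_j)$$

so that specifying each subsystem's state $(q_i,p_i)$ together with the interaction terms $V_{ij}$ fixes the state and evolution of the whole; nothing about the compound system is left over once the parts and their interactions are given.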

I see Bitbol refers to the concept of Humphreys regarding the fusion of properties and seems to limit this fusion of properties to non-separable QM systems which seems very reasonable.

Do you think Bitbol's conception of downward causation in classical physics (or lack thereof) mirrors Bedau's conception of it, which is to say macro to micro causation is false? Is Bedau limiting the concept of downward causation (or "fusion" of properties) to non-separable systems?
 
  • #219
ThomasT said:
Doesn't the word spontaneous refer to unpredictable events? Aren't they called spontaneous because we can't predict or precisely explain them? Nevertheless, even so-called spontaneous events are differentiated, and predictable within certain bounds, wrt certain antecedent conditions, even in quantum experiments.

Determinism is an assumption. If one assumes it, then lack of explanatory or predictive ability is due to ignorance.

Yes, that is the question here. We have the phenomenon - spontaneous events. Then we make our models about "what is really going on".

The usual model is reductionist. Because we "know" all events are fundamentally deterministic/local/atomistic/mechanical/monistic/etc, then spontaneity is really just an epistemic issue. We only lack the hidden detail of the micro-causes.

But logically, we can also take a Heraclitean or process view of reality. Determinism/etc is emergent. All is fundamentally flux and regularity arises as a restriction on this inherent dynamism.

So you can have the same observation - some spontaneous-looking event - and explain it either as secretly deterministic or instead as what happens due to a lack of constraints.

I realize you don't believe QM nonlocality is a genuine issue. But for those that do, a systems view of spontaneous events now fits the evidence much better.

Bitbol's paper is an example of that approach.
 
  • #220
Q_Goest said:
Do you think Bitbol's conception of downward causation in classical physics (or lack thereof) mirrors Bedau's conception of it, which is to say macro to micro causation is false? Is Bedau limiting the concept of downward causation (or "fusion" of properties) to non-separable systems?
Yes, I think he would agree with Bedau. I interpreted him as arguing that higher-level facts or events constrain/modify or form a context for the lower-level stuff, so they are not independent of the higher level, and so the whole notion of upward/downward causation is misconceived, especially since he also argues that while lower-level facts or events are necessary, they aren't sufficient for higher-level ones. I think Bedau takes the same position? So it's all contextual. He does offer an interesting argument against panpsychism in the paper below, and argues that his model can circumvent the mind-body problem without leading to panpsychism, but I don't understand his argument. Maybe someone who does can explain it to me?
This possibility of “downward causation” from experience to physiology could be taken by some as mere evidence that conscious experience emerges from a neurophysiological basis in the “strongest” sense of the concept of emergence, as opposed to “weak” emergence (Bedau, 1997). But suppose we add a further constraint. Suppose we adopt a very strict criterion of emergence. Galen Strawson recently submitted this type of criterion: “For any feature of E (anything that is considered to be Emergent from the Basis B), there must be something about B and B alone in virtue of which E emerges, and which is sufficient for E” (Strawson, 2006). The problem is that, as I have suggested more and more insistently, there is nothing specific about functions, neural tissues, or molecular structures in virtue of which conscious experience should emerge. Any loose talk of emergence of consciousness from brain processes in the name of mere correlations, or even mere experiments of mutual triggering, then appears to be ruled out by this strong criterion. From the latter negative statements, Strawson infers that conscious experience is nothing emergent at all. Combining this inference with a materialistic monistic principle, he concludes in favor of panpsychism, or rather pan-experientialism. But, then, his problem is to explain how micro-experiences “add up” to full-fledged human consciousness. Moreover, it is not easier to understand why and how an atom has elementary experience than to understand why and how a living human brain has an elaborated consciousness. Ascribing micro-experiences to atoms just seems an ad hoc additional postulate about matter. So, at this point, we are still completely stuck, with no idea whatsoever about how to handle the “hard problem” of the origin of conscious experience in an objectified nature.
Is Consciousness primary?
http://philsci-archive.pitt.edu/4007/1/ConsciousnessPrimaryArt2.pdf
 
  • #221
apeiron said:
The usual model is reductionist. Because we "know" all events are fundamentally deterministic/local/atomistic/mechanical/monistic/etc, then spontaneity is really just an epistemic issue. We only lack the hidden detail of the micro-causes.
This is the way I currently view it. With the reduction being toward ever more general, ie., fundamental, dynamics. In this view, the microcosmos isn't any more fundamental than the macrocosmos, because ontology isn't what's fundamental. Dynamics is.

apeiron said:
But logically, we can also take a Heraclitean or process view of reality. Determinism/etc is emergent. All is fundamentally flux and regularity arises as a restriction on this inherent dynamism.
If there's an inherent or fundamental dynamic, then determinism is fundamental, not emergent.

apeiron said:
I realize you don't believe QM nonlocality is a genuine issue.
I think it depends on how one defines quantum nonlocality.

apeiron said:
But for those that do, a systems view of spontaneous events now fits the evidence much better.
There's absolutely no physical evidence supporting the assumption of nonlocality. It's just based on interpretation. Anyway, I am an admitted ignorant pedestrian wrt these considerations, but I don't think it should be considered a great mystery that we can't predict certain phenomena. What is actually happening in the underlying reality is, of course, a mystery. But the fact that modern science, barely 100 years old, still can't predict lots of stuff isn't, to me, very surprising or even important.

That is, I don't think that some complicated systems view, or whatever, is necessary to establish the fact of our relative ignorance wrt a definitive qualitative apprehension of underlying ontologies and processes.

And thanks for your, and others', feedback to what must seem like a very simplistic view and questions ... from me.
 
  • #222
Q_Goest said:
Do you think Bitbol's conception of downward causation in classical physics (or lack thereof) mirrors Bedau's conception of it, which is to say macro to micro causation is false?

No, Bitbol politely makes it clear he rejects Bedau's approach.

But at the end of the day, according to its best supporter, it appears that “weak emergence is (nothing but) a proper subset of nominal emergence”...

...No genuine (upward or downward) causation, and ontologically little more than nominal emergence: This is the disappointing outcome of research developed under a substantialist construal of the basic elements.

Is there any alternative left? I think there is, provided the substantialist presupposition is dropped at every single level of description.

So the problem with Bedau's analysis - why it is a straw man argument - is that the systems view takes the micro-level as also to be emergent. Substantial cause (material/efficient cause) is itself something that arises and is not fundamental.

There just is no local ontic stuff that has intrinsic properties as Bedau assumes. Yes we can model reality in this fashion for epistemic convenience. But what is really going on is that downwards constraints are limiting local degrees of freedom so as to shape up a realm of local actions.

Bedau does not even seem to understand that this is the systems viewpoint. So his whole argument is based on a wrong analysis. It is quite irrelevant to the modelling of strong emergence. Bitbol by contrast begins by treating local material causes as downwardly emergent.

Setting aside any conceptual trick such as hidden variables, the so-called elementary particles have to be treated as non-individuals, as mere units of a limited set of “sorts,” and thus as formal rather than substantial entities. This becomes even clearer in Quantum Field Theory, where cardinals of subsets of particles are in one-to-one correspondence with quantized modes of excitation of fields (Teller 1995). Accordingly, particles are de facto treated as patterns or configurations, rather than as substantial entities (Bickhard and Campbell 2000). The analysis of a level of organization in terms of structures, patterns, and topological configurations imposes itself throughout the scale of levels, even at the lower accessible level (Campbell and Bickhard 2009). The first asymmetry of the standard picture disappears thus.
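To unpack the quantum field theory point in the quoted passage (my gloss, not Bitbol's text): in the occupation-number (Fock) representation a "particle" is nothing over and above a unit of excitation of a field mode,

$$|n_{k_1},n_{k_2},\dots\rangle \;=\; \prod_i \frac{\big(a^{\dagger}_{k_i}\big)^{n_{k_i}}}{\sqrt{n_{k_i}!}}\,|0\rangle, \qquad \hat N \;=\; \sum_k a^{\dagger}_k a_k$$

so the "how many particles" question is answered by mode occupation numbers rather than by counting individually labelled entities, which is why the particles come out as patterns or configurations rather than as substances.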
 
  • #223
bohm2 said:
Yes, I think he would agree with Bedau. I interpreted him as arguing that higher-level facts or events constrain/modify or form a context for the lower-level stuff, so they are not independent of the higher level, and so the whole notion of upward/downward causation is misconceived, especially since he also argues that while lower-level facts or events are necessary, they aren't sufficient for higher-level ones. I think Bedau takes the same position?
If he suggests that the lower-level facts are insufficient to define higher levels, then, at least if he's talking about classical physics, he would disagree with Bedau. I would agree with apeiron: Bitbol probably rejects what Bedau has argued regarding weak emergence. I'm just not sure. He certainly notes the nonseparability of quantum mechanical systems, and his argument regarding the 'fusion' of properties seems on par with other well-accepted concepts of quantum mechanics. I just don't see a clearly defined treatment in his paper of the separability of classical physics.

If Bitbol wants to suggest that lower level facts are necessary but insufficient to define higher level facts, then at least for classical phenomena such as weather systems, the N-body problem, etc., he would also need to reject the separability principle. See for example Kronz, "Emergence and Quantum Mechanics". But the separability principle is clearly correct. It is used to derive all manner of higher-level laws and is taught in one form or another in all college and university courses on classical physics. Classical physics is separable.
 
  • #224
apeiron said:
Bitbol by contrast begins by treating local material causes as downwardly emergent.
Setting aside any conceptual trick such as hidden variables, the so-called elementary particles have to be treated as non-individuals, as mere units of a limited set of “sorts,” and thus as formal rather than substantial entities. This becomes even clearer in Quantum Field Theory, where cardinals of subsets of particles are in one-to-one correspondence with quantized modes of excitation of fields (Teller 1995). Accordingly, particles are de facto treated as patterns or configurations, rather than as substantial entities (Bickhard and Campbell 2000). The analysis of a level of organization in terms of structures, patterns, and topological configurations imposes itself throughout the scale of levels, even at the lower accessible level (Campbell and Bickhard 2009). The first asymmetry of the standard picture disappears thus.
Bitbol is clearly talking about quantum mechanics here, which is non-separable. I still don't see Bitbol making a distinction between separable and non-separable systems.

Also, the term "constraint" is used in physics and science every day, and it means just what you say: "constraints are limiting local degrees of freedom so as to shape up a realm of local actions." A simple example: we create a free-body diagram of something, and those constraints are the known local efficient causal actions acting at some point on the boundary, which provide the necessary knowledge about the system as it is exposed to other elements in the world. Sperry's classic wheel rolling down a hill can be used as an example: where the wheel interacts with the ground, there is a constraint or boundary condition, so we know how the wheel interacts with the ground. Similarly, a point inside the wheel is constrained by its location in the wheel and the geometry of the wheel. Constraints don't pose a downward control over anything; they are simply locally efficient causes. At least, that's the concept as taught in college and university, which is clearly not downward causation.
 
  • #225
Q_Goest said:
Bitbol is clearly talking about quantum mechanics here, which is non-separable. I still don't see Bitbol making a distinction between separable and non-separable systems.

Also, the term "constraint" is used in physics and science every day, and it means just what you say: "constraints are limiting local degrees of freedom so as to shape up a realm of local actions." A simple example: we create a free-body diagram of something, and those constraints are the known local efficient causal actions acting at some point on the boundary, which provide the necessary knowledge about the system as it is exposed to other elements in the world. Sperry's classic wheel rolling down a hill can be used as an example: where the wheel interacts with the ground, there is a constraint or boundary condition, so we know how the wheel interacts with the ground. Similarly, a point inside the wheel is constrained by its location in the wheel and the geometry of the wheel. Constraints don't pose a downward control over anything; they are simply locally efficient causes. At least, that's the concept as taught in college and university, which is clearly not downward causation.
Constraints, ie., limitations wrt degrees of freedom of evolution, can also be a function of a fundamental dynamic, and bounded systems emerging from that fundamental dynamic. Can't they?
 
  • #226
ThomasT said:
Constraints, ie., limitations wrt degrees of freedom of evolution, can also be a function of a fundamental dynamic, and bounded systems emerging from that fundamental dynamic. Can't they?
I'm not sure what you mean by that. I know some folks such as Alwyn Scott ("Reductionism Revisited") for example, try to suggest that these constraints are more than locally efficient causes - that because of the system being nonlinear and highly dynamic, Scott contends that these 'constraints' actually interact with a given system such that they influence what occurs locally.

Although Scott doesn't use some of the common terminology used in fluid dynamics and thermodynamics, he does discuss such phenomena at length. Two terms I'd like to introduce which are common to the fluid systems and thermodynamic systems Scott discusses are:
  • "control volume" which is a volume of space within which something happens
  • "control surface" which is the two dimensional surface surrounding the control volume.
Scott suggests that for a given control volume within a nonlinear system, the locally efficient causes acting across the control surface might result in there being more than one possible outcome within the control volume. Scott contends that what happens nonlocally will affect the outcome within that control volume. Scott refers to "constraints" in the same way apeiron uses the term. Scott claims it is these constraints which force the control volume to come up 'heads' in one case and 'tails' in another, so to speak, and that what causes it to come up heads or tails doesn't depend on locally efficient causes acting at the control surface. Scott's mantra is that "nonlinear phenomena are those for which the whole is greater than the sum of its parts".
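For reference, the standard bookkeeping that goes with these two terms (a textbook control-volume balance, not anything specific to Scott) says that the rate of change of any quantity inside the control volume equals what flows in across the control surface plus what is generated inside:

$$\frac{d}{dt}\int_{CV}\rho\,\phi\,dV \;=\; -\oint_{CS}\rho\,\phi\,(\mathbf{v}\cdot\mathbf{n})\,dA \;+\;\int_{CV}s_{\phi}\,dV$$

On the orthodox reading every term on the right-hand side is a local flux or source, which is why such constraints count as "locally efficient causes"; Scott's contention is that for some nonlinear systems this local bookkeeping underdetermines which outcome actually occurs.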

I actually think Scott's paper is a good one for understanding what might possibly be construed as downward causation, because it's much more clearly written, I think, than stuff by philosophers such as Bitbol. At least Scott is a scientist and brings that with him in his writing. Nevertheless, I disagree with Scott's conclusions, since they would require the nonseparability of classical physics, his views disagree with mainstream science on local causation, and he also misquotes Emmeche regarding downward causation. So there are a lot of problems with his paper and it isn't highly referenced.
 
  • #227
ThomasT said:
Constraints, ie., limitations wrt degrees of freedom of evolution, can also be a function of a fundamental dynamic, and bounded systems emerging from that fundamental dynamic. Can't they?
Q_Goest said:
I'm not sure what you mean by that.
What is it that you're not sure about the meaning of?
 
  • #228
ThomasT said:
Constraints, ie., limitations wrt degrees of freedom of evolution, can also be a function of a fundamental dynamic, and bounded systems emerging from that fundamental dynamic. Can't they?
What is it that you're not sure about the meaning of?
Evolution: I assume you mean the time evolution of a system?
What is a dynamic and what makes a dynamic fundamental?

Do you mean: Constraints are limitations imposed on a system? Certainly, we can say that constraints limit the degrees of freedom of a system. Those are not the meanings implied by Alwyn Scott and apeiron for example.
 
  • #229
Q_Goest said:
Evolution: I assume you mean the time evolution of a system?
What is a dynamic and what makes a dynamic fundamental?
For example, the propagation of a disturbance in a medium. Any medium. It seems to me that there is a general quality wrt this that holds for all disturbances in all media wrt all scales. There's a simple mathematical equation that describes this. And, I wonder if this might be a fundamental dynamic.
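(A natural candidate for the "simple mathematical equation" referred to here, stated for concreteness since the post does not write it out, is the classical wave equation

$$\frac{\partial^2 u}{\partial t^2}\;=\;c^2\,\nabla^2 u$$

whose spherically symmetric solutions are exactly the omnidirectionally expanding wavefronts described in the posts that follow.)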

Q_Goest said:
Do you mean: Constraints are limitations imposed on a system? Certainly, we can say that constraints limit the degrees of freedom of a system. Those are not the meanings implied by Alwyn Scott and apeiron for example.
I mean that any constraints that arise and exist, ie., emergent and bounded, ie., persistent systems/objects, are a function of innumerable iterations of, and adherence to, the fundamental wave dynamic.
 
  • #230
I don't know if I'd call the propagation of a disturbance in a medium a "fundamental dynamic" but certainly I understand what you mean. Propagation of pressure waves in a fluid (liquid or gas) or the propagation of sound waves through solid objects is well understood, and the propagation of a wave of some kind through a medium seems to be fundamental, with the exception of radiation, which propagates through a vacuum. So if something propagates without the need for a medium, would we say that propagation through a medium is a dynamic but not a fundamental one?

Regardless, the disturbance and subsequent propagation in a medium and the equations that we use to describe the complex wave interactions that are set up fall into the category of weak emergence as defined by Bedau for example. For there to be some kind of downward causation, we would need to say that the propagation of causes is necessary but insufficient to determine how that propagation persists in that medium. Robert Bishop (apeiron pointed to his paper earlier), for example, suggests that the propagation of intermolecular forces, including heat transfer and gravitational fields, is necessary but insufficient to describe Rayleigh–Bénard convection, especially Bénard cells. The constraining condition for this type of convection imposes a temperature or heat flux on two parallel plates with a fluid between them. The constraints then must somehow add to the phenomena in a way which is above and beyond what can be described assuming only local intermolecular forces are causing the cells to set up. Clearly, we don't need to add anything else to study the phenomena, and Bishop doesn't offer any suggestion as to what should be added. So Bishop, for example, would disagree with you that the "constraints that arise and exist, ie., emergent and bounded, ie., persistent systems/objects, are a function of innumerable iterations of, and adherence to, the fundamental wave dynamic", because Bishop would say the wave dynamic is necessary but insufficient to determine the phenomena of Rayleigh–Bénard convection. Clearly, Bishop is out on a limb with this.
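For concreteness (standard fluid-dynamics background, not something stated by Bishop or in the post above): the constraining condition for Rayleigh–Bénard convection is usually summarized by a single dimensionless control parameter, the Rayleigh number,

$$Ra \;=\; \frac{g\,\beta\,\Delta T\,d^{3}}{\nu\,\kappa}$$

where $\Delta T$ is the temperature difference across the plate gap $d$, $\beta$ the thermal expansion coefficient, $\nu$ the kinematic viscosity and $\kappa$ the thermal diffusivity; convection cells set in only once $Ra$ exceeds a critical value (roughly 1708 for rigid plates), which is the precise sense in which the boundary conditions "constrain" the local fluid dynamics.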
 
  • #231
Q_Goest said:
But the separability principle is clearly correct. It is used to derive all manner of higher-level laws and is taught in one form or another in all college and university courses on classical physics. Classical physics is separable.

The separability issue here comes at the level of models of causality itself, not simply state descriptions of objects.

So the systems claim is that objects separate into local and global kinds of cause. You have upward constructing degrees of freedom and downward forming global constraints.

The matching reductionist claim would be that there is only upwardly constructing degrees of freedom. Everything reduces to material/efficient cause. So all that happens at higher levels is the emergence of larger scales of material causality - new kinds of composite substances with their new kinds of properties (like liquidity).

In hierarchy theory, the difference is reflected in the distinction between subsumptive and compositional hierarchies - http://en.wikipedia.org/wiki/Hierarchy#Containment_hierarchy

Classical physics deals with the systems aspect of causality in an opaque way. It separates what is going on into initiating conditions and dynamical laws. So you have the local atoms and their global organising constraints. But both these necessary aspects of a system just "exist" in unexplained fashion. They don't develop into a persisting relationship, as argued by a systems view based on an interaction between what has become separated.

At the "fundamental" level, the classical view seems to treat both initiating conditions and laws as ontic - the fundamental laws of nature really exist (somewhere, in the mind of god perhaps, or Platonically). But then taking the compositional view of hierarchical complexity, the initiating conditions and dynamic laws for higher levels of objects becomes merely epistemic - a convenient descriptive impression rather than a reality.

A systems view is instead - according to some current developments - pansemiotic because it argues that even the fundamental level of existence is "ontically epistemic", in the sense used by Peirce and others (Bitbol, for instance, is developing on the autopoiesis of Varela). Nothing actually "exists". Everything has to emerge via development (that is via the hierarchical separation that allows dichotomistic interaction).

The further levels of organisation then occur via subsumptive interpolation - they develop nested within the whole, not constructed as another level upon some prior foundation.

Again, this shows how in the systems view, things are certainly separated (dichotomised) and yet also inseparable (they continue to dynamically interact).

Classical physics works because the universe has grown so large and cold as a system that its scales have become semiotically isolate. An atom is so different in scale from what it constructs that its emergent nature becomes a coarse grain irrelevance.

Perturbations that cross these chasms of scale are always possible. An atom can spontaneously decay. Which could be a surprise to what it forms. It could disrupt some biological object like a microtubule. Likewise, a cosmic ray might strike from the other direction of scale.

But generally, the differences in scale are so great that the classical view can treat a system like a microtubule as being composed of solid material/efficient causes - and in equally uncomplicated fashion, decomposable back into them. Any laws created to describe microtubule behaviour are just emergent descriptions, epistemic glosses, so can be discarded at no cost to the ultimate laws of nature. Nothing valuable is being chucked out.

So there are two models of nature, and of causality here. One to deal more simply with developed systems (like a cold, large, old universe). But a broader view of causality, of nature, is needed to talk about the development of systems - such as we were doing here with the OP, the emergence of the universe as an "object".

Classical physics is already known to break down at the extremes of scale - the global scale of GR and the local scale of QM. So I don't see why we should be constrained by classical notions of causality in this discussion.
 
  • #232
"Propagation of a wave through a medium" is maybe too specific, but the basic concept of energy transfer is at the root of all dynamics. I suppose in the case of electromagnetic radiation, one could argue that space is the medium.
 
  • #233
Q_Goest said:
If he suggests that the lower-level facts are insufficient to define higher levels, then, at least if he's talking about classical physics, he would disagree with Bedau. I would agree with apeiron: Bitbol probably rejects what Bedau has argued regarding weak emergence.

Sorry, you guys are right. I should have read the Bedau paper more closely.
 
  • #234
ThomasT said:
This is the way I currently view it. With the reduction being toward ever more general, ie., fundamental, dynamics. In this view, the microcosmos isn't any more fundamental than the macrocosmos, because ontology isn't what's fundamental. Dynamics is.

If there's an inherent or fundamental dynamic, then determinism is fundamental, not emergent.

I think it depends on how one defines quantum nonlocality.

There's absolutely no physical evidence supporting the assumption of nonlocality. It's just based on interpretation. Anyway, I am an admitted ignorant pedestrian wrt these considerations, but I don't think it should be considered a great mystery that we can't predict certain phenomena. What is actually happening in the underlying reality is, of course, a mystery. But the fact that modern science, barely 100 years old, still can't predict lots of stuff isn't, to me, very surprising or even important.

That is, I don't think that some complicated systems view, or whatever, is necessary to establish the fact of our relative ignorance wrt a definitive qualitative apprehension of underlying ontologies and processes.

And thanks for your, and others', feedback to what must seem like a very simplistic view and questions ... from me.

Ignorant pedestrian? Simplistic? At least you are able to offer your view, and it is readily understood - not wrapped in ever-increasing cycles of complexity that get no one any closer to anything of substance. Oh, and BTW, an admission of ignorance puts you way ahead of some others.

Any intelligent fool can make things bigger, more complex.
It takes a touch of genius - and a lot of courage - to move in the opposite direction.
(Albert Einstein)
 
  • #235
Q_Goest said:
Evolution: I assume you mean the time evolution of a system?
Yes.
Q_Goest said:
What is a dynamic ...
Eg., an expanding wavefront/waveshell. Disturbances tend to move omnidirectionally away from their source unless somehow constrained.
Q_Goest said:
... and what makes a dynamic fundamental?
Some behavioral characteristic that's operational on the very largest to the very smallest scale. A dynamic that pervades and permeates the whole of reality.

Q_Goest said:
Do you mean: Constraints are limitations imposed on a system?
The dynamic, eg. an expanding wavefront/waveshell -- ie., the tendency for disturbances to move omnidirectionally away from their source, defines certain constraints or limitations on possible degrees of freedom. Then there's the consideration of the topology of the medium in which the disturbance is propagating, which entails more constraints. Then there's the consideration of interaction with other disturbances, which entails more constraints. Then there's the emergence of more or less persistent, bounded complex wave structures (ie., particulate matter), and the emergence of a hierarchy of particulate media.

And wrt all media, the tendency for disturbances to propagate omnidirectionally (ie., in the 'direction' of a presumed universal isotropic expansion), is evident.

Q_Goest said:
Certainly, we can say that constraints limit the degrees of freedom of a system. Those are not the meanings implied by Alwyn Scott and apeiron for example.
I'll have to look this up.
 
  • #236
Q_Goest said:
I don't know if I'd call the propagation of a disturbance in a medium a "fundamental dynamic" but certainly I understand what you mean.
Suppose it's refined to refer to the tendency of all disturbances to propagate omnidirectionally away from their source?

Q_Goest said:
... if something propagates without the need for a medium, would we say that propagation through a medium is a dynamic but not a fundamental one?
I'm not aware of any mediumless propagation. That is, while I realize that, eg., no electromagnetic medium has been detected, it nonetheless seems, from the observation that em radiation seems to behave very much like radiating wavefronts in water and air, more reasonable to assume that light is propagating in some sort of medium, as opposed to propagating in ... nothing.

Wrt your question, refer to the preceding statement. Does the tendency of disturbances to propagate omnidirectionally away from their sources seem like it might be called a fundamental dynamic?

Q_Goest said:
Regardless, the disturbance and subsequent propagation in a medium and the equations that we use to describe the complex wave interactions that are set up fall into the category of weak emergence as defined by Bedau for example.
Yes, any interactions, topological anomalies, etc. would entail emergent (higher order systems, scale/regime specific interactional/organizational rules, ie., scale/regime specific dynamical tendencies) behavior. But the fundamental dynamic, the tendency for any disturbance in any medium to propagate omnidirectionally away from its source, would still ultimately determine this. Which is not to say that the behavior of emergent systems could ever actually be calculated via this fundamental dynamic. It's a, more or less, metaphysical view, an unfalsifiable assumption. But, nonetheless, one based on observation of the way the world actually behaves. And assuming it would solve a few otherwise somewhat perplexing problems ... such as the arrow of time, the apparent very large scale isotropic expansion of the universe, etc.

As to whether such an assumption would answer the question "why anything at all?", I don't think so, because it leaves open the question of "why is there a fundamental dynamic?".

I have to look up what you said about Bishop and Rayleigh–Bénard convection, etc.
 
  • #237
By the way, I apologize for my questions/statements in that I don't want to derail the ongoing discussion -- which discussion would certainly facilitate improvement wrt one's command of concepts and considerations that it has involved.
 
  • #238
wuliheron said:
If you ask a Zen master why there is something rather than nothing, he might hit you over the head with a stick.

Seems you may have been right on that! :rolleyes:

Then on the street in Greenwich Village, I ran into a Zen Buddhist scholar who had been introduced to me once at a cocktail party as an authority on mystical matters. After a little chitchat, I asked him -- perhaps, in retrospect, a bit precipitately -- why there is something rather than nothing. He tried to bop me on the head. He must have thought it was a Zen koan.

http://dbanach.com/holt.htm
 
  • #239
The "why anything" question has extra force if cosmology can show the universe/multiverse/whatever in fact had a beginning. If reality was simply past-eternal, there would be more reason to shrug a shoulder over its "cause". But if reality once "wasn't" in some scientifically-supported sense, then the "why anything" question obviously becomes more pressing.

Alex Vilenkin continues to pursue the relevant proofs to show reality (at least in the crisply developed way we know it) can't be past-eternal. The New Scientist covered (pay-walled) his most recent talk - http://www.newscientist.com/article/mg21328474.400-why-physicists-cant-avoid-a-creation-event.html

But anyway here is a summary...

Vilenkin discussed three models for an eternal universe in his presentation, describing why each cannot deliver on what it promises. The first is Alan Guth's eternal inflation model, which proposes eternally inflating bubble universes within a multiverse that stretches both forward and backward in time. In 2003 Vilenkin and Guth (with Borde) showed that the math for this model will not work because an average expansion rate greater than zero forces the spacetime to be past-incomplete. Speaking of the inflationary multiverse, Vilenkin said "it can't possibly be eternal in the past," and that "there must be some kind of boundary."

The second cosmological model was the cyclical model, which proposes that the universe goes through an eternal series of contractions and expansions – our Big Bang being merely the latest in an eternal series. Vilenkin shows that this model cannot extend infinitely into the past either, because disorder would accumulate with each cycle. If the universe had been going through this process eternally, we should find ourselves in a universe that is completely disordered and dead. We do not; hence a cyclical universe cannot extend infinitely into the past.

The final cosmological model Vilenkin deconstructed is the cosmic egg model. On this model the universe exists eternally in a steady state, but then it "cracked", resulting in the Big Bang. The problem with this model is that quantum instabilities would not allow the "egg" to remain in a steady state for an infinite amount of time. It would be forced to collapse after a finite amount of time, and thus cannot be eternal.
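For reference, the 2003 Borde–Guth–Vilenkin theorem invoked for the first model (the first arXiv paper linked below) can be stated roughly as follows: if the expansion rate averaged along any timelike or null geodesic is positive,

$$H_{\mathrm{av}}\;\equiv\;\frac{1}{\tau_f-\tau_i}\int_{\tau_i}^{\tau_f} H\,d\tau\;>\;0$$

then that geodesic is past-incomplete, so an inflating spacetime cannot be extended to arbitrarily early times and must have a past boundary.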

And here are two of those papers...

http://arxiv.org/pdf/gr-qc/0110012v2.pdf
http://arxiv.org/pdf/1110.4096v4.pdf
 
  • #240
apeiron said:
The "why anything" question has extra force if cosmology can show the universe/multiverse/whatever in fact had a beginning.
I don't see how it can ever, definitively, show this. It seems to me that this sort of consideration is always going to depend on unfalsifiable assumptions. Not to say that they might not be very good assumptions based on all the currently available evidence, but unfalsifiable nonetheless.

But I do very much like your statement, in a previous post, that considering/discussing the thread question can have lots of positive effects wrt the depth and breadth, the sophistication, of the concepts held and presented by those doing the considering/discussing.

apeiron said:
If reality was simply past-eternal, there would be more reason to shrug a shoulder over its "cause". But if reality once "wasn't" in some scientifically-supported sense, then the "why anything" question obviously becomes more pressing.
Well, yes. But I don't see how science can ever support or falsify the assumption that before some prior time there wasn't ... anything.

apeiron said:
Alex Vilenkin continues to pursue the relevant proofs to show reality (at least in the crisply developed way we know it) can't be past-eternal. The New Scientist covered (pay-walled) his most recent talk - http://www.newscientist.com/article/mg21328474.400-why-physicists-cant-avoid-a-creation-event.html

http://arxiv.org/pdf/gr-qc/0110012v2.pdf
http://arxiv.org/pdf/1110.4096v4.pdf
Thanks for the links.
 
  • #241
alt said:
Ok - let's go with 'intent or lack of intent', and extrapolate your examples.[...]
Both cases are legitimate. Intent from lack of intent. Lack of intent from intent.

If you accept the scientific position, can you really ascribe any special quality to your intentions ? Are they not a result of natural forces - merely an extension of the same principles that govern the raindrop causing the pebble to fall, causing the avalanche ?
You are here mixing causation and intention. (I state what we both understand for clarity).
I see no paradox nor contradiction here. Indeed for intention to manifest and have meaning one's actions need to be able to cause the effect which is the intended goal... at least in so far as it can significantly increase the likelihood of the desired outcome. Indeed for will to exist and have meaning there must be a mechanism of observation, modeling of cause and effect to predict, and power to act.

But there is a part of your examples which I think misses the mark. A spontaneous event may trigger the activity of an intention, but the intention may previously exist. The rescue squad were trained, prepared and positioned before the avalanche occurred. One may argue that the intention preceded the instigating trigger. Intent needn't invoke omnipotence and must, if it is to be actualized, account for and react to circumstance.
Is there something special about our intentions ? Intelligence perhaps ? Caused by an unintentional, unintelligent Big Bang (no God) ? Or was it intelligent and intentional (God) ? Or don't we know, are not sure ? Bringing it back to the ultimate question - why anything at all.
Yes, intent requires some form of "intelligence" in so far as it must invoke expectations of the effects of acts. It is an emergent property of living organisms. Now we can speak loosely of intent on a somewhat lower level and get into a very grey area. We often speak of the purpose of, say, the shape of a finch's beak or some other genetic characteristic of an organism. Here we are at a level of "quasi-intent" where there is no mind (one may assume for argument's sake) behind the design but there is information processing in the biology of genetic reproduction and evolution. The beak shape is in one sense accidental and in another sense purposeful. We need a distinction in the language to handle this level. Say "quasi-purpose" and "quasi-intent".

It is instructive to look at the thermodynamic environment in which we see life existing. We have Earth sitting with a high temperature sun nearby and a low temperature universe into which to radiate. We thus have a large flux of (Helmholtz) free energy through the system. This allows the emergence of spontaneous self organizing systems. It feeds heat engines which power refrigeration effects (formation of intricate crystalline structures, distillations of fresh water, chemical separation of elements, salt flats and ore deposits, ...)
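A back-of-the-envelope way to see the size of this free-energy flux (my own illustrative numbers, not from the post): the Earth absorbs sunlight that was emitted at roughly $T_{\mathrm{sun}} \approx 5800$ K and re-radiates the same power at roughly $T_{\mathrm{Earth}} \approx 255$ K, so per joule of energy processed it exports entropy of about

$$\Delta S \;\approx\; \frac{1}{T_{\mathrm{Earth}}}-\frac{1}{T_{\mathrm{sun}}} \;\approx\; \frac{1}{255}-\frac{1}{5800} \;\approx\; 3.7\times10^{-3}\ \mathrm{J\,K^{-1}}$$

and it is this continual entropy export that gives local, ordered structures the thermodynamic room to form and persist without violating the second law.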

Self organizing systems have an emergent causal structure. In the presence of free-energy flux they cause replication of their organized structure. No intent here but a different level of description for cause and effect. We see growth of crystals and quasi-crystals, propagation of defects in these, and similar condensed matter phenomena.

It is not so much that a specific organized outcome is caused as that, over time and many random accidental effects, those which further the organization are selected out as more resilient against reversal. (The clump of atoms which accidentally lands in alignment with the crystalline structure is less likely to re-dissolve, by better transmitting heat into the crystal and down to the cold point where it began to form.)

Within this sea of self-organizing systems one presumes organisms emerged able to encode and replicate information about how they behave physically. Now one has a new level of causation where the genetic structure causes the behavior and the behavior is selected for survival. One has "quasi-purpose" and "quasi-intent" in the form of selection from large numbers of variations for the most favorable traits. It is the proverbial billions of monkeys tapping on typewriters, except that those who fail to type something sensible get culled.
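A toy sketch of this cumulative-selection point (a classic "weasel"-style demonstration; my own illustration, with an arbitrary target string and parameters): random typing alone would need of the order of $27^{28}$ attempts to hit a 28-character phrase, but culling all variants except the closest each generation gets there in a tiny number of steps.

Code (Python):
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    # each character has a small chance of being retyped at random
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

def score(s):
    # how much of the string is "sensible", i.e. matches the target
    return sum(a == b for a, b in zip(s, TARGET))

parent = "".join(random.choice(ALPHABET) for _ in TARGET)  # pure monkey-typing start
generations = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]  # many accidental variants
    parent = max(offspring, key=score)                # cull all but the best one
    generations += 1

print(generations)  # typically of order a hundred generations, not 27**28 attempts

The point is only that selection plus retention converts blind accident into apparent purpose ("quasi-intent") without any intending mind in the loop.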

There are two more points of emergence; the first brings about intentional, purposeful behavior. From flatworms to lions, tigers, and bears, you have an organ dedicated to perception of the environment and to triggering actions based on environmental cues. You have a rudimentary mind which encodes not just behavior but perception. In there somewhere must be a modeling function adapting a predictive mechanism, i.e. learning and changing behavior based on experience. These entities can be said to hold intent. The lion is indeed trying to eat me and the flatworm is in fact intending to move and find food.

At some level, possibly the lion, possibly only bigger brained animals such as primates and some others, possibly only the human mind, there is conscious intention. Instead of only learning cause and effect from our experience in a reactive way, we abstract and hypothesize constructing theories of how the world works and so extrapolating upon experience. I've certainly seen examples of parrots and chimps doing this but not universally, only specific trained examples. I suspect they are at the cusp where such emergent behavior is possible but exceptional among individuals.

(By the same token I've seen humans who seem incapable of anything other than reactive "animal" behavior.)

I personally think that the word 'accidental', and its fluid use, goes to the heart of the context, and the point (the OP) of this thread.
Hmmm... 'accidental' and also 'spontaneous' with some "accidental" confusion of the two meanings.

Identifying levels we may ask at what levels the meanings of words like "spontaneous" and "accidental" change their definition.

  1. Physics & Thermodynamics
  2. Chemistry & Condensed matter physics
  3. Self-organizing systems (specialized chemistry pre-biology, non-equilibrium thermo.?)
  4. Biology
  5. Behavioral (animal) Psychology
  6. Human Psychology/Philosophy of Thought (including epistemology, logic, etc., and the philosophy of science, including this list)
Does that sound about right?
I'd say questions of intent and purpose don't have any meaning below the level of biology, and should be "quasi-" qualified at the level of biology. We can then distinguish forms of intent at the last two levels, e.g. the distinction between first-degree murder, second-degree murder, and manslaughter (conscious intent, reactive intent, and no intent but responsibility for causation).

One may ask how 'spontaneous' is defined at the base level vs. the 2nd and 3rd levels. In classical physics there is no 'spontaneous'; we have a clockwork determinism between past and future states of reality. Quantum mechanics modifies the issue a bit, and there are arguments about interpretation, but we can qualify, e.g., spontaneous vs. stimulated emission. There is room for invoking the term and giving it meaning.

Note however that at the next level 'spontaneous' is quite distinctly meaningful. We can speak, even in the classical domain, of spontaneous reactions, such as condensation or :wink: spontaneous human combustion. We understand, when speaking of this at the level of chemistry, that we are speaking of random external causation and not the type of indeterminate causality invoked when considering quantum physics.

It changes further at higher levels. Certain self-organization phenomena are "inevitable", with a spontaneous time of instigation. That's true even of critical phenomena in chemistry/condensed-matter physics, where phase changes are the rule and super-critical phases are exceptional.
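As an aside on the physics-level meaning: in the textbook Einstein-coefficient treatment (my addition, not part of the post above), the ratio of spontaneous to stimulated emission for an atom sitting in thermal radiation at temperature T is

$$\frac{A_{21}}{B_{21}\,\rho(\nu)} = e^{h\nu/k_B T} - 1,$$

so at optical frequencies and ordinary temperatures spontaneous emission dominates overwhelmingly, while in the microwave regime stimulated emission takes over. The point is just that at this level 'spontaneous' already has a precise, calculable meaning.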

This is how I see the meanings of the words parsed at different levels. Well, I'm talked out and I've got to get ready for school. I apologize for being long winded.
 
  • #242
jambaugh said:
... I apologize for being long winded.

Not at all. Thank you for your informative and 'to the point' reply. I will read it with much interest and might have some further comments/questions later, if that's OK.
 
  • #243
alt said:
Ignorant pedestrian? Simplistic? At least you are able to offer your view, and it is readily understood - not wrapped in ever-increasing cycles of complexity that get no one any closer to anything of substance. Oh, and BTW, an admission of ignorance puts you way ahead of some others.

Any intelligent fool can make things bigger, more complex.
It takes a touch of genius - and a lot of courage - to move in the opposite direction.
(Albert Einstein)
Thanks for the pep talk, alt. :smile:

Truth be told, the reason I try so hard to simplify things is that I'm not capable of navigating through complexity. I'm a panicky guy. Keep it simple ... please. :smile:

And now I think I should just fade once again into the background and let the more informed members, you included, continue with the discussion.
 
  • #244
jambaugh said:
It is not so much that a specific organized outcome is caused as that, over time and many random accidental effects, those effects which further the organization are selected out as more resilient against reversal. (The clump of atoms which accidentally lands in alignment with the crystalline structure is less likely to re-dissolve, because it transmits heat better into the crystal and down to the cold point where the crystal began to form.)

Descriptions of worlds constructed in purely bottom-up fashion are all very well, but they remain vulnerable to the realisation that worlds are fundamentally incomputable.

Here is a recent paper on the incomputability issue and its connection to the "why anything" question -

INCOMPUTABILITY IN NATURE - Barry Cooper, Piergiorgio Odifreddi
To what extent is incomputability relevant to the material Universe? We look at ways in which this question might be answered, and the extent to which the theory of computability, which grew out of the work of Gödel, Church, Kleene and Turing, can contribute to a clear resolution of the current confusion.
http://www1.maths.leeds.ac.uk/~pmt6sbc/preprints/co.pdf

(A gloss just appeared in Nature - http://www.nature.com/nature/journal/v482/n7386/full/482465a.html)

Cooper is talking about how systems self-organise out of vagueness and the need for a new view of mathematics to be able to model that. Maths is based on notions of definability and rigidity - the basis of reductionist computability - and yet we know this is an unreal idealisation (useful, sure enough, but metaphysically untrue).

I think Cooper offers another good way of looking at the question of the self-creation of the universe. We can say it is about the emergence of computability! In the beginning was vagueness - the incomputable. And then by way of a self-organising phase transition, this gave birth to all that was computable.

This is a very "material" or thermodynamic way of looking at maths. The usual approach to maths is immaterial - unconstrained by material limits. As with Bedau's argument for weak emergence, infinite computation is presumed. Big calculations are fine - even if they are so big that they would quickly bust the limits of any material attempt to compute them.

But many are now starting to object to this unrealistic view of computation - the kind that seems happy with non-linear terms that expand faster than the underlying computation trying to keep up with them. If you presume infinite computational resources, then the distinction between polynomial time and exponential time just ceases to be a problem so far as you are concerned.
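To put a number on why that distinction is material rather than mere bookkeeping, here is a toy illustration (my own, not taken from the papers cited below):

Code:
SECONDS_PER_YEAR = 3.2e7
OPS_PER_SECOND = 1e18        # a generous exascale machine
MAX_OPS = 1.4e10 * SECONDS_PER_YEAR * OPS_PER_SECOND
# ~4.5e35 operations if such a machine ran for the age of the universe

for n in (10, 50, 100, 300):
    poly = n ** 3            # a typical polynomial-time cost
    expo = 2 ** n            # an exponential-time cost
    print(f"n={n:>3}  n^3={poly:.1e}  2^n={expo:.1e}  "
          f"exponential still feasible: {expo < MAX_OPS}")

# n^3 stays trivially feasible throughout; 2^n blows past everything the
# machine could ever do somewhere between n=100 and n=300. Presuming
# "infinite computation" quietly assumes that difference away.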

See these papers questioning such blithe reasoning...

Why Philosophers Should Care About Computational Complexity - Scott Aaronson
http://arxiv.org/PS_cache/arxiv/pdf/1108/1108.1791v3.pdf

The implications of a holographic universe for quantum information science and the nature of physical law - P.C.W. Davies
http://www.ctnsstars.org/conferences/papers/Holographic%20universe%20and%20information.pdf

Some mathematical biologists have been arguing this for a long time of course...

https://www.amazon.com/dp/023110510X/?tag=pfamazon01-20

But Cooper shows how mathematicians are facing up again to the deep issue of incomputability and its implications for how we even conceive reality (and its origination).

On the incomputability of global constraints...

At the same time, new science is often based on situations where the traditional reductions are no longer adequate (chaos theory being particularly relevant here). As one observes a rushing stream, one is aware that the dynamics of the individual units of flow are well understood. But the relationship between this and the continually evolving forms manifest in the stream's surface is not just too complex to analyse - it seems to depend on globally emerging relationships not derivable from the local analysis.

On the common trick of simply assuming the incomputability of vagueness to be computable "somehow" - given infinite material resources...

Quantum indeterminacy presents little problem for such an outlook. One either expects an improved scientific description of the Universe in more classical terms, or, more commonly, one takes quantum randomness as a given, and superimposes more traditional certainties on top of that.

The latter perspective is also common to world views that make no assumptions about discreteness. It has the advantage (for the Laplacian in quantum clothing) of incorporating incomputability in the particular form of randomness, without any need for any theory of incomputability. The view is that the origins of incomputability may be theoretical in mathematics, but not in the real world.

On computability acting as a downward constraint on incomputability so as to produce a "well-formed" universe...

Our basic premise, nothing new philosophically, is that existence takes the most general form allowed by considerations of internal consistency. Where that consistency is governed by the mathematics of the universe within which that existence has a meaning.

The mathematics leads to other scientifically appropriate predictions. In particular, there is the question of how the laws of nature immanently arise, how they collapse near the big bang singularity, and what the model says about the occurrence or otherwise of such a singularity.

What we have in the Turing universe are not just invariant individuals, but a rich infrastructure of more general Turing definable relations. These relations grow out of the structure, and constrain it, in much the same sort of organic way that the forms observable in our rushing stream appear to. These relations operate at a universal level.

The similarities of Cooper's arguments with those of Peirce, or the pre-geometry of Wheeler, are obvious. But the computability question, coupled with the emerging information-theoretic view of reality that we see both in holographic approaches to cosmology and in dissipative-structure approaches to material descriptions generally, offers a new paradigm for tackling the "why anything" question.
 
