What is the special signal problem and how does it challenge computationalism?

  • Thread starter Q_Goest
  • Start date
  • Tags
    Signal
In summary, the conversation discusses the "special signal problem" in computationalism, where a brain in a vat receives identical signals to those it would receive in a person's head. The brain is then separated into smaller sections, and the problem arises as to whether or not the brain can still experience consciousness without a specific "special signal". The conversation also introduces the idea of counterfactual sensitivity and the need to understand why this particular signal is crucial to consciousness. The conversation references a thought experiment by Arnold Zuboff and suggests that any theory of mind must address the importance of counterfactual alternatives.
  • #36
Q_Goest said:
(...) Once we dispense with counterfactuals (because they're simply wrong), computationalism predicts panpsychism, and panpsychism is unacceptable. (...) Putnam is retired now, so Bishop has taken up his flag, so to speak, and continues to work on advancing Putnam's argument.


Here is the response from David Chalmers to Putnam: http://consc.net/papers/rock.html
Does a Rock Implement Every Finite-State Automaton? said:
If Putnam's result is correct, then, we must either embrace an extreme form of panpsychism or reject the principle on which the hopes of artificial intelligence rest.


Here is a response from Mark Bishop to Chalmers: http://docs.google.com/viewer?a=v&q...OWBL2&sig=AHIEtbTl0sSFE9SFNzqkg0u6CSWaJ3523Q

Bishop asks what happens if a robot R1, claimed to be "conscious", is transformed step by step into a robot Rn by deleting the counterfactual states at each step (R1, R2, R3 ... Rn-1, Rn): how does phenomenal perception change through the stages? His only argument, before concluding that the counterfactual hypothesis is wrong, is that "it is clear that this scenario is implausible".

Basically, from such an example, which illustrates the fading qualia argument by Chalmers (http://consc.net/papers/qualia.html; it has weak points too, but that is another discussion), the following results are possible:
1) Every robot has the same degree of mentality (M):
-1.1) M == 0 -> Functionalism is wrong, it is reduced to behaviorism.
-1.2) M != 0 -> Functionalism could only exist as panpsychic theory.
2) Every robot has a different degree of mentality -> Counterfactual states don't play a causal role by themselves, but somehow removing them changes the degree of consciousness -> See 1.2.
3) R1 is conscious, while the others are not -> Non-triviality condition holds.
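Bishop's stepwise deletion can be pictured with a toy automaton. The sketch below is entirely my own construction (the machine and names are invented, not from Bishop's paper): pruning the "counterfactual" states, those never visited on the one input history that actually occurs, leaves behavior on that history unchanged, which is exactly why the non-triviality question arises.

```python
# Toy version of the R1 -> Rn transformation: a small finite-state
# machine from which we delete every state not visited while
# processing one fixed input history. (Illustrative sketch only.)

def run(machine, start, inputs):
    """Return the sequence of states visited on the given inputs."""
    state, trace = start, [start]
    for sym in inputs:
        state = machine[(state, sym)]
        trace.append(state)
    return trace

# R1: four states A-D with transitions on inputs 0 and 1.
R1 = {
    ('A', 0): 'B', ('A', 1): 'C',
    ('B', 0): 'A', ('B', 1): 'D',
    ('C', 0): 'D', ('C', 1): 'A',
    ('D', 0): 'C', ('D', 1): 'B',
}

history = [0, 0, 0, 0]           # the one input that actually occurs
trace = run(R1, 'A', history)    # visits only A and B

# Rn: drop every transition out of a state not visited on this history.
visited = set(trace)
Rn = {k: v for k, v in R1.items() if k[0] in visited}

# On the actual history, R1 and Rn are behaviorally identical...
assert run(Rn, 'A', history) == trace
# ...but Rn can no longer process a counterfactual history like
# [1, 1]: the transition ('C', 1) has been deleted.
```

Whether anything mental could differ between R1 and Rn, given that their actual behavior is identical, is precisely the point in dispute.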

I think both Chalmers and Bishop are looking at one hypothetical thing from different angles, giving arguments in the context of their own views. Only time will tell who was on the right side.
 
Last edited by a moderator:
  • #37
Pythagorean said:
Q_Goest, you still haven't responded to post #15, which has sources and quotes from experts and confronts the physical premise for the thought experiment, which I feel I've demonstrated as flawed, which makes the question not so productive. The strawman is being built for physicalism…
Strawman discussions are Red Herrings. They draw away from the discussion at hand, forcing explanations to be written needlessly. Tin Man arguments are then required to return the Red Herrings to the pond. :frown: Let's refrain from letting strawmen fish for red herrings, otherwise the tin man has extra work to do.

Pythagorean said:
The thought experiment explicitly only applies to one kind of physical system: linear systems (which don't truly exist, but are convenient approximations). I would think this is an important epistemological consideration.
Regarding your references in #15 and nonlinear systems: I remember various discussions around the separability of classical systems taking place on many threads. I remember one comment in particular, not from you, that went something like, "Why is separability so important to consciousness anyway?" Well, that's what we're talking about. Are classical systems separable or not? And the reason separability is important should now be well understood, since we've read Zuboff's paper.

There are those who wish to claim that nonlinear systems are more than the sum of their parts. There is something extra, though exactly what that extra is is never detailed. Perhaps there are new laws of physics created by these systems that guide their evolution over time, like an unseen orchestra conductor imposing his will on the various musicians and guiding the band to play in a phase-synchronized, large-scale integrated fashion. That conductor, operating at an emergent level above the functional organization of the neurons, forms dynamic links mediating the synchrony of the orchestra over multiple frequency bands. The difference between separability and nonseparability concerns just such an orchestra conductor: an emergent set of laws that not only guides the individual pieces but subsumes their behavior. Can any such laws be reasonably predicted? Or is there nothing more than the push and pull of molecules, acting on one another locally?

From Varela’s reference:
But, at the same time, their existence is long enough for neural activity to propagate through the assembly, a propagation that necessarily involves cycles of reciprocal spike exchanges with transmission delays that last tens of milliseconds.
I haven’t read his paper and don’t have time to, but this seems to indicate a propagation of local, efficient causes, not an orchestra conductor. I see no reason to invoke any kind of new physical laws that subsume the laws acting at the local level.

Regardless of what explanation one gives to explain how neurons interact, we must select one of two possible alternatives for that interaction. Either:
1) The interaction is separable.
2) The interaction is not separable.

Computationalism assumes that neuron interaction can be described using classical mechanics, not quantum mechanics. I don’t think anyone really argues that point. So the question can also be reduced to:
1) Classical mechanical descriptions of nature are separable.
2) Classical mechanical descriptions of nature are not separable.

I’d say that if no higher-level physical laws come into being when neurons, or any nonlinear physical system, interact, then there are only local interactions, and those interactions can be expected to be separable:
- to a very high degree of accuracy mathematically (analytically)
- to an exact degree physically (i.e., even if we can’t figure out the math, it seems mother nature can; no one is claiming these systems aren't deterministic)

That is in fact what is normally accepted. If you’d like some references, I’d be glad to share what I feel is pertinent, but I think that would make a wonderful new thread. Seriously, it would be worth forming your thoughts around that and posting a new thread.

I’d also like to point out one more issue in support of the separability of neurons. The other references you provided, such as Hodgkin and Huxley’s famous papers and compartment models of neurons in general, along with computational models such as the Blue Brain project which use those compartment models, all take these nonlinear systems and reduce them to linear ones. The result is a model of how the brain functions to a very high degree of accuracy; at least I’m sure the scientists in charge think so. Similarly, other highly nonlinear systems are studied using the same methods: finite element analysis and computational fluid dynamics are applied to exactly these kinds of problems, and they succeed to a high degree of accuracy. The basic premise that a brain is separable seems pervasive throughout the scientific field. If the brain is separable, it is by definition a sum of parts that can be duplicated without forcing them to interact within the actual brain.

Ok, just one more point... we often allow for brain-in-vat thought experiments. We might even create thought experiments around large numbers of brains in vats, as in the Matrix. Now if brains as a whole can be put into vats and their experiences duplicated, why can't we do the same thing to parts of brains? What's so special about the parts of the brain that isn't true of the brain as a whole?
 
  • #38
Ferris_bg said:
Here is the response from David Chalmers to Putnam ...
Clappin' for Ferris. Thanks for another excellent post. I'm very familiar with all of those papers and I think you've referenced them well. They sit on my desk with many others, highlighted and inked all over.

I don't think rejecting the counterfactual argument leads us into the stone wall that many think it does. In fact, I think it points to a better theory of mind. It's just that no one seems to see the door yet, and our bias for computationalism will be hard to change.
 
  • #40
Lievo said:
I'd love to say this sits on my desk with many others. Unfortunately, the link doesn't work. You're talking about http://www.ncbi.nlm.nih.gov/pubmed/12470628 ?
Works for me, but yes, it's the same one.
 
  • #41
Q_Goest said:
I haven’t read his paper and don’t have time to
Well then, just one minor point until you find the time to read it.

Q_Goest said:
Computationalism assumes that neuron interaction can be described using classical mechanics, not a quantum mechanical one. I don’t think anyone really argues that point.
You'll find many who argue that point, simply because it's plain false. Quantum mechanics is as computable as classical mechanics. As the name indicates, computationalism assumes computability, not computability by classical mechanics only. Penrose explains this very well, and that's why he postulates that the mind must rely on yet-undiscovered quantum laws: he knows that present quantum mechanics does not allow an escape from computationalism.

Gokul43201 said:
Works for me, but yes, it's the same one.
Thanks. The bug seems specific to Chrome.
 
  • #42
Q_Goest,

You agree, I hope, that there's no requirement or law in classical physics that philosophical reductionism or separability apply to all systems. Scientists practice scientific reductionism; a lot of intuition and creativity goes into tying it all together.

I have never heard the claim that the whole must be greater than the sum of the parts for dynamical systems. That's strange. No... the statement (which is demonstrable, not a mere claim) is that the sum of the parts is not equal to the whole.

And here's how it's quantified (forgive my handwriting; didn't feel like TeX'ing it):

F = force of gravity
G = gravitational constant
M = reference mass
m = test mass
r = distance between masses

Here's what the superposition principle says when it holds:
[Attached image: superpos1.jpg, the superposition principle]



In the two examples below, I will show superposition holding for the masses when considering gravitational force. But I will also show that, in the same classical system, the dynamics are not separable: superposition does not hold for the distance between masses.

[Attached image: superpos2.jpg, superposition applied to mass and to distance]


So you see that you cannot separate the dynamics out the way you can the mass (assuming, of course, that r is changing... which it is... which is why Newton couldn't solve the 3-body problem).

There isn't anything fundamentally new about this. No new physics is necessary; unless, of course, you mean the kind of new physics that is being discovered every day already, which you're not paying attention to. Still, nothing fundamental changes; it's just an extension of the old stuff. This is the high-hanging fruit of classical physics that didn't start getting picked until Poincare's geometrical analysis and the invention of computers made numerical solutions tractable.
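The point in the attached equations can be sketched numerically. This is my own minimal check (the constants are invented, not from the attachments): Newtonian gravity F = GMm/r^2 is linear in the test mass m but nonlinear in the separation r.

```python
# Superposition holds for mass but not for distance in F = G*M*m/r**2.
G, M = 6.674e-11, 5.0e10   # illustrative values only

def F(m, r):
    """Magnitude of the gravitational force on test mass m at distance r."""
    return G * M * m / r**2

# Linear in mass: F(m1) + F(m2) == F(m1 + m2) at the same r.
m1, m2, r = 2.0, 3.0, 10.0
assert abs(F(m1, r) + F(m2, r) - F(m1 + m2, r)) < 1e-12

# Nonlinear in distance: F(r1) + F(r2) != F(r1 + r2) for the same m.
m, r1, r2 = 2.0, 10.0, 20.0
assert abs(F(m, r1) + F(m, r2) - F(m, r1 + r2)) > 1e-3
```

The first assertion is exactly the superposition principle; the second is its failure for the variable that actually carries the dynamics.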
 

  • #43
Q_Goest said:
Well, that’s what we’re talking about. Are classical systems separable or not?

This is the big presumption. And people who model complexity - Robert Rosen in Essays on Life Itself, Steven Strogatz in Sync, Scott Kelso in Dynamic Patterns - would say that systems are not in fact separable into their local atoms.

There is both top-down causality and bottom-up. So a complex system cannot be fully accounted for as the sum of its efficient causes.

To avoid confusion here, note that there are two ways of viewing emergence.

1) A collection of atoms produces collective constraints which form an emergent level of downwards causation. So the atoms are what exist, the constraints simply arise.

2) Then there is the strong systems view (following Peirce) where both the local atoms and the global constraints are jointly, synergistically, emergent. So you don't have atoms that exist. They too are part of what emerges as a system develops.

This second view fits an understanding of brain function rather well. As for example with the receptive fields of neurons. A neuron does not fire atomistically. Its locally specific action is shaped up by a prevailing context of brain activity. Its identity emerges as a result of a focusing context. The orchestra analogy is indeed apt.

To anyone who has studied neuroscience, any discussion that assumes neural separability immediately sounds like hokum.

A neuron on its own doesn't even know how to be an efficient cause. It can only fire in a rather unfocused way. You can talk about recreating the global context that tells the neuron how to behave, separating this information off in some impulse cartridge, but you have to realize that this is global information and not itself a collection of atomistic efficient causality.

An analogy: imagine trying to scoop a whorl of turbulence out of a stream with a jam jar. The whorl is indeed nonseparable.
 
  • #44
Please explain what you're doing at the bottom of the second photo.

I'm assuming k = G * M * m

Then you have:

k/r1^2 + k/r2^2 ≠ k/(r1+r2)^2

What are r1 and r2? Explain how you end up with that last equation.

I don't know that it matters though. I doubt our definitions of separability are going to match.
 
  • #45
Q_Goest said:
Please explain what you're doing at the bottom of the second photo.

I'm assuming k = G * M * m

Then you have:

k/r1^2 + k/r2^2 ≠ k/(r1+r2)^2

What are r1 and r2? Explain how you end up with that last equation.

I don't know that it matters though. I doubt our definitions of separability are going to match.

yes on k

I just plugged F(r) into the superposition principle.

F(r1) + F(r2) = k/r1^2 + k/r2^2

F(r1 + r2) = k/(r1+r2)^2

they're not equal, superposition doesn't hold.

I don't know that it matters though. I doubt our definitions of separability are going to match.

I'd think they should still be consistent with the thought experiment. That's my focus: showing how the actions taken in the thought experiment would affect a real physical system. Proponents of the thought experiment seem to be claiming that dynamically decoupling the neurons wouldn't affect it.

Theoretically and experimentally, we know there are physical systems we can't treat independently. We know that group properties aren't always properties of individual particles (temperature, pressure, feedback, force). We happen to model neural systems as such a class of systems. The 3+ body problem can't be reduced to the 2-body problem, and the 2-body problem can't be reduced to a 1-body problem (because a 1-body problem is meaningless: there's no force associated with a single body; you have to bring a test mass along to measure the force of interaction between the two masses).
 
  • #46
Ok, thanks for the correction. Now what are r1 and r2? Since r is the distance between the masses, I still don't see what these are.
 
  • #47
Q_Goest said:
Ok, thanks for the correction. Now what are r1 and r2? Since r is the distance between the masses, I still don't see what these are.

They're two different masses (so two different distances). You can designate them m1 and m2 if you want, I took them to be equal so that the proof would be a little simpler. Even if you distinguish the masses, you still have the same problem:

F(m1+m2,r1+r2) != F(m1,r1) + F(m2,r2)

You can actually see this in the reduced-mass equation, in which the masses themselves get coupled, and then you get:

m1*m2/(m1+m2)

So the resulting acceleration doesn't come from summing the masses; the sum of the masses does not give you the appropriate value for the whole system. The appropriate value (the reduced mass) is actually less than the sum.
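The reduced-mass claim is easy to verify numerically. A quick sketch (the values are arbitrary):

```python
# The reduced mass mu = m1*m2/(m1+m2) is always smaller than either
# mass, and so smaller than their sum: the whole is not the sum.

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

m1, m2 = 3.0, 5.0
mu = reduced_mass(m1, m2)
assert mu < min(m1, m2) < m1 + m2

# Equal masses: mu = m/2, half of one mass, a quarter of the sum.
assert reduced_mass(4.0, 4.0) == 2.0
```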
 
  • #48
Pythagorean said:
They're two different masses (so two different distances). You can designate them m1 and m2 if you want, I took them to be equal so that the proof would be a little simpler.
I'm still not clear on what r1 and r2 are, but I'm going to step out on a limb and say that you're thinking of putting a third mass in with these two, and you're suggesting that r1 is the distance between m1 and the new mass and r2 is the distance between m2 and the new mass. Is that what you mean? (please confirm)

In this case, the gravitational vectors are additive.
Gravitational Field for Two Masses
The next simplest case is two equal masses. Let us place them symmetrically above and below the x-axis: (see link for picture)

Recall Newton’s Universal Law of Gravitation states that any two masses have a mutual gravitational attraction. A point mass m = 1 at P will therefore feel gravitational attraction towards both masses M, and a total gravitational field equal to the vector sum of these two forces, illustrated by the red arrow in the figure.

The Principle of Superposition: The fact that the total gravitational field is just given by adding the two vectors together is called the Principle of Superposition.
Ref: http://galileo.phys.virginia.edu/classes/152.mf1i.spring02/GravField.htm
In fact, given a completely static set of n gravitational bodies, the gravitational field at any point is easily calculable by adding the gravitational contribution of each mass to every point in the field. The problem arises when we allow the masses to move around and the differential equations become unsolvable. But that doesn't make such a system nonseparable. I'm not an expert on the n-body problem by any means, so I can't authoritatively discuss the issues. However, the philosophy of separability in classical mechanics is easy enough to grasp for engineers and scientists not in that particular field. More later.
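The static-field claim can be sketched directly. This toy calculation (masses and positions invented) sums the vector contributions of two static bodies, reproducing the symmetric two-mass example from the linked page:

```python
# Field at a point from static bodies: just the vector sum of each
# body's contribution, exactly as the Principle of Superposition says.
import math

G = 6.674e-11

def field_at(point, bodies):
    """Gravitational field (gx, gy) at `point` from static `bodies`."""
    gx = gy = 0.0
    px, py = point
    for mass, (bx, by) in bodies:
        dx, dy = bx - px, by - py
        r = math.hypot(dx, dy)
        a = G * mass / r**2          # magnitude of this contribution
        gx += a * dx / r             # add the contributions vectorially
        gy += a * dy / r
    return gx, gy

# Two equal masses placed symmetrically above and below the x-axis:
bodies = [(1.0e10, (5.0, 3.0)), (1.0e10, (5.0, -3.0))]
gx, gy = field_at((0.0, 0.0), bodies)
assert abs(gy) < 1e-15   # y-components cancel by symmetry
assert gx > 0            # net pull toward the pair
```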
 
  • #49
Q_Goest said:
I'm still not clear on what r1 and r2 are, but I'm going to step out on a limb and say that you're thinking of putting a third mass in with these two, and you're suggesting that r1 is the distance between m1 and the new mass and r2 is the distance between m2 and the new mass. Is that what you mean? (please confirm)

Remember that k = GMm. M is the reference mass (it's at the origin of my coordinate system). It's not a real mass, but a way to measure the field as if a mass were there. But I'm making it the center of the reference frame: it's at the origin, and you can divide by it to see the field independent of it, since it's a constant (the variables here are the r's in the nonlinear case).
http://en.wikipedia.org/wiki/Test_particle

m becomes m1 and m2, so:
F = GM(m1/r1^2 + m2/r2^2) if the masses are different; m factors out when m1 = m2.

The equation for Newton's gravity is a two-body problem already, but with one of the masses as the center of reference:
F = GMm/r^2

but it can be more complicated than that with a more general reference frame that leads to the reduced mass:
http://en.wikipedia.org/wiki/Two-body_problem

In this case, the gravitational vectors are additive.

Ref: http://galileo.phys.virginia.edu/classes/152.mf1i.spring02/GravField.htm
In fact, given a completely static set of n gravitational bodies, the gravitational field at any point is easily calculable by adding the gravitational contribution of each mass to every point in the field.

I completely agree, and this is actually an impressive and fascinating result (have you read Wigner's "The Unreasonable Effectiveness of Mathematics in the Natural Sciences"? http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html We should be amazed by this!).

But regardless of whether you appreciate the elegance of this, it doesn't make the whole system reducible to one component of the system. The value of the gravitational field isn't the whole story. We also have motion to consider, and the consequences of motion: collisions.

The problem arises when we allow the masses to move around and the differential equations become unsolvable. But that doesn't make such a system nonseparable. I'm not an expert on the n-body problem by any means, so I can't authoritatively discuss the issues. However, the philosophy of separability in classical mechanics is easy enough to grasp for engineers and scientists not in that particular field. More later.

Reduction and separation in physics and engineering (especially in 100- and 200-level courses) are pragmatic, not ontological. Approximations and hand-waving are rampant in both disciplines, and they're perfectly acceptable as long as they give us access to the switches and levers of nature. We will model things as random, or do something to ensure the signal-to-noise ratio is high, to do our best to ignore the nonlinearities.

We often take approximations only to first order, rarely to second order, and the whole point is that they remain linear, so that F(x1) + F(x2) = F(x1 + x2). Then we can just add all the x's together, do the operation (F) once, and get the same result as if we had done each one individually and then added them together. In a complex system, the term on the left is the only meaningful term. You can't add all the objects together, perform the same operation, and get the same result: x1 and x2 are now coupled; they interact with each other.

Example: If different frequencies of electromagnetic waves interacted with each other, we could never divide the spectrum into bands for information transfer as we do. However, since electromagnetic waves ARE separable, because they OBEY superposition, we can stand in the middle of a million intertwining signals and, as long as each one has a unique frequency band, separate them. This is directly because superposition holds! And there are many physical systems for which superposition does not hold!
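The radio example can be sketched numerically. In this toy (frequencies and amplitudes invented), two "stations" are summed into one signal and then recovered separately by correlating against each carrier, which works precisely because the waves superpose linearly:

```python
# Mix two sine "stations" and recover each one by correlation.
import math

N = 1000
t = [i / N for i in range(N)]
f1, f2 = 5, 12   # two station frequencies (integer cycles per window)
mixed = [math.sin(2*math.pi*f1*x) + 0.5*math.sin(2*math.pi*f2*x) for x in t]

def amplitude(signal, f):
    """Amplitude at frequency f, by correlating with sin(2*pi*f*t)."""
    return 2 / N * sum(s * math.sin(2*math.pi*f*x) for s, x in zip(signal, t))

assert abs(amplitude(mixed, f1) - 1.0) < 1e-2   # station 1 recovered
assert abs(amplitude(mixed, f2) - 0.5) < 1e-2   # station 2 recovered
assert abs(amplitude(mixed, 9)) < 1e-2          # empty band stays empty
```

If the medium were nonlinear, the two stations would generate mixing products and this clean recovery would fail.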

This is pretty much the meat of most physics and engineering courses (I've been through a whole undergraduate physics degree and have taken many electrical engineering courses for my interdisciplinary graduate degree). Each department has only one graduate (600-level) class that covers nonlinear techniques... and it's only offered every other year. I hope this gives an idea of how little prevalence nonlinearity has in standard undergraduate physics and engineering courses. When we see nonlinear equations, we approximate them (for example, the simple pendulum) and remove the nonlinearity. That's a great deal of the training of undergraduate physics and engineering: making your system simpler so you can solve it faster and be more productive and efficient with government dollars.

It's not a very epistemological approach...
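The pendulum linearization mentioned above, replacing sin(theta) with theta, can be quantified in a few lines (the thresholds here are mine, chosen to make the point):

```python
# How good is the small-angle approximation sin(theta) ~ theta?
import math

def rel_error(theta):
    """Relative error of approximating sin(theta) by theta."""
    return abs(theta - math.sin(theta)) / math.sin(theta)

assert rel_error(math.radians(1)) < 1e-4   # excellent at small angles
assert rel_error(math.radians(60)) > 0.2   # poor once the angle is large
```

The error grows roughly as theta^2/6, which is exactly why the linearized pendulum is only trusted for small swings.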

to ground this in the literature, here's an abstract:

The reduction of dynamical systems has a rich history, with many important applications related to stability, control and verification. Reduction is typically performed in an "exact" manner - as is the case with mechanical systems with symmetry - which, unfortunately, limits the type of systems to which it can be applied. The goal of this paper is to consider a more general form of reduction, termed approximate reduction, in order to extend the class of systems that can be reduced. Using notions related to incremental stability, we give conditions on when a dynamical system can be projected to a lower dimensional space while providing hard bounds on the induced errors, i.e., when it is behaviorally similar to a dynamical system on a lower dimensional space. These concepts are illustrated on a series of examples.
(emphasis mine)

Approximate Reduction of Dynamical Systems
Paulo Tabuada, Aaron D. Ames, Agung Julius and George Pappas

Proceedings of the 45th IEEE Conference on Decision & Control
Manchester Grand Hyatt Hotel
San Diego, CA, USA, December 13-15, 2006
 
  • #50
Lievo said:
As the name indicates, computationalism assumes computability, not computability by classical mechanics only. Penrose explains this very well, and that's why he postulates that the mind must rely on yet-undiscovered quantum laws: he knows that present quantum mechanics does not allow an escape from computationalism.
Thanks Lievo, I actually learned something here. After reviewing a few definitions of computationalism, I’ve come to the conclusion that computationalism doesn’t necessarily rule out quantum theories of mind. So where I’ve used the term “computationalism” in the OP, I should explain that what I mean by it is classical theories of mind, as in the current paradigm.

The Stanford Encyclopedia of Philosophy provides a very condensed definition of the Computational Theory of Mind (http://plato.stanford.edu/entries/computational-mind/):
Over the past thirty years, it has been common to hear the mind likened to a digital computer. This essay is concerned with a particular philosophical view that holds that the mind literally is a digital computer (in a specific sense of “computer” to be developed), and that thought literally is a kind of computation. This view—which will be called the “Computational Theory of Mind” (CTM)—is thus to be distinguished from other and broader attempts to connect the mind with computation, including (a) various enterprises at modeling features of the mind using computational modeling techniques, and (b) employing some feature or features of production-model computers (such as the stored program concept, or the distinction between hardware and software) merely as a guiding metaphor for understanding some feature of the mind.

However, after reading through a paper at http://onlinelibrary.wiley.com/doi/10.1111/j.1747-9991.2009.00215.x/abstract, I’ve had to reconsider how broad the term is.

I’ve always taken computationalism to be the thesis that the interaction of neurons is a classical interaction and is what gives rise to the various phenomena, such as qualia and self-awareness, that can be grouped under the heading of “consciousness”; further, that any quantum mechanical description is not only unnecessary but would not be considered a computational theory of mind. That neuroscience takes for granted that neuron interactions are ‘classical’, in the sense that they can be described by classical mechanics, is an axiom I’ve always subscribed to, and for the sake of this thread, I’d ask that we go along with this view, only because it is by far the most prevalent one to date. In fact, theories suggesting that quantum mechanical interactions between neurons give rise to consciousness are widely considered "crackpottery".

For the sake of this thread, I’d like to look only at computationalist theories that presume consciousness emerges from the classical interaction of neurons, not from any presumed quantum mechanical ones. Further, any theory (such as Hameroff’s) that suggests consciousness emerges from quantum mechanical interactions within a neuron can be distinguished from the classical mechanical versions of computationalism. In the future, I’ll make sure to distinguish between classical computationalist theories of mind and quantum mechanical ones. Thanks again for pointing that out.
 
  • #51
I can't solve the three-body problem, but I can take three bodies, let them move, and attach an accelerometer to one of them. I can then take that body and accelerate it in the exact same manner by pushing it with my hand (for precision's sake, probably with a robotic arm), and it will move in the same way. That's what's going on in the story.
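The accelerometer replay can be sketched with a toy integrator. This is my own 1D two-mass example (not three bodies, and all constants are invented): record one body's accelerations during the coupled run, then push an isolated copy of it with the same recording.

```python
# Coupled run: two gravitating masses in 1D, crude Euler steps.
G, m1, m2, dt = 1.0, 1.0, 2.0, 0.001   # made-up units

x1, v1, x2, v2 = 0.0, 0.0, 1.0, 0.0
recorded, path = [], []
for _ in range(300):
    r = x2 - x1
    a1 = G * m2 / r**2       # acceleration of body 1 (toward body 2)
    a2 = -G * m1 / r**2      # reaction on body 2
    recorded.append(a1)
    v1 += a1 * dt
    x1 += v1 * dt
    v2 += a2 * dt
    x2 += v2 * dt
    path.append(x1)

# "Robotic arm" replay: the same pushes, with no second body present.
x, v = 0.0, 0.0
replayed = []
for a in recorded:
    v += a * dt
    x += v * dt
    replayed.append(x)

assert path == replayed   # identical trajectory, bit for bit
```

The replay reproduces the trajectory exactly because it repeats the same arithmetic; whether that suffices for whatever the interaction was doing is, of course, the question at issue.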

To go back to Maxwell's demon, it would be like having a box, and a demon opening and closing the door to separate hot and cold molecules. Then I take the box with the same exact molecular positions/velocities, and open and close the box in the same exact manner as the demon was doing. I will get the same results as the demon does.

The most immediate argument against this process being possible is that it's impossible to exactly replicate what the demon does and have the exact same box again, and that being slightly wrong will cause the chaotic system to fall apart and render my opening and closing of the box meaningless. However, in the case of the three-body problem, if I accelerate the body slightly differently, or put it in a slightly different starting position, the movement I make the body perform will still be very close to what it originally did (in fact, it may even be a better approximation than if I tried to reposition the three bodies and let them move again).

So the question of how chaotic the system is has to be applied specifically to consciousness and the brain. Specifically, the way the neurons fire in my brain is slightly different from everyone else's, even given the same stimuli. So you can argue that when we apply this to the brain in the story, the input to the neurons is slightly off from what it should be. But this alone shouldn't be enough to kill consciousness: if you take a magnet and wave it around your skull, in theory it should induce some impulses among your neurons that differ from those produced just by experiencing the things around you, but I doubt anyone would argue this means you lack consciousness.

On the other hand, in this case every neuron is receiving an input which is slightly off, which means it triggers slightly differently from expected, which could produce an input that diverges even further from what the brain would actually have created. After a couple of rounds of this, the neuron is just firing at random compared to what it would be doing if you were observing the stimuli the scientists are attempting to re-create.
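That compounding-error worry can be illustrated with a standard chaotic toy system (the logistic map, which is not a neuron model; the parameters are mine): two trajectories starting a hair apart soon disagree macroscopically.

```python
# Sensitive dependence: two logistic-map trajectories 1e-10 apart.

def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.3, 0.3 + 1e-10
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# The initial 1e-10 discrepancy is amplified to order-one size.
assert max_gap > 0.1
```

Whether the brain amplifies small input errors this way, or damps them, is the empirical question the thought experiment leans on.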

I see no reason why scientists should be able to perfectly re-create the input necessary for each neuron, similarly to how it would be impossible to perfectly re-create the box that Maxwell's demon has taught me how to divide into hot and cold.
 
  • #52
Q_Goest said:
For the sake of this thread, I’d like to look only at computationalist theories that presume that consciousness emerges from the classical interaction of neurons...

Again, how does your bottom-up stance deal with top-down causality? Are you arguing for weak or strong emergence?

This is a typical neuroscience paper on how top-down attention modulates neural receptive fields.

http://www.bccn-goettingen.de/Members/tobias/press-releases/nn1748.pdf

The standard philosophical paradoxes arise because it is presumed that complex systems must be reducible to their "atoms", their component efficient causes. But a systems approach says that causality is holistic. It is not separable in this fashion. You cannot separate the bottom-up from the top-down as they arise in interaction.

Systems do of course appear to be composed of local component causes. But this local separability is in fact being caused by the top-down aspects of the system. Exactly as experiments show. A neuron is not a fixed autonomous switch (the computational analogy is wrong). It is responding dynamically to the urgings of the orchestra conductor.

This kind of systems logic can be implemented in software of course. For instance, the neural nets of Stephen Grossberg.

But complexity is different from either computationalism or non-linearity/chaos. And it cannot be reduced to either of them in an ontological sense (though you can do so as pragmatic approximations).
 
  • #53
Truth be told, biological systems are actually semiclassical. The chemistry that is inherent to them is (more or less) a summary of quantum mechanics (the way molecules form is based on their bond shape and energy, which comes down to the shape of electron valence shells which is a direct result of quantum numbers like spin and momentum).

Hydrogen bonds making and breaking is also important in biological systems.

So there is a small issue with taking the physical system to be purely classical. Ontologically, it doesn't matter to a network approach. We just care how the neurons talk to each other, which is assumed to be classical electrodynamics. Of course, this ignores the ligand gating that leads to depolarization in the first place, but that's okay with me, since it can be modeled probabilistically; I can ignore the QM.

But... isn't this a problem for somebody asking epistemological questions? Isn't this just a convenient approximation for a faster solution?
 
  • #54
Pythagorean, apeiron: Whether classical mechanical systems differ in some fundamental way has been argued in science and philosophy (both pro and con), just as it has for nonlinear systems. Further, I have no illusions that you and I can resolve the dispute. I'll just say that, from everything I've seen from the people who understand the subject best, classical mechanics is separable and quantum mechanics is not. From http://plato.stanford.edu/entries/physics-holism/ for example:
Classical physics presents no definitive examples of either physical property holism or nonseparability.

Physical Property Holism: There is some set of physical objects from a domain D subject only to type P processes, not all of whose qualitative intrinsic physical properties and relations supervene on qualitative intrinsic physical properties and relations in the supervenience basis of their basic physical parts (relative to D and P).

Nonseparability: Some physical process occupying a region R of spacetime is not supervenient upon an assignment of qualitative intrinsic physical properties at spacetime points in R.

The boiling of a kettle of water is an example of a more complex physical process. It consists in the increased kinetic energy of its constituent molecules permitting each to overcome the short range attractive forces which otherwise hold it in the liquid. It thus supervenes on the assignment, at each spacetime point on the trajectory of each molecule, of physical magnitudes to that molecule (such as its kinetic energy), as well as to the fields that give rise to the attractive force acting on the molecule at that point.

As an example of a process in Minkowski spacetime (the spacetime framework for Einstein's special theory of relativity), consider the propagation of an electromagnetic wave through empty space. This is supervenient upon an ascription of the electromagnetic field tensor at each point in the spacetime.

But it does not follow that classical processes like these are separable. For one may question whether an assignment of basic magnitudes at spacetime points amounts to or results from an assignment of qualitative intrinsic properties at those points. Take instantaneous velocity, for example: this is usually defined as the limit of average velocities over successively smaller temporal neighborhoods of that point. This provides a reason to deny that the instantaneous velocity of a particle at a point supervenes on qualitative intrinsic properties assigned at that point. Similar skeptical doubts can be raised about the intrinsic character of other “local” magnitudes such as the density of a fluid, the value of an electromagnetic field, or the metric and curvature of spacetime (see Butterfield (2006)).

One response to such doubts is to admit to a minor consequent violation of separability while introducing a weaker notion, namely

Weak Separability: Any physical process occupying spacetime region R supervenes upon an assignment of qualitative intrinsic physical properties at points of R and/or in arbitrarily small neighborhoods of those points.

Along with a correspondingly strengthened notion of

Strong Nonseparability: Some physical process occupying a region R of spacetime is not supervenient upon an assignment of qualitative intrinsic physical properties at points of R and/or in arbitrarily small neighborhoods of those points.

No holism need be involved in a process that is nonseparable, but not strongly so, as long as the basic parts of the objects involved in it are themselves taken to be associated with arbitrarily small neighborhoods rather than points.

Any physical process fully described by a local spacetime theory will be at least weakly separable. For such a theory proceeds by assigning geometric objects (such as vectors or tensors) at each point in spacetime to represent physical fields, and then requiring that these satisfy certain field equations. But processes fully described by theories of other forms will also be separable. These include many theories which assign magnitudes to particles at each point on their trajectories. Of familiar classical theories, it is only theories involving direct action between spatially separated particles which involve nonseparability in their description of the dynamical histories of individual particles. But such processes are weakly separable within spacetime regions that are large enough to include all sources of forces acting on these particles, so that the appearance of strong nonseparability may be attributed to a mistakenly narrow understanding of the spacetime region these processes actually occupy.
So to put it in simpler words, I take separability to mean that a physical process (such as these nonlinear processes), occupying a volume of space (what the SEP calls a spacetime region R), supervenes on, or is influenced by, the measurable (qualitative) physical properties within this volume of space (at points of R) and/or really, really close by.

I take Zuboff’s discussion about neurons and IC’s to be an example of separability. The neurons are subject to local, causal influences just like the computational elements within a desktop computer (ie: transistors), and not nonlocal ones. This goes along with weak emergence, not strong emergence. If this is true, then we have to question whether or not there is any difference between the connected neurons and the disconnected ones if we provide all the same measurable physical properties to the neuron. Clearly, this question has raised a lot of discussion because it’s important to understanding how consciousness can arise from the interaction of neurons.

Regarding the n-body problem, Kronz (2002) has claimed that classical systems can exhibit chaotic behavior only if their Hamiltonians are inseparable, an example of which would be an n-body system. But he states, “Because the direct sum is used in classical mechanics to define the states of a composite system in terms of its components, rather than the tensor product operation as in quantum mechanics, there are no nonseparable states in classical mechanics.” Where this borderline between a classical system and a quantum mechanical system lies is unclear, but it is largely agreed that neurons operate at a "classical" level.
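Kronz's direct-sum vs. tensor-product point comes down to dimension counting, which can be made concrete. The sketch below is my own illustration, not from the paper: a classical composite state just concatenates its parts' coordinates, so degrees of freedom add, while a quantum composite state lives in a tensor product, so basis dimensions multiply and the surplus is exactly where nonseparable (entangled) states live:

```python
# Illustrative dimension counting only; "dim" just counts degrees of freedom.

def classical_composite_dim(dims):
    # classical composite: coordinates of the parts are concatenated (direct sum)
    total = 0
    for d in dims:
        total += d
    return total

def quantum_composite_dim(dims):
    # quantum composite: basis states of the parts are paired off (tensor product)
    total = 1
    for d in dims:
        total *= d
    return total

parts = [2, 2, 2]  # three two-level subsystems
print(classical_composite_dim(parts))  # 6
print(quantum_composite_dim(parts))   # 8: the extra room holds entangled states
```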

I understand that we can't agree on this, and I'll leave it at that. I'd be glad to listen to any arguments you may have, but I'd also suggest that they be backed up by papers that address the philosophical implications of any claims to nonseparability, strong emergence, etc. I've seen way too much hand-waving with these claims, and references to papers that don't address the basic issues. Yes, the science is very important, and the papers you've provided are valid for discussing the issues those papers are written around. What bothers me is the way a few scientific papers are being referenced when they don't explicitly support the views being posted here. Not all references (most are okay), but there are a few, such as this one:
This is a typical neuroscience paper on how top-down attention modulates neural receptive fields.

http://www.bccn-goettingen.de/Members/tobias/press-releases/nn1748.pdf
If you feel a paper supports your views, please post specific passages and state what they mean and how they support your view. Try and be as thorough as possible. Thanks. :smile:

Kronz, F. M. and J. T. Tiehen, 2002, ‘Emergence and Quantum Mechanics’, Philosophy of Science, 69 (2), 324-347.
 
  • #55
Q_Goest said:
If you feel a paper supports your views, please post specific passages and state what they mean and how they support your view. Try and be as thorough as possible. Thanks. :smile:

In what way exactly are you suggesting the paper cited does not support my view? It shows empirically that global state constrains local actions.

Now you may choose to make your ontological argument down at the level of classical vs QM micro-processes. The separable vs non-separable distinction has some pragmatic meaning there. But where is the argument that says complexity is not something more than these varieties of simplicity? Why should we believe that a workable micro-scale distinction also holds at the level of complex systems?

It is an article of faith perhaps among reductionists that all macro-scale complexity is composed of micro-scale simplicity. But I was challenging you for a justification of that faith - when neuroscience so clearly tells us something else looks to be the case.
 
  • #56
I do not advocate holism, downward causation, or a "conductor". As I said earlier, the whole is not necessarily greater than the parts, it's just not necessarily equal either... and you have to be careful to define what you're talking about (what are you summing?)

I have to do more thinking and researching I think before replying in full, but I will say that based on the definition above, and a thread I found by you:
https://www.physicsforums.com/showthread.php?t=304933

It seems like your discussion is confined to Euclidean space, which is of course separable. That doesn't seem immediately relevant to me.
 
  • #57
Q_Goest said:
I’ve concluded that computationalism doesn’t necessarily rule out quantum mechanical theories of mind. (...) For the sake of this thread, I’d like to look only at computationalist theories that presume that consciousness emerges from the classical interaction of neurons, not any presumed quantum mechanical ones.
Glad to read that (...) I don't think it's the best move: if Zuboff's argument were valid, it would be valid whichever version of computationalism best explains the data. This opens a way to assess its validity.

Suppose a brain, say an artificial or extra-terrestrial one, which is not separable because it is based on some macroscopic quantum rule. You'd agree that Zuboff's argument would not work and that at least this brain would be computable, wouldn't you?

Now, as this quantum brain would be computable, it turns out that you could construct a classical brain where each 'neuron' would simulate the quantum brain as a whole and act according to what the simulation says. This just follows from the definition.

What I'd like you to consider, is that Zuboff's argument would be supposed to apply in the last case, but not in the first case. As the two cases are in fact strictly equivalent, the conclusion should be the same -but it can't.

Please consider that I'm not saying either possibility is likely. I'm just saying that if Zuboff's argument fails on one of two equivalent cases, then it lacks logical consistency.
 
  • #58
I forgot to say that one way to at least put pragmatic boundaries on Zuboff-style speculation is the Margolus–Levitin theorem - http://en.wikipedia.org/wiki/Margolus–Levitin_theorem

His ICs would need to pack a lot of information to do the job he requires - so much information that the packing density would be constrained by a holographic event horizon. His IC would turn into a black hole before it could do its job.

Event horizons are of course a modern physical example of precisely the global constraints that underlie top-down causality, which I am always mentioning.

But anyway, it is clearly unphysical to base any argument on infinite information density. An exact answer can be given on where this ultimate constraint kicks in. The argument then becomes about whether it is plausible that an IC of the kind required could come in under this holographic budget. Which cannot be answered via a thought experiment but must now be informed by some proper evaluation of just how much information would in fact be needed.
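For what it's worth, the Margolus-Levitin figure is easy to evaluate: the bound is roughly 2E/(pi*hbar) orthogonal state transitions per second for average energy E, i.e. about 6e33 operations per second per joule. Applying this number to Zuboff's ICs is this thread's framing, not part of the theorem itself:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def max_ops_per_second(energy_joules):
    # Margolus-Levitin bound: at most 2E/(pi*hbar) orthogonal
    # state transitions per second for average energy E
    return 2.0 * energy_joules / (math.pi * HBAR)

# One joule buys at most ~6e33 elementary state changes per second:
print(f"{max_ops_per_second(1.0):.2e}")
```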

The non-linear story becomes particularly relevant now, as a continuous process would clearly need infinite information. One does not expect neural activity to be completely non-linear (it is sort of digital to some degree, that seems a fair assumption based on neuroscience). But still, neural activity is likely to be tilted far enough towards a non-linear basis for the information constraint to become an issue rather quickly in the argument.

I note Bishop, in the cited paper, is alert to arguments about the non-computability issue as he references the shadowing theorem in chaos theory. A digital computation has to round off a non-linear calculation at every step, so in fact changing the initial conditions with each iteration. Simulations can get away with this because they are illustrative approximations and not actual non-linear calculations (the chaotic paths look right enough, even if they are not formally right).
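The round-off point can be illustrated directly: rounding the state of a chaotic map at every iteration acts like a fresh perturbation of the initial conditions, so the computed orbit quickly departs from the unrounded one. (The shadowing theorem's consolation, that some *other* true orbit stays near the computed one, is not shown here.) The map and precision are arbitrary choices for illustration:

```python
def max_gap_under_rounding(x0, steps, digits, r=4.0):
    # run the same chaotic map twice: once at full float precision,
    # once rounding the state to `digits` decimal places every step
    a = b = x0
    worst = 0.0
    for _ in range(steps):
        a = r * a * (1.0 - a)
        b = round(r * b * (1.0 - b), digits)
        worst = max(worst, abs(a - b))
    return worst

gap = max_gap_under_rounding(1.0 / 3.0, 100, digits=8)
print(gap)  # far larger than the ~1e-8 per-step rounding error
```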

I note Bishop's general communication/interaction approach to "brain processing" is also based on an enactive, semiotic approach such as I have advocated here. Indeed, Bishop even cites my own writings with some approval, which is nice :smile:

Anyway, the message I take from Zuboff's parable is the usual. Reductionist views of complexity run into paradox because they have no way of defining their boundary constraints. The best they can say is that "constraints appear to emerge". But that is not the same as modelling them. And because there is a fundamental lack of principles here, philosophical thought experiments have a habit of presuming chains of effective causality can proceed freely from the local to the infinite. With no way of drawing the line, no line gets drawn.

The Margolus–Levitin theorem is at least one hard constraint on unbounded computationalism that has been agreed.

It is not actually very useful for doing neuroscience of course. But it shows hard boundaries do exist and we need to get used to modelling them to avoid the unbound speculation that brings philosophy into disrepute.

Again I recommend reading Robert Rosen's Essays on Life Itself for a source of multiple arguments against philosophic and scientific reductionism.
 
  • #59
Separability

I'll expand a bit on what I said here:

It seems like your discussion is confined to Euclidean space, which is of course separable. That doesn't seem immediately relevant to me.

When using the differential equation approach (i.e. Hodgkin-Huxley) we don't model volumes of space. Each four-dimensional system represents a neuron, but we add a term to each of the neurons so that it depends on its neighbor (based on the way it does in nature, either synaptically or diffusively via gap junctions). So you can imagine a ring (a common basic network topology for looking at simple characteristics of a system) of neurons, and perhaps label them: n1, n2, n3, n4...

But both in nature and in this model, space is irrelevant. n1 and n3 could be closer together than n1 and n2. It's irrelevant. Consider nature: a common example of neural processing is five neurons whose axons synapse on the dendrite of a single neuron (the incident neuron).

The lengths of the five different axons are all different, but the neural system (not just the neurons now, the glia are involved in this, as are other cell interchanges that only a molecular biologist could explain in satisfactory detail) doesn't care. All that it cares about is that, when necessary, the five axon signals all arrive at the incident neuron in such a way that they can spatially sum to produce a significant result.

For instance (a simplified example from the Handbook of Brain Theory and Neural Networks): five photoreceptor neurons incident on a neuron. When an object passes left to right in front of you, the different lengths of the axons allow the signals to spatially sum on the incident neuron, firing it (and suddenly you consciously detect "something's moving to the right!" If you think this is strange, you may want to read about blindsight, where people can't actually see a picture out of their eyes, but the visual processing that senses motion is still working.)

The system "self-organizes" such that the lengths of the axons are irrelevant. Consequently, an object moving in the opposite direction will never sum on the incident neuron. If all the lengths were always equal, then there would be a problem of ambiguity (the incident neuron would fire whether the object was moving left to right or right to left).
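The delay-line scheme just described can be sketched in a few lines; the delays, time bins, and threshold below are made up for illustration. Spikes from receptors swept in one direction arrive at the incident neuron in the same time bin and sum past threshold, while the reverse sweep spreads the arrivals out:

```python
from collections import Counter

def incident_fires(stim_times, delays, threshold=5):
    # stim_times[i]: when receptor i is stimulated; delays[i]: its axonal delay.
    # The incident neuron fires only if enough spikes land in one time bin.
    arrivals = Counter(s + d for s, d in zip(stim_times, delays))
    return max(arrivals.values()) >= threshold

# Receptor 0 is leftmost; a longer axon means a larger delay.
delays = [4, 3, 2, 1, 0]

left_to_right = [0, 1, 2, 3, 4]   # object sweeps past receptors in order
right_to_left = [4, 3, 2, 1, 0]   # same receptors, opposite order

print(incident_fires(left_to_right, delays))   # True: all spikes arrive at t=4
print(incident_fires(right_to_left, delays))   # False: arrivals are spread out
```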

Anyway, I agree that we are done with the separability discussion. I've presented my arguments in full, though from an effective standpoint. You (Q_Goest) are being more textual about it, and I don't know how much it impacts the actual efficacy, because space volumes are not explicit in the neural systems I'm familiar with. For example:

officeshredder said:
I can't solve the three body problem, but I can take three bodies, let them move and attach an accelerometer to one of them. I can then take one of the three bodies and accelerate them in the exact same manner by pushing it with my hand (for precision's sake, probably a robotic arm of course), and it would move in the same way. That's what's going on in the story.

I agree that you can do this. Make some volume of space behave as it did in the system. But that's not to say you're studying the system anymore. For instance, in officeshredder's example above, he's no longer studying gravity. Gravity isn't physically part of the system anymore. He's just studying the effects of gravity in terms of the behavior produced. It's a postdiction, not a prediction. This is the same way I feel about digital perspectives of the brain. We can say "yes a neuron fired" or "no a neuron didn't fire" after looking at an experiment and assign 1's and 0's to it, sure.

But the dynamical biophysical models tell you (using continuity) when a particular neuron in a system will fire based on the mechanisms underlying the firing.

Maxwellian Demon

officeshredder, apeiron, and I have all mentioned this now and I didn't want it to distract from separability discussion, but now that we're essentially through with that, I think it's time to start looking at it.

You would need a Maxwellian Demon to perform the thought experiment (actually, you have one: it's the scientist), which means that you're investing energy into the system (the Maxwellian Demon had to learn the information, then convey and apply it to a physical system; thus the replicated system is a different system than the original).

From an information theory perspective, for instance, what the ICs are doing is actually quite important. Infinite information is equivalent to infinite energy in this view, and that's especially important for such highly sensitive systems (small errors can lead to much different qualitative behavior). And, as you may know, in classical systems there's an infinite number of points between any two finite points. If you can't describe what happens at every one of those points (which you can't... unless you're some kind of Grand Master Maxwellian Demon: a supernatural creature), then you don't really have control of the system.
 
  • #60
Pythagorean said:
But both in nature and in this model, space is irrelevant. n1 and n3 could be closer together than n1 and n2. It's irrelevant. Consider nature: a common example of neural processing is five neurons whose axons synapse on the dendrite of a single neuron (the incident neuron).

The lengths of the five different axons are all different, but the neural system (not just the neurons now, the glia are involved in this, as are other cell interchanges that only a molecular biologist could explain in satisfactory detail) doesn't care. All that it cares about is that, when necessary, the five axon signals all arrive at the incident neuron in such a way that they can spatially sum to produce a significant result.

This seems a good example to focus on, but I am confused as to what you are actually arguing.

A computationalist building an IC would probably say that the differing lengths/transmission times of the input axons would be one of those features he would be able to replicate (given unlimited resources).

Are you agreeing this in principle possible, or arguing against it?

Perhaps you are saying yes it is possible for a single neuron, but it would not be possible for a functioning network of many neurons (for some non-linear reason?).
 
  • #61
apeiron said:
This seems a good example to focus on, but I am confused as to what you are actually arguing.

A computationalist building an IC would probably say that the differing lengths/transmission times of the input axons would be one of those features he would be able to replicate (given unlimited resources).

Are you agreeing this in principle possible, or arguing against it?

Perhaps you are saying yes it is possible for a single neuron, but it would not be possible for a functioning network of many neurons (for some non-linear reason?).

I don't mean to say it's impossible at all. What I mean to say is that the space wouldn't matter when you modeled it (it would be an unnecessary variable); only the transmission times are relevant.

Theoretically, I'm referring strictly to biophysical models of the Hodgkin-Huxley type, since they've been accepted by the neurobiology community for nearly 60 years now.

The motion of each particle isn't traced through Euclidean space like in a planetary model. The whole neuron is considered with four differential equations (none of which depend on space) that model the interaction between ion channel activation and membrane potential.
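For the record, a minimal single-compartment version of those four equations can be written down in a few lines. Parameters are the standard textbook squid-axon values; forward-Euler integration and the chosen drive current are simplifications for illustration, not a production solver. Note that no spatial variable appears anywhere:

```python
import math

def alpha_beta(V):
    # standard Hodgkin-Huxley rate functions (V in mV, rates in 1/ms)
    am = 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    # conductances in mS/cm^2, reversal potentials in mV, capacitance in uF/cm^2
    gNa, gK, gL = 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387
    C = 1.0
    V, m, h, n = -65.0, 0.053, 0.596, 0.318  # approximate resting state
    spikes, above = 0, False
    t = 0.0
    while t < t_max:
        am, bm, ah, bh, an, bn = alpha_beta(V)
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I_ext - INa - IK - IL) / C
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        if V > 0.0 and not above:
            spikes += 1          # upward zero-crossing = one action potential
        above = V > 0.0
        t += dt
    return spikes

print(simulate())  # sustained drive produces repetitive firing
```

The four state variables (V, m, h, n) couple only through each other and time; position never enters, which is the point being made above.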

So what's being modeled is information transfer. There must be some Euclidean aspect, though, since the electrochemical theory behind the model employs the concept of diffusion, but I'm still having trouble seeing how the separability of volumes of Euclidean space implies that you can separate a system into its components, make all the components behave like they did in the system, and then still consider it a system.

As an ultimate proof, though, if we keep separating a classical system, we eventually get a quantum system. So there's at least some contradiction with the hard statement "classical systems are separable".

What's especially interesting about this line of reasoning is that it brings us back to the "explanatory gap" between quantum physics and classical physics. And, interestingly enough, dynamical systems has a foothold in the subject (i.e. quantum chaos).
 
  • #62
Pythagorean said:
I don't mean to say it's impossible at all. What I mean to say is that the space wouldn't matter when you modeled it (it would be an unnecessary variable); only the transmission times are relevant...So what's being modeled is information transfer.

But this does not really get to the nub of the argument then.

The hard problem depends on naive "separability". And if consciousness is just about the existence of information (in a static pattern), then Putnam's argument that any rock implements every finite-state automaton may in principle go through. You are hinting that there is something more to be considered in talking about "information transfer": states/patterns must change in some systematic fashion. But what is the nature of that essential change? And is it separable or non-separable?

Separable is a possibly confusing term here, as it is properly a quantum-states distinction. But we can keep using it, as in this discussion it is synonymous with digital (vs analog), discrete (vs continuous), reductionist (vs holistic), atomistic (vs, well, again holistic).

Basically the hard problem arises when science can appear to separate the brain into its computational atoms, its digital components, and not lose anything essential in terms of causality. At some point a conscious human becomes a non-conscious heap of chemistry or a silicon zombie, or whatever. We are left with just the material, just the physical substance, and no causal account of the higher level "property".

Now it could be argued that there is indeed a hard boundary on separability in QM. But unless you are arguing consciousness is essentially a QM phenomenon - for which there is no good scientific backing - then this boundary seems irrelevant to philosophic thought experiments.

It could also be argued that non-linearity is another kind of hard boundary on separability - which is where I thought you were going with the cite of the three body problem. Again, this is probably a definite boundary on separability (if non-linearity is being equated with an essential continuity of nature). Chaos theory would seem to say we cannot in practice pick out discrete instances in spacetime (measure exact initial conditions).

However I personally don't think non-linearity is a killer argument here. First, because chaos theory really models reality as it exists between the bounding extremes of the continuous and the discrete (if you have whorls of turbulence erupting, they are in some sense discrete structures in a continuous flow). And second because brain components like synapses, neurons and cortical circuits appear to be in some definite way structures with computational aspects.

For the purpose of philosophical thought arguments - based on what is at least broadly agreed and widely known about brains - it remains more plausible that the components of the brain are structures "striving to be digital even if made up of sloppy thermo stuff", rather than structures that are what they are, can do what they can do, because of some non-linear magic.

(Then I mentioned a third possible hard boundary on speculation - the relativistic issue of information density. Which like the QM separability bound, is a physically certain boundary, but again arguably irrelevant because such constraints only kick in at physically extreme scales).

Yet a further constraint on naive separability could be the argument that evolution is efficient and so the human brain (the most complex arrangement of matter in the known universe) is most likely to be close to the actual physical limits of complexity. Whatever it is that brains do to be conscious, we can probably expect that the way brains do it can't be beat. This makes it far less plausible that a technologist can come in and freely start stretching out the wiring connections, simplifying the paths, speeding up the transmission rates.

It might be argued by the likes of Zuboff that the brain as a natural machine is constrained by energetics - it is optimal, but optimised along a trade-off between consciousness production and metabolic cost. So a technologist with unlimited energy to make it all happen, could unpack a brain into a very different set of components. But the argument that evolution is efficient at optimisation, and so brains would resist the kind of naive separation proposed just on the grounds that there must be something significant about its physical parameters (its particular transmission times, its particular connection patterns, its particular molecular turnover, etc), must be at least dealt with in a thought experiment.

So we have two hard (but weak because they are distant in scale) constraints on speculation - QM and relativistic bounds.

We have a possible but unlikely constraint - non-linearity.

And we have the evolution optimisation constraint - which is probably weak here because it is easy enough I guess to imagine separating the energetic cost of replicating brain function.

Which all comes back to my original line of attack on the notion of separability. The systems view - which postulates a complex Aristotelean causality based on the interaction of bottom-up construction and top-down constraints - says reality is always dichotomised (divided into a local~global causality as just described) but never actually separated (reducible to either/or local causes, or global causes).

So what is going on here is that brains have both form and substance. They have their global organisation and their local components. There is indeed a kind of duality, but it is not the broken Platonic or Cartesian duality which leads to hard problems or waffling about emergence and supervenience. Instead, there is a duality of limits. You have a separation towards two different kinds of thing (bottom-up and top-down causation), but not an actual separation that divides reality. Just an asymptotic approach that produces two different "kinds" and in turn allows for the emergence of complexity in the synergistic interaction that results.

Translated into neuroscience, we should expect that the brain looks digital, computational, componential at the local level. This is what it is trying to be as this is what bottom-up, constructive, additive, causality looks like. But then we should also equally expect to be able to find the complementary global aspect to the brain which shows that it is an unbroken system. It must also be a system that can organise its own boundary constraints, its own global states of downward acting causality.

Which, as any neuroscientist knows, is what we see. Attention, anticipation, etc. Components have a definite local identity only because they are embedded in enactive contexts. Experiments at many levels of brain function have shown this. It is now a basic presumption of neuroscientific modelling (as shown for example, picking the cutting edge, Friston's Bayesian brain).

So if the hard problem arises because of a belief in physical separability, there are a number of arguments to be considered. But the best argument IMHO is the systems one. And note this explains both why consciousness and brain function are not ontically separated AND why they also appear as separated as possible.

Running the argument once more, local and global causality are in principle not separate (otherwise how do they interact?). Yet they are separated (towards physically-optimal limits - otherwise how would they be distinctive as directions of causality?).

Of course, this means that rather than selling an argument just about brains, you are trying to sell an argument about reality in toto.

But then if your thinking about the nature of things keeps leading you to the impasse of the hard problem, and its unpalatable ontic escape clauses like panpsychism, well you know that, Houston, you have a problem.

The hard problem should tell people that reductionism really is broke. If you accept the premise of separable causality, you end up looking over the cliff. So turn round and look at what got left behind. You have to rediscover the larger model of causality that was there when philosophy first got started.
 
  • #63
apeiron,

separability

That is originally where I was going with the n-body problem: I was previously using a casual definition of separability (similar to how you defined it just now), but Q_Goest introduced a very rigorous definition of separability, and my point was that it seems too rigorous to be relevant to what we're talking about. His definition, as I was saying, seems to pertain to Euclidean space, and that's not explicitly realized in the neural models I work with.

But yes, that aside, the definition of separable I was using before Q_Goest introduced this more rigorous definition was more to the point: "can you separate neurons and call it the same physical system?"

nonlinearity

I don't mean to imply at all that nonlinearity is a sufficient condition for consciousness. It's possibly necessary, but doubtfully sufficient.

The reason I bring up nonlinearity is that it appears to me that people attacking physicalism do so on the basis of linear physical systems, excluding the larger, more general class of physical systems that better describes our physical reality.

For the purpose of philosophical thought arguments - based on what is at least broadly agreed and widely known about brains - it remains more plausible that the components of the brain are structures "striving to be digital even if made up of sloppy thermo stuff", rather than structures that are what they are, and can do what they can do, because of some non-linear magic.

There's no magic here, though. Nonlinearity is just unintuitive. It may appear as magic to someone who fails to understand the underlying mathematical principles, but it's really not. Everything "adds up" once you do the rigorous mathematical work.

And... the two views you presented here are not mutually exclusive. In fact (as I've already mentioned) a computer is realistically nonlinear itself. We simply accept slop, use filters, and set signal-to-noise ratios high enough that we can ignore the nonlinearities as "noise". So 1 is represented by ~5 V and 0 is represented by ~0 V, but 4.9 and 0.005 volts will work for a "1" and a "0" as well. We designed the system to be easy to interpret, regardless of the irregularity in the signal (as long as the signal is sufficiently larger than those irregularities).

So we actually ignore a lot of the "interesting" physics going on in a computer because we don't want "interesting"; we want predictable, because computers are extensions of our consciousness (we are their Maxwellian Demons).
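That thresholding scheme is easy to make concrete. A minimal sketch (not from the thread; the midpoint threshold of 2.5 V is an illustrative assumption) of how digital logic discards the analog slop by design:

```python
def to_bit(voltage, threshold=2.5):
    """Interpret a noisy analog voltage as a digital bit.

    Anything above the midpoint threshold reads as 1, anything
    below reads as 0 - the analog irregularity is discarded by design.
    """
    return 1 if voltage >= threshold else 0

# Noisy readings still decode cleanly, as long as the signal
# stays sufficiently far from the threshold (high signal/noise).
readings = [4.9, 0.005, 5.1, 0.3]
bits = [to_bit(v) for v in readings]
print(bits)  # -> [1, 0, 1, 0]
```

The "interesting" physics is still there in the continuous voltages; the interpretation scheme is simply built to ignore it.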
 
  • #64
Pythagorean said:
The reason I bring up nonlinearity is that it appears to me that people attacking physicalism do so on the basis of linear physical systems, excluding the larger, more general class of physical systems that better describes our physical reality.

OK, agreed, and likewise this is why I point to the modelling of complexity by theoretical biologists such as Howard Pattee. Non-linear is simple complexity, then there is the analysis of systems complexity - the control of rate dependent processes (ie: self-organising, dynamical) by rate independent information (such as genes, words, neural connection patterns).
 
  • #65
I think the Maxwellian demon analogy is being misused here. The practical difficulty of duplicating a chaotic system can’t really be compared to Maxwell’s demon.

I don’t think anyone is disagreeing that in practice, trying to duplicate the causal influences acting on a single neuron - so that the neuron undergoes the same physical changes in state while removed from the brain that it undergoes while in the brain - is going to be virtually impossible. Certainly we can claim it is impossible with our present technology. But that’s not the point of separability. Anyone arguing that in practice, the chaotic nature of neurons prevents the separability of the brain has already failed to put forth a legitimate argument. One needs to put forth an argument that shows that in principle, neurons are not separable from the system they are in. What principle is it that can be used to show neurons are not separable? Appealing to the practical difficulty of creating these duplicate physical states isn’t a valid argument.

The concept that nonlinear phenomena are "those for which the whole is greater than the sum of its parts" and thus aren’t separable has been appealed to by a few scientists and philosophers, but that argument hasn’t been widely accepted. Further, it changes the present theory of mind. It says that digital computers can’t be conscious, for starters, since it is obviously very easy to duplicate the physical changes in state any portion of a computer undergoes. So now we need to say that some computational systems can be conscious and other computational systems can’t be, regardless of whether or not they are functionally the same.

If we’d like to use the argument that nonlinear systems are not separable, but we find just one nonlinear system that is in fact separable, then we have an even more difficult job of finding a reason why one nonlinear system can be conscious but another can not.

So let’s look at the n-body problem for a moment and contemplate whether or not it might be separable. To show separability, we only need to show that within a given volume of space, and over some time interval dt (ie: a spacetime region R), the gravitational field within that volume of space is identical to the gravitational field in another volume of space within a different n-body system. That is, if we find some spacetime region R1 within an n-body system that is identical with some other spacetime region R(identical), then we’ve shown separability in the sense that Zuboff is suggesting. These two regions of space undergo identical physical state changes over that time interval dt, despite the two being in different systems.

Note here that Zuboff’s notion of separability is not just that physical processes supervene only on the measurable physical properties within some spacetime region R, but also that those physical processes can be duplicated within an identical spacetime region R(identical) without having R(identical) be part of the same overall physical process. In other words, the neuron in one system can be duplicated in another system. We might imagine those two neurons going through identical changes in state because the causal influences on them are identical, which is just one way to understand what the story is about and what the problem is that we need to resolve.
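A toy numeric sketch of the field-matching idea (not from the thread; the masses, distances, and sample point are engineered for illustration): two different point-mass "systems" produce the same Newtonian field at a sample point, because the field scales as m/r². This only matches the field at a single point and instant, not over a full spacetime region R, so it illustrates the flavour of the argument rather than proving separability.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def field_at(point, masses, positions):
    """Newtonian gravitational field (acceleration) at `point`
    produced by a set of point masses."""
    g = np.zeros(3)
    for m, pos in zip(masses, positions):
        r = pos - point
        g += G * m * r / np.linalg.norm(r) ** 3
    return g

p = np.array([0.0, 0.0, 0.0])

# System B replaces system A's mass with one 4x heavier at
# twice the distance: m/r^2 is unchanged, so the field at p is too.
sys_a = ([1e24], [np.array([1e7, 0.0, 0.0])])
sys_b = ([4e24], [np.array([2e7, 0.0, 0.0])])

g_a = field_at(p, *sys_a)
g_b = field_at(p, *sys_b)
print(np.allclose(g_a, g_b))  # -> True
```

Extending the match from a single point to a whole region R over an interval dt is exactly where the hard work of the argument lies.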
 
  • #66
apeiron said:
The standard philosophical paradoxes arise because it is presumed that complex systems must be reducible to their "atoms", their component efficient causes. But a systems approach says that causality is holistic. It is not separable in this fashion. You cannot separate the bottom-up from the top-down as they arise in interaction.

Systems do of course appear to be composed of local component causes. But this local separability is in fact being caused by the top-down aspects of the system.


All this bottom-up and top-down talk, in as much as it's true, bears a striking similarity to Life (aliveness, being alive) if applied to the universe. It may even shed light on the "special signal problem".
 
  • #67
Q_Goest said:
I don’t think anyone is disagreeing that in practice, trying to duplicate the causal influences acting on a single neuron - so that the neuron undergoes the same physical changes in state while removed from the brain that it undergoes while in the brain - is going to be virtually impossible. Certainly we can claim it is impossible with our present technology. But that’s not the point of separability. Anyone arguing that in practice, the chaotic nature of neurons prevents the separability of the brain has already failed to put forth a legitimate argument. One needs to put forth an argument that shows that in principle, neurons are not separable from the system they are in. What principle is it that can be used to show neurons are not separable? Appealing to the practical difficulty of creating these duplicate physical states isn’t a valid argument.

Why is it the presumption here that neurons are separable rather than the converse?

But anyway, I have already given two "in principle" limits in QM and relativistic event horizons. Neurons would not be separable beyond these limits (or do you disagree?).

Then there is the "middle ground" attack (as QM and black holes clearly kick in only at the opposing extremes of physical scale).

And here I would suggest that networks of neurons, if ruled by global dynamics such as oscillatory coherence (an experimentally demonstrated correlate of consciousness), can be presumed to be NP-complete.

This is the kind of argument indeed used within theoretical biology to show biology is non-computable - for example, the protein folding problem. You can know the exact sequence of bases, yet not compute the final global relaxation minima.

Here is an excerpt from Pattee's paper, CAUSATION, CONTROL, AND THE EVOLUTION OF COMPLEXITY, which explains how this is relevant (and how complexity is not just non-linearity).

The issue then is how useful is the concept of downward causation in the formation and evolution of complex systems. My conclusion would be that downward causation is useful insofar as it identifies the controllable observables of a system or suggests a new model of the system that is predictive. In what types of models are these conditions met?

One extreme model is natural selection. It might be considered the most complex case of downward causation since it is unlimited in its potential temporal span and effects every structural level of the organism as well as social populations. Similarly, the concept of fitness is a holistic concept that is not generally decomposable into simpler components. Because of the open-ended complexity of natural selection we know very little about how to control evolution, and consequently in this case the concept of downward causation does not add much to the explanatory power of evolution theory.

At the other extreme are simple statistical physics models. The n-body problem and certainly collective phenomena, such as phase transitions, are cases where the behavior of individual parts can be seen as resulting from the statistical behavior of the whole, but here again the concept of downward causation does not add to the model's ability to control or explain.

A better case might be made for downward causation at the level of organism development. Here, the semiotic genetic control can be viewed as upward causation, while the dynamics of organism growth controlling the expression of the genes can be viewed as downward causation. Present models of developmental control involve many variables, and there is clearly a disagreement among experts over how much control is semiotic or genetic and how much is intrinsic dynamics.

The best understood case of an essential relation of upward and downward causation is what I have called semantic closure (e.g., Pattee, 1995). It is an extension of von Neumann's logic of description and construction for open-ended evolution. Semantic closure is both physical and logical, and it is an apparently irreducible closure, which is why the origin of life is such a difficult problem. It is exhibited by the well-known genotype-phenotype mapping of description to construction that we know empirically is the way evolution works. It requires the gene to describe the sequence of parts forming enzymes, and that description, in turn, requires the enzymes to read the description.

This is understood at the logical and functional level, but looked at in detail this is not a simple process. Both the folding dynamics of the polypeptide string and specific catalytic dynamics of the enzyme are computationally intractable at the microscopic level. The folding process is crucial. It transforms a semiotic string into a highly parallel dynamic control. In its simplest logical form, the parts represented by symbols (codons) are, in part, controlling the construction of the whole (enzymes), but the whole is, in part, controlling the identification of the parts (translation) and the construction itself (protein synthesis).

Again, one still finds controversies over whether upward semiotic or downward dynamic control is more important, and which came first at the origin of life. There are extreme positions. One extreme sees the universe as a dynamics and the other extreme sees the universe as a computer. This is not only a useless argument, but it obscures the essential message.

The message is that life and the evolution of complex systems is based on the semantic closure of semiotic and dynamic controls. Semiotic controls are most often perceived as discrete, local, and rate-independent. Dynamic controls are most often perceived as continuous, distributed and rate-dependent. But because there exists a necessary mapping between these complementary models it is all too easy to focus on one side or the other of the map and miss the irreducible complementarity.
 
  • #68
Q_Goest said:
I think the Maxwellian demon analogy is being misused here. The practical difficulty of duplicating a chaotic system can’t really be compared to Maxwell’s demon.

The Maxwellian argument and the nonlinear argument are two different lines of reasoning.

I'm not saying that it's only in practice that complex systems are inseparable, I'm proposing that it's in principle. That's why I'm using mathematics to illustrate the point.

That they're nonlinear and complex is sufficient for inseparability, and also, remember, I'm not claiming nonlinear systems are required for consciousness (as I've already said), just that the thought experiment narrows its scope to linear systems and that neurons exhibit nonlinear behavior (just like the rest of the world does).

The concept that nonlinear phenomena are "those for which the whole is greater than the sum of its parts" and thus aren't separable has been appealed to by a few scientists and philosophers but that argument hasn’t been widely accepted

And as I've already demonstrated, nonlinear phenomena formally only say "the whole doesn't need to be equal to the sum", and I've shown what exactly that means mathematically. It's quite obvious (from a function-variable standpoint) how the variables, being acted on by the function, are not separable because of the nonlinearity.

Linearity literally allows us to reduce problems to their components, and this exactly falls out of the superposition (the function performed on each is the same as the function performed on all in a linear case).
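That reduction-by-superposition takes only a couple of lines to show. A minimal sketch (not from the thread; the maps f(x) = 3x and f(x) = x² are arbitrary illustrative choices):

```python
import numpy as np

def linear(x):
    return 3.0 * x   # a linear map: f(x + y) = f(x) + f(y)

def nonlinear(x):
    return x ** 2    # a nonlinear map: superposition fails

x, y = np.array([1.0, 2.0]), np.array([3.0, 4.0])

# Linear: acting on the whole equals summing the actions on the parts.
print(np.allclose(linear(x + y), linear(x) + linear(y)))  # -> True

# Nonlinear: (x + y)^2 = x^2 + 2xy + y^2, and the cross term 2xy
# is exactly the "whole exceeds the sum of the parts" residue.
print(np.allclose(nonlinear(x + y), nonlinear(x) + nonlinear(y)))  # -> False
```

The cross term is what couples the variables: you cannot analyze x and y through the nonlinear map separately and then add the results.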

To show separability, we only need to show that within a given volume of space, and over some time interval dt (ie: a spacetime region R), the gravitational field within that volume of space is identical to the gravitational field in another volume of space within a different n-body system. That is, if we find some spacetime region R1 within an n-body system that is identical with some other spacetime region R(identical), then we’ve shown separability in the sense that Zuboff is suggesting. These two regions of space undergo identical physical state changes over that time interval dt, despite the two being in different systems. Note here that Zuboff’s notion of separability is not just that physical processes supervene only on the measurable physical properties within some spacetime region R, but also that those physical processes can be duplicated within an identical spacetime region R(identical) without having R(identical) be part of the same overall physical process. In other words, the neuron in one system can be duplicated in another system. We might imagine those two neurons going through identical changes in state because the causal influences on them are identical which is just one way to understand what the story is about and what the problem is that we need to resolve.

But this is not what's being argued. One neuron in one system can be made to behave the same way in another system; that's not what's being contested. From the discussion between apeiron and me above, my claim is that a system of N coupled bodies (or neurons) is not the same as a system of N independent neurons all exhibiting the same behavior independently of each other (i.e., no causal connection).

This seems to be reminding me of counterfactual states now. If an experimenter were to come in and probe one neuron to see how another acted, he wouldn't be able to find any causal relationship. The input and the output would appear to be completely random to him (and they may as well be, since the initial conditions can't predict the experimenter's motives, or else that would be a causal connection).

This would be a different result from if the experimenter ran the tests on the causally connected neurons. He would be able to find a consistent relationship between the input of neuron 1 and the output of neuron N.
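The experimenter's probe can be sketched with a toy model (not from the thread; the two leaky units, the leak factor 0.5, and the coupling weight w are all illustrative assumptions). In the coupled system a perturbation of unit 1 shows up in unit 2's output; a bank of independent units replaying pre-recorded states ignores the probe entirely:

```python
import numpy as np

def run_coupled(inputs, w=0.8):
    """Two leaky units where unit 2 is causally driven by unit 1."""
    n = len(inputs)
    x1, x2 = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        x1[t] = 0.5 * x1[t - 1] + inputs[t]
        x2[t] = 0.5 * x2[t - 1] + w * x1[t - 1]
    return x1, x2

rng = np.random.default_rng(0)
drive = rng.normal(size=200)
_, x2_recorded = run_coupled(drive)   # unit 2's "recorded" trajectory

# The experimenter probes unit 1's input halfway through the run.
probed = drive.copy()
probed[100:] += 1.0

# Coupled system: the probe propagates to unit 2, so a consistent
# input/output relationship is there to be found.
_, x2_coupled = run_coupled(probed)
print(bool(np.any(x2_coupled != x2_recorded)))  # -> True

# Independent "replay" system: unit 2's trace was fixed in advance,
# so probing unit 1 reveals no causal relationship at all.
x2_replay = x2_recorded.copy()
print(bool(np.any(x2_replay != x2_recorded)))   # -> False
```

The two systems are behaviorally identical under the original drive; only the counterfactual probe distinguishes them.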
 
