Can Mind Arise from Plain Matter?

  • #1
Q_Goest
The primary problems with mental causation are nicely summed up by Yablo ("Mental Causation", The Philosophical Review, Vol. 101, No. 2, April 1992).
http://www.jstor.org/pss/2185535
"How can mental phenomena affect what happens physically? Every physical outcome is causally assured already by preexisting physical circumstances; its mental antecedents are therefore left with nothing further to contribute." This is the exclusion argument for epiphenomenalism.
...
(1) If an event x is causally sufficient for an event y, then no event x* distinct from x is causally relevant to y (exclusion).
(2) For every physical event y, some physical event x is causally sufficient for y (physical determinism).
(3) For every physical event x and mental event x*, x is distinct from x* (dualism).
(4) So: for every physical event y, no mental event x* is causally relevant to y (epiphenomenalism).
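For readers who like to see the logical skeleton, the four premises above can be formalized. Here is a minimal sketch in Lean 4; the names `Suff` and `Rel` and the propositional framing are my own, not Yablo's:

```lean
-- A propositional skeleton of the exclusion argument (my own framing,
-- not Yablo's): Suff = "causally sufficient", Rel = "causally relevant".
variable {Event : Type}

theorem epiphenomenalism
    (Suff Rel : Event → Event → Prop)
    (Phys Ment : Event → Prop)
    -- (1) exclusion
    (excl : ∀ x y, Suff x y → ∀ x', x' ≠ x → ¬ Rel x' y)
    -- (2) physical determinism
    (det : ∀ y, Phys y → ∃ x, Phys x ∧ Suff x y)
    -- (3) dualism as distinctness
    (dual : ∀ x x', Phys x → Ment x' → x' ≠ x) :
    -- (4) epiphenomenalism
    ∀ y, Phys y → ∀ x', Ment x' → ¬ Rel x' y :=
  fun y hy x' hx' =>
    match det y hy with
    | ⟨x, hx, hsuff⟩ => excl x y hsuff x' (dual x x' hx hx')
```

The proof is a one-liner: determinism supplies a sufficient physical cause, dualism supplies distinctness, and exclusion then rules the mental event out.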

Yablo defines dualism as follows:
… all I mean by the term [dualist] is that mental and physical phenomena are, contrary to the identity theory, distinct, and contrary to eliminativism, existents.
In other words, the physical description of the color red as it appears in the mind will be different from the mental description of the color red (distinct), but both should be taken as phenomena that actually occur (existents). The physical description would discuss what neurons and sections of the brain are active when the phenomenon of ‘red’ occurs, while the mental description would focus on explaining the qualia.

Take, for example, an allegedly conscious computer. For the sake of clarity, let’s model a computer as a large collection of switches, which is basically all a computer is. At the heart of every modern computer is the transistor, which is nothing more than a switch.

We can examine a computer that reports that it sees the color red when looking at a fire truck, for example. This computer will have a camera for eyes and a speaker for a mouth. So when the camera is turned on a fire truck, it reports ‘red’ out of the speaker. But did it report red because it is actually experiencing red, or because its circuit is designed such that red is reported? None of the transistors in the computer are influenced by any ‘experience’ of redness. Each transistor only changes state because an electrical current is either applied or removed. And per computationalism, the experience of the color red is a phenomenon produced by the interactions of the transistors; it is not a phenomenon produced by any given transistor.

For the computer, we have physical events (changes in transistor states) which have physical causes (application or removal of electric current). Mental events, therefore, are not causally relevant and are epiphenomenal.
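The switch picture can be made concrete. Below is a toy sketch (my own illustration, not from the original post) of a "computer" built from NAND gates: every state change is driven solely by the signals locally applied, and whether it "reports red" is settled entirely by the wiring.

```python
# Illustrative sketch: a "computer" as a collection of switches whose
# states change only via locally applied signals.

def nand(a: bool, b: bool) -> bool:
    """Every gate flips state only because of the signals applied to it."""
    return not (a and b)

def report_color(camera_sees_red: bool) -> str:
    # The 'decision' to report red is just gates composed together:
    # nothing in the chain consults an experience of redness.
    signal = nand(camera_sees_red, camera_sees_red)  # NOT
    detected = nand(signal, signal)                  # NOT NOT = identity
    return "red" if detected else "not red"

print(report_color(True))   # -> red
print(report_color(False))  # -> not red
```

Nothing in the trace of this program is left unexplained once the gate-by-gate causes are listed, which is exactly the exclusion worry.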

Appeals to mental causation via quantum phenomena may also be problematic. Very briefly, if a quantum physical event, such as protein folding for example, were somehow influenced by a mental event, then the probability of the physical event will have been influenced by the mental event. Suppose a quantum physical event has a 50/50 chance of occurring, and that event is influenced by some mental event such that the physical event no longer occurs with a 50/50 chance. This might violate entropy, since a system can now become more ordered because of mental events.
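The entropy worry can be illustrated numerically (the numbers here are mine, purely for illustration): the Shannon entropy of a two-outcome event is maximal at 50/50, so any mental bias on the probability would leave the outcome distribution more ordered.

```python
import math

# Shannon entropy of a binary event with probability p of occurring.
# Biasing a 50/50 event in either direction lowers this entropy.

def entropy_bits(p: float) -> float:
    """Entropy in bits of a two-outcome event."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no uncertainty
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(entropy_bits(0.5))  # 1.0 bit: maximally uncertain
print(entropy_bits(0.9))  # ~0.47 bits: the biased event is more ordered
```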

What’s your view? How can mental events be reconciled with physical events? Please provide references to the literature if at all possible. We don’t want personal theories.
 
  • #2
My objections to his argument:

(2) I don't think this is an established fact. If we knew every cause and effect, then everything would be known and science would have nothing left to investigate. But we don't know everything, so this is not the case. We don't know if and how strings work, if there are other universes, what happens in black holes, what started the universe, how gravity works, how QM and GR can fit together, where the laws of physics came from, how atoms work, etc. (example of an unexpected atomic interaction: http://www.sciencedaily.com/releases/2008/07/080702132209.htm, or the reasons they built the Large Hadron Collider). So there is room for unknown causal powers (either physical or mental) in our universe.

(3) is avoided by adopting monism such as materialism, panpsychism, idealism or something else. With materialism: if mind = matter, then mind has the same causal powers as matter. Panpsychism: we can talk about "the physical" as if it is unconscious, but we don't really know. A physical body might operate according to a known mechanism, yet be conscious. There is no logic that states that mechanistically/deterministically behaving objects cannot be conscious or that consciousness cannot cause mechanistic/deterministic behaviour.

The issue with entropy seems more to do with free will, and not so much with mind in general. And if a mind with free will made a system more ordered in one location, yet this caused a decrease in order in another location, would it violate entropy? If not, then a free will mind has much room to operate in concordance with entropy.
 
  • #3
Hi pftest,
pftest said:
My objections to his argument:
I’d rather not go through everyone’s own arguments and objections. Please review some of the literature on the topic.

Regarding atomic interactions, that’s a non-starter. Those are physical processes that can be objectively measured, and if they are random in nature, then as I’d pointed out before, the statistical chances of those processes occurring can be quantified. Radioactive decay, for example, has a well-defined statistical rate of occurrence. No one has ever suggested that the likelihood of a physical process occurring is dependent on someone’s mood, for example, which is what we’d need to find if mental causation is true.

pftest said:
With materialism: if mind = matter, then mind has the same causal powers as matter.
This leaves open the explanatory gap. Why should there be any feeling or phenomenal experience at all? How can we know what a computer experiences since everything a computer does can be FULLY explained in physical terms? This argument can be extended to the mind given the present computational paradigm. What we say and how we act can be explained by referencing the governing physical interactions between neurons and other physical interactions within the body. It also leaves open the issue of reliable reporting that we discussed in your other thread.

I’d suggest doing a search of the web for:
http://www.google.com/search?hl=en&q=mental+causation&aq=f&oq=&aqi=g2
http://www.google.com/search?hl=en&source=hp&q=explanatory+gap+&aq=f&oq=&aqi=g1
 
  • #4
I'm sorry I don't have any references, but I shall look for some later. I will have a go at it now anyway; I believe it is not against the forum rules to do so.

Q_Goest said:
Regarding atomic interactions, that’s a non starter. Those are physical processes that can be objectively measured and if they are random in nature, then as I’d pointed out before, the statistical chances of those processes occurring can be quantified. Radioactive decay for example, has a well defined statistical rate of occurance. No one has ever suggested that the liklihood of a physical process occurring is dependant on someone’s mood for example, which is what we’d need to find if mental causation is true.
I brought the atomic interactions up because it shows that even in atoms there is room for unknown causal powers. That room itself is enough to dismiss the idea that there is no room for mental causation.

Q_Goest said:
This leaves open the explanatory gap. Why should there be any feeling or phenomenal experience at all? How can we know what a computer experiences since everything a computer does can be FULLY explained in physical terms? This argument can be extended to the mind given the present computational paradigm. What we say and how we act can be explained by referencing the governing physical interactions between neurons and other physical interactions within the body. It also leaves open the issue of reliable reporting that we discussed in your other thread.
You are right, i don't think materialism solves this and i should not have mentioned it.

What I was trying to say is that "the physical" need not be in conflict with "the mental". When we have an equation that predicts that an object will move in a straight line, it doesn't follow that the object has no mind.

The statements "the object will move in a straight line" and "the object has a mind" are not in conflict with each other. Similarly, I am saying that physical interactions need not be in conflict with mental ones. The end result is then panpsychism or neutral monism (but not materialism as i mistakenly said).
 
  • #5
Q_Goest said:
We can examine a computer that is reporting that it sees the color red when looking at a fire truck for example. This computer will have a camera for eyes and a speaker for a mouth. So out of the speaker, when the camera is turned on a fire truck, it reports ‘red’. But did it report red because it actually is experiencing red, or because its circuit is designed such that red is reported?

It may be important to distinguish between a computer - a Turing machine - and machines more generally here. What you are describing is a rather hybrid system that confuses the essential issues I believe.

So a Turing machine is the familiar tape and gate system. If it is really "computing" that your machine is doing, then all those functions and activities can be reduced to the making and erasing of marks on an infinite tape. You can then ask the relevant questions of this most minimal model and see how they stack up.

Do this and you can see for example that you now have no clear place to insert your camera input and your speaker output. You can write these actions into the program as data on the tape. But the point is that YOU have to. The computer is not in a dynamic relationship with the world as a basic fact of its nature.
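The minimal tape-and-gate model can itself be written down in a few lines. The toy machine below (a unary incrementer of my own devising, not anything from the thread) shows the point: all the "computing" reduces to making and erasing marks on a tape, and the camera and speaker have no natural home in it.

```python
# A minimal Turing machine: a tape of marks plus a transition table.
# Toy example: a unary incrementer that appends one mark to a run of 1s.

def run_turing(tape: list, table: dict, state: str = "start") -> list:
    pos = 0
    while state != "halt":
        # Read the mark under the head ('_' is the blank symbol).
        symbol = tape[pos] if pos < len(tape) else "_"
        write, move, state = table[(state, symbol)]
        if pos >= len(tape):
            tape.append("_")   # extend the (conceptually infinite) tape
        tape[pos] = write      # making/erasing a mark is ALL it does
        pos += 1 if move == "R" else -1
    return tape

table = {
    ("start", "1"): ("1", "R", "start"),  # skip over existing marks
    ("start", "_"): ("1", "R", "halt"),   # write one more mark, stop
}
print(run_turing(list("111_"), table))  # -> ['1', '1', '1', '1']
```

Notice that there is simply no slot in this model for a sensor or a speaker; any "input" has to be written onto the tape by us before the run begins, which is the point being made above.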

Now you can begin to think about how you would build up some actual modelling relation with a world - how you would build something more like a neural network that could learn from experience. What is it exactly that you are adding that was missing?

To cut a long story short, the whole epiphenomenal/dualistic debate arises because we insist on taking a strictly bottom-up, built from smallest components, approach to thinking about complex adaptive systems. Complexity involves both its forms - its global organisation - as well as its substances, its local material out of which things can get made (which includes the notion of information, or transistor bits, or marks on an infinite tape).

With neural networks, we are beginning to see signs of a global ongoing state that acts as a living context to the system's moment-to-moment reactions - the ideas or long term memories that frame the impressions or short term memories (see Grossberg, Rao, Hinton, MacKay, Attneave, or anyone working on generative neural nets, forward models, Helmholtz machines, deictic coding, etc).

A Turing machine has no global organisation, no hierarchy of operational and temporal scale. So there is nothing like a top-down causation guiding and constraining its actions. There is no internal meaning or semiosis. It is only we programmers who found the marks on the tape meaningful when we first wrote them and when we looked again at how they were rearranged.

All this is perfectly obvious from a systems perspective and so these kinds of philosophical traumas have no real content. There is a problem of coming up with an adequate model of top-down causality as it applies to conscious human brains - it is a hard ask - but not an issue of actual causal principle.

To add a little reality to your hybrid machine, what if you allowed that it was sufficiently complex to be an anticipatory device?

This would mean that before a fire truck hove into sight, it would be in a state of prevailing expectation of not seeing red in that particular part of the visual field. It would be expecting to see the colour of whatever was already in that place. The red fire truck would then be a surprise. Although hearing its sirens would prime it for the sight of the truck coming around the corner (and if the truck were painted green, that would be an even bigger surprise).

And so on. The point being that the "mind" is always there as a global state of anticipation and prepared habits. New information can be taken in. But there is always a prevailing context that is framing it. This is what a computer simulation would have to replicate in all its gloriously complex detail. And such a simulation would have even less to do with the canonical Turing machine than a neural net. It would in fact have to have the real-life dynamism of a human brain embedded in a human body.
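The anticipatory device can be caricatured in a few lines of code (the update rule and numbers are my own toy choices): the system carries a running expectation, and "surprise" is just the error between expectation and input, which in turn reshapes the expectation.

```python
# Toy sketch of an anticipatory system: a running expectation frames
# each new observation; surprise is the prediction error.

def observe(expectation: float, observation: float, lr: float = 0.5):
    """Return (surprise, updated expectation) for one observation."""
    surprise = abs(observation - expectation)
    new_expectation = expectation + lr * (observation - expectation)
    return surprise, new_expectation

exp = 0.0  # prevailing expectation: "no red" in this visual field
for seen in [0.0, 0.0, 1.0, 1.0]:  # the red fire truck arrives on step 3
    surprise, exp = observe(exp, seen)
    print(surprise, exp)
# Surprise spikes (1.0) when the red truck first appears, then fades
# as the expectation adapts - the prevailing context reframes the input.
```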

So the standard trick of philosophical dualists is to say we can't imagine a Turing machine being conscious. Well neither can a systems theorist. And what is completely lacking in the one, and completely necessary in the other, is this hierarchy of scale, this interaction between bottom-up constructing causality and top-down contextualising, or constraining, causality.
 
  • #6
Hi apeiron,
Thanks for the response. I realize some folks feel a systems approach and some form of downward causation are instructive. The paper "Physicalism, Emergence and Downward Causation" by Campbell and Bickhard, for example, is right up your alley. They discuss mental causation and reference Kim. To me, it's all mere handwaving.

I'm on the other side of the fence. Craver and Bechtel2 I think do a nice job of getting in between the two camps and provide an argument that you might find interesting. They suggest a way of thinking about "top-down causation" without resorting to downward causation. They suggest that interlevel relationships are only constitutive. To a system level approach they suggest, "...those who invoke the notion of top-down causation ... owe us an account of just what is involved.” I see very few individuals attempt to provide that account, and those that do have not been able to prove any kind of downward causation. Bedau1 discusses weak and strong emergence as well as downward causation. Bedau suggests that "weak emergence is all we are entitled to" and does a very good job pointing out that "emergent macro-causal powers would compete with micro-causal powers for causal influence over micro events, and that the more fundamental micro-causal powers would always win this competition." I see no evidence to challenge that.

Regardless of which camp you’re in, the system's approach doesn't do anything to change the conclusion. Every single transistor, switch or other classical element of any neural net only ever changes state because of local causal actions. In the case of a transistor, it's the current applied to the transistor. It really is that simple!

1. Bedau: http://people.reed.edu/~mab/publications/papers/principia.pdf
2. Craver and Bechtel: http://philosophyfaculty.ucsd.edu/faculty/pschurchland/classes/cs200/topdown.pdf
 
  • #7
Q_Goest said:
Thanks for the response. I realize some folks feel a systems approach and some form of downward causation are instructive. The paper "Physicalism, Emergence and Downward Causation" by Campbell and Bickhard, for example, is right up your alley. They discuss mental causation and reference Kim. To me, it's all mere handwaving.

Most philosophers don't take it seriously, and yet most mathematical biologists do. Interesting, that. o:)

Here is my own set of refs from an earlier thread on this...
(https://www.physicsforums.com/showthread.php?p=2469005&highlight=emmeche#post2469005)

http://www.ctnsstars.org/conferences...0causation.pdf

http://www.buildfreedom.com/tl/tl20d.shtml

http://people.reed.edu/~mab/papers/principia.pdf

http://www.nbi.dk/~emmeche/coPubl/2000d.le3DC.v4b.html

http://www.nbi.dk/~emmeche/coPubl/97e.EKS/emerg.html

http://pespmc1.vub.ac.be/CSTHINK.html

http://www.calresco.org/

http://books.google.co.nz/books?id=N...ollege&f=false

http://www.nbi.dk/~emmeche/pr/DC.html

http://www.isss.org/hierarchy.htm

https://webspace.utexas.edu/deverj/p...bingmatter.pdf

Q_Goest said:
Regardless of which camp you’re in, the system's approach doesn't do anything to change the conclusion. Every single transistor, switch or other classical element of any neural net only ever changes state because of local causal actions. In the case of a transistor, it's the current applied to the transistor. It really is that simple!

Not so fast, cowboy. If you are dealing with hierarchically organised systems, you can't just blithely label all the causality as "local". The point is that there are in fact prevailing long term states of activity across the network that act as contextual constraints.

The on-ness or off-ness of a particular transistor gate is the result of events that happened in the past and of events predicted (with a certain weight) in the future.

The on-ness or off-ness of a particular transistor gate has both some level of "now" meaning relating to some current spatiotemporal pattern of activation, and also some level of more general meaning as part of long term memory patterns.

If you look at the transistor gate from an outside point of view - and make that choice to measure its isolated state at some isolate instant in its history - then it may indeed seem you are only seeing bottom-up local causality. But you precisely then are missing the real deal, the internal systems perspective by which every local action has meaning because it occurs with a running global context.

If philosophers studied biology and systems theory, this would not be such a mystery.

There are honorable exceptions of course like Evan Thompson.

http://individual.utoronto.ca/evant/MBBProblem.pdf
 
  • #8
Hi apeiron,
apeiron said:
The on-ness or off-ness of a particular transistor gate is the result of events that happened in the past and of events predicted (with a certain weight) in the future.
This is okay. Why "certain weight" though? Are you suggesting computers are not deterministic?
apeiron said:
The on-ness or off-ness of a particular transistor gate has both some level of "now" meaning relating to some current spatiotemporal pattern of activation, and also some level of more general meaning as part of long term memory patterns.

If you look at the transistor gate from an outside point of view - and make that choice to measure its isolated state at some isolate instant in its history - then it may indeed seem you are only seeing bottom-up local causality. But you precisely then are missing the real deal, the internal systems perspective by which every local action has meaning because it occurs with a running global context.
Would you agree that none of this changes the fact that transistors only ever change state because of a current being applied? And if that's true, then mental states per the standard computational paradigm still don't influence individual transistors any more than a brain state influences individual neurons. Macro-states at the classical scale simply don't influence micro-states except that macro-states provide these boundary conditions that put limits on potential micro-states. I only see bottom up causality (in classical mechanics) because that's all there is. That's why engineers, scientists, meteorologists, etc... use finite element analysis for all kinds of structural, fluid, heat transfer, electromagnetic... all classical phenomena. They can all be dealt with using local bottom up causation. That's the "real deal".

Regarding chemistry, biology, and condensed matter physics, there are many instances of new and unexpected things that can happen. There are some articles in the literature that make valid cases for there being non-separable physical states at or below the level where classical mechanics gives way to quantum mechanics. We might find some common ground there, but I doubt we'll ever see eye to eye on everything.
 
  • #9
Q_Goest said:
This is okay. Why "certain weight" though? Are you suggesting computers are not deterministic?

Computers are certainly designed to be as deterministic as possible - that is part of their engineering spec. And of course we know how difficult this is becoming as chip gates get down to the nano-scale.

But no. The nodes of neural nets are weighted in the sense they do not switch indiscriminately but on the basis of their learning history, just like the real neurons they are meant to vaguely simulate.
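A minimal weighted node of this kind can be sketched as a classic perceptron (a toy example of my own; the OR task and learning rate are arbitrary choices): the node's switching threshold ends up encoded in weights shaped by its training history, rather than being fixed in advance.

```python
# Toy perceptron: a node that switches on the basis of weights
# accumulated from its learning history, not indiscriminately.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule on (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out
            # Each error nudges the weights: history shapes switching.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR: afterwards the node's on/off behaviour reflects
# its past training, not just the instantaneous inputs' wiring.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
print([predict(*x) for x, _ in samples])  # -> [0, 1, 1, 1]
```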

Q_Goest said:
Would you agree that none of this changes the fact that transistors only ever change state because of a current being applied? And if that's true, then mental states per the standard computational paradigm still don't influence individual transistors any more than a brain state influences individual neurons. Macro-states at the classical scale simply don't influence micro-states except that macro-states provide these boundary conditions that put limits on potential micro-states. I only see bottom up causality (in classical mechanics) because that's all there is.

I was saying that if you insist on only measuring systems in simple ways, you will of course only extract simple measures of what is going on.

Your question is "why did this transistor switch"? You say it is only because of some set of inputs arriving at that moment. I say it is because of some history of past learning, some set of expectations about what was likely to happen, some current context in which its switching state makes a co-operative and cohesive sense.

You then say, well, I'm looking at the transistor and I can't see these things. I reply that this is because all the fancy stuff that is actually making things happen has been hidden away from your gaze at the level of the software.

In a realistic neural simulation, for example, there would have to be some equivalent of neural priming, neural binding, population voting, evolving responses. Some thousands of transistors would be needed (a computational sub-system) to even begin getting this necessary global complexity represented as hardware.

So again, you are pursuing an illegitimate route to an argument.

The honest approach is to strip your thought experiment down to the bare essentials of a Turing machine and see if your idea still holds. And the standard outcome of such approaches is agreement that you have now clearly put all meaning outside the physical implementation. The writing and the interpreting of the programs is external to its running. And all you have done is break apart the bottom-up crunching from the top-down contextualisation. Not proved that the top-down part is actually unnecessary to the deal.

This is not a classical vs QM issue either. It applies to all systems (and thus all reality - reality being best understood as a system, except when you find it more useful to model it as a machine).
 
  • #10
Q_Goest said:
Would you agree that none of this changes the fact that transistors only ever change state because of a current being applied? And if that's true, then mental states per the standard computational paradigm still don't influence individual transistors any more than a brain state influences individual neurons.

But the applied current itself is based on inputs to the system; ultimately, from a user, who may be very well responding to an output from the system. I don't know how easily we can separate a computer from the user, or the engineers that designed it.

Q_Goest said:
This leaves open the explanatory gap. Why should there be any feeling or phenomenal experience at all? How can we know what a computer experiences since everything a computer does can be FULLY explained in physical terms?

Why shouldn't there be a feeling or phenomenal experience? I'm not sure we can know what a computer experiences until we close the explanatory gap. I think we'd have to find the physical basis for our experience and start mapping it to get an idea of what physical process is associated with what kinds and parts of consciousness.

Speaking in magnitudes of centuries, I don't think we're that far off from being able to bring the mind into the physical arena.
 
  • #11
Hi Q_Goest, some time ago it was discussed how experience is the action of several neurons and chemical processes recording an incident or event. If your computer records the "experience" of red and then uses that earlier response in a more recent one, then it is experiencing red. Similarly with the brain doing relatively the same action or event.

What's different here is that your computer is not set up to experience red with genetic responses or "innate" response. Every cell in our entire bodies responds to the colour red as do plants and other animals. This is either genetic or a chemo-photosensitive trait that has been selected through our evolution.

I don't think the genetic or chemo-photosensitive reactions we have to the colour red are dependent on mental activity. The cells respond autonomically. And personally, I'd tell you that any brain with its visual centre, eyes etc... in working order is primarily autonomic as well. I would point out the obvious here and say that when our brain detects red there is an intellectual signal to stop the car... and that would indicate a mental cause of an action/event. (?)
 
  • #12
Hi pftest,
pftest said:
I brought the atomic interactions up because it shows that even in atoms there is room for unknown causal powers. That room itself is enough to dismiss the idea that there is no room for mental causation.
Sorry if my last post sounded a bit abrupt. I actually agree that there may be room for mental causation at a quantum level, but I don’t yet see how and I’ve not read enough of the literature to locate a good argument in this regard. One issue is the explanatory gap – why should any physical process be accompanied by a mental one? This is equally applicable to quantum interactions. Another issue is that there could be violations of ‘entropy’ in the sense that I provided in the OP. I think what quantum models have in their favor is that they provide a physical substrate which is intrinsically inseparable. Phenomena that can be described in classical terms however are separable (depending on how you define separable).


Hi apeiron,
Let’s clarify one issue. In your first post you mentioned “top-down causation” and when I read that in context it seemed to me you meant “downward causation”. Hence the focus on the transistor. Downward causation may or may not be what you have in mind, but I’m assuming it is. Top-down causation has been defined in different ways in the literature, so perhaps you’d like to clarify.
1) Top-down causation can mean “downward causation”. I’ll define “downward causation” below as defined by Bedau.
or
2) it can mean that the boundary conditions of a physical system restrict the potential micro-states of that system. For example, a point on a wheel rolling down a hill has its motion restricted by the boundary conditions on the wheel. See Bedau’s paper for more on this. This kind of top-down causation is not problematic, but it also doesn’t lend any help to mental causation.
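This benign sense of boundary-condition constraint is easy to make precise. The sketch below (my own illustration of the rolling-wheel example) parameterizes a point on the rim of a rolling wheel: the macro-level rigidity constraint fixes the point's path to a cycloid, with no extra downward causal power needed.

```python
import math

# A point on the rim of a wheel rolling along the x-axis: the rigid
# wheel (a macro boundary condition) constrains the point's motion
# to a cycloid. Nothing beyond the constraint is causally at work.

def rim_point(radius: float, theta: float):
    """Position of a rim point after the wheel has rolled by angle theta."""
    x = radius * (theta - math.sin(theta))
    y = radius * (1 - math.cos(theta))
    return x, y

print(rim_point(1.0, 0.0))      # -> (0.0, 0.0): point touching the ground
print(rim_point(1.0, math.pi))  # top of the arch: approximately (pi, 2.0)
```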

You may have another meaning for top down causation in mind, so feel free to clarify.


Hi Pythagorean,
I’d asked for people to reference the literature, not so much because I like people to keep referencing things, but because the philosophy forum has a reputation for people ignoring the literature as if it doesn’t exist. The issues regarding cognition have already been considered in depth by others, so using our own intuitions about philosophy typically gets us in trouble.

Before getting into this, I want to define “downward causation” as given by Bedau:
The most stringent conception of emergence, which I call STRONG EMERGENCE, adds the requirement that emergent properties are supervenient properties with irreducible causal powers. These macro-causal powers have effects at both macro and micro levels, and macro-to-micro effects are termed “downward” causation. We saw above that micro determination of the macro is one of the hallmarks of emergence, and supervenience is a popular contemporary interpretation of this determination. Supervenience explains the sense in which emergent properties depend on their underlying bases, and irreducible macro-causal power explains the sense in which they are autonomous from their underlying bases.

By definition, such [downward] causal powers cannot be explained in terms of the aggregation of the micro-level potentialities; they are primitive or “brute” natural powers that arise inexplicably with the existence of certain macro-level entities. This contravenes causal fundamentalism – the idea that macro causal powers supervene on and are determined by micro causal powers, that is, the doctrine that “the macro is the way it is in virtue of how things are at the micro”

Downward causation is now one of the main sources of controversy about emergence. There are at least three apparent problems. The first is that the very idea of emergent downward causation seems incoherent in some way. Kim (1999, p. 25) introduces the worry in this way:
The idea of downward causation has struck some thinkers as incoherent, and it is difficult to deny that there is an air of paradox about it: After all, higher-level properties arise out of lower-level conditions, and without the presence of the latter in suitable configurations, the former could not even be there. So how could these higher-level properties causally influence and alter the conditions from which they arise? Is it coherent to suppose that the presence of X is entirely responsible for the occurrence of Y (so Y’s very existence is totally dependent on X) and yet Y somehow manages to exercise causal influence on X?
The upshot is that there seems to be something viciously circular about downward causation.

The second worry is that even if emergent downward causation is coherent, it makes a difference only if it violates micro causal laws (Kim 1997).
I’ll end it there. Hopefully you get the idea.

If you agree that transistors only change state because of there being a current applied to the base, then you can successfully rule out downward causation (and very likely, mental causation) for such a system of switches. I believe you must agree with that, so hopefully the above discussion by Bedau helps provide an understanding of what is meant by downward causation. I’d strongly recommend reading Bedau’s paper (link above).

There are others who accept this but still try to defend mental causation in some fashion. I’ve seen various methods of attack. I’d categorize these as largely appealing to the complexity of such a system and glossing over the simple facts. Once you rule out downward causation, mental causation (using the standard computational paradigm) becomes not only indefensible, but it creates a very nasty paradox.

If we rule out mental causation, we have a very serious paradox that is almost (but not quite) ignored in the literature. The problem is that if mental causation is false, then any behavior we express, or any report of mental states we give, cannot be shown to correlate reliably with the actual mental states. In fact, in the worst case, we may even be forced to accept the worst form of panpsychism: that all matter experiences every possible phenomenal experience at the same time!* The standard line of defense on this issue is to say that mental states ARE physical states. However, this doesn’t help with the paradox one bit IMHO. If the mental states really are epiphenomenal on the physical states, then there is nothing we can do to determine what those mental states are. We can’t discover them by observing behavior, and we can’t find out by asking people about them.

Ultimately, the behavior and reports of those mental states in a computer are completely governed by the physical states, so there is no chance of the mental state being reliably reported. For example, consider a computer animation on a screen of a man in pain saying he’d like you to stop pressing the down arrow because each time you do he feels a stabbing pain. If the computer really feels this pain, how can we know? Did the computer say so because of the change in physical states of the switches? Or did the computer experience something and it told you what it was feeling?

We can take the machine apart and we’ll find a series of transistors that change state just like dominos falling over. There is a physical reason for the behavior (and the reporting) that the animated character provides. The animation MUST act and say those things because there is a physical reason for the changes in state of the computer. However, the figure could equally be experiencing anything or nothing at all. There is no way for the animated figure to do anything but act and talk as if it were experiencing pain because that’s what the physical changes of state resulted in. Those physical changes of state can’t report mental states in any way, shape or form, so even behavior does not reliably correspond to mental states if mental causation is ruled out.

Per the paradox above, I think we’re forced to conclude that mental causation is a fact of nature. But the computational paradigm rules this out since it insists that classical scale physical processes govern the actions of the brain, and those processes are both separable and locally causal such that the overall macro-state of the brain has no causal influence on any individual neuron any more than the macro-state of a computer has a causal influence on any individual transistor.

*This was brought out by Mark Bishop, "Dancing with Pixies"
 
  • #13
Q_Goest said:
Let’s clarify one issue. In your first post you mentioned “top-down causation” and when I read that in context it seemed to me you meant “downward causation”.

I would happily use the terms interchangeably. And I don't actually think either of them are the best way to put it.

A first issue is that this is "top-down" and "downwards" in spatiotemporal scale. So it is better to speak of global causality. The action is from global moments to the local ones. So it is from a larger size, but also a longer time. Thus it is as much from before and after as it is from "above" in spatial scale. Which is why there is such a stress on history, goals and anticipation - the global temporal aspects.

A second point is that I want to stress the primacy of constraint as the form of causality we are talking about. I am dividing causality not just by direction or scale but also by kind.

Local bottom-up causality has the nature of "construction" - additive action. Global top-down causality has the nature of "constraint" - a suppression of local degrees of freedom (free additive constructive action).

Note this is different from versions of cybernetics or complexity theory, for example, where the top-down action is thought of as "control". Another different kind of thing. Although autonomous systems (like us humans) can appear to act in controlling causal fashion on the world.

As you suggest, a lot of people see control as indeed the definition of what consciousness is all about if consciousness is a something that does anything. But this is a wrong idea on closer analysis.
 
  • #14
Q_Goest said:
<...>

If we rule out mental causation, we have a very serious paradox that is almost (but not quite) ignored in the literature. The problem is that if mental causation is false, then any behavior we express, or any report of mental states we give, cannot be shown to correlate reliably with the actual mental states. In fact, in the worst case, we may even be forced to accept the worst form of panpsychism: that all matter experiences every possible phenomenal experience at the same time!* The standard line of defense on this issue is to say that mental states ARE physical states. However, this doesn’t help with the paradox one bit IMHO. If the mental states really are epiphenomenal on the physical states, then there is nothing we can do to determine what those mental states are. We can’t discover them by observing behavior, and we can’t find out by asking people about them.

Ultimately, the behavior and reports of those mental states in a computer are completely governed by the physical states, so there is no chance of the mental state being reliably reported. For example, consider a computer animation on a screen of a man in pain saying he’d like you to stop pressing the down arrow because each time you do he feels a stabbing pain. If the computer really feels this pain, how can we know? Did the computer say so because of the change in physical states of the switches? Or did the computer experience something and it told you what it was feeling?

We can take the machine apart and we’ll find a series of transistors that change state just like dominos falling over. There is a physical reason for the behavior (and the reporting) that the animated character provides. The animation MUST act and say those things because there is a physical reason for the changes in state of the computer. However, the figure could equally be experiencing anything or nothing at all. There is no way for the animated figure to do anything but act and talk as if it were experiencing pain because that’s what the physical changes of state resulted in. Those physical changes of state can’t report mental states in any way, shape or form, so even behavior does not reliably correspond to mental states if mental causation is ruled out.

Per the paradox above, I think we’re forced to conclude that mental causation is a fact of nature. But the computational paradigm rules this out since it insists that classical scale physical processes govern the actions of the brain, and those processes are both separable and locally causal such that the overall macro-state of the brain has no causal influence on any individual neuron any more than the macro-state of a computer has a causal influence on any individual transistor.

*This was brought out by Mark Bishop, "Dancing with Pixies"

Background

Ok, first some of my background: I have no formal education in any of the mind sciences. I have an undergraduate degree in physics, so I'm very causally minded. I am currently designing a master's degree in theoretical neuroscience and have been investigating the literature on my own (I start the relevant classes next semester).

I spent a little time looking at the top-down approach, but for the most part, I've been looking at bottom-up approaches lately. I'm familiar with Christof Koch (here's his laboratory home page: http://www.klab.caltech.edu/ ) and Daniel Dennett (a philosopher who has lots of talks available online).

Preconceived Notion

Here are some experiments that seem to suggest that top-down causation doesn't exist:

[embedded video clips]

Personally, I don't think there's such a thing as top-down causation. I tend to agree with Dennett that nobody's really running the wheelhouse (the problem of the Cartesian Theatre, as he calls it: http://en.wikipedia.org/wiki/Cartesian_theater ). If somebody's running the wheelhouse, then we still haven't answered the question of the mind; we've just specified its location (in the wheelhouse!).

I take the materialist view that our system of biological neural networks is handling inputs and transforming them into outputs. In this view, for instance, the interneural computations between sensory (input) neurons and motor (output) neurons might be responsible for higher-level consciousness, as well as the illusion of self-control, will-power, and other abstract ideas.

Paradox

If we define consciousness as strictly a 1 and non-consciousness as a 0, then this paradox is sure to bother people, but Koch, for example, claims that there are many kinds of consciousness. (Though Koch also refrains from a pinpoint definition of consciousness.)

If we assume that the many kinds of consciousness can be normalized and assigned a value between 1 and 0 instead of strictly 1 or 0, then it may be more palatable to say something like "The computer has a Class C consciousness rating of 0.3".

Like I said before, though, I believe we will have to wait for people like Koch and other bottom-up theoretical neuroscientists to pin down the physical system of events associated with consciousness before we can judge whether other systems experience some degree of consciousness.
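The graded rating suggested above could be sketched as a simple normalization. This is a hypothetical illustration: the dimension names and weights are invented, not taken from Koch or anyone else in the thread.

```python
# Hypothetical sketch of a graded consciousness rating: several
# per-dimension scores, each already in [0, 1], combined into a single
# weighted average in [0, 1]. All names and numbers are invented.

def consciousness_rating(scores, weights):
    """Weighted average of per-dimension scores (each in [0, 1])."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total

# Invented example dimensions for a machine under evaluation:
scores = {"integration": 0.4, "report": 0.1, "self_model": 0.2}
weights = {"integration": 1.0, "report": 1.0, "self_model": 1.0}
rating = consciousness_rating(scores, weights)
print(round(rating, 2))  # → 0.23
```

The point is only that "conscious vs. not conscious" becomes a continuous value, so a statement like "a Class C consciousness rating of 0.3" is at least expressible, whatever the underlying dimensions turn out to be.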
 
Last edited by a moderator:
  • #15
Pythagorean said:
I take the materialist view that our system of biological neural networks is handling inputs and transforming them into outputs. In this view, for instance, the interneural computations between sensory (input) neurons and motor (output) neurons might be responsible for higher-level consciousness, as well as the illusion of self-control, will-power, and other abstract ideas.
Suppose you are right and consciousness is the computation of neurons (or maybe I misunderstood what you meant by 'responsible'). Since computation has causal powers (it does something in the physical world), this would grant causal powers to consciousness. If C does cause things, why is the sense of control still an illusion? It may be that this causal power matches the subjective sense of self-control.

Btw, the free will issue can be disconnected from the mental causation issue. The Libet experiments, for example, may show that the decision feeling ("Hey, I just made a decision") comes after the decision has been physically made. But even prior to the decision feeling, the subject was already conscious, and those conscious states may have influenced the physical processes anyway. So there may be mental causation, regardless of whether it felt like a decision or not. A simple example is just watching TV. You can have all kinds of experiences, and your neurons will do all kinds of stuff, yet there is no feeling of "I just made a decision".

Paradox

If we define consciousness as strictly a 1 and non-consciousness as a 0, then this paradox is sure to bother people, but Koch, for example, claims that there are many kinds of consciousness. (Though Koch also refrains from a pinpoint definition of consciousness.)

If we assume that the many kinds of consciousness can be normalized and assigned a value between 1 and 0 instead of strictly 1 or 0, then it may be more palatable to say something like "The computer has a Class C consciousness rating of 0.3".

Like I said before, though, I believe we will have to wait for people like Koch and other bottom-up theoretical neuroscientists to pin down the physical system of events associated with consciousness before we can judge whether other systems experience some degree of consciousness.
I like the idea of a spectrum, since that is how all of nature seems to work. But... if there is some minimum degree of consciousness (0.000001), then at the very least everything is conscious to some degree.
 
Last edited:
  • #16
I just have some questions. Perhaps I missed it, but I haven't seen a definition of "consciousness". The Turing Test was designed to test for something called "intelligence". If a machine is "intelligent", is it therefore "conscious"? If an entity is "conscious" does it therefore have some level of "intelligence"?
 
  • #17
Pythagorean said:
Personally, I don't think there's such a thing as top-down causation. I tend to agree with Dennit that nobody's really running the wheelhouse (The problem of the Cartesian Theatre, as he calls it: http://en.wikipedia.org/wiki/Cartesian_theater ).

Oh how my heart sinks at the mention of these names, at the whole tenor of what you already believe.

I spent 15 years in this area. And all I can say is that you are heading down the hugest blind alley.

If you want a flavour of the neuroscience debate over top-down causality, see for example this...
http://www.dichotomistic.com/mind_readings_molecular_turnover.html

Those YouTube clips are all about habit vs attention. It would be correct to think of habits as bottom-up in a sense. But habits would have originally been learned in the eye of (top-down) attention, and then unfold within the context of some prevailing attentive state.

So if I know I am required to flex my finger, then that is the top-down anticipatory preparation. A whole lot of global brain set-up is taking place - that would look quite different from when I want to be very sure I'm not about to make some unnecessary twitch. The actual flexing of a finger is a routinised habit and so has to arise within the prepared context via activity from the relevant sub-cortical paths - striatum, cerebellum, etc.

But hey, if you are going to be studying neuroscience, you will learn these things anyway.
 
  • #18
apeiron said:
So if I know I am required to flex my finger, then that is the top-down anticipatory preparation. A whole lot of global brain set-up is taking place - that would look quite different from when I want to be very sure I'm not about to make some unnecessary twitch. The actual flexing of a finger is a routinised habit and so has to arise within the prepared context via activity from the relevant sub-cortical paths - striatum, cerebellum, etc.

But in the second clip I provided, the subject has a button in each hand and decides randomly to press the left or the right button. By the time the person has perceived his choice and pressed his button, the testers, with their technology, have already (six seconds beforehand) predicted which side he was going to press.

Was the choice really conscious? It seems from this experiment, that the conscious decision came six seconds after the brain had already made its choice, leading me to believe the "conscious decision" wasn't really a decision at all, but a sensation resulting from a decision made by the neural network.
 
  • #19
Is there a simple example of top-down causality? One not involving minds, but just a simple physical system (the simpler the better).
 
  • #20
SW VandeCarr said:
I just have some questions. Perhaps I missed it, but I haven't seen a definition of "consciousness". The Turing Test was designed to test for something called "intelligence". If a machine is "intelligent", is it therefore "conscious"? If an entity is "conscious" does it therefore have some level of "intelligence"?

Conscious awareness: "The conscious aspect of the mind involving our awareness of the world and self in relation to it"

http://wps.pearsoned.co.uk/wps/media/objects/2784/2851009/glossary/glossary.html#C
 
  • #21
pftest said:
Suppose you are right and consciousness is the computation of neurons (or maybe I misunderstood what you meant by 'responsible'). Since computation has causal powers (it does something in the physical world), this would grant causal powers to consciousness. If C does cause things, why is the sense of control still an illusion? It may be that this causal power matches the subjective sense of self-control.

That's not quite what I meant. The computation is a description of the states of the neurons themselves. Consciousness is a phenomenal experience. In my argument, consciousness is a byproduct of the neural computation. That is, consciousness may be necessary for the computation to take place (I'm not sure whether it is or not!), but in my view, it would be something like waste heat from a generator. The waste heat doesn't generate the energy, but it's a necessary byproduct of energy generation.

My point is not that I know specifically what consciousness is in this way, it is that consciousness need not be responsible for causation, but may still be a necessary byproduct of neural activity.
 
  • #22
pftest said:
Is there a simple example of top-down causality? One not involving minds, but just a simple physical system (the simpler the better).

In this world one thing qualifies another. For instance, an observer needs an observation to be an observer... so causation would appear to be equally distributed throughout all systems.

For instance, the "top" is a result of the "bottom", and it would be equally correct to say that the bottom is a result of the top. An example is wheat: when it dies and decomposes, it causes fertility in the soil that will produce more wheat. So this cycle nullifies the idea that there is one cause to any one event.

Which came first... the egg or the chicken?
 
  • #23
Pythagorean said:
But in the second clip I provided, the subject has a button in each hand and decides randomly to press the left or the right button. By the time the person has perceived his choice and pressed his button, the testers, with their technology, have already (six seconds beforehand) predicted which side he was going to press.

Was the choice really conscious? It seems from this experiment, that the conscious decision came six seconds after the brain had already made its choice, leading me to believe the "conscious decision" wasn't really a decision at all, but a sensation resulting from a decision made by the neural network.

This is conceptual confusion on your part (and the hammy guy in the clip). You are confusing consciousness with self-regulation. You are making the mistake of trying to localise consciousness to instants in time. Your familiarity with simple machines - like input-output computers - is blinding you to the complex causality of living and mindful systems.

Psychology started with exactly these kinds of "conscious control" questions being asked experimentally by Helmholtz, Wundt and Donders over 100 years ago.

Consciousness is not a real-time process. It is a hierarchical construction. To talk properly about "when" particular things happen, you have to have a correct model of how the brain actually functions.
 
Last edited:
  • #24
pftest said:
Is there a simple example of top-down causality? One not involving minds, but just a simple physical system (the simpler the better).
Good question. I believe you really meant "downward causation". There are plenty of examples of top-down causation (depending on how that is defined). Let's define downward causation as Bedau (and many others) do, and top-down causation as what happens when there are boundary conditions on a macro-state that put limitations on a micro-state. Examples of top-down causation include the hinge on a door for example, which only allows a door to swing around a specific axis. Such examples of top-down causation are not particularly interesting. The question I suspect you want to ask regards downward causation. The answer isn't simple, but basically the answer is that no clear cases of downward causation are known to exist. Chalmers for example, states:
I do not know whether there are any examples of [downward causation] in the actual world... While it is certainly true that we can't currently deduce all high-level facts and laws from low-level laws plus initial conditions, I do not know of any compelling evidence for high-level facts and laws (outside the case of consciousness) that are not deducible in principle.
See Chalmers, "Strong and Weak Emergence"
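The door-hinge example of top-down causation can be sketched numerically. This is my own toy illustration, not from any of the cited papers: the hinge acts as a macro-level boundary condition that reduces the micro-level degrees of freedom of the door's edge from three (free position in space) to one (the swing angle).

```python
# Toy model of the hinge example: a macro-level boundary condition
# (the hinge) constrains the micro-level position of the door's edge
# to a circle about a fixed axis. Illustrative names only.

import math

def constrained_position(angle, radius=1.0):
    """Hinge along the z-axis: whatever forces act on the door, its
    outer edge can only occupy points on this circle, so the single
    angle parameter captures its one remaining degree of freedom."""
    return (radius * math.cos(angle), radius * math.sin(angle), 0.0)

# An unconstrained edge would need three coordinates to describe;
# under the constraint, one angle suffices.
p = constrained_position(math.pi / 2)
```

Nothing mysterious is happening causally here, which is the post's point: this kind of "top-down" constraint is real but not particularly interesting, and it is distinct from the strong downward causation at issue for minds.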
 
  • #25
Pythagorean said:
That's not quite what I meant. The computation is a description of the states of the neurons themselves. Consciousness is a phenomenal experience. In my argument, consciousness is a byproduct of the neural computation. That is, consciousness may be necessary for the computation to take place (I'm not sure whether it is or not!), but in my view, it would be something like waste heat from a generator. The waste heat doesn't generate the energy, but it's a necessary byproduct of energy generation.

My point is not that I know specifically what consciousness is in this way, it is that consciousness need not be responsible for causation, but may still be a necessary byproduct of neural activity.
So C is not the brain, it's not the computation or anything else physical, but it's something generated by the brain. But at this 'moment of generation', there is interaction between brain and C. It's like kicking a ball. You can't kick a ball without the ball also touching you.

I don't think noncausal byproducts exist. Heat from a generator may seem like a waste of energy from our perspective, but it still has causal powers, it could set a house on fire.
 
  • #26
Q_Goest said:
The answer isn't simple, but basically the answer is that no clear cases of downward causation are known to exist. Chalmers for example, states:

The whole universe can be thought of as downward causation. Unless someone wants to clarify for me how elementary particles/waves have the properties to cause the macro realm (the universe).
Is everyone living under the impression that indeterminate, fuzzy states, which for many-body systems are best treated as fields that stretch to infinity, are the cause of what we call the "Universe"?
 
  • #27
Q_Goest said:
Good question. I believe you really meant "downward causation". There are plenty of examples of top-down causation (depending on how that is defined). Let's define downward causation as Bedau (and many others) do, and top-down causation as what happens when there are boundary conditions on a macro-state that put limitations on a micro-state. Examples of top-down causation include the hinge on a door for example, which only allows a door to swing around a specific axis. Such examples of top-down causation are not particularly interesting. The question I suspect you want to ask regards downward causation. The answer isn't simple, but basically the answer is that no clear cases of downward causation are known to exist. Chalmers for example, states:

See Chalmers, "Strong and Weak Emergence"
Yes, I meant downward causation. I didn't know there were no examples of it. That doesn't really strengthen the idea that consciousness does it.
 
  • #28
pftest said:
<...>But... if there is some minimum degree of consciousness (0.000001), then at the very least everything is conscious to some degree.

That doesn't bother me.

apeiron said:
<...>You are making the mistake of trying to localise consciousness to instants in time. Your familiarity with simple machines - like input-output computers - is blinding you to the complex causality of living and mindful systems.<...>

Even if you globalize consciousness, we can still frame it as inputs and outputs. We've said nothing about how old the inputs are, whether they are being operated on by nonlinear functions, or how they relate to time or space.

Neither am I disputing the property of emergence itself.

"Living" and "mindful" systems are kind of vague terms; define them for me.

<...>Consciousness is not a real-time process. It is a hierarchical construction. To talk properly about "when" particular things happen, you have to have a correct model of how the brain actually functions.

I don't disagree with this. We'd be posting more in the medical sciences subforum if we knew how the brain functioned. But there are people working on it from both sides (top-down and bottom-up), and I hold out hope that the bottom-up people will be able to verify and falsify top-down conclusions. I am interested in both sides, but my background seems more suitable for bottom-up (modeling real neurons).

But why must the mind necessarily be different from a community of cells? We can observe emergent properties in cell communities (some more interesting and complex than others, granted).

What about the weather? Is not temperature an emergent property?

It's difficult since we personally experience events in our brains. We don't seem to experience all the emergent properties of our brain's functions. We can say a thousand things about what the neurons are doing, but tying them to experiences is more tricky.

Something interesting to leave you with:
The Dictyosteliida, cellular slime molds, are distantly related to the plasmodial slime molds and have a very different life style. Their amoebae do not form huge coenocytes, and remain individual. They live in similar habitats and feed on microorganisms. When food runs out and they are ready to form sporangia, they do something radically different. They release signal molecules into their environment, by which they find each other and create swarms. These amoeba then join up into a tiny multicellular slug-like coordinated creature, which crawls to an open lit place and grows into a fruiting body. Some of the amoebae become spores to begin the next generation, but some of the amoebae sacrifice themselves to become a dead stalk, lifting the spores up into the air.
 
  • #30
Q_Goest said:
Let's define downward causation as Bedau (and many others) do, and top-down causation as what happens when there are boundary conditions on a macro-state that put limitations on a micro-state.

Bedau is attempting to make a distinction between weak and strong downward causation. And I agree with his basic approach - though calling it "weak" is a bit unnecessary, because bottom-up construction would also be "weak" for the same reasons in my book.

Strong upward and downward causality would be a dualistic situation. It would be claiming reality has been broken apart. Which is not really what a systems theorist wants to think.

So instead the argument is that causality is separated in two directions - a dichotomy. And they always remain in mutual interaction - a system.

Thus both upwards and downwards causation exist as "weak" versions. That is, they don't actually exist, just very nearly exist.
 
  • #31
Pythagorean said:
We'd be posting more in the medical sciences subforum if we knew how the brain functioned.

But I do know how the brain functions. Therein seems to lie the difference.
 
  • #32
apeiron said:
But I do know how the brain functions. Therein seems to lie the difference.

This is kind of a meaningless post isn't it? Why don't you display your knowledge in a more responsive post?
 
  • #33
Pythagorean said:
This is kind of a meaningless post isn't it? Why don't you display your knowledge in a more responsive post?

Because you don't listen.

I posted a link to some actual neuroscience I wrote in my old Lancet Neurology column (now why would they ask me to be their commentator?). And which the Association for Consciousness Studies also re-ran as a keynote.

So why don't you respond to some knowledge?
 
  • #34
apeiron said:
Because you don't listen.

I posted a link to some actual neuroscience I wrote in my old Lancet Neurology column (now why would they ask me to be their commentator?). And which the Association for Consciousness Studies also re-ran as a keynote.

So why don't you respond to some knowledge?

Yes, you posted a link on molecular turnover. I don't see the conflict with anything I'm saying. I don't disagree with the facts you've presented. Your interpretation of what molecular turnover means differs from mine. A wave front is another example of a 'thing' that persists when its molecules do not. Again, how is this different from the weather?

I could even agree with your conclusion:
This kind of topsy-turvey picture can only be resolved by taking a more holistic view of the brain as the organ of consciousness. The whole shapes the parts as much as the parts shape the whole. No component of the system is itself stable but the entire production locks together to have stable existence. This is how you can manage to persist even though much of you is being recycled by day if not the hour.

without conflicting with my point (depending on how you define "whole").

If you are calling consciousness or 'the mind' the "whole" then you'd have to provide a valid argument of why you think these things represent the whole (which you have not done in this article).
 
  • #35
Pythagorean said:
If you are calling consciousness or 'the mind' the "whole" then you'd have to provide a valid argument of why you think these things represent the whole (which you have not done in this article).

Good luck with your future career then.
 
