The following is a thought experiment devised by David Chalmers which suggests that the physical constitution of a conscious system doesn't have any bearing on that system's state of consciousness. Rather, Chalmers argues that the only relevant properties of a system in determining its state of consciousness are organizational (functional / information processing) ones. More simply put, the idea is that the subjective experiences of a physical system don't depend on the stuff the system is made of, but rather what the system does. On this hypothesis, any two systems that process information in the same way should experience the same state of consciousness, regardless of their physical makeup.
The argument that follows has the flavor of the traditional functionalist thought experiment that goes something like the following: "Does a system S made of silicon have the same conscious experiences as a system N made of biological neurons, provided that S and N process information in identical ways? It does. Imagine that we replace a single neuron in N with a silicon chip that performs the same local information processing. There is no attendant difference in N's quality of consciousness. (Why should there be? Intuitively, there should be no difference.) Now, if we continue replacing neurons in N with silicon chips that perform the same local functions one by one, there will be no change in N's state of consciousness at each step, and eventually we will arrive at a system identical to S whose conscious experiences are still identical to N. Therefore, N and S have identical conscious experiences."
I have always regarded, and still regard, this traditional thought experiment for functionalism as terribly inadequate. It has the flavor of an inductive proof, but it begs the question on the base case (the claim that replacing a single neuron makes no difference to N's quality of consciousness): how can we simply assert that replacing a neuron in N with a silicon chip will not change N's state of consciousness? That is precisely the issue up for debate, so we cannot assume it is true in order to prove our conclusion. Even if intuition suggests that the base case is true, our intuition could be misguided.
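To make the inductive structure concrete, here is a toy model (entirely hypothetical, with made-up component functions) of what the replacement argument actually establishes: if each replacement preserves local input-output behavior, then global behavior is trivially preserved at every step. Note what the sketch does and does not show: behavioral invariance follows immediately, but whether *experience* tracks behavior is exactly the base case the traditional argument assumes.

```python
# Toy model of the neuron-replacement induction (hypothetical illustration).
# A "component" is just a function from inputs to an output; the substrate
# (biological vs. silicon) plays no role in what it computes.

def biological_neuron(x, y):
    return (x + y) % 2          # some fixed local function

def silicon_chip(x, y):
    return (x + y) % 2          # identical local input-output behavior

def run_system(components, a, b):
    # A trivial two-layer "system": its global behavior depends only on
    # what each component computes, not on what it is made of.
    h1 = components[0](a, b)
    h2 = components[1](b, a)
    return components[2](h1, h2)

N = [biological_neuron] * 3
out_before = run_system(N, 1, 0)

# Replace components one at a time; behavior never changes at any step,
# and we end with a fully silicon system S with the same behavior as N.
for i in range(3):
    N[i] = silicon_chip
    assert run_system(N, 1, 0) == out_before
```

The model makes the gap in the traditional argument visible: the induction goes through for anything that supervenes on the components' input-output functions, but consciousness is precisely the property whose supervenience on those functions is in dispute.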
Chalmers' argument uses the same basic thought experiment but employs a much more sophisticated and convincing analysis of the consequences of replacing a neuron in N with a silicon chip that performs the same local function. Rather than beg the question at the crucial point, Chalmers gives a well reasoned argument for why the replacement of a neuron in N by a silicon chip should not make a difference in N's state of consciousness.
Chalmers' thought experiment, as reproduced below, is excerpted from his paper http://www.u.arizona.edu/~chalmers/papers/facing.html .
-----------------------------------------
2. The principle of organizational invariance. This principle states that any two systems with the same fine-grained functional organization will have qualitatively identical experiences. If the causal patterns of neural organization were duplicated in silicon, for example, with a silicon chip for every neuron and the same patterns of interaction, then the same experiences would arise. According to this principle, what matters for the emergence of experience is not the specific physical makeup of a system, but the abstract pattern of causal interaction between its components. This principle is controversial, of course. Some (e.g. Searle 1980) have thought that consciousness is tied to a specific biology, so that a silicon isomorph of a human need not be conscious. I believe that the principle can be given significant support by the analysis of thought-experiments, however.
Very briefly: suppose (for the purposes of a reductio ad absurdum) that the principle is false, and that there could be two functionally isomorphic systems with different experiences. Perhaps only one of the systems is conscious, or perhaps both are conscious but they have different experiences. For the purposes of illustration, let us say that one system is made of neurons and the other of silicon, and that one experiences red where the other experiences blue. The two systems have the same organization, so we can imagine gradually transforming one into the other, perhaps replacing neurons one at a time by silicon chips with the same local function. We thus gain a spectrum of intermediate cases, each with the same organization, but with slightly different physical makeup and slightly different experiences. Along this spectrum, there must be two systems A and B between which we replace less than one tenth of the system, but whose experiences differ. These two systems are physically identical, except that a small neural circuit in A has been replaced by a silicon circuit in B.
The key step in the thought-experiment is to take the relevant neural circuit in A, and install alongside it a causally isomorphic silicon circuit, with a switch between the two. What happens when we flip the switch? By hypothesis, the system's conscious experiences will change; from red to blue, say, for the purposes of illustration. This follows from the fact that the system after the change is essentially a version of B, whereas before the change it is just A.
But given the assumptions, there is no way for the system to notice the changes! Its causal organization stays constant, so that all of its functional states and behavioral dispositions stay fixed. As far as the system is concerned, nothing unusual has happened. There is no room for the thought, "Hmm! Something strange just happened!". In general, the structure of any such thought must be reflected in processing, but the structure of processing remains constant here. If there were to be such a thought it must float entirely free of the system and would be utterly impotent to affect later processing. (If it affected later processing, the systems would be functionally distinct, contrary to hypothesis). We might even flip the switch a number of times, so that experiences of red and blue dance back and forth before the system's "inner eye". According to hypothesis, the system can never notice these "dancing qualia".
This I take to be a reductio of the original assumption. It is a central fact about experience, very familiar from our own case, that whenever experiences change significantly and we are paying attention, we can notice the change; if this were not to be the case, we would be led to the skeptical possibility that our experiences are dancing before our eyes all the time. This hypothesis has the same status as the possibility that the world was created five minutes ago: perhaps it is logically coherent, but it is not plausible. Given the extremely plausible assumption that changes in experience correspond to changes in processing, we are led to the conclusion that the original hypothesis is impossible, and that any two functionally isomorphic systems must have the same sort of experiences. To put it in technical terms, the philosophical hypotheses of "absent qualia" and "inverted qualia", while logically possible, are empirically and nomologically impossible.
(Some may worry that a silicon isomorph of a neural system might be impossible for technical reasons. That question is open. The invariance principle says only that if an isomorph is possible, then it will have the same sort of conscious experience.)
There is more to be said here, but this gives the basic flavor. Once again, this thought experiment draws on familiar facts about the coherence between consciousness and cognitive processing to yield a strong conclusion about the relation between physical structure and experience. If the argument goes through, we know that the only physical properties directly relevant to the emergence of experience are organizational properties. This acts as a further strong constraint on a theory of consciousness.