ThoughtExperiment
Removing a brain cell and replacing it with a chip must rest on one of two possible assumptions. The first is that what goes on inside a neuron is pertinent to consciousness. Alternatively, we might assume that what goes on inside a neuron is NOT pertinent to consciousness, and that only the outward function of the neuron is important. This breaks the argument into two possibilities:
EITHER: We assume the internal workings of a neuron are pertinent to consciousness, and we assume the computer chip has the same internal workings that the neuron does, so the computer-chip brain can become conscious. The second assumption is obviously not true: the internal workings of a neuron and a computer chip are different. So if we assume what goes on inside a neuron is pertinent to the phenomenon of consciousness, then a neuron cannot be replaced with a computer chip and still create consciousness - meaning we cannot assume the internal workings of the neuron are pertinent to consciousness if we want to accept strong AI.
OR: We assume that whatever a neuron is doing inside its cell membrane does not need to be duplicated; only its outward function needs to be duplicated. The chip only has to 'say' the same thing in response to a given input that a neuron does. This is the Chinese room argument at neuron scale. We could have any number of different methods of producing the same output from a given input. A recording, for example, is not the voice of a conscious individual; it is simply a duplicate of a conscious individual's voice. This is philosophically identical, and directly analogous, to the Turing test, which has been widely rejected as a method of determining consciousness.
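The point about different methods producing the same output can be sketched in code. This is an illustrative toy only (the threshold function and inputs are invented for the example, not a claim about real neurons): one "neuron" computes its output, the other merely replays a recording of those outputs, yet from the outside they are indistinguishable.

```python
# Toy illustration: two "neurons" with different internals but
# identical input/output behaviour. All names and values are invented
# for the example.

def biological_neuron(inputs):
    """Computes its output from its inputs (stands in for internal processes)."""
    return 1 if sum(inputs) > 2 else 0

# A lookup table recorded from the neuron above - like the recording of
# a voice, it reproduces the output without reproducing the process.
recorded_table = {
    pattern: biological_neuron(pattern)
    for pattern in [(0, 0, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1)]
}

def chip_neuron(inputs):
    """Replays the recorded output; internally nothing like the neuron."""
    return recorded_table[inputs]

# Outwardly indistinguishable on every recorded input:
assert all(biological_neuron(p) == chip_neuron(p) for p in recorded_table)
```

The chip passes a neuron-scale Turing test here - same outputs for the same inputs - while its internal workings (a table lookup) have nothing in common with the original's.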
Ref: Wikipedia: It has been argued that the Turing test so defined cannot serve as a valid definition of machine intelligence or "machine thinking" for at least three reasons:
1. A machine passing the Turing test may be able to simulate human conversational behaviour, but this may be much weaker than true intelligence. The machine might just follow some cleverly devised rules. A common rebuttal in the AI community has been to ask, "How do we know humans don't just follow some cleverly devised rules?" Two famous examples of this line of argument against the Turing test are John Searle's Chinese room argument and Ned Block's Blockhead argument.
Of the three reasons Wikipedia provides, this first one is the most applicable.
Conclusion: The best assumption a strong AI advocate can possibly make is that the chip produces an outward function that duplicates the function of a neuron. This is directly analogous to the Turing test, which holds that only the outward appearance of consciousness is necessary to assume consciousness exists. Further, one must assume the inner workings of a neuron have no effect whatsoever on the phenomenon of consciousness.
Comments/thoughts?