Canute (post #71)
StatusX said:
Those are all processes that can affect the physical world. You have to understand the difference between the subjective experience of a function and the function itself.

You stated that to know, to remember, to speak and to see are physical processes. No doubt there are physical processes usually involved in these things, but on what grounds do you say that only physical processes are involved? Do you have some data that nobody else has?
StatusX said:
The easiest way to understand this is by asking, what do you know about another person that you haven't inferred about them under the assumption they are just like you? You know they know things, since you can ask a question and get an intelligent reply. You know they can see, because you can throw a punch at them and they'll try to duck. You know they can speak, because you hear them. For this information to get from them to you, it had to affect the physical world, and so all these functions are physical.

You are assuming that consciousness is non-causal. You may be right, but you'll have trouble proving it. Nobody else can.
StatusX said:
However, you don't know they have an experience of these things. That you infer because you assume all humans are like you.

Quite agree. I find Occam's razor applicable in this situation. It would be a needless complication to assume that they are not conscious. However, you're right, this does not prove that they are.
StatusX said:
Your whole argument seems to rest on this, so let me just make it clear. Someone asks you: "Are you conscious right now?" This rattles your eardrum, and makes neurons begin firing. This starts a chain reaction that goes into your cerebrum where, due to its physical structure, a new signal is sent to your vocal cords to make the sound "yes." At every point in this process, the operation is physical, and there is no reason to doubt that every step will one day be explained by conventional science, just like digestion or circulation are now. (I know these aren't completely understood, but hopefully you get my point.) Your conscious experience during this time is a sort of side effect, and would not affect the results of an experiment.

Of course you're entitled to your opinion, but this is all conjecture. As yet there is no evidence that it is the case, and much evidence that it is not. For instance, how many people who are unconscious answer 'yes' when you ask them if they are conscious?
StatusX said:
It basically comes down to this: Do you think an artificial intelligence program could, in principle, behave exactly like a human? Maybe our technology will never get there, but is it physically possible? If you don't, then you think there is something about the brain that is nonphysical besides consciousness, and you'll have to explain what it is. If you do, then you can understand why this could be a realizable example of a zombie.

I don't understand your argument here. Why do I have to explain something that is non-physical besides consciousness? Also, the question is whether an AI program can be conscious, not whether it can behave like a human being. Consciousness is not behaviour.
StatusX said:
Everything this zombie says is, as I described before, a consequence of his total physical brain structure. If a being had the exact same brain structure, it would respond to the same stimuli the same way. This includes any questions about consciousness. When we argue about consciousness, it is our physical brains that read the arguments, access memories and logically analyze ideas for counterarguments, and control our fingers to type a response. During all of this, yes, we are aware. But a third party could not know this, and it is not logically necessary that we be conscious during any of it. I don't mean we could do it in our sleep, because our physical brain state would be radically different. I mean even a zombie could do it.

Again, you are entitled to your opinion. But if you want to influence anybody else's, you're going to have to find some evidence. Personally I believe that it is unreasonable to say that one can argue about consciousness without being conscious.
StatusX said:
In particular, your argument that a non-conscious being couldn't be convinced of anything is very weak. Yea, your computer couldn't be convinced of anything any more than a hamster or a piece of toast could. They don't have the physical cognitive structure. It has nothing to do with consciousness. You could imagine a very intelligent but non-conscious AI program which is programmed to think it is conscious. It could be convinced of plenty of things, but you would have a hard time convincing it that it isn't conscious.

Again, more opinion. You need to explain why my argument is very weak. What if I took your 'intelligent' (whatever we mean by that) but non-conscious AI program and, instead of programming it to be convinced that it is conscious, I programmed it to be convinced that it is not conscious? According to you it would go on behaving in precisely the same way. This seems a muddle of ideas to me.
StatusX said:
Just as another example: a zombie would know the difference between wake and sleep because his brain would be in a different state, and his behavior would be different.

But how would this zombie know that its brain is in a different state? Surely it would just be in a different state. In order to know that its brain is in a different state, its brain would have to be in yet another different state (the one correlating to 'knowing' that its state is different). Where does this regression end?
Human beings do not rely on the observation of their own brain-states in order to know how they feel. If a zombie can tell that it is awake only by observing its own brain-states, then it does not have human-like consciousness.
Another problem is that of how a zombie brain can observe itself. Does one part encode for another in some sort of self-referential loop? Which bit of brain correlates to being awake, which to 'knowing' that it is awake, and which to knowing it knows that it is awake? Without consciousness there is no way to break out of this loop.