- #1
selfAdjoint
Is the "Hard Problem" Just Silly?
I have been reading Chalmers' "Facing Up to the Problem of Consciousness", kindly linked to by hypnagogue. Starting out, Chalmers says:
The easy problems of consciousness include those of explaining the following phenomena:
the ability to discriminate, categorize, and react to environmental stimuli;
the integration of information by a cognitive system;
the reportability of mental states;
the ability of a system to access its own internal states;
the focus of attention;
the deliberate control of behavior;
the difference between wakefulness and sleep.
He says there are no deep conceptual problems, only technical ones, standing in the way of explaining these phenomena.
Then he describes the "hard problem": the feeling or experiencing of consciousness. He quotes Nagel's formulation: "There is something that it is like to be conscious", and goes on to argue that this something cannot be derived as a logical consequence of the "easy" phenomena above.
But suppose we built an AI that had all of the easy phenomena aced, and was recursive besides. By recursive I mean that it doesn't just access mental states the way it would retrieve data, but can create new mental states that stand in a subject/content relation to existing mental states. Gödel created recursive relations when he mapped logic into arithmetic so that the arithmetic operations and rules faithfully mirrored the logical ones; arithmetic thus had logic within it as arithmetic content. In the same way, I picture a recursive mental state having expressed within it, as lower-level content, the reality of other mental states, be they feelings, memories, sensations, or whatever.
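To make that subject/content recursion concrete, here is a minimal sketch of the kind of structure I have in mind. It is purely my own illustration, not anything from Chalmers; the names Percept and MentalState are invented for the example. The point is only that a state's content can itself be another state, to any depth.

```python
# Purely illustrative sketch: a "mental state" whose content can itself be
# another mental state, giving the subject/content recursion described above.

from dataclasses import dataclass
from typing import Union

@dataclass
class Percept:
    description: str  # raw, first-order content, e.g. a sensation

@dataclass
class MentalState:
    subject: str                            # the state's attitude or "about-ness"
    content: Union[Percept, "MentalState"]  # either raw content or another mental state

# A first-order state: a sensation.
seeing_red = MentalState(subject="sensing",
                         content=Percept("red patch in the visual field"))

# A second-order, recursive state: the system represents its own sensation.
noticing = MentalState(subject="noticing that I am sensing", content=seeing_red)

# And so on: states about states about states, to any depth.
reflecting = MentalState(subject="reflecting on my own noticing", content=noticing)
```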
Chalmers would agree that it is possible, even if technically difficult, to build an AI that manifests his easy phenomena, and it is a fact that recursive programs can be built, so I don't see any problem with stipulating that.
Now I say, pretty much echoing Dennett: the experience of being conscious, the something that it is like to be conscious, is what that AI would itself experience while it was executing. And our experience of consciousness is what we experience when our access consciousness (a-consciousness) runs on the wetware of our nervous systems. Call it emergent if you like; it is not a static logical consequent but the dynamic activity of a recursive system.
Sometimes beginners on the physics boards complain that physics accounts for electromagnetism and gravity but doesn't explain them. What they mean by "explain" they can't explain (heh!), though they try mightily. But we all know what they mean: there is something that it is like to be heavy, and gravity theory doesn't account for that. Yet that is just a beginner problem in physics. Could the hard problem be just a beginner problem in cognitive science?