DiracPool
madness said: If IIT explained how experience emerges from information, using some standard physical mechanism, it would not be proposing a fundamental relationship between the physical process and the experience. Within the theory, the relationship between phi and consciousness is fundamental and does not reduce to any simpler underlying mechanisms or relationships.
I think the main problem here in trying to come up with an explanation of what consciousness "is" lies in the fact that no one can really agree on what they are trying to define. Each person comes up with as broad or as narrow a definition of the term as suits their needs--i.e., one that is interesting to them or that they think they can manage as far as constructing a model--with the end result that everyone ends up talking past each other.
Susan Pockett recently wrote an article dealing with a number of the models in this thread: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4243501/pdf/fnsys-08-00225.pdf
She divides contemporary consciousness models into "process" versus "vehicle" theories, with almost all currently fashionable models being of the "process" variety. That is certainly reflected in most of the models discussed in this thread so far. As I've stated in earlier posts, I'm not too enamored of Pockett personally; she's kind of a grump. However, I'd have to say I mostly agree with her assessment of information-based process theories in the article, especially her assessment of Tononi's model. There are fundamental problems with how these models treat the subject.
What are those problems, you might ask? Well, there are a number of them, but I think the principal problem is the consistent reluctance of consciousness researchers, and cognitive scientists more generally, to draw a sharp distinction between the function of the human brain and that of the non-human animal brain. To put it another way, I think the single biggest problem is the assumption that consciousness is a "thing" or a property of brains in general, and that all that remains to be done is to figure out how neural assemblages generate it. (To clarify, when I use the term "consciousness" here, I am referring to phenomenological consciousness, the kind characterized as the "hard problem.")
The fact is that we, as humans, have no idea what the consciousness of a mouse is, or a cricket, or a frog, or even a chimpanzee. We can only speak of what it is like to have a "human" consciousness. This human consciousness comes with a lot of features that non-human consciousness does not: the certitude of a sense of self-awareness, language capacity, logic-sequential thought structures, musical ability, mathematical ability, a "theory of mind" capacity to project onto other individuals (and unfortunately also onto non-human animals and even inanimate objects), the capacity to issue introspective reports of qualitative experiences and thoughts, and many others. We don't know for sure whether any non-human animal has any of these capacities. So it seems highly probable to me that the phenomenological consciousness we experience is somehow wrapped up in this suite of other uniquely human capacities.

I think that theories that model consciousness as a single physical process, and that are argued to apply to essentially all animal taxa, are largely missing the point and are, at best, academic exercises likely to yield little, if any, progress toward the goal of explaining human conscious experience. These models include Tononi's integrated information theory, McFadden's EM field theory, Hameroff's microtubule model, the vast number of "quantum theories" of consciousness that equate the collapse of the wave function with human sentient experience, and even Graziano's "attention schema theory," which I see as simply another process model.
From: http://aeon.co/magazine/philosophy/how-consciousness-works/
" (The attention schema theory) says that awareness is not something magical that emerges from the functioning of the brain. When you look at the colour blue, for example, your brain doesn’t generate a subjective experience of blue. Instead, it acts as a computational device. It computes a description, then attributes an experience of blue to itself. The process is all descriptions and conclusions and computations. Subjective experience, in the theory, is something like a myth that the brain tells itself. The brain insists that it has subjective experience because, when it accesses its inner data, it finds that information."
I'm not sure what this is supposed to tell me about my conscious experience or how it differs from my cat's. Graziano's idea of mental processes being ordered and structured like a general surveying a model of his army is interesting, and probably true in a sense, but again, it tells me nothing about why I need to have a phenomenological experience of that schematic construction. It also does not tell me whether a macaque monkey has a similar construction, and a phenomenological experience of it, going on in its "mind." Is there a monkey general in the macaque's brain? I submit that, until we have an adequate brain-based model of how the human mind generates consciousness, and of what that consciousness is empirically, it makes little sense to talk about animal consciousness at all, especially in terms of how to compare it to a human consciousness we haven't even defined yet.