Consciousness and the Attention Schema Theory - Meaningful?

In summary, the author argues that consciousness is formed at the intermediate level - the lower level is primary brute processing while the higher level is concerned with abstraction and categorization. He also suggests that attention is the mediating agent, and that awareness (his attentional model) arises in the superior temporal sulcus and the temporoparietal junction.
  • #36
Graeme M said:
Put another way, I think I am aware of things even if not attending to them, however by attending to a thing I am definitely more aware of it. But in this sense, am I aware of a thing because I attend to it, or am I attending to a thing because I am aware of it? The latter seems more reasonable. Therefore I think Graziano's idea is more consistent because in such an interpretation the process of attention leads to both background and foreground awareness as consciously discerned, whereas Prinz's idea suggests only foreground awareness can be conscious. Marrying the two ideas as I suggest above seems to resolve that.

Naively, there could be both. A sudden loud sound in the coffee shop will draw your attention to it even though you haven't been paying attention. On the other hand, there are sounds all around you which you don't notice until you pay attention to them. You can search for "bottom-up" and "top-down" attention. It's also an issue in machine vision.
https://www.cs.utexas.edu/~dana/Hayhoe.pdf
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.295.9787
http://thesis.library.caltech.edu/4722/
http://www.cnbc.cmu.edu/~tai/readings/tom/itti_attention.pdf
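The bottom-up case — a stimulus "popping out" of a scene you weren't attending to, like the loud crash in the coffee shop — is commonly modelled in machine vision as center-surround contrast (the family of models discussed in the Itti paper above). Here is a minimal toy sketch of that idea; it is not any particular published model, just an illustration of how a salient item wins out:

```python
def saliency(image, radius=1):
    """Crude bottom-up saliency: each cell's absolute contrast
    against the mean of its surrounding neighbourhood."""
    h, w = len(image), len(image[0])
    sal = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))
                    if (ny, nx) != (y, x)]
            surround = sum(vals) / len(vals)
            sal[y][x] = abs(image[y][x] - surround)
    return sal

# A uniform scene with one bright "pop-out" item: the odd item wins.
scene = [[0.1] * 5 for _ in range(5)]
scene[2][3] = 0.9
sal = saliency(scene)
peak = max((v, y, x) for y, row in enumerate(sal) for x, v in enumerate(row))
print(peak[1:])  # → (2, 3), the location of the odd item
```

Top-down attention would then be a separate signal biasing this map toward task-relevant locations, which is roughly the division the linked papers draw.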
 
  • #37
Oh, this was interesting!

Graeme M said:
Although I haven't gotten far into the book, I get the feeling that for all the strictly physical evidence he has assembled Prinz appears still to be arguing for the idea that a conscious experience somehow 'arises' from the neural processing of information.

It is a funny idea to try to explain a trait by its own function. Conscious experience _is_ neural processing (of information, if you must; if we dismiss the rest of the body for simplicity). It is just not all of it, and it leads to a particular behavior.

Graeme M said:
This latter theory strikes an intuitive chord for me. Consciousness is what it feels like for the brain to continuously construct a model of attention - a model that changes moment by moment and which correlates a range of perceptual data and unconscious processing into a directive process for managing the organism's behaviour.

It has been referred to as the only biologically motivated and putatively sound theory that handles both the "easy" (when we are aware) and "hard" (how awareness arises) problems of consciousness.

"The attention schema theory satisfies two problems of understanding consciousness, said Aaron Schurger, a senior researcher of cognitive neuroscience at the Brain Mind Institute at the École Polytechnique Fédérale de Lausanne in Switzerland who received his doctorate from Princeton in 2009. The "easy" problem relates to correlating brain activity with the presence and absence of consciousness, he said. The "hard" problem has been to determine how consciousness comes about in the first place. Essentially all existing theories of consciousness have addressed only the easy problem. Graziano shows that the solution to the hard problem might be that the brain describes some of the information that it is actively processing as conscious because that is a useful description of its own process of attention, Schurger said.

"Michael's theory explains the connection between attention and consciousness in a very elegant and compelling way," Schurger said.

"His theory is the first theory that I know of to take both the easy and the hard problems head on," he said. "That is a gaping hole in all other modern theories, and it is deftly plugged by Michael's theory. Even if you think his theory is wrong, his theory reminds us that any theory that avoids the hard problem has almost certainly missed the mark, because a plausible solution — his theory — exists that does not appeal to magic or mysterious, as-yet-unexplained phenomena.""

[ http://www.princeton.edu/main/news/archive/S38/91/90C37/index.xml?section=featured ]

madness said:
My point was simply that Graziano addresses what Chalmers refers to as the easy problems, while explicitly stating that he is addressing what Chalmers calls the hard problem. Note that Chalmers defines "awareness" as an easy problem not a hard problem. Here are the easy problems outlined by Chalmers in the paper I linked to above:

• the ability to discriminate, categorize, and react to environmental stimuli;
• the integration of information by a cognitive system;
• the reportability of mental states;
• the ability of a system to access its own internal states;
• the focus of attention;
• the deliberate control of behavior;
• the difference between wakefulness and sleep.

In particular, the ability to access and report internal states and focus attention are what Graziano attempts to address.

I wasn't aware that the mystic (well, determined-to-confuse dualist then) Chalmers introduced the term. "The really hard problem of consciousness is the problem of experience." It seems his putative problem is controversial. [ https://en.wikipedia.org/wiki/Hard_problem_of_consciousness ]

If so I have to assume that Graziano, who claims to address the problem, is confident that he has addressed the factual content of it, that awareness of what we are aware of is what we experience. Or in other words, that Chalmers's 'hard' problem isn't one, it is just the focus of attention and the reportability of it. Schurger seems to agree [see above].

Honestly, looking over the putative testable definitions of the "hard problem", it is mostly or entirely Chalmers's qualia/zombie hogwash:

"Various formulations of the "hard problem":
"How is it that some organisms are subjects of experience?"
"Why does awareness of sensory information exist at all?"
"Why do qualia exist?"
"Why is there a subjective component to experience?"
"Why aren't we philosophical zombies?"" [Ibid]

"Qualia", "zombies", honestly!? What use have they been?

I wouldn't bother with that list as much as the trait of consciousness, how it evolved and what its fitness increase was based on (since it is preserved it is likely maintained by purifying selection).
 
  • #38
I agree with madness that it's rather evasive in terms of the hard problem. Until you can design me a test that tells me whether my computer, a robot, an insect, or a fish is conscious, you haven't addressed the hard problem. The underlying assumption, of course, is that designing/constructing the test requires knowledge of the mechanism of consciousness. In what way do we have to order neurons (and their relevant supporting cells) to generate consciousness?

Saying "consciousness is neural processing" seems oversimplified too. And I don't mean this in a dualist way. Certainly if we line up two neurons in a dish and stimulate one to stimulate the other, we are doing neural processing, but are we doing consciousness? I somehow doubt it. Consciousness is something that emerges from the same system where neural processing occurs. How it emerges, we still don't know.
 
  • #39
Torbjorn_L said:
I wasn't aware that the mystic (well, determined-to-confuse dualist then) Chalmers introduced the term. "The really hard problem of consciousness is the problem of experience." It seems his putative problem is controversial. [ https://en.wikipedia.org/wiki/Hard_problem_of_consciousness ]

If so I have to assume that Graziano, who claims to address the problem, is confident that he has addressed the factual content of it, that awareness of what we are aware of is what we experience. Or in other words, that Chalmers's 'hard' problem isn't one, it is just the focus of attention and the reportability of it. Schurger seems to agree [see above].

While I don't agree with this point of view, I accept it as a legitimate stance that many people take. What I take umbrage with is Graziano stating that he has solved the "hard problem", rather than simply stating that there is no hard problem and that he has therefore not attempted to solve it.
 
  • #40
Thanks for the links atyy, they join the long list of papers to read! I skimmed a couple of them and I see some of what was being talked about earlier regarding processing being not necessarily strictly hierarchical.

I suppose the problem of definition could be mine alone. It seems to me that in everyday moments I am aware of the surrounding world even if I don't pay attention to it. By focusing on a book, or an object in front of me, the rest of my surroundings do not disappear - I am still aware of them. I can selectively focus on certain things, either top down (as in a conscious choice to focus) or bottom up (as in the sudden sound).

Prinz suggests that attention is what mediates experience into consciousness but then goes on to describe 'attention' as almost anything at all. That is, I can attend to a specific thing, but equally I can attend to the rest of the scene. This seems rather non-explanatory in that to me he is really saying that because I am aware of the world I am attending to it.

In his examples he cites more specific cases of attention, usually involving visual cognition, whereby it seems clearer that by attention he means a specific focus on specific objects - eg in experiments in visual cognition such as masking studies, the procedure by its nature incurs focused attention, whether top down or bottom up.

That just leads back to my uncertainty around what he means by attention. In fact it even leads me to uncertainty about what people are meaning by 'consciousness'.

To me consciousness appears as a broader thing, a spectrum if you will, and sensorily at least begins with awareness (I am aware of the world and my placement in it and how I move within it) and then extends to include directive focused consciousness such as reading a book and thinking about the meaning of the words.

Some things *seem* unconscious, for example walking, yet awareness is critical for walking so I think I am conscious when I walk even if the act itself is not necessarily consciously directed. That consciousness includes awareness is evident when we are unconscious - there is no awareness.

Thus it seems to me then that others mean something different by consciousness. Perhaps they mean that some sense of the world is held in directive focus within the mind. For example, if I am at a dinner party and I am talking animatedly with someone and unconsciously reach for the salt and sprinkle it on my meal, is this consciousness or not? I think it IS consciousness but that's because I think an awareness of surroundings and the ability to act meaningfully within that environment is to be conscious. Whereas I think Prinz is arguing that is not consciousness, but it would be if I actually placed my attention on the salt and directively sprinkled only so much on my meal.

This ambiguity disappears for me when reading Graziano's position but again I acknowledge this may be due to my lack of detailed knowledge.

Graziano proposes a clear mechanism for how attention is processed and how that process forms a model for further processing, this model itself being what we call consciousness or experience. All signals that become available for processing are collected into the modelling process; that is, the signals that emerge from the noise are attended to and are then incorporated into the wider abstraction. To me that explains why my experience includes both things I am directly focusing on as well as those things that are simply there. It also mechanistically explains the results of such things as masking studies.
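The picture being described — everything enters the model, but only signals above the noise floor get tagged as attended, while the rest persist as background awareness — can be caricatured in a few lines. This is purely illustrative and emphatically not Graziano's actual model; the names and threshold are invented for the sketch:

```python
# Caricature of the paragraph above: every signal enters the model,
# but only those above the noise floor are tagged as "attended"
# (foreground); the rest remain as background awareness.
# Illustrative only -- not Graziano's actual model.
def build_schema(signals, noise_floor=0.5):
    schema = {"foreground": {}, "background": {}}
    for name, strength in signals.items():
        bucket = "foreground" if strength > noise_floor else "background"
        schema[bucket][name] = strength
    return schema

scene = {"book": 0.9, "coffee aroma": 0.2, "chatter": 0.3, "loud crash": 0.95}
schema = build_schema(scene)
print(sorted(schema["foreground"]))   # → ['book', 'loud crash']
print(sorted(schema["background"]))   # → ['chatter', 'coffee aroma']
```

The point of the toy is just that both foreground and background end up inside the one model, which is why the dinner-party salt shaker can be acted on without focal attention.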

madness I don't think Graziano is being disingenuous in his claims about the hard problem. Quite to the contrary, I think he very clearly tackles the problem. If I follow his argument correctly, he is saying that the mechanics of consciousness are complex, but that the concept of how consciousness arises is relatively less complex. In other words, his theory explains why conscious experience arises and it is a relatively simple thing, thus the hard problem no longer need be considered hard. What might remain to be hard is explaining exactly how the Attention Schema Theory unfolds in mechanical detail.

Graziano's solution proposes that subjective experience - qualia, phenomenology etc - are properties of the model of attention that the brain constructs. What we feel as consciousness is a state that the brain utilises to enable the organism to better compute the intention of surrounding agents, including itself. The model itself presents the idea of experience, of a subjective point of view.

This in some ways harks back to my earlier comments about my own confusion around why consciousness is seen as hard, or mysterious. Phenomenology is not some separate quality, it just is what it is like for a brain to construct a model of internal representations and relationships.

Pythagorean, if Graziano's idea is correct, or substantially correct, wouldn't it mean that suitable tests simply require that an organism has the requisite processing arrangements and displays requisite behaviours? If there is no special quality to experience, then purely physical arrangements are sufficient for conscious experience and one must by extension accept consciousness in any device that exhibits those arrangements. It might be harder to test for an experience of self-awareness, but is not self-awareness simply a more complex instantiation of consciousness?

Put another way, if phenomenologically experiencing colour is shown mechanically to derive from certain cells in the retina, certain neural processing arrangements, and the capacity in behaviour to distinguish colour, why should we also expect to find some other quality to the experience of colour? Is that not enough to conclude that the organism is conscious of colour?
 
  • #41
Graeme M said:
madness I don't think Graziano is being disingenuous in his claims about the hard problem. Quite to the contrary, I think he very clearly tackles the problem. If I follow his argument correctly, he is saying that the mechanics of consciousness are complex, but that the concept of how consciousness arises is relatively less complex. In other words, his theory explains why conscious experience arises and it is a relatively simple thing, thus the hard problem no longer need be considered hard. What might remain to be hard is explaining exactly how the Attention Schema Theory unfolds in mechanical detail.

Graziano's solution proposes that subjective experience - qualia, phenomenology etc - are properties of the model of attention that the brain constructs. What we feel as consciousness is a state that the brain utilises to enable the organism to better compute the intention of surrounding agents, including itself. The model itself presents the idea of experience, of a subjective point of view.

This in some ways harks back to my earlier comments about my own confusion around why consciousness is seen as hard, or mysterious. Phenomenology is not some separate quality, it just is what it is like for a brain to construct a model of internal representations and relationships.

I think this entirely misses the point of the hard problem. The hard problem asks why there is a "something it is like" to be an organism (https://en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat?), and the proposed solution is that consciousness "is what it is like". This explanation ends exactly where the hard problem starts.

It also looks like a deflationary account of consciousness (https://en.wikipedia.org/wiki/Hard_problem_of_consciousness#Deflationary_accounts), i.e., one which attempts to show that the hard problem doesn't really exist rather than provide a solution to the hard problem. This is what I find disingenuous - if that is his stance he should be up front about it.
 
  • #42
Graeme M said:
Pythagorean, if Graziano's idea is correct, or substantially correct, wouldn't it mean that suitable tests simply require that an organism has the requisite processing arrangements and displays requisite behaviours? If there is no special quality to experience, then purely physical arrangements are sufficient for conscious experience and one must by extension accept consciousness in any device that exhibits those arrangements. It might be harder to test for an experience of self-awareness, but is not self-awareness simply a more complex instantiation of consciousness?

Put another way, if phenomenologically experiencing colour is shown mechanically to derive from certain cells in the retina, certain neural processing arrangements, and the capacity in behaviour to distinguish colour, why should we also expect to find some other quality to the experience of colour? Is that not enough to conclude that the organism is conscious of colour?

The hard problem would be to prove that the "requisite processing arrangements" and "requisite behaviours" are uniquely associated with consciousness. Self-awareness is an easy problem (there are robots that have passed the self-awareness test... and it's not trivial, it was a difficult AI problem, but it's still what Chalmers would call part of the "easy" problem). When I say consciousness, I mean that a subjective experience is occurring. Everything else (like intelligence, awareness, and self-awareness) can be ascribed to robots that don't experience anything. Graziano has proposed a mechanism, but he's kind of put the cart before the horse - since we don't have any way to measure consciousness, we can propose a thousand mechanisms and still not be any closer to solving the hard problem.

The retina likely has little to do with consciousness and more to do with processing sensory information. That information is somehow handed to consciousness later, but what goes on in the retina is, again, part of the easy problem. Additionally, a lot of that information is abstracted by the visual cortex before reaching consciousness; we don't experience edge and alignment detection, we get more of a holistic picture after these processing tasks have been put together.
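The "edge and alignment detection" mentioned above is the sort of low-level operation that is standard in image processing but never shows up in experience. A minimal sketch of one such operation (a Sobel-style gradient filter, hypothetical toy data) — no claim is made about how cortex implements it:

```python
# The kind of low-level feature extraction early vision performs but
# which we never consciously experience: a Sobel-style edge detector.
def edge_strength(img):
    h, w = len(img), len(img[0])
    gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sx = sum(gx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            sy = sum(gy[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (sx ** 2 + sy ** 2) ** 0.5
    return out

# A dark/bright vertical boundary: edge energy concentrates at the border
# (columns 2-3) and is zero inside the uniform regions.
img = [[0, 0, 0, 1, 1, 1]] * 4
edges = edge_strength(img)
```

What reaches awareness is not this map of gradients but the assembled "holistic picture" built on top of many such stages.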
 
  • #43
Pythagorean said:
Self-awareness is an easy problem (there are robots that have passed the self-awareness test... and it's not trivial, it was a difficult AI problem, but it's still what Chalmers would call part of the "easy" problem).

What are some examples of robots that have passed a self-awareness test?
 
  • #44
atyy said:
What are some examples of robots that have passed a self-awareness test?

The Rensselaer Polytechnic Institute has designed robots that can pass the "King's Wise Men" test.
 
  • #45
Madness, if Graziano has reformulated the hard problem and explained its operation, isn't that a solution? Wouldn't it be unreasonable to demand a solution to a particular formulation of a problem if that formulation is shown to be in error? The question is whether Graziano's idea legitimately restates the hard problem and proposes an explanation.

I *think* I can see what he's getting at, but I find it hard to put into words. I will try to do it without straying too far into philosophical territory. Both Graziano and Prinz are posing science based hypotheses but I feel it's hard to assess their utility without at least touching on philosophical considerations.

Taking the wikipedia entry about "something it is like", doesn't Nagel's argument appear to boil down to the same thing you accuse Graziano of? Summarising that entry, I read it as "consciousness requires a unique subjective experience because a unique subjective experience requires consciousness". Worse, why is this so? Because Nagel says so. I suspect that Nagel proposes the hard problem because he thinks he is in there observing. And everyone agrees with him because they think they are in there observing.

However, what I think Graziano is getting at is that when you report a subjective experience, you are reporting an internal model. Subjective experience is the brain reporting on itself, using shorthand descriptions of internal representations. Thus there are no qualia, there is no subjective experience, rather there is the report that the brain generates from its model. We mistake the fact that we report a qualia for the fact there IS a qualia in there.

That doesn't mean that consciousness doesn't exist - it most certainly does, just not how we intuitively feel that it exists. Graziano's theory points to a mechanism for it and how it is physically generated in the brain. And in doing so, he shows why the hard problem is not hard.

Or so it seems to me! :)

Pythagorean, when we detect consciousness in human beings, how do we do so quantitatively? I assume we can only measure physical features (both in terms of neural activity/arrangements and in terms of macro scale behaviours) or rely on reports. If a being or device reports a subjective experience and the physical evidence supports that, why should we not conclude that subjective experience is present? This is genuinely meant, it's not clear to me why we should not. If a robot can describe a red ball as distinct from a green ball, and it can explain that it is aware of itself and the balls, should that not be sufficient evidence for consciousness?

I suppose you could argue that without novel behaviours all we have is stimulus/response, but then I'd ask why stimulus/response is not consciousness. After all, human behaviour and experience can only be stimulus/response at a more complex level. What else could it be? Physically evolution has led to more complex forms of life but these have not added something else beyond what is present in simpler forms, so shouldn't it be the same for consciousness?
 
  • #46
Graeme M said:
Madness, if Graziano has reformulated the hard problem and explained its operation, isn't that a solution? Wouldn't it be unreasonable to demand a solution to a particular formulation of a problem if that formulation is shown to be in error? The question is whether Graziano's idea legitimately restates the hard problem and proposes an explanation.

I'm a little confused. You say he has solved the problem, yet say it is unreasonable to ask him to solve the problem and that he only has to show it is in error. If it is the latter, it would be a deflationary account of consciousness, which is nothing new. People who subscribe to a deflationary account generally do not make claims to have solved the hard problem, and if this is Graziano's stance neither should he. If you are claiming the former, then I disagree.

Graeme M said:
Taking the wikipedia entry about "something it is like", doesn't Nagel's argument appear to boil down to the same thing you accuse Graziano of? Summarising that entry, I read it as "consciousness requires a unique subjective experience because a unique subjective experience requires consciousness".

Not as I understand it. Nagel is pointing to the fact that there is "something it is like" when we have an experience. He does not make any logical arguments as to why this must be the case; he simply points to the fact that it is something we know to be true.

Graeme M said:
However, what I think Graziano is getting at is that when you report a subjective experience, you are reporting an internal model. Subjective experience is the brain reporting on itself, using shorthand descriptions of internal representations. Thus there are no qualia, there is no subjective experience, rather there is the report that the brain generates from its model. We mistake the fact that we report a qualia for the fact there IS a qualia in there.

That doesn't mean that consciousness doesn't exist - it most certainly does, just not how we intuitively feel that it exists. Graziano's theory points to a mechanism for it and how it is physically generated in the brain. And in doing so, he shows why the hard problem is not hard.

Again this is all standard deflationary stuff, put forward long ago by Dennett. If Graziano is taking a deflationary approach, he is adding nothing new to the hard problem. All he is doing is contributing to the easy problems, while falling back on an established view point that the easy problems are all there is to solve.

As I said before, this is fine, so long as he makes no claims to attack the hard problem, which unfortunately he does.
 
  • #47
Hmmmm... I wouldn't say Graziano is right, I am certainly not in a position to make any judgements. I simply said I liked his approach. Re the hard problem, all I meant was, what if the hard problem is wrong by definition?
 
  • #48
Graeme M said:
Hmmmm... I wouldn't say Graziano is right, I am certainly not in a position to make any judgements. I simply said I liked his approach. Re the hard problem, all I meant was, what if the hard problem is wrong by definition?

All I'm saying is that if someone believes the hard problem is "wrong" (which I can only interpret as meaning "not a real problem") , they shouldn't claim to have solved it. It's a contradiction in terms to solve a problem which is not a problem. I also like Graziano's ideas, I just think he should sell them for what they are rather than what they're not.

If you want to read an argument against the hard problem, read this http://www.fflch.usp.br/df/opessoa/Dennett-Quining-Qualia.pdf. Graziano's theory, combined with some deflationary argument such as this which purports to remove the hard problem, could be seen as a theory of consciousness.

In contrast, a theory which genuinely accepts the hard problem and proposes a solution would be the integrated information theory https://en.wikipedia.org/wiki/Integrated_information_theory.
 
  • #49
madness said:
All I'm saying is that if someone believes the hard problem is "wrong" (which I can only interpret as meaning "not a real problem") , they shouldn't claim to have solved it. It's a contradiction in terms to solve a problem which is not a problem. I also like Graziano's ideas, I just think he should sell them for what they are rather than what they're not.

If you want to read an argument against the hard problem, read this http://www.fflch.usp.br/df/opessoa/Dennett-Quining-Qualia.pdf. Graziano's theory, combined with some deflationary argument such as this which purportes to remove the hard problem, could be seen as a theory of consciousness.

In contrast, a theory which genuinely accepts the hard problem and proposes a solution would be the integrated information theory https://en.wikipedia.org/wiki/Integrated_information_theory.

Hmmm, do you really think Tononi comes closer than Graziano?
 
  • #50
Graeme M said:
Pythagorean, when we detect consciousness in human beings, how do we do so quantitatively? I assume we can only measure physical features (both in terms of neural activity/arrangements and in terms of macro scale behaviours) or rely on reports. If a being or device reports a subjective experience and the physical evidence supports that, why should we not conclude that subjective experience is present? This is genuinely meant, it's not clear to me why we should not. If a robot can describe a red ball as distinct from a green ball, and it can explain that it is aware of itself and the balls, should that not be sufficient evidence for consciousness?

Well, first we can't! The only person I know of that has come close is Tononi; that's not to say that his result is correct, but he has been successful in using his approach on unconscious patients. Anyway, the point is that he actually seems to address the question (even if his answer is wrong). Qualitatively, we infer consciousness in other humans because we're similar to each other (and we probably have social circuits that rely on this assumption, particularly facial analysis circuits). But inference is not a scientific result, just a segue to scientific curiosity. As for the robot, if I had sufficient understanding of the robot's design and was assured that it wasn't just a clever linguistics machine, it would be a good first step.

I suppose you could argue that without novel behaviours all we have is stimulus/response, but then I'd ask why stimulus/response is not consciousness. After all, human behaviour and experience can only be stimulus/response at a more complex level. What else could it be? Physically evolution has led to more complex forms of life but these have not added something else beyond what is present in simpler forms, so shouldn't it be the same for consciousness?

Experience isn't necessarily only stimulus/response. Stimulus and response are more like sensory input and motor output. Experience most likely takes place in the neurons in between those two processes. Your last statement seems to play on the concept of consciousness as an emergent phenomenon, which is certainly a valid approach, but figuring out what complex interactions between those simple parts are required for consciousness to occur is still an open question.

The other take... that consciousness can't ever be explained, it just is (and thus the hard problem is pointless to ask) is also valid. It's possible that, much like mass or charge are just properties of matter and we can't formulate a theory of why mass or charge must arise from matter, consciousness is just a fundamental property of the universe. But, echoing madness's sentiments, that's not solving the hard problem; rather, that's essentially saying that it's not solvable.
 
  • #51
Graeme M said:
Hmmmm... I wouldn't say Graziano is right, I am certainly not in a position to make any judgements. I simply said I liked his approach. Re the hard problem, all I meant was, what if the hard problem is wrong by definition?

Yes, but what is a clear and convincing argument that the hard problem is wrong? Although the problem is named by Chalmers, it is widely accepted as a problem even by (some) physicists (Witten) and logicians (Feferman).

Witten:


Feferman:
http://math.stanford.edu/~feferman/papers/penrose.pdf
 
  • #52
Graeme M said:
Prinz notes that evidence to date shows that most sensory processing is probably organised into a tripartite hierarchy - lower, intermediate and higher levels. He argues that consciousness is formed at the intermediate level - the lower level is primary brute processing while the higher level is concerned with abstraction and categorisation.
Hi @Graeme:

I confess that Prinz's proposition "consciousness is formed at the intermediate level" confuses me. I cannot tell if I am interested in this topic or not. Perhaps if you can clarify a few points for me, I can decide about this, and perhaps study this thread to further educate myself.

1. I get a sense from your summary of the three level model that it is more a model of mental functioning, rather than neurological functioning. That is, the model relates more closely to the mind rather than to the body. Do you agree?

2. You give a summary categorization of the general low and high levels, but for the intermediate level, "consciousness is formed" suggests that this is not a definition or characterization, but only one of possibly many functions. An alternative interpretation might be that the intermediate level is simply defined to be the level in Prinz's three level model at which consciousness "is formed", and that any other functions performed at the intermediate level are incidental. (My personal preference for vocabulary would be "emerges" rather than "is formed".) If this alternative interpretation is correct, then it seems reasonable that the lower level might be defined as the model element where pre-conscious sensory (and other) processes take place, and that the higher level is where post-consciousness functions take place.

3. You omitted any mention of other mental functions. How, for example, do intellect, learning, memory (multiple kinds), emotions, intuition, and the focus of attention fit in? Does Prinz recognize these as mental functions? If so, at which level in the model does each function emerge?

Regards,
Buzz
 
  • #53
I'm afraid that Integrated Information Theory page is rather beyond me. I think it's saying that the theory provides a framework for 'measuring' or predicting the amount of information represented in a network, and that the more integrated a network, the greater its informational content (in the sense that any individual node can have a greater number of connections). But that's fairly unremarkable so clearly I don't get it. I'm not sure how it goes from that to providing an explanation for consciousness - it could provide a quantitative measure of informational content in a conscious state, but how does it go from that to offering an explanation? If IIT is Tononi's explanation, can you summarise why you see it having explanatory power?

Buzz, I don't know that I can answer your questions, I posed my original question because I was trying to get my head around what Prinz and Graziano were proposing and also wondering at the extent to which they are complementary ideas.

I thought that Prinz was talking of a mental hierarchy, although strictly I think I mean a logical architecture, rather than a purely physical hierarchy. I'm still not quite clear on that as some of the references supplied in this thread show that physically the hierarchy idea has only a broad applicability. I haven't read it yet but I think someone even posted a link to a paper that suggests some attentional effect at the point of the actual sensory receptors which is interesting.

I can't comment re your point 3 as I am only a little over a third of the way into the book and haven't yet come to his more detailed explanations. Most of the book so far seems focused on results from visual cognition studies.
 
  • #54
Graeme M said:
I'm afraid that Integrated Information Theory page is rather beyond me. I think it's saying that the theory provides a framework for 'measuring' or predicting the amount of information represented in a network, and that the more integrated a network, the greater its informational content (in the sense that any individual node can have a greater number of connections). But that's fairly unremarkable so clearly I don't get it. I'm not sure how it goes from that to providing an explanation for consciousness - it could provide a quantitative measure of informational content in a conscious state, but how does it go from that to offering an explanation? If IIT is Tononi's explanation, can you summarise why you see it having explanatory power?

In fact there is a rather robust argument that Tononi's conception does not provide any explanatory power (see the link in post #18), so I am perplexed in what way it would be better than Graziano's. (If anything, Graziano's conception in which there is a model of the self and its interaction with the world seems closer to what one colloquially calls consciousness, whereas Tononi simply declares certain computational gates configured according to expander graphs to be conscious.)
 
Last edited:
  • #55
atyy said:
Hmmm, do you really think Tononi comes closer than Graziano?

Closer to what? I think it comes closer to a theory of the type that Chalmers suggested would be required to address the hard problem. From Chalmers' original paper:

"...there is a direct isomorphism between certain physically embodied information spaces and certain phenomenal (or experiential) information spaces. From the same sort of observations that went into the principle of structural coherence, we can note that the differences between phenomenal states have a structure that corresponds directly to the differences embedded in physical processes; in particular, to those differences that make a difference down certain causal pathways implicated in global availability and control. That is, we can find the same abstract information space embedded in physical processing and in conscious experience.

This leads to a natural hypothesis: that information (or at least some information) has two basic aspects, a physical aspect and a phenomenal aspect. This has the status of a basic principle that might underlie and explain the emergence of experience from the physical. Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing."


If you mean closer to a correct theory of consciousness then I'm not sure.
 
  • #56
atyy said:
In fact there is a rather robust argument that Tononi's conception does not provide any explanatory power (see the link in post #18), so I am perplexed in what way it would be better than Graziano's. (If anything, Graziano's conception in which there is a model of the self and its interaction with the world seems closer to what one colloquially calls consciousness, whereas Tononi simply declares certain computational gates configured according to expander graphs to be conscious.)

But that makes Graziano's approach idiosyncratic and anthropocentric. If we want to be able to test consciousness in a robot, we need a more generalized measurement of consciousness. Looking at information flow and boundaries is a more generalized measurement. Again, it doesn't appear Tononi's got it right, but Graziano's approach would be difficult to apply outside of humans.
 
  • #57
Pythagorean, what is the definition of consciousness at play here?

Regarding IIT, how does this explain the hard problem? I gather that the theory provides a method for computing a system's informational potential ("information integration", where that has a specific meaning) and that, with a sufficiently high value, the system is conscious. But doesn't this just tell us which systems are conscious by the definition of the test itself, and is that really of any more utility than assessing consciousness from an evaluation of behaviour and brain state? As Scott argues, it should theoretically be possible to create a system with a high Φ value that we would intuitively doubt is conscious. And if that system cannot report its experience, where does that leave us?

Simply, why does information integration, or a high Φ value, generate "experience"?
 
  • #58
Again, it's the approach, not the specific method or outcome, of abstracting the measurement to information that I like about Tononi's work. He's looking for a measurement that can be applied to any object, not just humans (thus external validity becomes testable, which is important). Though I would note that if we did come up with such a measure and it did accurately predict consciousness in humans (where we can verify) and it gave a positive result for a bank of logic gates as well, then the only thing stopping us from accepting that a bank of logic gates can be conscious is our own biases and anthropocentric view of consciousness.

Of course, we don't have that; Tononi's approach hasn't proven to be robust and we have no idea where to set the threshold for phi. What I really mean is that Tononi seems to be asking the right questions, even if he's produced a wrong answer.
 
  • #59
It solves the hard problem (rather, it proposes a tentative solution) by providing a mathematical framework to quantify the amount of consciousness and the kind of consciousness (qualia) in an arbitrary physical system. Testability is always going to be difficult for a theory which attempts to solve the hard problem. It is being tested in humans in different states and levels of consciousness, however.
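For a concrete (if drastically simplified) sense of the kind of quantity involved, here is a toy sketch in Python. To be clear, this is not Tononi's actual Φ algorithm, which searches over all partitions of a system and works with cause-effect repertoires; it just computes, for a two-node deterministic Boolean network, the whole-system past-to-future mutual information minus the sum of the per-node mutual informations, under a uniform prior over past states. The `swap` and `copy` update rules are my own illustrative examples, not anything from Tononi's papers.

```python
from itertools import product
from collections import Counter
import math


def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution given as a Counter."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def mutual_info(pairs):
    """Mutual information (bits) between the two coordinates of a list of samples."""
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    pxy = Counter(pairs)
    return entropy(px) + entropy(py) - entropy(pxy)


def phi_proxy(update):
    """Crude integration proxy for a 2-node deterministic Boolean system:
    whole-system past->future mutual information minus the sum of the
    per-node past->future mutual informations (uniform prior over states).
    NOT the official IIT Phi - just an intuition pump."""
    states = list(product([0, 1], repeat=2))
    whole = mutual_info([(s, update(s)) for s in states])
    parts = sum(
        mutual_info([(s[i], update(s)[i]) for s in states]) for i in range(2)
    )
    return whole - parts


swap = lambda s: (s[1], s[0])  # each node copies the other node: integrated
copy = lambda s: (s[0], s[1])  # each node copies itself: two independent parts
```

On `swap`, where each node's next state depends entirely on the other node, the proxy gives 2 bits; on `copy`, two causally disconnected nodes, it gives 0 even though the whole system carries just as much information. That difference, integration rather than mere information, is the intuition Φ tries to formalise.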

It's not always obvious or intuitive which systems are conscious, and there is a large amount of disagreement. This is a good example of an opinion-dividing case: https://en.wikipedia.org/wiki/China_brain. I would be interested to know whether you think the China brain would be conscious. The lack of consensus on these issues is exactly why we would like a principled theory.

Edit: I wrote this before I saw Pythagorean's response. It's directed towards Graeme's post.
 
  • #60
@Graeme M:

Thank you for your prompt responses to my post #52 questions in your post #53.

I think there is some overlap between my interest in models of the mind and the topic of this thread, but I have a great deal of difficulty estimating the extent of this overlap. The mental model I describe in the short essay (about 4400 words) I cited in my post #30 does not explicitly discuss consciousness, but does interrelate several mental functions that I believe are closely related to consciousness. If you are willing to take a look at this essay, I think you may be able to help me make an estimate of the extent of overlap I mentioned above.

Regards,
Buzz
 
Last edited:
  • #61
madness said:
Closer to what? I think it comes closer to a theory of the type that Chalmers suggested would be required to address the hard problem. From Chalmers' original paper:

"...there is a direct isomorphism between certain physically embodied information spaces and certain phenomenal (or experiential) information spaces. From the same sort of observations that went into the principle of structural coherence, we can note that the differences between phenomenal states have a structure that corresponds directly to the differences embedded in physical processes; in particular, to those differences that make a difference down certain causal pathways implicated in global availability and control. That is, we can find the same abstract information space embedded in physical processing and in conscious experience.

This leads to a natural hypothesis: that information (or at least some information) has two basic aspects, a physical aspect and a phenomenal aspect. This has the status of a basic principle that might underlie and explain the emergence of experience from the physical. Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing."


If you mean closer to a correct theory of consciousness then I'm not sure.

How is it any different from the basic idea behind Graziano's if one treats Graziano's boxes as outlining information flow between informational processing units?

Also, is it clear that one should accept Chalmers' proposed form of solution to the hard problem (because it seems that Graziano's ideas could be mapped onto Chalmers' proposed form of solution)? [I don't consider the hard problem, if it exists, to be defined by Chalmers, only to be named by him. I would include other arguments including Searle's "Chinese room argument" or Block's "China Brain" that you mentioned, as well as the vaguer thoughts expressed by Witten above.]
 
Last edited:
  • #62
Pythagorean said:
But that makes Graziano's approach idiosyncratic and anthropocentric. If we want to be able to test consciousness in a robot, we need a more generalized measurement of consciousness. Looking at information flow and boundaries is a more generalized measurement. Again, it doesn't appear Tononi's got it right, but Graziano's approach would be difficult to apply outside of humans.

In specifics perhaps, but if you look at just the first figure linked in the Graziano paper linked in the OP, why couldn't that be fleshed out and applied to robots?
 
  • #63
atyy said:
In specifics perhaps, but if you look at just the first figure linked in the Graziano paper linked in the OP, why couldn't that be fleshed out and applied to robots?

I was actually thinking about this. Not in reference to the figure, but in general. If Graziano's conjecture was framed in terms of information flow, it could be abstracted to look similar to Tononi's. But Graziano seems to frame a lot of his model phenomenologically in terms of human experience and psychology (which is what I meant by idiosyncratic).
 
  • #64
atyy said:
How is it any different from the basic idea behind Graziano's if one treats Graziano's boxes as outlining information flow between informational processing units?

The major difference is that IIT proposes a fundamental relationship between some feature of physical systems and the associated conscious experience. Crucially, this relationship could not have been deduced from, or reduced to, the standard laws of physics which determine the activity of that physical system.

Graziano, on the other hand, proposes a mechanism which performs a function. Under Chalmers' original formulation of the hard and easy problems, this is by definition a solution to an "easy problem".
atyy said:
Also, is it clear that one should accept Chalmers' proposed form of solution to the hard problem (because it seems that Graziano's ideas could be mapped onto Chalmers' proposed form of solution)? [I don't consider the hard problem, if it exists, to be defined by Chalmers, only to be named by him. I would include other arguments including Searle's "Chinese room argument" or Block's "China Brain" that you mentioned, as well as the vaguer thoughts expressed by Witten above.]

My point is that Graziano's ideas could not be mapped onto any solution to Chalmers' formulation of the hard problem, as a result of his formulation of the problem rather than his form of a solution. Of course, you could reformulate Chalmers' definition of the problem, but I find that to be evasive unless you openly admit to a deflationary approach.

In my opinion, Chalmers has put forward the most carefully argued and comprehensive account of the hard problem, at least out of those I have read (http://www.amazon.com/dp/0195117891/?tag=pfamazon01-20). That's not to diminish the contributions of others (I have mentioned Nagel in this thread several times), but I consider that book to be pretty definitive on the issue.
 
  • #65
madness said:
The major difference is that IIT proposes a fundamental relationship between some feature of physical systems and the associated conscious experience.

madness can you briefly summarise how IIT does this? From my pretty sketchy take on what I read, IIT can quantify whether a system is conscious if it has a large enough phi value. But what does it say about how an experience emerges from the informational potential of the system? What measure quantifies that?
 
  • #66
Graeme M said:
madness can you briefly summarise how IIT does this? From my pretty sketchy take on what I read, IIT can quantify whether a system is conscious if it has a large enough phi value. But what does it say about how an experience emerges from the informational potential of the system? What measure quantifies that?

This is precisely the point. If IIT explained how experience emerges from information, using some standard physical mechanism, it would not be proposing a fundamental relationship between the physical process and the experience. Within the theory, the relationship between phi and consciousness is fundamental and does not reduce to any simpler underlying mechanisms or relationships.
 
  • #68
madness said:
My point is that Graziano's ideas could not be mapped onto any solution to Chalmers' formulation of the hard problem, as a result of his formulation of the problem rather than his form of a solution. Of course, you could reformulate Chalmers' definition of the problem, but I find that to be evasive unless you openly admit to a deflationary approach.

In my opinion, Chalmers has put forward the most carefully argued and comprehensive account of the hard problem, at least out of those I have read (http://www.amazon.com/dp/0195117891/?tag=pfamazon01-20). That's not to diminish the contributions of others (I have mentioned Nagel in this thread several times), but I consider that book to be pretty definitive on the issue.

Why can't Graziano's ideas be mapped onto Chalmers' proposed solution form? All Tononi is doing is saying some configuration characterized by phi is conscious. One could just as easily say, in the spirit of Graziano, that anything that has a model of itself interacting with the world is conscious. (See also Pythagorean's post #63.)

My criticism is that I do not believe Chalmers' proposed form of solution to the hard problem is satisfactory (consciousness is "nothing but" X, where X is some equivalence class of dynamical systems). If we believe Chalmers' proposed form of solution, it is difficult to see how the hard problem is hard, since why wouldn't we already accept Graziano's or Tononi's proposals? Even Penrose's would be plausible. Chalmers' idea is essentially that the hard problem should be solved by a definition. But surely what the hard problem is asking for is an explanation - and Graziano's comes far closer than Tononi's to that.
 
  • #69
atyy said:
Why can't Graziano's ideas be mapped onto Chalmers' proposed solution form? All Tononi is doing is saying some configuration characterized by phi is conscious. One could just as easily say, in the spirit of Graziano, that anything that has a model of itself interacting with the world is conscious. (See also Pythagorean's post #63.)

My understanding of Graziano's theory is that it proposes a mechanism which performs a function. Any such theory is, by Chalmers' definition, a solution to an easy problem. I think you are correct that, if you make the extra step to say "all systems which implement this mechanism are conscious" (and I think also some other statement such as "no systems which do not implement this mechanism are conscious") then you will have a theory which addresses the hard problem. Do you think these statements are reasonable for a proponent of Graziano's theory to make?

atyy said:
My criticism is that I do not believe Chalmers' proposed form of solution to the hard problem is satisfactory (consciousness is "nothing but" X, where X is some equivalence class of dynamical systems). If we believe Chalmers' proposed form of solution, it is difficult to see how the hard problem is hard, since why wouldn't we already accept Graziano's or Tononi's proposals? Even Penrose's would be plausible. Chalmers' idea is essentially that the hard problem should be solved by a definition. But surely what the hard problem is asking for is an explanation - and Graziano's comes far closer than Tononi's to that.

It depends. I've recently seen people talk of the "pretty hard problem", which is to provide a theory of exactly which physical systems are conscious, by how much, and what kind of experiences they will have, but without explaining why. I'm not sure we can ever achieve a solution to the real hard problem, because there is a similar "hard problem" with any scientific theory. Chalmers' and Tononi's proposed solutions seem to attack the pretty hard problem rather than the hard problem. Noam Chomsky actually does a great job of explaining this point.
 
  • #70
atyy said:
Chalmers' idea is essentially that the hard problem should be solved by a definition. But surely what the hard problem is asking for is an explanation - and Graziano's comes far closer than Tononi's to that.

My issue with Graziano's is that it comes "too close", I guess. More accurately, it's some kind of analog to "over-fitting". Tononi proposes physical events as the mechanism (in terms of information theory) while Graziano proposes it in terms of psychological functions (that we may or may not know how to quantify the physics of). So Graziano's is more intuitively graspable, but it assumes too much for it to be generalizable to all systems. I think Tononi's "naive" approach is more suitable in that regard. Of course, the two approaches are not mutually exclusive, and perhaps equivalent in some limit.

Another approach that frames brain function in terms of information flow is Friston's "Free Energy Principle for the Brain" [1]. Friston doesn't directly try to answer the hard problem, but he sets out to understand the brain in a non-anthropocentric framework.

[1] http://www.nature.com/nrn/journal/v11/n2/full/nrn2787.html
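Friston's actual framework is a variational scheme over hierarchical generative models, so any few-line version is a caricature. Still, the core move, perception as minimisation of prediction error, can be sketched as follows; the function name and parameters here are purely illustrative, not from Friston's papers.

```python
# Caricature of "perception as prediction-error minimisation": an internal
# estimate mu is nudged down the gradient of the squared prediction error.
# This is a loose intuition pump, not Friston's variational free-energy scheme.
def perceive(observation, mu=0.0, lr=0.1, steps=200):
    for _ in range(steps):
        error = observation - mu   # sensory prediction error
        mu += lr * error           # gradient step on 0.5 * error**2
    return mu
```

After enough steps the internal estimate converges to the observation; the full framework extends this error-minimisation story with priors, precision weighting, hierarchy, and action on the world, which is what makes it applicable to brains and, in principle, to non-human systems.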
 
Last edited by a moderator:
