Daniel Dennett's Consciousness Explained

In summary: Daniel Dennett's book "Consciousness Explained" argues that subjective experience, or qualia, is just a result of biology and nothing more. There is no clear answer to the question of whether consciousness is purely biological or something more. The best evidence so far suggests that it is more complicated than pure computation.
  • #1
riezer
In Daniel Dennett's Consciousness Explained, he describes and explains how subjective experience, or qualia, or consciousness in general, is just a result of biology and nothing more. The word "consciousness" is very broad and can become vague, so let's just focus on qualia, or internal subjective experience. Could it really just be a result of pure brain biology? Is there any recent research on how a neural algorithm or network property can give rise to it? Could we someday create a purely mechanical sentient being with internal subjective experience? That would be definite proof that consciousness and qualia are just special properties of certain network algorithms or processes. Is this possible? I'm quite undecided about this... one day believing our qualia are due to something extra, the next believing they are just a result of neural and glial processes, purely biological, based on cells and thermodynamics and nothing more. What is the best evidence and the latest research so far on this issue? Is there, for example, a definitive argument for why biology (or mere calculational methods based on neural networks) is insufficient to produce qualia or internal subjective experience?
 
  • #2
riezer said:
In Daniel Dennett's Consciousness Explained, he describes and explains how subjective experience, or qualia, or consciousness in general, is just a result of biology and nothing more. The word "consciousness" is very broad and can become vague, so let's just focus on qualia, or internal subjective experience. Could it really just be a result of pure brain biology? Is there any recent research on how a neural algorithm or network property can give rise to it? Could we someday create a purely mechanical sentient being with internal subjective experience? That would be definite proof that consciousness and qualia are just special properties of certain network algorithms or processes. Is this possible? I'm quite undecided about this... one day believing our qualia are due to something extra, the next believing they are just a result of neural and glial processes, purely biological, based on cells and thermodynamics and nothing more. What is the best evidence and the latest research so far on this issue? Is there, for example, a definitive argument for why biology (or mere calculational methods based on neural networks) is insufficient to produce qualia or internal subjective experience?

There is no agreed answer to your question. But at least the question can be clearly stated.

As you say, the guts of it is whether consciousness is "computational", or whether it "involves something more"?

If it is computational - a software pattern - then the presumption that follows is it could run on any kind of hardware in principle. So this could be biological. Or it could be something that is doing the same job. You could do it in silicon chips and if you made the same patterns, your machine would be aware.

Now we already can say that the brain, as a biological organ, certainly looks computational in many aspects. Neurons fire in ways that look like digital signals. Axons conduct the signals point-to-point. It looks like information processing.

So the question is whether it is "pure computation" (rather than pure biology). Or whether it is something more entangled, more subtle, more difficult (even impossible) to replicate at the hardware level.

We can think the brain is crisply divided by the software/hardware distinction - as it is in the Turing machine which is the basis of our concept of computation - but it is then a big if as to whether the brain actually is divided in this "pure" fashion.

I believe, having studied this issue plenty :smile:, that the mind is NOT pure computation. It is not a Turing machine. There is not a clean software~hardware divide that would allow you to identify some set of consciousness algorithms that you pick up (or download) and implement on some other general purpose hardware.

This is just a wrong mental image although a very common one.

Instead, what we should be focused on is generalising our notion of consciousness as a living process. So in fact, forget the machine model that comes from technology and actually apply some biological thinking.

Theoretical biology explains living systems with a whole bunch of concepts like anticipation, adaptation, semiosis, dissipation, hierarchy, modelling relations, that make consciousness seem much less of a mystery.

So computer analogies are fine as far as they go. Which isn't very far when it comes to living systems. Life needs to be described in its own terms at the end of the day.

For this reason, I think you make a big mistake when saying consciousness seems too vague and hard to define, so let's switch the conversation to qualia.

The idea of qualia is that there can be such a thing as "atoms of experience". Now you have not just a software/hardware divide, but a dualistic mind/matter divide. Effectively you have painted yourself into an intellectual corner with no way out (except by all sorts of crackpot magic like quantum coherence, panpsychism, etc).

So it is right to think that consciousness is a rather squishy and ill-defined concept once you examine it. You have started trying to generalise something specific - your current state of experience - to something more general, your brain biology. So you just need to keep going down that path, using actual concepts from biology.

BTW, Dennett ain't much of a biologist despite the fact he writes a lot about Darwin machines. It gives the game away that he treats biology as "just machinery".
 
  • #3
apeiron said:
There is no agreed answer to your question. But at least the question can be clearly stated.

As you say, the guts of it is whether consciousness is "computational", or whether it "involves something more"?

If it is computational - a software pattern - then the presumption that follows is it could run on any kind of hardware in principle. So this could be biological. Or it could be something that is doing the same job. You could do it in silicon chips and if you made the same patterns, your machine would be aware.

Now we already can say that the brain, as a biological organ, certainly looks computational in many aspects. Neurons fire in ways that look like digital signals. Axons conduct the signals point-to-point. It looks like information processing.

So the question is whether it is "pure computation" (rather than pure biology). Or whether it is something more entangled, more subtle, more difficult (even impossible) to replicate at the hardware level.

We can think the brain is crisply divided by the software/hardware distinction - as it is in the Turing machine which is the basis of our concept of computation - but it is then a big if as to whether the brain actually is divided in this "pure" fashion.

I believe, having studied this issue plenty :smile:, that the mind is NOT pure computation. It is not a Turing machine. There is not a clean software~hardware divide that would allow you to identify some set of consciousness algorithms that you pick up (or download) and implement on some other general purpose hardware.

This is just a wrong mental image although a very common one.

Instead, what we should be focused on is generalising our notion of consciousness as a living process. So in fact, forget the machine model that comes from technology and actually apply some biological thinking.

Theoretical biology explains living systems with a whole bunch of concepts like anticipation, adaptation, semiosis, dissipation, hierarchy, modelling relations, that make consciousness seem much less of a mystery.

So computer analogies are fine as far as they go. Which isn't very far when it comes to living systems. Life needs to be described in its own terms at the end of the day.

For this reason, I think you make a big mistake when saying consciousness seems too vague and hard to define, so let's switch the conversation to qualia.

The idea of qualia is that there can be such a thing as "atoms of experience". Now you have not just a software/hardware divide, but a dualistic mind/matter divide. Effectively you have painted yourself into an intellectual corner with no way out (except by all sorts of crackpot magic like quantum coherence, panpsychism, etc).

So it is right to think that consciousness is a rather squishy and ill-defined concept once you examine it. You have started trying to generalise something specific - your current state of experience - to something more general, your brain biology. So you just need to keep going down that path, using actual concepts from biology.

BTW, Dennett ain't much of a biologist despite the fact he writes a lot about Darwin machines. It gives the game away that he treats biology as "just machinery".

Do you agree with Antonio Damasio and Gerald Edelman? They share the belief that it's all biological circuitry and nothing more. Sometimes I tend to believe them, sometimes I don't. Edelman is building a machine with the same neural circuits as humans. It can be taught like an infant, and it seems to be working as modeled. Maybe if the computation is complex enough, internal subjective experience can occur as an emergence? It has to do with sensors (or senses) and memory and internal analysis. Is there any proof of why this can't give rise to internal subjective experience? Damasio has a very sophisticated idea in his new book Self Comes to Mind that our neural circuits may do just this.
 
  • #4
riezer said:
Do you agree with Antonio Damasio and Gerald Edelman?

Neither of them has particularly good models. Neither of them has anything startling to say. One is an adequate populariser, the other is a famous egotist. Neither carries any particular weight in the field.

If I were pointing you to the best work that is computational in its language but tries to capture the biological essence, then historically I would look to the cyberneticists like Ashby and MacKay, then the ART neural nets of Stephen Grossberg, and most recently to the Bayesian brain approach of Friston, Hinton and others.
http://en.wikipedia.org/wiki/Bayesian_brain

A predictive coding or anticipatory approach to consciousness explains pretty directly why such a system would have "internal states". It has to in order to work.

The standard computational model gets it back to front by thinking the brain works by turning sensory input into experiential output - some kind of internal display that arises for no particular reason while the brain is trying to generate suitable motor output.

But the biological model I'm talking about says systems guess the state of the world from experience. They create running predictive models. Then they respond to the errors of prediction to update that model. So it is instead output before input. You need a state of experience to be able to experience.

It is not complicated at all. There is certainly no need to invoke "computational complexity". You just need to turn your notions of processing round so they face the right way.
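To make that concrete, here is a minimal toy sketch in Python - my own illustration for this thread, not anyone's published model - of the anticipatory loop: the system issues its prediction first, receives a noisy observation, and uses only the prediction error to update its running internal estimate. The point is simply that such a system has to carry an internal state in order to work at all.

```python
import random

# Toy anticipatory/predictive loop: the "world" has a hidden value that drifts,
# and the system never sees it directly - only noisy samples. The system keeps
# a running estimate (its internal state), predicts each incoming sample, and
# corrects the estimate in proportion to the prediction error. Output (the
# prediction) comes before the input is used.

def run(steps=50, learning_rate=0.2, noise=0.5, seed=1):
    random.seed(seed)
    hidden = 10.0    # true state of the world, unknown to the system
    estimate = 0.0   # the system's internal model of that state
    for _ in range(steps):
        hidden += random.gauss(0, 0.1)            # the world drifts slowly
        prediction = estimate                     # the model guesses first
        sample = hidden + random.gauss(0, noise)  # then a noisy observation arrives
        error = sample - prediction               # only the surprise is used
        estimate += learning_rate * error         # update the running model
    return hidden, estimate

if __name__ == "__main__":
    hidden, estimate = run()
    print(f"true state ~ {hidden:.2f}, internal estimate ~ {estimate:.2f}")
```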

Dennett of course is sort of talking about this with his "intentional stance". But that is the problem with Dennett. He sort of gets a lot of things vaguely right and then thinks he is being original, when it seems just obvious, and better described, to those already at work in the field.

Edelman had a similar "my big idea" approach. He made himself very unpopular even though he had raised big bucks for a research institute.

Damasio is just a decent neuroscientist who knows his stuff, but has no sharply focused theory of the kind that would produce actual models.
 
  • #5
riezer said:
In Daniel Dennett's Consciousness Explained, he describes and explains how subjective experience, or qualia, or consciousness in general, is just a result of biology and nothing more... Could we someday create a purely mechanical sentient being with internal subjective experience?

riezer said:
Do you agree with Antonio Damasio and Gerald Edelman? They share the belief that it's all biological circuitry and nothing more... Maybe if the computation is complex enough, internal subjective experience can occur as an emergence?


I think there are two almost unrelated issues being confused in this kind of discussion. One is about the kind of consciousness we humans experience. To say this is “biological circuitry and nothing more” would simply be foolish, since it’s obvious that language and culture play a major part in shaping not only what we experience, but also the unconscious background-processing that goes on in our brains.

The other issue is about “internal subjective experience” or “sentience”. This seems like a very deep mystery IF we treat it as an objective property that some things (like people and other animals) “have” and other things don’t.

There is no threshold of “complex computation” that determines whether something “has” its own point of view on the world. In our culture, we’re used to thinking of rocks and trees and molecules as “objects” – something seen from the outside. We think of ourselves and other humans as “subjects” each of which has its own “internal” point of view. But these are not objective categories! It’s just a question of the viewpoint we choose to take.

We can very reasonably view a person as an object – say, from the standpoint of biology or sociology or economics – if for these purposes the person’s own point of view is unimportant.

But we could also try to understand what the world looks like from an atom’s point of view, in its relationships with other atoms. Physicists don’t in fact do this, but I think eventually they will need to. This does not amount to assuming atoms “have consciousness” or any special “sentience”, beyond the many kinds of physical interactions we know about. It’s just a matter of rethinking what we know from a different point of view.

Unfortunately there’s a lot of valuable work in brain science getting mixed up with what to me are meaningless questions about “the neural basis of consciousness”. There is no objective truth as to whether something “has its own point of view” or “is merely an object.” The confusion arises only because we’re much more used to imagining the world from another person’s point of view than from the point of view of a cat or a worm or a tree.
 
  • #6
apeiron said:
But the biological model I'm talking about says systems guess the state of the world from experience. They create running predictive models. Then they respond to the errors of prediction to update that model. So it is instead output before input. You need a state of experience to be able to experience.

This makes sense to me. And it clearly applies to all kinds of biological systems, not just ones we would call “conscious”. In fact, something a lot like this happens in physics as well. In quantum mechanics, the wave function that describes the “state” of any physical system is a statistical projection of possibilities that gets updated – from the point of view of that particular system – to the extent actual information is received.
 
  • #7
ConradDJ said:
But we could also try to understand what the world looks like from an atom’s point of view, in its relationships with other atoms. Physicists don’t in fact do this, but I think eventually they will need to. This does not amount to assuming atoms “have consciousness” or any special “sentience”, beyond the many kinds of physical interactions we know about. It’s just a matter of rethinking what we know from a different point of view.



Why would a rethinking of what we know from a different POV be necessary?


ConradDJ said:
Unfortunately there’s a lot of valuable work in brain science getting mixed up with what to me are meaningless questions about “the neural basis of consciousness”. There is no objective truth as to whether something “has its own point of view” or “is merely an object.” The confusion arises only because we’re much more used to imagining the world from another person’s point of view than from the point of view of a cat or a worm or a tree.


In that last sentence you used "we", implying that we are conscious. Why would you claim that "there is no objective truth as to whether something has its own point of view"?
 
  • #8
ConradDJ said:
I think there are two almost unrelated issues being confused in this kind of discussion. One is about the kind of consciousness we humans experience. To say this is “biological circuitry and nothing more” would simply be foolish, since it’s obvious that language and culture play a major part in shaping not only what we experience, but also the unconscious background-processing that goes on in our brains.

I agree with this of course.

ConradDJ said:
The other issue is about “internal subjective experience” or “sentience”. This seems like a very deep mystery IF we treat it as an objective property that some things (like people and other animals) “have” and other things don’t.

And this is why I say that the generalisation of human/animal awareness is the important step in a theory of consciousness. You can't just say it is a result of an arrangement of physical atoms, or some pattern of information. You have to focus on the process, the systematic aspects, in generalising from our minds as a particular biological system to systems in general.
 
  • #9
Some people like to model, whereas others would like to know.


riezer said:
Is there, for example, a definitive argument for why biology (or mere calculational methods based on neural networks) is insufficient to produce qualia or internal subjective experience?


Communication. It's only possible between minds/selves. Even what happens between 2 routers/modems is not communication, it's just interaction. There must be a conscious human being at one end for interaction to become information and communication. The theory that biology can produce internal subjective experience is dead, and its body is being kept on a ventilator until one day a better theory comes to light.

Ask yourself (that's what philosophers do all the time anyway) what commonly used terms like "underlying reality", "emergent", and "hidden variable(s)" might refer to.
 
  • #10
Maui said:
Communication. It's only possible between minds/selves. Even what happens between 2 routers/modems is not communication, it's just interaction. There must be a conscious human being at one end for interaction to become information and communication.

This is an assertion rather than an argument, which is precisely the problem.

If you had an argument to offer, you would be able to say on what general grounds routers are "only interacting" and what extra is involved in "consciously communicating".

Saying there must be this, or must be that, is not philosophy or science but rhetoric.

A strong computationalist might argue here along the lines that interaction is the exchange of information, so the material cause is exactly the same. And so all that differs is the informational complexity - or some other information-based measure.

I would dispute this and suggest instead that we have to be able to measure the meaning being exchanged, which involves the degree of information discarded, and so gets us into more biological concepts like semiosis.

But at least that would be an argument. Not bald assertions that are unsupported by either philosophy or science.
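As a toy illustration of the "information discarded" idea above - my own sketch for this thread, not a serious measure of meaning - the discarded information can at least be counted. A router-like relay passes a detailed signal through unchanged, while an interpreting system collapses it into the few categories that matter to it, and the entropy lost in that collapse is measurable:

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits per symbol) of a sequence."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A detailed "signal": temperature readings to one decimal place.
signal = [round(15 + 0.1 * i, 1) for i in range(100)]

# A router-like relay keeps everything: nothing is discarded.
relayed = list(signal)

# An interpreting system collapses the readings into categories it cares about.
interpreted = ["cold" if t < 18 else "warm" if t < 22 else "hot" for t in signal]

h_signal = entropy(signal)
print(f"raw signal entropy:    {h_signal:.2f} bits/symbol")
print(f"relayed entropy:       {entropy(relayed):.2f} bits/symbol (nothing discarded)")
print(f"interpreted entropy:   {entropy(interpreted):.2f} bits/symbol")
print(f"information discarded: {h_signal - entropy(interpreted):.2f} bits/symbol")
```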
 
  • #11
apeiron said:
This is an assertion rather than an argument, which is precisely the problem.

If you had an argument to offer, you would be able to say on what general grounds routers are "only interacting" and what extra is involved in "consciously communicating".



I didn't, as it was self-evident and simple, and I deemed it not to require further explanation.
Two, three or 2 billion particles interacting carry no information in and of themselves. Information doesn't exist as such UNTIL a mind is present, for information is only a defining characteristic of mind, not of particles (routers deal exclusively with electricity, not with information; whereas human beings deal with what is perceived as information, though it's still JUST electricity and electrical impulses). The icons on your desktop are electricity and carry no intrinsic information when there is no conscious mind around - they are not even icons without consciousness. I'll step a bit further and say that no human being has ever dealt with particles or fields, but only with information about particles and fields. We still don't know what 'information' is, just as we don't know what a conscious mind is, so it's hard to speculate about what might be involved in "consciously communicating". As always, it's far easier to spot how things are not than to say how things really are.



apeiron said:
Saying there must be this, or must be that, is not philosophy or science but rhetoric.

A strong computationalist might argue here along the lines that interaction is the exchange of information, so the material cause is exactly the same. And so all that differs is the informational complexity - or some other information-based measure.

I would dispute this and suggest instead that we have to be able to measure the meaning being exchanged, which involves the degree of information discarded, and so gets us into more biological concepts like semiosis.

But at least that would be an argument. Not bald assertions that are unsupported by either philosophy or science.



What you have been proposing so far are models. I have nothing against this approach (which you might deem scientific), as long as you realize how crippled those attempts really are when tested against reality (you are unconsciously injecting your mind and basing all your theses on the inner workings of your conscious mind when you build your new model, then you go on and reject the very basis of your thesis).
 
  • #12
Maui said:
What you have been proposing so far are models. I have nothing against this approach (which you might deem scientific), as long as you realize how crippled those attempts really are when tested against reality.

How do you characterise your approach if it is not scientific? It can't therefore be natural philosophy either, so I'm not sure where that leaves you.

What is really ironic is that consciousness IS a model of reality. You yourself are saying we don't directly know the world, but represent it in terms of modeled concepts - like "particles", "waves", "bits".

So if you don't accept the epistemology~ontology divide, I really have no idea what kind of thought system you are appealing to here.

The OP was about the scientific/natural philosophy approach to reality. I'm not hearing anything from you of any substance as to why we should not stick to it.
 
  • #13
apeiron said:
How do you characterise your approach if it is not scientific? It can't therefore be natural philosophy either, so I'm not sure where that leaves you.


You are right, this is difficult (though some may find my attitude leaning towards idealism). As I said earlier in my defense, it's always easier to shoot down a model than to propose a viable one. That is to say, I am about as clueless as the next person on these issues.


apeiron said:
What is really ironic is that consciousness IS a model of reality. You yourself are saying we don't directly know the world, but represent it in terms of modeled concepts - like "particles", "waves", "bits".

So if you don't accept the epistemology~ontology divide, I really have no idea what kind of thought system you are appealing to here.

The OP was about the scientific/natural philosophy approach to reality. I'm not hearing anything from you of any substance as to why we should not stick to it.



I can't help, sorry. I have to observe the rules on overly speculative posts, and what I might say would probably be meaningless anyway. These topics are like walking and balancing on a knife's edge.
 
  • #14
Maui said:
I can't help, sorry. I have to observe the rules on overly speculative posts, and what I might say would probably be meaningless anyway. These topics are like walking and balancing on a knife's edge.

At least idealism IS a legitimate philosophical position :smile:, even if difficult to argue for.

But if the OP is "how do we know if computationalism is enough", then bringing in idealism does defocus the discussion.

I think the question about computationalism is a live one. In mind science, it is up against other "natural explanations" like non-linear dynamics of various stripes (Kelso, Freeman, Nunez, Harth). And then approaches like neural networks which sit sort of in-between.

So it seems clear - to science - that the brain/mind is not just about computation. But that larger general basis to theory is still a matter of much discussion.
 
  • #15
Let's discuss this in terms of hard data, neuroscience and parts of the brain. What is the minimal part of the brain that can exhibit qualia? Damasio said the brain stem is sufficient - it's as if the spine and brain stem can already feel even in the absence of a frontal cortex. Zeki, who wrote "Vision of the Brain", believes that perceptual centers like the visual cortex are themselves conscious, based on experiences with synesthesia. He was saying this holds even without frontal cortex feedback: isolated V4, for example, can perceive colors and their qualia. But Edelman believes reentrant communication between the different modules of the brain is necessary for qualia. What is your position on this? Try to describe it in terms of neuroscience and brain parts so we can lock onto the mechanism of interaction.
 
  • #16
riezer said:
Let's discuss this in terms of hard data, neuroscience and parts of the brain. What is the minimal part of the brain that can exhibit qualia? Damasio said the brain stem is sufficient - it's as if the spine and brain stem can already feel even in the absence of a frontal cortex. Zeki, who wrote "Vision of the Brain", believes that perceptual centers like the visual cortex are themselves conscious, based on experiences with synesthesia. He was saying this holds even without frontal cortex feedback: isolated V4, for example, can perceive colors and their qualia. But Edelman believes reentrant communication between the different modules of the brain is necessary for qualia. What is your position on this? Try to describe it in terms of neuroscience and brain parts so we can lock onto the mechanism of interaction.

It all depends on how you define qualia. If you choose some operational definition like "shows a behavioural response to stimulus", then even a spinal reflex gets covered by that. On those grounds, E. coli measurably responds to the world and is properly sentient.

But if you believe consciousness is something very special and all about the "ineffable redness of red", then you are asserting a subjective private definition that is not scientifically measurable. Nor even philosophically credible, I would argue.

This notion of qualia is inherently dualistic. And so not even worth bothering to discuss in terms of "hard data and neuroscience".

For instance, claims about V4 being able to "perceive colour in isolation" are nonsense. V4 is clearly part of an integrated visual hierarchy.

The proper question is instead how do we describe V4 in terms of computational notions of modularity vs distributed function? Or more biologically, in terms of the dichotomous drives of differentiation~integration?

If Zeki is saying colour perception is about this brain location, and Edelman is saying it is about this brain connection, are they arguing for different mechanisms or simply pointing to complementary parts of the same differentiation~integration process?

The same with Damasio. Is it the either/or of lower brain vs higher brain? Or is it the wholeness that is apparent at every level of the brain (with varying degrees of plasticity~stability, the lower brain being more hardwired, the higher brain being more adaptable)?

To step back, we have here a situation where there are a number of well-qualified scientists (Edelman, Damasio, Zeki, Crick, Freeman, etc) who are publishing popular books to advertise their personal speculations about "a theory of consciousness".

In every case, they presume an unsophisticated monism. Consciousness "just is" the emergent result of some arrangement of neural mechanism. And given the brain is such a complex thing to explain to people, well let's play up this particular aspect and make it sound key.

This is trite. But neuroscience has for a very long time been an off-shoot of medicine, and so a very simple-minded reductionism is embedded in the culture of these people. It is also the kind of answer that the lay pop science reader is looking for. Hence there is a market for top neuroscientists to write neuroscience books that confirm rather than challenge prevailing mechanistic views of biological systems.

So again, what is your (or their) definition of qualia as something generally measurable? Only then would it be possible to speak to the neuro-evidence.
 
  • #17
apeiron said:
It all depends on how you define qualia. If you choose some operational definition like "shows a behavioural response to stimulus", then even a spinal reflex gets covered by that. On those grounds, E. coli measurably responds to the world and is properly sentient.

But if you believe consciousness is something very special and all about the "ineffable redness of red", then you are asserting a subjective private definition that is not scientifically measurable. Nor even philosophically credible, I would argue.

This notion of qualia is inherently dualistic. And so not even worth bothering to discuss in terms of "hard data and neuroscience".

For instance, claims about V4 being able to "perceive colour in isolation" are nonsense. V4 is clearly part of an integrated visual hierarchy.

According to Zeki (who was one of the original discoverers of the color function of area V4 in the brain) in http://www.scribd.com/doc/49691724/The-Disunity-of-Consciousness-Zeki

"Processing sites are also perceptual sites. One conclusion from the clinical evidence is that a micro-consciousness for colour or visual motion is generated through activity at a distinct processing site,and therefore that a processing site is also a perceptual site. Such a conclusion is reinforced by studies of the visual motioncentre, area V5, which receives a direct visual input that bypasses the primary visual cortex (area V1)"

Please let me know how far you agree with the paper. I'm still thinking of a good definition of qualia that won't be vague.

 
  • #18
riezer said:
According to Zeki (who was one of the original discoverers of the color function of area V4 in the brain) in http://www.scribd.com/doc/49691724/The-Disunity-of-Consciousness-Zeki

"Processing sites are also perceptual sites. One conclusion from the clinical evidence is that a micro-consciousness for colour or visual motion is generated through activity at a distinct processing site, and therefore that a processing site is also a perceptual site. Such a conclusion is reinforced by studies of the visual motion centre, area V5, which receives a direct visual input that bypasses the primary visual cortex (area V1)"

Please let me know how far you agree with the paper. I'm still thinking of a good definition of qualia that won't be vague.

This paper is OK by me. Just conceptually rather clumsy in the way it tries to express the basic idea that the brain is a nested hierarchy of processing activity.

So as I said, the brain is doing both integration and differentiation. It is organised into modules, yet just as much functioning as a coherent whole. It does both at the same time - and most people want to say it is either doing the one, or the other.

Zeki is correctly arguing against the idea - the Cartesian theatre model - that the activities of the brain must be outputted to some final consciousness display area. Instead, everything happens where it happens. And is connected as a whole already.

To me, this is arguing against a straw man. It is obvious that the brain does not output to a pineal gland or prefrontal zone to turn unconscious processing into conscious experience. But this is a really naive view that some do have, and Zeki is arguing with evidence against that.

But should the activity of V4 or V5 then be described as micro-consciousnesses? That is where I say the terminology is clumsy. But in context, I would happily live with it.

Zeki does not even use the term qualia. And he is not asserting that you could cut out V4 or V5 as chunks and they would generate subjective states that were colourful, or fast moving. They have to be part of the whole hierarchy of activity to contribute to such states.

To be able to experience the redness of a red object, you still need both the bottom-up activity of the visual pathways - the retinas alone are carrying out four levels of processing. And you need just as much, if not more, top-down activity to create an attentive state, shaped by expectation and memory.

Look at it like trying to scoop a whorl of turbulence out of a stream with a bucket. V4 goes into some measurable state when it sees red. A whorl develops that seems distinctive. But try and isolate it and you discover it was being formed by everything else that happened around it.
 
  • #19
ConradDJ said:
But we could also try to understand what the world looks like from an atom’s point of view, in its relationships with other atoms. Physicists don’t in fact do this, but I think eventually they will need to. This does not amount to assuming atoms “have consciousness” or any special “sentience”, beyond the many kinds of physical interactions we know about. It’s just a matter of rethinking what we know from a different point of view.
Maui said:
Why would a rethinking of what we know from a different POV be necessary?


I’m working on a post for the BSM or Quantum forum on “The Observer”. In short, I think attempts to combine Quantum theory and Relativity while ignoring the viewpoint of the observer will probably go nowhere. The alternative is to describe the physical world both from the objective “God’s-eye view” of classical physics, and from the local standpoint of individual systems in relation to one another. I will argue that while the objective viewpoint is immensely useful, we have neither logical nor empirical grounds for believing that it could be physically fundamental.

Maui said:
In the bolded sentence you used "we" implying that we are conscious. Why would you claim that "There is no objective truth as to whether something has its own point of view” ?


Because my point was that “having a point of view” is “not a real predicate,” in Kant’s phrase.

From your subjective standpoint, of course it’s true that you have your own point of view. But objectively...? When we look at things as “objects”, seen from outside, we can (if we want) ascribe “their own viewpoint” to them by trying to imagine the world from their viewpoint. There is no truth involved.

It’s often argued – “well, I have very good objective evidence that another person has consciousness, because they behave much like me, and I know I have consciousness.” This makes some sense IF what we mean is human consciousness.

But here’s Thomas Nagel from his famous discussion of “What it’s like to be a bat” –
http://instruct.westvalley.edu/lafave/nagel_nice.html

Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it.

... fundamentally an organism has conscious mental states if and only if there is something that it is to be that organism—something it is like for the organism. We may call this the subjective character of experience.​

This “definition” is nonsense, to me. Apparently Nagel can very remotely imagine being a bat, so he says there is “something like” being a bat. But he doesn’t want to try imagining being a tree, or an atom, so it’s obvious to him that they’re not “conscious”.

“Subjective” means nothing more than “from one’s own point of view”. It has nothing to do with the “character of experience”... as if some experiences were “objective” and some “subjective”.

So the basic problem in debates about consciousness, the basic reason why the word itself is never well defined, is that we confuse the very special, extremely rich and articulate experience we have (as highly cultured humans) with the simple notion of “point of view” – which to me just means being somewhere in particular in space at a certain moment.

If by “consciousness” we mean, say, the experience of an organism that interacts with the world via a neural system – fine. But if we actually define “consciousness” in some specific way like this, instead of just pointing to it as something undefinable that we know we “have”, then the kind of question posed in this thread does not come up.
 
  • #20
I think part of the trouble with this question is that we run into problems of excessive self-reference. Take, for example, how we model time. It is useful to isolate points on a line representing time as snapshots of the evolving state of the system we are studying. Of course, there is no "single unit of time"; the evolution of any system is a continuing process we can conceptually break down into smaller and smaller units, but we eventually run into a Zeno's-paradox type of problem.

I think consciousness is similar. We will never find an atom of perception. It is a dynamic process, dependent on time. Of course, the tools we have to study it are dependent on the thing itself. Perhaps there is a Gödelian problem here. Or perhaps to understand "subjective experience" we need intellectual tools that aren't yet available to us.
 
  • #21
Galteeth said:
I think consciousness is similar. We will never find an atom of perception. It is a dynamic process, dependent on time.

Good point, but there is the hope that the dynamic is also the coherent. And thus that there is some kind of "atom" to focus on.

So dynamic means all is ceaseless change. Never repeating pattern. Elusive because things can go off in every direction.

But dynamics can have organisation. It can have self-organising coherence. This is recognised in hierarchy theory by structural notions like holons and cogent moments. Like the whorls of turbulence in a stream, these become the "atoms" from which a process is composed.

So qualia are atoms of the wrong kind. They are the equivalent of a search for the irreducible substance of the mind. Little shards of experiencing that can be glued together to construct larger mental states.

The "atoms" of a dynamical view are self-similar patterns that arise over all scales as dissipative structure. Multi-scale coherence.

Taking the biological view, we can then be more specific. We can look for minds to express the more particular actions of adaptation or anticipation (different ways of saying much the same thing).

So consciousness becomes something composed of adaptation or anticipation, like little whorls of turbulence, occurring dynamically over all scales. Coherent activity that dissipates uncertainty and increases congruity.

Cut across the nervous system at any scale and you will thus see adaptation/anticipation as the driver of order. Synapses do it. Neurons do it. Cortical columns do it. Cortical areas do it. Cortical lobes and hemispheres do it. Brains do it.

Where is "consciousness" in this dynamic hierarchy view? It is fractally everywhere. Or - as Karl Pribram used to say - its like a hologram.

This is nothing like a computational view where the state of a system is entirely composed of atomistic bits. And some macro-state emerges in causally disconnected fashion, the "software state" that the hardware unwittingly represents.

Instead, the whole process is alive. Its atoms are the repetitive motif of an organisation that reappears at every scale of observation. The motif of adaptation/anticipation. The process of modelling the world and maintaining a good fit between reality and our impressions of it.

The mistake when it comes to dynamism is to seek the fundamental grain of explanation in the smallest scale - the atoms of substance from which larger wholes are constructed.

The fundamental level of explanation for a dynamical system is instead going to be the identification of the ever-present motif, pattern, organising form, self-ordering process. Which is why I keep stressing that the way to generalise consciousness is to understand it in terms of a process of adaptation and anticipation.
 
  • #22
Apeiron, actually, neuroscience has already caught up. I mean, Gerald Edelman has stated something like: consciousness is embodied, meaning one needs existing experience to interpret a new experience. For example, a baby's mind doesn't use calculations to behave, but works directly from a hardwired comparator system in which previous experience is compared against the present. This is why no calculational method or CPU-based brain system is being contemplated. This is why the Freudian unconscious is a very powerful determinant of our behavior. Neuroscience is aware of what you were describing. Anyway, from what sources did you learn all that you have been describing since post #1?

What I'm interested in is whether our current biology, chemistry and physics knowledge is sufficient to explain a human mind (Daniel Dennett's mode). Or could information from beyond present physics have something to do with our sentience? There are many psychoactive drugs like DMT which shut down part of the brain. When the brain is shut down, it has access to a vaster information source. So perhaps the brain acts as some kind of limiter of information to make us behave in this 3D world. And when the brain is destroyed, some kind of essence survives in the vaster information source that is not yet discovered by physics. Do you think this is possible.. or do you believe our present physics is enough to explain the human mind and we just haven't discovered how - akin to the superconductivity mechanism of Cooper pairs not being explainable by Bohr and company, because the emergence has enough complexity to hide many wonders and secrets?
 
  • #23
riezer said:
Apeiron, actually, neuroscience has already caught up.

I am glad to hear it :frown:.

riezer said:
Anyway, from what sources did you learn all that you have been describing since post #1?

The usual ones. Papers, people, conferences.

riezer said:
When the brain is shut down, it has access to a vaster information source. So perhaps the brain acts as some kind of limiter of information to make us behave in this 3D world. And when the brain is destroyed, some kind of essence survives in the vaster information source that is not yet discovered by physics. Do you think this is possible...

No, now I think you are talking unfounded nonsense. Sorry.

Looks like Evo's instincts were right after all. :mad:
 
  • #24
apeiron said:
I am glad to hear it :frown:.



The usual ones. Papers, people, conferences.



No, now I think you are talking unfounded nonsense. Sorry.

Looks like Evo's instincts were right after all. :mad:

No. I simply want to know whether our higher ideals like love, empathy, poetry, etc. are all just a result of neurotransmitters and purely physical, or whether they are due to something extra. So you are a believer in the former. No problem with that. I also tend to believe in the former, because neuroscience has shown so much - for example, how brain damage can disable certain character traits or aspects of personality.

Also, I want to understand it so I can know what path to take in the years and decades ahead. Whether to live a purely materialistic life of survival of the fittest, with no conscience about killing if necessary, or to take up a life of spirituality where there is something more to life than all this evolution and biology and a short 90-year life in the billions of years of the Earth's lifespan.
 
  • #25
Apeiron, the only way to prove your assertions is to actually build a sentient being from scratch. First, some definitions:

Consciousness = Awareness of being aware.
Qualia = the feelings of awareness of being aware.

My questions are:

1. What is the minimum set of brain parts where this awareness of being aware can occur?
2. How do you create a silicon version of it all where it can produce the same behavior?

Anyway, I want to know what proof you have that our self-awareness is simply an emergence from present biology, chemistry and physics. Are there papers or research that show convincingly that it is so? I want to understand that proof (if you have one) so I can focus on the search for the neural correlates of consciousness, if it has to come to that.

Note that although any damage to the brain or nervous system can affect personality, this still doesn't explain how awareness of being aware is possible in the first place. Any thoughts on how it is actually implemented in our biology, or possibly in machines?
 
  • #26
riezer said:
Also, I want to understand it so I can know what path to take in the years and decades ahead. Whether to live a purely materialistic life of survival of the fittest, with no conscience about killing if necessary, or to take up a life of spirituality where there is something more to life than all this evolution and biology and a short 90-year life in the billions of years of the Earth's lifespan.

OK, I hope you are joking and not as nuts as you sound.

The scientific choice is NOT between being a meaningless meat machine and some kind of spiritual essence. We are biological and social beings, so "the right path" means following that one.

If you still feel unsure, please seek some counselling rather than trying to work it out for yourself from books.
 
  • #27
apeiron said:
OK, I hope you are joking and not as nuts as you sound.

The scientific choice is NOT between being a meaningless meat machine and some kind of spiritual essence. We are biological and social beings, so "the right path" means following that one.

If you still feel unsure, please seek some counselling rather than trying to work it out for yourself from books.

If life is all biology and nothing else, then the animal kingdom has equal rights with humans. We are destroying the ecosystem and driving animals to extinction. Humans are like cancers that spread and metastasize through the pure world. Hence the population of the world needs to be reduced by one third to give other living things a chance.

OK, I was just joking.. lol! Don't worry, I'm not Hitler, although their kind could be helped immensely if we could discover that our higher ideals like love are beyond biology.

But then, if it is just biology, no problem. I'm not biased in this search for the truth about consciousness. But from the available evidence, it's more like all our neural circuitry just supports our unconscious side. That is, a calculational network can only support the unconscious. It's still a great mystery how this awareness of being aware comes about. But if you can prove that our biology can produce it, then please share your line of thought. What is important is the truth, not any biases we may hold. For the truth is beautiful. The truth will set us free, whatever it is.
 
  • #28
apeiron said:
Good point, but there is the hope that the dynamic is also the coherent. And thus that there is some kind of "atom" to focus on.

So dynamic means all is ceaseless change. Never repeating pattern. Elusive because things can go off in every direction.

But dynamics can have organisation. It can have self-organising coherence. This is recognised in hierarchy theory by structural notions like holons and cogent moments. Like the whorls of turbulence in a stream, these become the "atoms" from which a process is composed.

So qualia are atoms of the wrong kind. They are the equivalent of a search for the irreducible substance of the mind. Little shards of experiencing that can be glued together to construct larger mental states.

The "atoms" of a dynamical view are self-similar patterns that arise over all scales as dissipative structure. Multi-scale coherence.

Taking the biological view, we can then be more specific. We can look for minds to express the more particular actions of adaptation or anticipation (different ways of saying much the same thing).

So consciousness becomes something composed of adaptation or anticipation, like little whorls of turbulence, occurring dynamically over all scales. Coherent activity that dissipates uncertainty and increases congruity.

Cut across the nervous system at any scale and you will thus see adaptation/anticipation as the driver of order. Synapses do it. Neurons do it. Cortical columns do it. Cortical areas do it. Cortical lobes and hemispheres do it. Brains do it.

Where is "consciousness" in this dynamic hierarchy view? It is fractally everywhere. Or - as Karl Pribram used to say - its like a hologram.

This is nothing like a computational view where the state of a system is entirely composed of atomistic bits. And some macro-state emerges in causally disconnected fashion, the "software state" that the hardware unwittingly represents.

Instead, the whole process is alive. Its atoms are the repetitive motif of an organisation that reappears at every scale of observation. The motif of adaptation/anticipation. The process of modelling the world and maintaining a good fit between reality and our impressions of it.

The mistake when it comes to dynamism is to seek the fundamental grain of explanation in the smallest scale - the atoms of substance from which larger wholes are constructed.

The fundamental level of explanation for a dynamical system is instead going to be the identification of the ever-present motif, pattern, organising form, self-ordering process. Which is why I keep stressing that the way to generalise consciousness is to understand it in terms of a process of adaptation and anticipation.

I read the holographic mind (or at least the beginning of it). The part about Pribram was so interesting, but then it veered off into nonsense.

I think the question about "qualia" is expressed poorly. What I think people are trying to get at is: what are the specific properties of a dynamic process that lead to the emergence of subjective experience? I get the impression from Dennett that he doesn't seem to think there is such a thing as subjective experience. Which, um, there is. It's kind of the a priori basis of everything else. So if a sophisticated enough android could have similar dynamic processing, would that be enough to induce that same cogito ergo sum experience, or is there some necessary physical component? This is very difficult to explain. Is the emergence from the information represented, or from some unknown physical properties? I think the notion of "qualia" is an attempt to label that subjectivity.
 
  • #29
riezer said:
No. I simply want to know whether our higher ideals like love, empathy, poetry, etc. are all just a result of neurotransmitters and purely physical, or whether they are due to something extra. So you are a believer in the former. No problem with that. I also tend to believe in the former, because neuroscience has shown so much - for example, how brain damage can disable certain character traits or aspects of personality.

Also, I want to understand it so I can know what path to take in the years and decades ahead. Whether to live a purely materialistic life of survival of the fittest, with no conscience about killing if necessary, or to take up a life of spirituality where there is something more to life than all this evolution and biology and a short 90-year life in the billions of years of the Earth's lifespan.

This is a classic false dichotomy. Morality can be extrapolated from the value of things themselves, or from principles of self-interest. An ultimate arbiter is not necessary.
 
  • #30
Galteeth said:
This is a classic false dichotomy. Morality can be extrapolated from the value of things themselves, or from principles of self-interest. An ultimate arbiter is not necessary.

In school, there was a philosophy class where the professor mentioned the name of a philosopher who said something like: morality and being good was really all about self-preservation, because if everyone is bad, you can be struck at any time. So all agree to be good so that each can be preserved, while each has time to slowly plot out how to conquer. I forget the name of this philosopher.. is it Kant or Hume or someone else?

Then I wrote him a 10-page paper asking: what if good is intrinsic to the universe? I told him that in the universe there is order, down to the orbits of planets and atoms. What if there is a higher purpose upon which the universe is built, and which brought order out of the Big Bang against all odds? Then being good would be a law of nature and not a moral agreement for self-preservation and subtle conquest.
 
  • #31
Galteeth said:
I read the holographic mind (or at least the beginning of it). The part about Pribram was so interesting, but then it veered off into nonsense.

I think the question about "qualia" is expressed poorly. What I think people are trying to get at is: what are the specific properties of a dynamic process that lead to the emergence of subjective experience? I get the impression from Dennett that he doesn't seem to think there is such a thing as subjective experience. Which, um, there is. It's kind of the a priori basis of everything else. So if a sophisticated enough android could have similar dynamic processing, would that be enough to induce that same cogito ergo sum experience, or is there some necessary physical component? This is very difficult to explain. Is the emergence from the information represented, or from some unknown physical properties? I think the notion of "qualia" is an attempt to label that subjectivity.

I'm rereading Antonio Damasio's book "Self Comes to Mind" now, in order to rethink what you guys are saying: that the word qualia is vague, and, as Apeiron says, that it automatically entails dualism. So maybe we should replace the word qualia with something more specific. Damasio writes:

"The mere presence of organized images flowing in mental stream produces a mind, but unless some supplementary process is added on, the mind remains unconscious. What is missing from that unconscious mind is a self. What the brain needs in order to become conscious is to acquire a new property - subjectivity - and a defining trait of subjectivity is the feeling that prevades the images we experienced subjectively".
...
"Viewing the mind as a nonphysical phenomenon, discontinuous with the biology that creates and sustains it, is responsible for placing the mind outside the laws of physics, a discrimination to which other brain phenomena are not usually subject."
...
"Nonetheless, the possibility of explaining mind and consciousness parsimoniously, within the confines of neurobiology as currently conceived, remains open; it should not be abandoned unless the technical and theoretical resources of neurobiology are exhausted, an unlikely prospect at the moment"


So must we continue to use "qualia" or replace it with "subjectivity"? Or does subjectivity automatically imply dualism too? If so, what would be the right word(s) to use to describe the above description from Damasio?
 
  • #32
riezer said:
So must we continue to use "qualia" or replace it with "subjectivity"? Or does subjectivity automatically imply dualism too? If so, what would be the right word(s) to use to describe the above description from Damasio?

Subjectivity carries less ontological baggage than qualia. Neither must automatically entail dualism, but qualia usually ends up leaving its users with no other choice.

If you atomise experience to find its neural source, you are subtracting away all that is in fact producing it. So it is no surprise that you discover "a hard problem" and proclaim the only escape is a belief in dualism.

Subjectivity is at least a term that allows that the mind might have a more complex source than atomistic explanations suggest.

As to Damasio, he is not arguing for dualism. He is a neural realist.

Awareness has at least three levels that can be usefully described in terms of mechanism.

You have habit and attentive level processing, which can be explained by reference to neurology.

Then there is self-awareness and higher order thought which are language-scaffolded, socially-constructed. You will see that Dennett and Zeki, for instance, make this distinction - not that clearly, in their cases, but at least they do.

Damasio treats preconscious habitual and automatic brain activity as "unconscious", which is rather too strong. And he doesn't get the socially-constructed aspect of "the self".

So he has a simple two level analysis - the brain supporting an automatic flow of action and then some (unexplained) experiencing self neural mechanism. It is a kind of neural dualism.

A more sophisticated model would see the neural story as being about the interaction between habits and attention - the combination producing "conscious experiencing".

Then the social self comes in over the top of that to "experience the experiencing" - to take attentive note of what is going on in the interests of social self-regulation. It is thus a learned skill rather than a genetic capacity.

The point being that "subjectivity" is intrinsically complex. It has both biological and sociological causes.

If you try to break subjective experience up into atomistic shards of experiencing by going in search of the neural correlates of qualia, you are just never going to arrive at a scientific view of the mind.
 
  • #33
apeiron said:
Subjectivity carries less ontological baggage than qualia. Neither must automatically entail dualism, but qualia usually ends up leaving its users with no other choice.

If you atomise experience to find its neural source, you are subtracting away all that is in fact producing it. So it is no surprise that you discover "a hard problem" and proclaim the only escape is a belief in dualism.

Subjectivity is at least a term that allows that the mind might have a more complex source than atomistic explanations suggest.

As to Damasio, he is not arguing for dualism. He is a neural realist.

Awareness has at least three levels that can be usefully described in terms of mechanism.

You have habit and attentive level processing, which can be explained by reference to neurology.

Then there is self-awareness and higher order thought which are language-scaffolded, socially-constructed. You will see that Dennett and Zeki, for instance, make this distinction - not that clearly, in their cases, but at least they do.

Damasio treats preconscious habitual and automatic brain activity as "unconscious", which is rather too strong. And he doesn't get the socially-constructed aspect of "the self".

I remember reading in Damasio's earlier book "The Feeling of What Happens: Body and Emotion in the Making of Consciousness" that he didn't agree, and he gave his reasons:

"On several occasions when I was in medical scholl and in neurology training, I remember asking some of the wisest people around me how we produced the conscious mind. Curiously, I always got the same answer: language did it"

Damasio didn't agree, and the details are on page 107 of that book. Quoting briefly:

"The best evidence, in this regard, comes from patients with what is known as global aphasia. This is a major breakdown of all language faculties. Patients are unable to comprehend language whether auditory or visually. In other words, they understand no speech when spoken to and they cannot read a single word or letter; they have no ability to produce speech beyond stereotypical words, largely curse words; they cannot even repeat a word or sound if you ask them to. There is no evidence that is, in their awake and attentive minds, any words or sentences are being formed. One the contrary, there is much to suggest that theirs is a wordless thought process.

Yet, while it is out of the question to maintain a normal conversation with a global aphasic, it is possible to communicate, richly and humanly, if only you have the patience to accommodate to the limited and improvised vocabulary of nonlinguistic signs the patient may develop. As you familiarize yourself with the tools at the patient's disposal, it will never even cross your mind to ask if that human being is or is not conscious. In terms of core consciousness, that human being is no different from you and me, despite the inability to translate thought into language and vice versa"

There are many more neurological details in that chapter showing that language is not required for self-awareness. I hope you have the book or have read it. Comments?



So he has a simple two level analysis - the brain supporting an automatic flow of action and then some (unexplained) experiencing self neural mechanism. It is a kind of neural dualism.

A more sophisticated model would see the neural story as being about the interaction between habits and attention - the combination producing "conscious experiencing".

Then the social self comes in over the top of that to "experience the experiencing" - to take attentive note of what is going on in the interests of social self-regulation. It is thus a learned skill rather than a genetic capacity.

The point being that "subjectivity" is intrinsically complex. It has both biological and sociological causes.

If you try to break subjective experience up into atomistic shards of experiencing by going in search of the neural correlates of qualia, you are just never going to arrive at a scientific view of the mind.
 
  • #34
riezer said:
Yet, while it is out of the question to maintain a normal conversation with a global aphasic, it is possible to communicate, richly and humanly, if only you have the patience to accommodate to the limited and improvised vocabulary of nonlinguistic signs the patient may develop. As you familiarize yourself with the tools at the patient's disposal, it will never even cross your mind to ask if that human being is or is not conscious. In terms of core consciousness, that human being is no different from you and me, despite the inability to translate thought into language and vice versa"

Speech production is hierarchical, so losing the parts of the brain that handle the later motor areas responsible for final articulation still leaves the earlier parts intact. So aphasia is not evidence against the argument that higher order thought is language-structured.

I haven't read this latest book from Damasio. But there is no single piece of evidence that proves the thought vs language debate either way. You have to consider the full range of evidence if you want to arrive at a conclusion. I'd be surprised if Damasio was being so naive here. What other arguments does he present?
 
  • #35
apeiron said:
Speech production is hierarchical, so losing the parts of the brain that handle the later motor areas responsible for final articulation still leaves the earlier parts intact. So aphasia is not evidence against the argument that higher order thought is language-structured.

I haven't read this latest book from Damasio. But there is no single piece of evidence that proves the thought vs language debate either way. You have to consider the full range of evidence if you want to arrive at a conclusion. I'd be surprised if Damasio was being so naive here. What other arguments does he present?

Damasio describes actual patients who had the entire left hemisphere surgically removed and still have self-awareness. Damasio concludes:

"Langauge hardly needs consciousness as one more among important abilities that humans should thank it for."

That book "Feelings of What Happens" was written in 1999. In his new book "Self Comes to Mind". He mentions about children who suffer from hydranencephaly who still has sentience as in:

"For a variety of reasons, children can be born with intact brain-stem structures but largely absent telencephalic structures, namely, the cerebral cortex, the thalamus, and the basal ganglia"

and he describes the children as:

"These children, however, are anything but vegetative. On the contrary, they are awake and behaving".

Damasio concludes:

"The condition gives the lie to the claim that sentience, feelings, and emotions arise only out of the cerebral cortex"

So I guess the spinal cord and brain stem are enough to give rise to subjectivity. Children with hydranencephaly just have a brain stem more complex than a reptile's. Anyway, if Damasio is right, the brain stem would be the seat of core subjectivity(?). There are many nuclei in the brain stem that could act like complete nano-processing units or supercomputers in themselves.
 
