The Flaw in the Definition of Consciousness

In summary, Chalmers' definition of consciousness is flawed because it presupposes the existence of a central, indivisible self. A new definition, which accounts for everything the previous definition accounted for but relies on fewer assumptions, is accepted instead. It is not, however, a requirement that reductive explanations of consciousness be possible. My new definition of consciousness is "the state of advanced computational ability that allows for innovation and the illusion of a central perspective".
  • #36
Originally posted by Fliption
Talk about infinite regress, sheesh. This is like saying a man put himself together and when I ask how he put his arms on, you say "why, of course he picked them up with his arms and stuck them on".

Mentat, I've read everything you've written, but I just don't see how calling something an illusion eliminates the need to explain it. As an example, a mirage can still be drawn, piece by piece, by the person experiencing it, even though it doesn't actually exist. How can you do the same for "feeling"?

No, no, the feeling does exist. I'm not postulating that subjective experiences are illusions, merely that the concept of one complete experience is an illusion. Instead, I'm proposing that experience is an ongoing process, and that it is itself simply the computation of new stimuli by the brain; but, instead of trying to explain how we come to experience something complete (like the color red), I think we should be explaining how our brains process the incoming information, relate it to those already-stimulated arrays, and then process the illusion that we experienced an entire picture (or sound or smell) instead of billions of discrete units of information.
 
  • #37
Originally posted by hypnagogue
I find this a little confusing. Perhaps you can try to clarify some more?

I'll try...

It appears to me that most philosophers of the mind are trying to explain how a complete experience can be had by a person (these complete experiences are what they believe constitute consciousness). They are not trying to understand how the brain processes incoming data, or which parts of the brain are useful for (for example) visual stimuli, but they want to know how all of that comes together to form conscious experience (please correct me if I'm totally wrong about their expectations).

It is my opinion that, since there really is no "Final Draft" of the experience - there are just those discrete units of information being processed, which those philosophers are not really interested in - the question of how all that information "becomes" a complete conscious experience is completely moot. Instead, they should be studying those individual processes to see how it is that, in retrospect, our brains look back at all of that information and see, not discrete units of information, but a complete "picture". Of course, the quick answer (in terms of the paradigm I'm currently pursuing) is "it's a computing mechanism that is merely for convenience" (convenience in storage, recall, and communication to others).

It's like trying to study why I see a complete "picture" of the library, in spite of the constant shifting (saccades) of my eyes. I really do process that constant shifting, but my brain "compacts" it all into a concise "image".

I've got to go now...I hope I didn't seem too rushed, but the library's about to close.
 
  • #38
It appears you're talking about what would be called an 'easy' problem and not the hard problem. You're talking about how it is that the brain treats diverse packets of information as coherent wholes. This can be treated entirely as an objective issue of styles of information processing, and so it has no fundamental link to the question of how it is that feeling exists, even if it is tangentially related to some aspects of how humans feel/subjectively experience under normal conditions.

The hard problem is not fundamentally about how subjective experience appears to be holistic or gestalt; the hard problem is fundamentally about how subjective experience comes to exist in the first place (regardless of whether it is holistic or disjointed).
 
  • #39
Originally posted by hypnagogue
The hard problem is not fundamentally about how subjective experience appears to be holistic or gestalt; the hard problem is fundamentally about how subjective experience comes to exist in the first place (regardless of whether it is holistic or disjointed).

If we can't fundamentally and systematically reason our way, through the various methods of cognitive science and neuroscience, to objectivity about conscious experience, physical or mental, how are we ever going to yield an explanation for the "extra ingredient" posed by Chalmers, or functionally assess the given maxim of provability? I just don't see how quantum phenomena are going to give us the empirical objectivity we need regarding conscious experience. Although Chalmers has the right approach to the hard problem and has the easy problems mapped out systematically, how is he or anyone else going to arrive at the basis of reasoning he needs for that explanatory bridge?
 
  • #40
Originally posted by hypnagogue
It appears you're talking about what would be called an 'easy' problem and not the hard problem. You're talking about how it is that the brain treats diverse packets of information as coherent wholes. This can be treated entirely as an objective issue of styles of information processing, and so it has no fundamental link to the question of how it is that feeling exists, even if it is tangentially related to some aspects of how humans feel/subjectively experience under normal conditions.

The hard problem is not fundamentally about how subjective experience appears to be holistic or gestalt; the hard problem is fundamentally about how subjective experience comes to exist in the first place (regardless of whether it is holistic or disjointed).

It appears to me (correct me if I'm wrong) that you are using the term "subjective experience" to mean something completely separate from all of the things that are normally used to describe it...

What I mean is, I can reductively explain a particular experience, and how the brain processes the lump of information as a gestalt (in retrospect), and you can accept all of this, but you still ask that I explain "subjective experience"...what else is there to explain?

Remember my first post in this thread? In it, I mentioned Hume, who talked about how, if you remove all of a person's innate characteristics ("nature") and their learned ones ("nurture"), you will not have a naked self remaining; you will have nothing at all, since "self" cannot be reasonably defined as anything but a collection of those aforementioned things. Well, it seems to be the same with experiences: if you take away the computation of new and old stimuli, the use of recall, and the apparent (though illusory) synergy of these bits of information into a whole experience, you don't have a "naked experience" left; you don't have anything left, since "experience" can't (so far as I can tell) be coherently defined as anything besides those aforementioned things.
 
  • #41
Jeebus, you mentioned the "extra ingredient" that Chalmers requires for an explanation of consciousness. That's what's symbolized by the "tip" of the "pyramid" in my illustration (third post down on the first page). The problem is, I think, that the "tip" doesn't really exist. It's just a bunch of the same blocks in the same kind of configuration as the blocks below them...they're just "higher". This is why I made threads like "Faulty expectations of a theory of consciousness" and "Vitalist nonsense versus Science": because people are expecting something "extra" out of a theory of consciousness that they don't expect out of any other scientific phenomenon. This is exactly what the vitalists did with regard to "life". They were convinced that you could explain every minute function of a living being and still not discover how those processes "become 'life'". Their problem, of course, was that they thought there was "something more" to life than those physical functions, when it turns out that the physical functions are all you need to explain life.

Now, I know that it has been said that the two cases are not analogous, but I say they are. It's not like a vitalist ever said "It may be possible to explain life, if you could know all of the physical functions, but you'll never know all of the physical functions". No, they said just what the Chalmerean (there's an interesting new word) philosophers are saying about consciousness now (just substitute "conscious" for "living", and "consciousness" for "life"): "You can explain all of the functions that take place in a particular living being, but that still doesn't bring you any closer to explaining how life can arise from all of those processes...(and this is my favorite part) I can clearly imagine all of those processes occurring in a particular being, without the presence of life in that being (conversely "...without it being alive" which is also replaceable with "conscious")".
 
  • #42
Mentat, you need to stop parading that analogy, because it just doesn't work.

With life, the thing that needs explaining is purely a set of objectively observable functions (reproduction, growth, locomotion, etc.). The non-physical vital spirit is an explanatory posit to try to explain how it is that the functions of life work, not something that needs to be explained in its own right. Once it is shown that physics can completely account for the functions of life, there is no longer a compelling reason to believe in the vital spirit; there is no fundamental question of the form "Why is it that reproduction, growth, locomotion, etc. are associated with life?"

With consciousness, the thing that needs explaining is not objective at all, but instead is subjective experience. (Again, that is not to beg the question, but rather to assert that any explanation, even one grounded in objective theory, must ultimately arrive at subjective experience if it is to be successful.) Subjective experience is NOT an explanatory posit like the vital spirit; rather, it is the central thing in need of explanation. No matter how much we address the objective processes of the objective brain, we are still always faced with the same question that must ultimately be addressed: "why is it that brain activity X is associated with subjective experience?"

It should not be surprising that investigations into consciousness are unique among all scientific inquiries. All other phenomena (including life) are by definition objective in nature, with only objective properties in need of explanation, and so may all be treated in the same general way by science. Investigation of consciousness is a unique undertaking in all of scientific inquiry, precisely because such investigation involves explanation of subjective experience, which does not reveal itself in objective observation.
 
  • #43
Originally posted by hypnagogue
Mentat, you need to stop parading that analogy, because it just doesn't work.

With life, the thing that needs explaining is purely a set of objectively observable functions (reproduction, growth, locomotion, etc.). The non-physical vital spirit is an explanatory posit to try to explain how it is that the functions of life work, not something that needs to be explained in its own right.

Look, if it's not vitalism, then at least look at my example in its own right. Forget what the vitalists were after; that's irrelevant. I'm talking about the people (and I've actually met a few...they still exist) who think that all of these explanations of the physical processes involved in a living being are falling short of explaining "life" itself, since they can (and their arguments do indeed sound much like what I'm saying here - even if I do embellish from time to time for the purpose of making the similarity with your argument absolutely clear) "imagine all those processes occurring, and yet the thing not being alive". Much like someone could imagine the proper configuration of particles, without them being "liquid". And like someone else could imagine the curvature of spacetime, without there being a perceived gravitational attraction. And like still others could imagine the computation, memorization, and recall of information about stimuli gathered from any/all of the 5 senses, along with a "trick" (or "helpful tool") for compactification that gives the illusion of a complete, indivisible experience (while such a complete, final draft never really existed); all this and yet there be no consciousness. I ask again, what is missing?

With consciousness, the thing that needs explaining is not objective at all, but instead is subjective experience. (Again, that is not to beg the question, but rather to assert that any explanation, even one grounded in objective theory, must ultimately arrive at subjective experience if it is to be successful.)

No, if I (along with those references which I've mentioned before) am correct, then you need only arrive at a theory of how the illusion of a complete experience gets processed along with the rest of the information. There is nothing else to add to this. What Chalmerean philosophers are doing (AFAICT) is positing first and foremost the existence of a set of complete, indivisible, experiences, which (they say) must then be reductively explained. This, however, may be a straw-man argument, since it is not so obvious that we actually have complete experiences (instead of merely processing the illusion of such a thing along with the rest of the on-going computation in the brain), since we, as the subjective "experiencer", could not possibly tell the difference.

It's like a joke that used to be the quote of one of the members here; something to do with a philosopher asking a student why people used to think the sun moved, while the Earth remained motionless. The student said, because that's how it appears...it is the most obvious conclusion, since that is what it would look like if the sun really did move. The philosopher then said, "Oh, so what would it have looked like if the Earth were revolving around the Sun?".

Chalmers is, IMHO, trying to refute all possible explanations of what keeps the Sun moving around the Earth.
 
  • #44
Originally posted by Mentat
What I mean is, I can reductively explain a particular experience

Really? You can show how the firing of neurons logically entails redness? You can discover processes that appear to be necessary and/or sufficient for redness, but can you really explain how they bring about redness?
 
  • #45
Originally posted by Mentat
Forget what the vitalists were after; that's irrelevant. I'm talking about the people (and I've actually met a few...they still exist) who think that all of these explanations of the physical processes involved in a living being are falling short of explaining "life" itself, since they can (and their arguments do indeed sound much like what I'm saying here - even if I do embellish from time to time for the purpose of making the similarity with your argument absolutely clear) "imagine all those processes occurring, and yet the thing not being alive".

"Alive" in the sense of the vital spirit is a notoriously shaky concept. The vital spirit cannot be observed at all, so how can we begin to talk about it?

Subjective experience can very plainly be observed (from the 1st person view), so it immediately has credibility and calls for a legitimate explanation. Unlike the vital spirit, it cannot be written off or ignored.

Much like someone could imagine the proper configuration of particles, without them being "liquid". And like someone else could imagine the curvature of spacetime, without there being a perceived gravitational attraction.

That's a strawman. "Cannot be imagined otherwise" is just another way of saying "logically entailed." From the definitions of H2O and spacetime, given materialistic assumptions, those phenomena are logically entailed by their prospective causes. It remains to be shown how the prospective cause of brain functioning can logically entail subjective experience even in principle using only materialistic assumptions.

And like still others could imagine the computation, memorization, and recall of information about stimuli gathered from any/all of the 5 senses, along with a "trick" (or "helpful tool") for compactification that gives the illusion of a complete, indivisible experience (while such a complete, final draft never really existed); all this and yet there be no consciousness. I ask again, what is missing?

What is missing is experience! You claim to know how the illusion of indivisible experience is formed, but you avoid the question of how any experience at all can be created by a bundle of neurons.

There is nothing else to add to this. What Chalmerean philosophers are doing (AFAICT) is positing first and foremost the existence of a set of complete, indivisible, experiences, which (they say) must then be reductively explained. This, however, may be a straw-man argument, since it is not so obvious that we actually have complete experiences (instead of merely processing the illusion of such a thing along with the rest of the on-going computation in the brain), since we, as the subjective "experiencer", could not possibly tell the difference.

What is clear is that we have experience, regardless of how we wish to classify it as divisible or indivisible. What is not clear at all is how physics can entail experience of any kind.

Chalmers is, IMHO, trying to refute all possible explanations of what keeps the Sun moving around the Earth.

What Chalmers is doing is trying to steer us towards a sound theory of consciousness. Ignoring the hard problem is not a satisfactory approach, however much more it might make consciousness amenable to scientific study. If we ever want a complete theory of consciousness we will need to face up to and surmount the hard problem at some point, because it cannot be written off like so many vital spirits as you suggest.
 
  • #46
Originally posted hypnagogue
What Chalmers is doing is trying to steer us towards a sound theory of consciousness. Ignoring the hard problem is not a satisfactory approach, however much more it might make consciousness amenable to scientific study. If we ever want a complete theory of consciousness we will need to face up to and surmount the hard problem at some point, because it cannot be written off like so many vital spirits as you suggest.

I was thinking the other day while reading Facing Up to the Problem of Consciousness by Chalmers, and he said:

"We are already in a position to understand certain key facts about the relationship between physical processes and experience, and about the regularities that connect them. Once reductive explanation is set aside, we can lay those facts on the table so that they can play their proper role as the initial pieces in a nonreductive theory of consciousness, and as constraints on the basic laws that constitute an ultimate theory...

And thinking about this sparked an idea - something admittedly far-fetched, but perhaps possible. As he defined the 'easy problems' of consciousness, what if those are the main factors of the 'hard problem' … what if there aren't any more factors and algorithms that go into the given equation? What if that problem is already set and categorized, and the answer is already there - that is, the experience? What if those factors of the easy problems 'make up' and contain the information needed for subjective experience?

I dunno.
 
  • #47
Jeebus, Chalmers realizes that solving the 'easy' problems will be instrumental and indispensable in any attempt to solve the 'hard' problem, but there are principled reasons (which Chalmers discusses in "Consciousness and Its Place in Nature") to believe that just solving the easy problems will not be enough.
 
  • #48
I don't see what the difficulty is...Mentat is right, you guys need to just get in line!

If I'm not mistaken, M, the point you are getting at is that it is possible that the process is the experience, and that there is nothing else that needs to be explained?
 
  • #49
Originally posted by hypnagogue
Jeebus, Chalmers realizes that solving the 'easy' problems will be instrumental and indispensable in any attempt to solve the 'hard' problem, but there are principled reasons (which Chalmers discusses in "Consciousness and Its Place in Nature") to believe that just solving the easy problems will not be enough.

Thanks for the info.

I am reading it now, and I came upon a paragraph that piqued my interest and made me want to learn more.

Chalmers said: "What makes the easy problems easy? For these problems, the task is to explain certain behavioral or cognitive functions: that is, to explain how some causal role is played in the cognitive system, ultimately in the production of behavior. To explain the performance of such a function, one need only specify a mechanism that plays the relevant role. And there is good reason to believe that neural or computational mechanisms can play those roles.

My question is … doesn't behavior, in a broad sense, of the neurophysical system of materialistic functions approach -- directly compatible or parallel to cognitive experience on the physical level without the reductive explanation?

I know that, further down, Chalmers states:

(1) Mary knows all the physical facts.

(2) Mary does not know all the facts.

This isn't likely. Physical facts depict normal facts. If something is not physical, it is not factual to the human brain. If it can't be sensed or even verified, then the fact is, there is no fact in question. If there is no empirical evidence for a zombie, there is no fact for me to believe that it ever existed ab ovo.

This leads to my question of why Chalmers says 'materialism is false' without any empirical evidence. There were no facts given for the knowledge argument to follow, only subjective choplogic. Where did he come up with this conclusion? He then explains the epistemic gap, but that doesn't give me reasonable grounds for why something should count as a fact without evidence for that fact.

Wish to clarify for me?
 
  • #50
Originally posted by Jeebus
My question is … doesn't behavior, in a broad sense, of the neurophysical system of materialistic functions approach -- directly compatible or parallel to cognitive experience on the physical level without the reductive explanation?

Can you rephrase this? I'm not sure from your wording exactly what you are getting at.

This isn't likely. Physical facts depict normal facts. If something is not physical, it is not factual to the human brain.

"Physical" just refers to properties that are detectable in the objective, 3rd person sense. If you define all facts as physical facts, you are begging the question by assuming that materialism coherently accounts for all existing phenomena. There could well be some property that is not what we would properly call physical but which is characteristic of human brains nonetheless.

If it can't be sensed or even verified, then the fact is, there is no fact in question. If there is no empirical evidence for a zombie, there is no fact for me to believe that it ever existed ab ovo.

There is also no objective empirical evidence for consciousness, yet I doubt you would claim that consciousness does not exist.

And 'zombies' are a philosophical tool used to clarify the problems of consciousness, not actual entities that are presumed to exist.

This leads to my question why Chalmers says 'materialism is false' without any empirical evidence.

If you mean empirical evidence in the sense of objective information, then by definition such evidence would always be consistent with materialism, so you have no grounds for ever expecting such evidence to support the idea that materialism might be false. If you allow empirical evidence to include your own subjective experience, then you have very compelling evidence against materialism, for all the familiar reasons I've been explaining.

There were no facts given for the knowledge argument to follow, only subjective choplogic.

The reasoning is simple. Forget Mary. For further clarity, let's go back to the non-conscious computer/demon D which draws conclusions from objective facts using the axioms of materialism. D can have complete information about a human brain, but D would never have reason to suspect that that human brain possesses anything like subjective experience. This is because consciousness cannot be logically entailed using only materialistic assumptions. (Re-read the 'faulty expectations' and 'liquid' threads if you doubt this.) It follows that D knows all the physical (objective) facts, but D does not know all the facts; in particular, D does not know anything about subjective experience in spite of its complete knowledge of the human brain.
 
  • #51
Originally posted by hypnagogue
Really? You can show how the firing of neurons logically entails redness? You can discover processes that appear to be necessary and/or sufficient for redness, but can you really explain how they bring about redness?

That's a non sequitur. If the process is necessary and sufficient for redness, then what does it mean to explain how the process "brings about" redness? The process is the experience of redness; that's why we call it "sufficient". You might as well, on this line of questioning, ask "You can discover the wavelength that is classified as 'red', but can you really explain how that wavelength brings about its own 'redness'?" Or, more to the point of the "liquid liquid" example, "You can discover the arrangements of particles that are necessary and/or sufficient for the substance to be a liquid, but can you really explain how that arrangement brings about its 'liquidity'?".
 
  • #52
Originally posted by hypnagogue
"Alive" in the sense of the vital spirit is a notoriously shaky concept. The vital spirit cannot be observed at all, so how can we begin to talk about it?

Define "observe". If "observe" entails any kind of perception, then I can indeed perceive the vital spirit, because I can perceive that I am alive.

Besides, I wanted to drop the whole "vital spirit" part of that, and get to the more important matter: The vitalists only needed the vital spirit to explain something that didn't really exist in the first place. As it turns out, there is nothing special about "life". Indeed, "life" is an illusion, since there are no clear-cut definitions of what is and is not "alive" (as I have shown on many older threads). We have settled for the scientific approach, and dropped the philosophical notion that life is a product of cellular functions. Life is not a product of cellular functions, but is simply a word used to encompass all of those many functions, for convenience in communication. Nothing more.

It is my opinion (currently) that Chalmers has erected the same brand of straw-man by first postulating that there is such a thing as a Final Draft of "the actual (complete; indivisible) experience", and then trying to figure out how neuronal functions "give rise" to this thing that doesn't really exist in the first place. That's what a "straw-man" is, isn't it?

Subjective experience can very plainly be observed (from the 1st person view), so it immediately has credibility and calls for a legitimate explanation. Unlike the vital spirit, it cannot be written off or ignored.

Subjective experience can be plainly observed? How plainly, exactly? I never notice the constant saccades of my eyes or the separateness of the functions of my visual cortex (each function taking place on its own, and never "meeting up" with the others). No, subjective experience is, indeed, observed in the 1st person, but it is a compactification of information that did not get processed at the same time, and did not arrive at some final destination. This compactification may be computed (in the brain) as "reality", but it clearly cannot be.

That's a strawman. "Cannot be imagined otherwise" is just another way of saying "logically entailed." From the definitions of H2O and spacetime, given materialistic assumptions, those phenomena are logically entailed by their prospective causes. It remains to be shown how the prospective cause of brain functioning can logically entail subjective experience even in principle using only materialistic assumptions.

From the definitions of H2O and spacetime, you are right, they are indeed the logical outcome of their underlying processes. But, have you ever read Consciousness Explained, by Dan Dennett? From the evolutionary innovations on the proto-human brain, it is a logical necessity that there be a brain that plays this constant trick on itself.

What is missing is experience!

No. What is missing is a complete experience. Sub-experience is all over the place, but that one thing appears to be missing. The reason, as I've stated before, that this thing is "missing" is because it doesn't really exist. You are looking for the "end-product" of an on-going process...that's not logically consistent.

You claim to know how the illusion of indivisible experience is formed, but you avoid the question of how any experience at all can be created by a bundle of neurons.

Any experience at all? You have, I'm sure, understood the ways I've explained the computation, memorization, and recall of the neocortex. From this, you have a workable framework for the processes by which the brain processes the world around it. With all of this information being processed, but never meeting up at any place in the brain (or anywhere else, for you Dualists :wink:), the question isn't "How do they ever sum up to experience?", it's "Do they ever sum up to experience?", and, "If not, what is the evolutionary reason for having a brain that convinces itself that they do?". These things are answered in the books I've mentioned before.

What is clear is that we have experience, regardless of how we wish to classify it as divisible or indivisible. What is not clear at all is how physics can entail experience of any kind.

No, no, no, if the experience is "divisible", then it is not a coherent picture of anything, but merely a set of "sub-experiences", which are the individual computations of different kinds of information, occurring in different parts of the brain (you couldn't expect "texture" to be processed right along with "color" or "shape", could you?), and you have no final product to explain/reduce. Chalmers is indeed asking for an explanation of that final, indivisible "product" which I'm saying doesn't exist.

Ignoring the hard problem is not a satisfactory approach, however much more it might make consciousness amenable to scientific study.

Ignoring a problem is not - you're right - a satisfactory approach at all. But Dennett is not ignoring the "hard problem". He's examining it directly, and showing it to be a straw-man, with no substance at all (aside from those things which Chalmers refers to as the "easy problems").
 
  • #53
Originally posted by Zero
I don't see what the difficulty is...Mentat is right, you guys need to just get in line!

Nice to know I have a fan :smile:.

If I'm not mistaken, M, the point you are getting at is that it is possible that the process is the experience, and that there is nothing else that needs to be explained?

Very close. The process is a set of "sub-experiences", or minor computations of different aspects of a stimulus. However, our brain has this little habit (extremely useful one, since we wouldn't be sentient without it) of looking back on previous sets of information (processed at different times, in different parts of the brain) as though they once formed a complete, indivisible, "experience" - even though they never really did.

So, you're pretty much right-on, Zero; the process is what we call the "experience", but they are asking for an explanation of an extra part of this process that doesn't really exist (IMHO).
 
  • #54
Originally posted by Mentat
Or, more to the point of the "liquid liquid" example, "You can discover the arrangements of particles that are necessary and/or sufficient for the substance to be a liquid, but can you really explain how that arrangement brings about its 'liquidity'?".

Yes, you can. If I give you a certain set of general conditions C that are necessary and sufficient for a set of H2O molecules to be in a macroscopic liquid state, all I have done is given you necessary and sufficient conditions. Using this information, you can always determine whether or not a set of H2O molecules will be in a macroscopic liquid state based on a microscopic description, but you will not necessarily understand the underlying concepts of how the microscopic arrangement logically entails (accounts for) the macroscopic fluidity.

To make the macroscopic intelligible in terms of the microscopic, you need bridge principles connecting the two. You need to assert that water is composed of H2O molecules and then explain eg how electrostatic attractions between H2O molecules under conditions C allow them to 'roll over' each other without totally escaping each other, which allows for macroscopic properties such as taking the shape of the container. This is an explanatory step above and beyond simply stating necessary and sufficient conditions.
 
  • #55
Originally posted by hypnagogue
Yes, you can. If I give you a certain set of general conditions C that are necessary and sufficient for a set of H2O molecules to be in a macroscopic liquid state, all I have done is given you necessary and sufficient conditions. Using this information, you can always determine whether or not a set of H2O molecules will be in a macroscopic liquid state based on a microscopic description, but you will not necessarily understand the underlying concepts of how the microscopic arrangement logically entails (accounts for) the macroscopic fluidity.

To make the macroscopic intelligible in terms of the microscopic, you need bridge principles connecting the two. You need to assert that water is composed of H2O molecules and then explain eg how electrostatic attractions between H2O molecules under conditions C allow them to 'roll over' each other without totally escaping each other, which allows for macroscopic properties such as taking the shape of the container. This is an explanatory step above and beyond simply stating necessary and sufficient conditions.

But all you stated was more necessary conditions. How does the ability to take on the shape of the container it is in bring about liquidity? Of course, it doesn't; that's just part of the definition of "liquid" itself. However, the only reason I can say that with impunity is that nobody has postulated that there is anything else to it. Nobody has attributed any reality to the illusion that (for example) a liquid is a coherent "blob" of material, instead of a collection of very tiny particles whose own position is probabilistic in nature.
 
  • #56
Originally posted by Mentat
Nice to know I have a fan :smile:.



Very close. The process is a set of "sub-experiences", or minor computations of different aspects of a stimulus. However, our brain has this little habit (extremely useful one, since we wouldn't be sentient without it) of looking back on previous sets of information (processed at different times, in different parts of the brain) as though they once formed a complete, indivisible, "experience" - even though they never really did.

So, you're pretty much right-on, Zero; the process is what we call the "experience", but they are asking for an explanation of an extra part of this process that doesn't really exist (IMHO).
I'd say the "extra part" is either a flaw in reasoning or recollection. The reasoning flaw is in assuming the existence of something that is so far unproven, and unneeded to explain things. The other flaw is one of perception, in assuming that small bits cannot make up a bigger "whole"(although calling consciousness a "whole" is iffy at best). I'd describe it as similar to the way our brains interpret optical illusions, where we seek to fill in "gaps", even when there is no logical reason to do so.
 
  • #57
Originally posted by Mentat
Define "observe". If "observe" entails any kind of perception, then I can indeed perceive the vital spirit, because I can perceive that I am alive.

What does it mean to perceive that you are alive?

Besides, I wanted to drop the whole "vital spirit" part of that, and get to the more important matter: The vitalists only needed the vital spirit to explain something that didn't really exist in the first place.

Quite right. But subjective experience obviously exists. I am experiencing the color black right now as I look at my keyboard; my experience of blackness exists self-evidently, and no amount of semantic obfuscation can force me to deny this. My metaphysical ideas of what accounts for this blackness may or may not be false, but it is not false that my experience of blackness exists.

It is my opinion (currently) that Chalmers has erected the same brand of straw-man by first postulating that there is such a thing as a Final Draft of "the actual (complete; indivisible) experience", and then trying to figure out how neuronal functions "give rise" to this thing that doesn't really exist in the first place. That's what a "straw-man" is, isn't it?

Chalmers does not postulate that something like a complete, indivisible experience exists. He only makes plain the observation that subjective experience of some sort exists, and proceeds from there.

Subjective experience can be plainly observed? How plainly, exactly? I never notice the constant saccades of my eyes or the separateness of the functions of my visual cortex (each function taking place on its own, and never "meeting up" with the others). No, subjective experience is, indeed, observed in the 1st person, but it is a compactification of information that did not get processed at the same time, and did not arrive at some final destination. This compactification may be computed (in the brain) as "reality", but it clearly cannot be.

Ah, so saccades of the eyes are subjective experience as well? Come on, that's nonsense. The fact to be explained is not so much that you do not notice the saccades of your eyes as it is that you notice your eyes from a 1st person perspective to begin with.

Information does not get processed at the same time-- so what? The fact remains that I have subjective experience, and the fact remains that a good theory of consciousness should make it intelligible how that is so. By this criterion, the neural reductionist theory of consciousness, taken on its own without any further fundamental assumptions, is not a good one.

From the definitions of H2O and spacetime, you are right, they are indeed the logical outcome of their underlying processes. But, have you ever read Consciousness Explained, by Dan Dennett? From the evolutionary innovations on the proto-human brain, it is a logical necessity that there be a brain that plays this constant trick on itself.

Perhaps consciousness was necessary to help our ancestors survive, but this approach only begs the question. Evolution can only endow us with consciousness if consciousness is an ontological possibility in the first place. How is subjective experience ontologically possible? The reductionist approach makes it evident how cognitive functions are possible in the same sense that it makes evident how the functions of a pocket calculator are possible, but it so far has said nothing meaningful about subjective experience.

No. What is missing is a complete experience. Sub-experience is all over the place, but that one thing appears to be missing.

Explain what sub-experience is and how it is entailed by physics. If you define sub-experience as so many cognitive functions, however, you would be better served to simply call it sub-functions or functions. Experience implies feeling, and it is not clear how objective functions can account for feeling even in principle.

Any experience at all? You have, I'm sure, understood the ways I've explained the computation, memorization, and recall of the neocortex.

These are not experiences. These are functions. You can explain the functional workings of human memory, but in no richer sense than you can explain computer memory. The difference is that a human experiences memory while a computer (as we plausibly assume) does not. And precisely what you have not explained is eg the experience of memory.

From this, you have a workable framework for the processes by which the brain processes the world around it.

Agreed. But you do not have a workable framework for the processes by which the brain experiences the world around it.

With all of this information being processed, but never meeting up at any place in the brain (or anywhere else, for you Dualists :wink:), the question isn't "How do they ever sum up to experience?", it's "Do they ever sum up to experience?", and, "If not, what is the evolutionary reason for having a brain that convinces itself that they do?".

Simply saying "how does the brain convince itself that it is conscious?" begs the question. In order for the brain to convince itself of anything there must be a 1st person perspective for which the convincing is done. (This does not assume an indivisible self, only a certain perspective.) You have assumed the existence of the 1st person perspective when in reality the task is to show how it exists in the first place. You might as well try convincing a rock that it is conscious.

No, no, no, if the experience is "divisible", then it is not a coherent picture of anything, but merely a set of "sub-experiences", which are the individual computations of different kinds of information

Still haven't explained how computations can account for consciousness. For that you need an extra assertion such as "computation so and so is conscious in such and such a way as a simple fact of nature." That is an explanation, but not a reductive explanation.

Ignoring a problem is not - you're right - a satisfactory approach at all. But Dennett is not ignoring the "hard problem". He's examining it directly, and showing it to be a straw-man, with no substance at all (aside from those things which Chalmers refers to as the "easy problems").

Actually, Chalmers shows how Dennett's reasoning is flawed insofar as Dennett has a wonderful tendency to argue in circles.
 
  • #58
Originally posted by Mentat
But all you stated was more necessary conditions.

No, I explained how those conditions which we established at the start make it intelligible that the microscopic properties completely account for the macroscopic ones.

How does the ability to take on the shape of the container it is in bring about liquidity? Of course, it doesn't; that's just part of the definition of "liquid" itself.

Indeed.

However, the only reason I can say that with impunity is because nobody has postulated that there is anything else to it.

Because there is nothing else to it. Everything that calls out for an explanation has been explained.

Subjective experience is one such phenomenon that calls out for explanation. It is not postulated, but rather it self-evidently exists. And to this point it has not yet been explained.
 
  • #59
I think the point you are missing, hypnagogue, is that consciousness is computation, and that makes all of your speculation meaningless. Saying that the workings of the brain define subjective experience accounts for everything in a neat little bundle.
 
  • #62
I love your style Zero. I knew I shouldn't have bothered with your attitude, but I won't make that mistake again. *plonk*
 
  • #63
Originally posted by hypnagogue
I love your style Zero. I knew I shouldn't have bothered with your attitude, but I won't make that mistake again. *plonk*
Nice...you don't have a leg to stand on, and you blame ME?!?
 
  • #64
Regarding the indivisible self, I can tell you for a fact that the self IS divisible. We on Europa have a science that is far ahead of Earth's. Medical experiments were done many years ago not only to separate the two halves of a brain, but to transplant them into separate people. Yes, we did this to abducted humans, but Earth science will soon be able to perform the same experiment, and I am sure someone will, somewhere, as soon as nerve regeneration technology and a few small problems like rejection are solved. When half a brain is removed from someone and transplanted into another body from which the original brain has been removed, two distinct and unique individuals are created. Experiments have also shown that there is no psychic link between them, so the only conclusion that can be made is that the self is divisible. Earth science should be advanced enough to perform this experiment within the next 50 years or so.
 
  • #65
Originally posted by Jeebus
My question is … doesn't behavior, in a broad sense, of the neurophysical system of materialistic functions approach -- directly compatible or parallel to cognitive experience on the physical level without the reductive explanation?
Originally posted by hypnagogue
Can you rephrase this? I'm not sure from your wording exactly what you are getting at.

All right, let me try this again. Thanks for the info, by the way.

Do you think that behaviour, itself, is congruent to materialistic functions of physical compatibility? And do you think that materialistic functions, in turn, are directly compatible with the opinion of experience? I think the two, behavior and experience, are directly parallel to one another. Meaning, if you are experiencing one, then the other must follow.
 
  • #66
Originally posted by Zero
I'd say the "extra part" is either a flaw in reasoning or recollection. The reasoning flaw is in assuming the existence of something that is so far unproven, and unneeded to explain things. The other flaw is one of perception, in assuming that small bits cannot make up a bigger "whole"(although calling consciousness a "whole" is iffy at best). I'd describe it as similar to the way our brains interpret optical illusions, where we seek to fill in "gaps", even when there is no logical reason to do so.

I think Daniel Dennett used a similar illustration, and it's a good one (IMO). Our brain is trying to make sense of all this data, while compactifying (I know there's a better word than that, used with regard to computers...compressing?) it all (and "filling in the blanks", as you put it) in order to make recall easier.
 
  • #67
Originally posted by Mentat
I think Daniel Dennett used a similar illustration, and it's a good one (IMO). Our brain is trying to make sense of all this data, while compactifying (I know there's a better word than that, used with regard to computers...compressing?) it all (and "filling in the blanks", as you put it) in order to make recall easier.
I think the best word would probably be "integrating"...we integrate partial data into "whole bits" for easier processing, including the "internal data" we call consciousness. It is conceptual shorthand, and useful most of the time.

I'm looking over at my BEAUTIFUL guitar, not 10 feet from me. Intellectually, I know it is made out of wood, metal, paint, etc. However, I never ever think of it as the sum of its components; I always think of it as a whole. "Consciousness" is really the same thing, except it is a collection of processes as well as physical parts.
 
  • #68
Originally posted by hypnagogue
What does it mean to perceive that you are alive?

I don't know really; I just know, at any given time, that I am alive.

Quite right. But subjective experience obviously exists. I am experiencing the color black right now as I look at my keyboard; my experience of blackness exists self-evidently, and no amount of semantic obfuscation can force me to deny this. My metaphysical ideas of what accounts for this blackness may or may not be false, but it is not false that my experience of blackness exists.

Your experience of the color black does exist, but as a convenient computational tool of the brain, to distinguish one wavelength of light from another. My point is that the concept of a complete picture of a black keyboard (to stick to your example) must clearly be an illusion of compactification (and "filling in the blanks"), as the part of the brain that processes "black" is not the same as the part that processes the shape and texture of the keys, and that is not the same as the part that recalls previous such images, and these separate parts never meet up...meaning that there are separate computations occurring, and yet you are fooled into believing that there is one coherent image in your "mind's eye".

Chalmers does not postulate that something like a complete, indivisible experience exists. He only makes plain the observation that subjective experience of some sort exists, and proceeds from there.

Then define "subjective experience", in Chalmer's terms.

Ah, so saccades of the eyes are subjective experience as well? Come on, that's nonsense. The fact to be explained is not so much that you do not notice the saccades of your eyes as it is that you notice your eyes from a 1st person perspective to begin with.

Information does not get processed at the same time-- so what? The fact remains that I have subjective experience...

Is that fact - which you are defending - that you have subjective experience, or that you had a subjective experience? Because if processing outside information in terms of previously-processed information (which is part of Chalmers' "easy problem") is what you call "subjective experience", then we have nothing to debate.

...and the fact remains that a good theory of consciousness should make it intelligible how that is so. By this criterion, the neural reductionist theory of consciousness, taken on its own without any further fundamental assumptions, is not a good one.

It should make it intelligible how what is so? How a computer (organic or otherwise) relates new stimulus to previous stimuli?

Perhaps consciousness was necessary to help our ancestors survive, but this approach only begs the question. Evolution can only endow us with consciousness if consciousness is an ontological possibility in the first place. How is subjective experience ontologically possible? The reductionist approach makes it evident how cognitive functions are possible in the same sense that it makes evident how the functions of a pocket calculator are possible, but it so far has said nothing meaningful about subjective experience.

I ask again, what is subjective experience, in Chalmers' terms or in your own?

Explain what sub-experience is and how it is entailed by physics. If you define sub-experience as so many cognitive functions, however, you would be better served to simply call it sub-functions or functions. Experience implies feeling, and it is not clear how objective functions can account for feeling even in principle.

Unless feelings are physical functions, instead of being "accounted for" by them. Again, you're going on the assumption that (for example) an excitation of cells in my finger - due to being poked by a needle - "gives rise" to pain; whereas scientists seem pretty well content to say that the excitation of cells is pain, and thus one needn't account for pain "in terms of excited cells"...this would be a non sequitur.

These are not experiences. These are functions. You can explain the functional workings of human memory, but in no richer sense than you can explain computer memory.

Why is that distinction so important?

The difference is that a human experiences memory while a computer (as we plausibly assume) does not. And precisely what you have not explained is eg the experience of memory.

The "experience" of memory, or the experience of a memory?

Agreed. But you do not have a workable framework for the processes by which the brain experiences the world around it.

What if "processing" = "experiencing"? What if all things that process must also "experience", since the two terms are synonymous? That is what Dennett would call the equivalence of content and consciousness (I think).

Simply saying "how does the brain convince itself that it is conscious?" begs the question. In order for the brain to convince itself of anything there must be a 1st person perspective for which the convincing is done. (This does not assume an indivisible self, only a certain perspective.) You have assumed the existence of the 1st person perspective when in reality the task is to show how it exists in the first place. You might as well try convincing a rock that it is conscious.

The brain has a 1st person view because of the evolved ability for self-recognition. An ape can show this by recognizing itself in the mirror. There is nothing special about this. It's a matter of degree that separates a dog's licking itself from a human's pondering about himself.

Still haven't explained how computations can account for consciousness. For that you need an extra assertion such as "computation so and so is conscious in such and such a way as a simple fact of nature." That is an explanation, but not a reductive explanation.

Unless I say "computation = consciousness, since consciousness is just another term for the complex computation of external stimuli that our mind does all the time".
 
  • #69
Originally posted by Zero
I think the best word would probably be "integrating"...we integrate partial data into "whole bits" for easier processing, including the "internal data" we call consciousness. It is conceptual shorthand, and useful most of the time.

I'm looking over at my BEAUTIFUL guitar, not 10 feet from me. Intellectually, I know it is made out of wood, metal, paint, etc. However, I never ever think of it as the sum of its components; I always think of it as a whole. "Consciousness" is really the same thing, except it is a collection of processes as well as physical parts.

Also a good analogy. Indeed, Broad would probably say that "guitar" is an "emergent" phenomenon from those materials that you mention - whereas a Dennett-like philosopher of guitars would simply say that "guitar" = "such-and-such material" and so it would be foolish to try and figure out how a guitar can "arise" from those materials, since it is those materials.

Now, I think hypnagogue would point out that a guitar is a bad analogy to subjective experience, since, when you take it from a microscopic perspective and build toward more and more complexity, the logical outcome is a guitar; whereas, when you build up from the cellular structure to the structure and function of neurons, the logical outcome is a brain...not subjective experience.

Then, I would say something like: "Subjective experience" is a vague term that is clouding the issue. You can build up from cellular functions into a machine that has an extension (the neocortex) which has no other purpose but to process/experience (synonymous terms, AFAICT) the world around it, and thus the logical outcome is a "subjective experiencer".
 
  • #70
Originally posted by Mentat
Also a good analogy. Indeed, Broad would probably say that "guitar" is an "emergent" phenomenon from those materials that you mention - whereas a Dennett-like philosopher of guitars would simply say that "guitar" = "such-and-such material" and so it would be foolish to try and figure out how a guitar can "arise" from those materials, since it is those materials.

Now, I think hypnagogue would point out that a guitar is a bad analogy to subjective experience, since, when you take it from a microscopic perspective and build toward more and more complexity, the logical outcome is a guitar; whereas, when you build up from the cellular structure to the structure and function of neurons, the logical outcome is a brain...not subjective experience.

Then, I would say something like: "Subjective experience" is a vague term that is clouding the issue. You can build up from cellular functions into a machine that has an extension (the neocortex) which has no other purpose but to process/experience (synonymous terms, AFAICT) the world around it, and thus the logical outcome is a "subjective experiencer".
Maybe, to extend the analogy, you could describe the brain as a guitar, and "consciousness" as the music which emerges from it? We know there is nothing metaphysical about a G chord, but it is an apt description, since there is a similar (if false) "non-physical" dimension to both music and consciousness.
 
