Why reductive explanations of consciousness must fail

In summary, Chalmers argues that while physical explanation suffices to account for structures and functions in many domains, it cannot fully explain conscious experience. Conscious experience is not just a matter of structures and functions; it also involves the subjective experience of being aware, and this cannot be reduced to purely physical processes, since it is conceptually coherent that those processes could exist without any experience at all. However successful physical explanation has been elsewhere, consciousness remains a uniquely puzzling phenomenon that resists reductionist methods.
  • #141
Originally posted by Canute
I don't think it's implausible, I think it's incorrect. I would define consciousness as 'what it is like to be', the most common definition.


"What it is like to be..." ...a frog, a man, an electron?

I'm less clear on how to define 'information'.

Here's how I would define information within the context of what I have proposed: Input -- of whatever quality -- from a source other than self that causes a change -- however minute -- in the self.


I would say so. For them (string theorists) everything is epiphenomenal on strings.

What might "epiphenomenal" mean? Per the American Heritage Dictionary -- henceforth AHD -- it means "besides". Besides what?


Not quite nothing at all though.

Since I did not stop to take inventory of my entire internal world to see if ANYTHING was "axiomatic", I left room for the possibility that something might be.

Whatever it reduces to, how would I know? For most microphenomenalists this is not an issue, since they take consciousness to be irreducible and reduce everything to it instead.
If you say so.

Did you think I was criticising 'microphenomenalism'? I think there's a lot of truth in it.

It did sound a bit pejorative but I'm over it now. In any case, in the greater scheme of things, an entire lifetime is a "microphenomenon" ...perhaps an "engram" in the mind of the Universe (should the Universe HAVE a mind). Meanwhile, our cells are "having experiences" and "communicating with" one another (via chemicals and electricity) and may thus be "conscious" of whatever they can sense and respond to. Likewise, particles, stars and galaxies.

You see, Canute, I am more than a microphenomenalist; I'm a full-on panpsychist who is proposing that the detection of information -- simple or complex -- that alters in any way the behavior of the detecting "entity" -- simple or complex -- is perhaps all the "reducing" we need to be doing with regard to "consciousness".
 
  • #142
Roughly speaking I'm also a panpsychist. I don't know why you're getting shirty with me, I haven't said a word against panpsychism.

However, you can't redefine consciousness at will. You have a theory that it is connected with information flow, fair enough, but it is widely agreed that what it is is 'what it is like to be', after Nagel's discussion of what it's like to be a bat.
 
  • #143
Let me just weigh in here on Nagel. If you built a system, an AI, that EXACTLY reproduced all the physical states, interactions and transitions in a bat's body, and put it into an environment that EXACTLY simulated all the physical characteristics of the bat's environment, then that AI system in that environment would experience "What it's like to be a bat."

I want to emphasize that only physical effects are being reproduced/simulated here. If you claim my claim is false, then specify exactly, and not just with an empty name like "consciousness", what would be missing.
 
  • #144
Originally posted by selfAdjoint
I want to emphasize that only physical effects are being reproduced/simulated here. If you claim my claim is false, then specify exactly, and not just with an empty name like "consciousness", what would be missing.

Before I answer this, I guess we're assuming that bats are conscious to begin with? We don't know this for certain, but we can reasonably assume it. Having said that, my answer would be this: while observing the AI bat do many of the same things as a real bat, you can point to the specific code or hardware wiring that allows each of those individual things to happen. But if the bat truly does have consciousness, you cannot point to any such thing. It is an unnecessary add-on with no way to point to its cause in your AI design. If someone asked, "How did you achieve its being conscious?", your answer would be "I have no idea".

Since we're talking about bats, your point has more weight, because we don't know for certain what a bat feels. But if the AI were a human, and all of the facts were the same, would you claim the exact same thing? Can you claim an AI human is conscious simply from its behaviour, without understanding how it was done in the design?
 
  • #145
Originally posted by selfAdjoint
Let me just weigh in here on Nagel. If you built a system, an AI, that EXACTLY reproduced all the physical states, interactions and transitions in a bat's body, and put it into an environment that EXACTLY simulated all the physical characteristics of the bat's environment, then that AI system in that environment would experience "What it's like to be a bat."
From which piece of scientific research do you conclude that? From a scientific point of view it might be true, but there's no evidence that it is. There's not even any scientific evidence that bats are conscious (or people, come to that).

I want to emphasize that only physical effects are being reproduced/simulated here.
No 'experience' then. Or are you saying that experience is a physical substance?

If you claim my claim is false, then specify exactly, and not just with an empty name like "consciousness", what would be missing.
Feelings, experience, and what it is like to be a bat. If you do a search on Harnad + Dennett you may find their email argument on this point, on which Dennett has to give way.
 
  • #146
Originally posted by Canute
Roughly speaking I'm also a panpsychist. I don't know why you're getting shirty with me, I haven't said a word against panpsychism.

My apologies ...although, technically, I was being "pi**y" (rhymes with "missy") ...my preferred conversational style. It's a bit like Tourette's syndrome, except harder to control. Mostly, I'm playing.

However, you can't redefine consciousness at will. You have a theory that it is connected with information flow, fair enough, but it is widely agreed that what it is is 'what it is like to be', after Nagel's discussion of what it's like to be a bat.

I'm sure you know that there have been MANY cases in science where definitions that were "widely agreed upon" required modification before getting it right.

You say you're "roughly" a panpsychist, yet do not grant "consciousness" (at a very simple level) to electrons? Perhaps "what it's like to be an electron" is having the sole capacity of sensing the positive charge of a proton.

What does it mean to you to be a panpsychist ...and where do you draw the line between what might be conscious and what cannot be conscious?
 
  • #147
... Redefine consciousness? May I ask exactly what definition of consciousness you have that is constant amongst all that talk of it? As far as I know, every author has their own definition that they work with...
 
  • #148
Originally posted by M. Gaspar
I'm sure you know that there have been MANY cases in science where definitions that were "widely agreed upon" required modification before getting it right.
As Dark Wing says, there is no scientific definition of consciousness. However, in philosophy of mind the one I gave is the most common, and not disputed.

You say you're "roughly" a panpsychist, yet do not grant "consciousness" (at a very simple level) to electrons? Perhaps "what it's like to be an electron" is having the sole capacity of sensing the positive charge of a proton.
Hmm. I haven't said anything about whether I thought electrons weren't (or were) conscious.

What does it mean to you to be a panpsychist ...and where do you draw the line between what might be conscious and what cannot be conscious?
I'd go along with Buddhism, in which the essence of everything is emptiness. I don't think things like thermostats or cricket bats are conscious.
 
  • #149
Originally posted by Canute
As Dark Wing says, there is no scientific definition of consciousness. However, in philosophy of mind the one I gave is the most common, and not disputed.

I dispute it.

Hmm. I haven't said anything about whether I thought electrons weren't (or were) conscious.
So say.


I'd go along with Buddhism, in which the essence of everything is emptiness.

Yes, another philosophy that has gotten a few things wrong ...IM"H"O. Although -- in the "physical domain" at both the QM and GR "levels" -- things APPEAR TO BE "empty space" ...we now "know" (always belongs in quotes) that "vacuums" are actually SEETHING with virtual energy, and in the "non-physical domain" there might -- as an example -- be the "force" of "intention" that fills -- and directs -- the void. [Buddhists also think that life's about escaping suffering by letting go of desires ...when, in fact (should be in quotes as well), life may very well be about how we HANDLE suffering.]


I don't think things like thermostats or cricket bats are conscious.
How lovely that you have used "thermostats" as an example of what might NOT be "conscious" because, in fact, I think "consciousness" is a matter of DEGREE! IOW, it might be on a continuum from VERY SIMPLE to VERY COMPLEX ...based on the complexity of the detection and response system of the entity.
 
  • #150
Originally posted by M. Gaspar
I dispute it.
Ok.

So say.
I stated a fact.

Yes, another philosophy that has gotten a few things wrong ...IM"H"O. Although -- in the "physical domain" at both the QM and GR "levels" -- things APPEAR TO BE "empty space" ...we now "know" (always belongs in quotes) that "vacuums" are actually SEETHING with virtual energy, and in the "non-physical domain" there might -- as an example -- be the "force" of "intention" that fills -- and directs -- the void. [Buddhists also think that life's about escaping suffering by letting go of desires ...when, in fact (should be in quotes as well), life may very well be about how we HANDLE suffering.]
Hmm. I don't think I'm clever enough to unpick this muddle. You seem to be confusing 'emptiness' and 'nothing'.

And of course our life is about how we handle suffering, if you use suffering in the Buddhist sense of the word.

How lovely that you have used "thermostats" as an example of what might NOT be "conscious"
It's a well known example from the literature, not mine.
 
  • #151
From which piece of scientific research do you conclude that? From a scientific point of view it might be true, but there's no evidence that it is. There's not even any scientific evidence that bats are conscious (or people, come to that).

No, no scientific research, this is a PHILOSOPHY thread, and my construction is that familiar philosophical device, the idealized contraption. Here is a device that perfectly simulates all the physical aspects of a bat in its environment. And I say it's not a Zombie, but you apparently say it is. Upon what basis do you claim that?
 
  • #152
Originally posted by Dark Wing
This is where we diverge. I believe that it is a conceptual necessity just the same, and in fact is the only case possible with the laws set out inside the system. Agreed, consciousness cannot simply pop out of nowhere, and cannot just be formed from some higher level of complexity. Which means that you have to think at least that consciousness, or at least the building blocks for it, are always there, everywhere. This is bordering on panpsychism, I know, and that is a trap that I wish to avoid. So I will try to explain my thoughts on how consciousness might work.

I agree that, for a theory of consciousness to make sense, it must make reference to some sort of building blocks for consciousness; either in the form of an irreducible and fundamental entity, or in the form of some 'things' that are not themselves conscious but somehow combine to create consciousness.

Now the question becomes: are these building blocks included in our contemporary materialistic ontology? This is precisely where I believe that contemporary materialism must fail in any attempts to really explain consciousness, because I do not think any of the building blocks given to us in a materialistic ontology can do the job of showing us how to explain or deduce consciousness. We need more building blocks.

I would say that anything that shows an ability to react and interact with its environment would show sufficient conditions for the start of consciousness. I do not take consciousness to be a "you have it or you don't" thing; it is a matter of complexity, and a matter of how well you are able to interact with your environment. Something that could only show signs of conditioning as its environmental interaction would not be as conscious as something that could also deliberate over a reaction to a stimulus. Since we witness the ability to condition in every form of biology that I have encountered, I would say that things biological are the basis for consciousness, and it is a necessary thing that it is.

As a side point here: if your criterion for judging whether an entity is conscious or not is the degree to which it can interact with and be conditioned by its environment, why should physical constitution matter? I understand that you want to start off on surer footing by starting with safer assumptions, but we could (relatively) easily build a silicon-based robot that could do the same things. All I am suggesting is that, if biological constitution is to be the most fundamental factor for consciousness in your hypothesis, then that should be your primary assumption. Deriving (as opposed to fundamentally asserting) the necessity of biology for consciousness from an entity's ability to interact with the environment seems to be faulty, since you could just as well derive that a silicon robot should be conscious by the same criterion.

Like you, I have been wary of functionalism as a good starting ground for any hypothesis for consciousness. However, recently I read an argument with a functionalist flavor put forth by Chalmers that gives me pause. If you are interested, it might be appropriate to start another thread on the topic.

I know that you can't say "just because everything we see does this, therefore all things must do this", and I am aware that you are arguing that this might just be the observed phenomenon and not the necessary one. But I believe that the only reason we observe this time and time again is because the configuration of certain things will make a conscious mind, while a slightly different configuration creates gold. There is something about biology that does this. What is it exactly? That's what we are yet to find: we do not have the formula of H2O for the brain yet.

OK, back to the building block discussion. Suppose for the sake of argument that we eventually isolate the motion of electrons as the most fundamental necessary and sufficient physical correlate for consciousness: whenever we see electrons moving about in such and such patterns, we are confident that there will be such and such conscious experience on the part of the system.

Now, what in our materialist ontology could account for this? Electron charge? Electron mass? Electron spatiotemporal configuration? What combination of these could you throw together to show a priori that consciousness must be the result? I argue that no combination of these could be thrown together to show that consciousness must result. Rather, at this point, we would have to rework our ontology to grant an entirely new property to electrons, such that we would be able to see a priori that such and such configuration of electrons must result in consciousness. This new property would have to be either a fundamentally irreducible aspect of consciousness on the part of electrons, or it would have to be some kind of microphenomenological property of electrons such that electrons by themselves are not conscious, but when combined in patterns just so, their microphenomenological properties combine to result in consciousness.

This argument applies to any H2O formula we may wish to hypothesize for consciousness. You say we have not found the formula yet; I say that for any formula built solely from materialist building blocks, we will still not be able to show a priori that this formula must necessarily result in consciousness. We just need more building blocks than materialism will give us.

Not a zombie, if it is a matter of configuration. Science has only recently had the technology to even consider such things: what needs to be looked at is the point where physics becomes biology: find out what about biology makes it biology, and not just another chunk of jasper on the plain. It is a round peg all right, but that's because the way it has been thought of and talked about has led to massive confusion. (Not that you are involved in that confusion; your point is very much aside from that.) It could be that it is impossible for a person with the right configuration NOT to have a mind.

I don't think anyone will argue that it is impossible for a person with the right configuration in our world not to have a mind. The question is whether or not it is a metaphysical impossibility; if the world were different somehow, would consciousness still be the necessary result of the right brain configuration? For instance, in our world it is impossible for an electron not to be attracted to a proton. In a metaphysical world with different laws of physics, this would not necessarily be the case.

It is a metaphysical impossibility for a world with identical H2O molecules and identical laws of physics to ours that these H2O molecules not combine to form (given suitable circumstances) a macrophysical substance with properties identical to water in our world. A very straightforward argument involving physical structures and functions can be given to support this claim. It is not at all clear, however, that a metaphysical world that is physically identical to ours should necessarily support consciousness. If it is claimed that this metaphysical world must support consciousness, no substantive argument can be given to support this claim, even in principle, for all the familiar reasons. This is another way of getting at the suggestion that there must be something more than just the physical involved in the phenomenon of consciousness.

So, with that in mind, the match analogy may still stand. It is in virtue of its constituents that it is so. There is no logical reason why it should not be so with the brain; it's just not an area that has had a lot of attention till recently, and we are still figuring out what each neuron of the brain does (the whole 70s-80s were devoted to one-to-one link-ups of neuron and response research; they are yet to head lower).

A brain functioning without there being consciousness under this thought would not be conceivable.


I agree that there is no logical reason to say that it is not in virtue of some property of the brain and its constituents that consciousness exists. However, there is much logical reason to say that such a property is not included in our current materialist ontology.
 
  • #153
Originally posted by selfAdjoint
No, no scientific research, this is a PHILOSOPHY thread, and my construction is that familiar philosophical device, the idealized contraption. Here is a device that perfectly simulates all the physical aspects of a bat in its environment. And I say it's not a Zombie, but you apparently say it is. Upon what basis do you claim that?

The idea is not that your artificial bat must be a zombie. The idea is that we can't be certain what effect the different physical constitution has on its purported consciousness. Upon what basis do you claim that the artificial bat must have the exact same experience as the natural bat?

The argument you have put forth so far leaves much room for doubt. In fact, it does little more than beg the question; your argument rests firmly on the assumption that physical transitions and such are all that is responsible for consciousness, whereas this is precisely the issue that is open to question. To advance, you must propose an argument detailing how it must be that the functional constitution of the bat is sufficient for explaining its first person experiences.
 
  • #154
Originally posted by selfAdjoint
No, no scientific research, this is a PHILOSOPHY thread, and my construction is that familiar philosophical device, the idealized contraption. Here is a device that perfectly simulates all the physical aspects of a bat in its environment. And I say it's not a Zombie, but you apparently say it is. Upon what basis do you claim that?
I didn't claim that you were wrong. If you look I wrote that what you said might be true. I was just pointing out that it was pure conjecture unsupported by any evidence.

Edit - Whoops, just noticed hypnagogue said this for me.
 
  • #155
The argument you have put forth so far leaves much room for doubt. In fact, it does little more than beg the question; your argument rests firmly on the assumption that physical transitions and such are all that is responsible for consciousness, whereas this is precisely the issue that is open to question. To advance, you must propose an argument detailing how it must be that the functional constitution of the bat is sufficient for explaining its first person experiences.

Exactly. I'm trying to firm up the discussion by pinning down the issues in a case where we don't have all the baggage of human consciousness to contend with. IMHO that's exactly what Nagel did in switching from talking about qualia in people to presumptive qualia in bats. The point is exactly that nobody knows what goes on in a bat's mind, so the discussion can remain free of special pleading.

If you don't like the bat, here's another one. Could an AI be built to sense colors the way people do, with the three receptor bands and intensity differencing, and maybe a neural network for identification and memory? Suppose it were then run through experiences with colors, some good, some bad, according to a carefully designed program, so that it had various associations with various colors. If it then "discussed" its experience of colors with researchers and showed complex discussion behavior not programmed in advance, could you then say the device was experiencing color qualia?
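Just to make the contraption concrete, the "three receptor bands and intensity differencing" front end might be sketched as follows. This is purely an illustrative toy: the band centers, the Gaussian response shape, and the opponent channels are my own assumptions, loosely modeled on human cone vision, and no part of the original proposal.

```python
import math

def receptor_response(spectrum, band_center_nm, width_nm=60.0):
    """Gaussian-weighted response of one receptor band to a light
    spectrum given as a {wavelength_nm: intensity} mapping.
    (Band shape and width are illustrative assumptions.)"""
    return sum(intensity * math.exp(-((wl - band_center_nm) / width_nm) ** 2)
               for wl, intensity in spectrum.items())

def sense_color(spectrum):
    """Three receptor bands plus 'intensity differencing' into
    opponent channels, as the thought experiment describes."""
    s = receptor_response(spectrum, 440.0)  # short-wavelength band
    m = receptor_response(spectrum, 530.0)  # medium-wavelength band
    l = receptor_response(spectrum, 560.0)  # long-wavelength band
    return {
        "luminance": s + m + l,
        "red_green": l - m,              # differencing, not raw intensity
        "blue_yellow": s - (l + m) / 2,
    }
```

A deep-red stimulus drives the red_green channel positive, a deep-blue one drives blue_yellow positive. Whether such a front end, once hooked up to memory, associations, and unscripted discussion behavior, would amount to experiencing color qualia is of course exactly the point under dispute.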
 
  • #156
So you are attempting to point out the uncertainty of our knowledge of consciousness? I don't think any except the most extreme on either side really dispute that notion. At this stage of our understanding (and possibly forever), we just don't know enough to answer your question with much more than educated speculation. But this is a different matter from the subject of whether or not materialism can explain consciousness in principle.
 
  • #157
Originally posted by hypnagogue
So you are attempting to point out the uncertainty of our knowledge of consciousness? I don't think any except the most extreme on either side really dispute that notion.
I would. Did you mean to say 'scientific knowledge of consciousness'?
 
  • #158
Originally posted by Canute
I would. Did you mean to say 'scientific knowledge of consciousness'?

Yes, that's what I meant. Sorry for my lack of clarity.
 
  • #159
Originally posted by Canute
Hmm. I don't think I'm clever enough to unpick this muddle. You seem to be confusing 'emptiness' and 'nothing'.

Per AHD (American Heritage Dictionary)...

Emptiness: Holding or containing nothing; vacant; meaningless, devoid, lacking force or power.

Nothing: no thing; not anything; insignificance; obscurity; absence of anything perceptible; someone or something of no consequence.

So let's dissect it...

Is a life -- or the Universe -- "holding or containing nothing"? While my life might not be a "big thing" within the cosmic context, it still contains SOMETHING ...which, at the very least -- and, ironically, the very most -- are EXPERIENCES that may or may NOT be being "recorded" in the Memory of the Universe ...and which would comprise the "spiritual plane".

Insignificant? Well, maybe I am -- or maybe I'm not -- but I know for sure that if ANYTHING IS "significant" it is the Universe.

Obscurity? Perhaps for me, although I think our existence as a species -- among many others -- makes a "contribution" to the Collective by way of these Experiences we're having. These "lifetimes" might "live on" in some sort of "information storage system" which, in my estimation, is what the "spiritual plane" of the Universe might be.

Absence of anything perceptible? Well, we seem to perceive a lot. And there seems to BE a lot to perceive. In fact, there may be MORE than that which can BE "perceived". For instance, I think there might be a "force" that we could call "intention" that operates at the QM level ...in effect, "plucking the strings" -- to "go" this way or that -- in the PROCESS OF MANIFESTATION!

Someone or something of no consequence? Everything has "consequence": it's a Cause & Effect Universe.

Not anything? Can the Universe be nothing?

Does it "lack meaning"? It may. I don't know ...yet.

Can It be "lacking in power" when it MAY be "all energy all the time"?

...as well as "all INFORMATION all the time" ...which will take me next to a thread on the theory of consciousness which I want to present under Theory Development. Since "consciousness" is "at least" a "part" of the Universe -- thus a part of "cosmology"...I think it would be appropriate to discuss it there.

However, I will have to ask them if a serious discussion about "An Evolving Theory About Information Exchange that Might Explain Consciousness as a Fundamental Ingredient/Process of the Universe and Point to a Possibility about Creation at the QM Level" is "off topic" under Theory Development?

Btw, I think you're "clever enough". :wink:
 
  • #160
Sorry, this is my fault, I wasn't clear. When I mentioned 'emptiness' I meant it as a Buddhist would mean it. It doesn't matter what that means, but I didn't mean 'nothing'. As you pointed out 'nothingness' appears to be seething with things.
 
  • #161
Hello again Hypnagogue, Sorry it took so long to reply...

Originally posted by hypnagogue
I agree that, for a theory of consciousness to make sense, it must make reference to some sort of building blocks for consciousness; either in the form of an irreducible and fundamental entity, or in the form of some 'things' that are not themselves conscious but somehow combine to create consciousness.
Good. We have a starting point.


Now the question becomes: are these building blocks included in our contemporary materialistic ontology?

Well, it depends. Yes, the materialistic stance is quite shallow, and no doubt it needs to be fleshed out, but the basis for it may still be there. Take a look at Place and Smart's work on identity theory (I know they are Australian, but hell, we do have some minds all the way down here). It simply states that a mind state IS a brain state. What they have done is set up a field to explore: what is a brain state? If you can figure out what that is, then you have the next step to the reduction: take it down to biology, and then ultimately physics, and you have your building blocks for consciousness. But I will address that better where you have mentioned it below.


As a side point here: if your criterion for judging whether an entity is conscious or not is the degree to which it can interact with and be conditioned by its environment, why should physical constitution matter?

I think that we should at least start at a point where we know that consciousness is the case. (I am aware that people will argue that we are not conscious, and that we are all just robots, but I am going to presume consciousness on the basis of Searle's "seeming" argument.) If it is so that biology is conscious, then we can figure out what the constitution of biology is, and then see what the essential ingredients of the physics/biology boundary are. We can then say that they are the essential building blocks of consciousness, as they make biology, and biology is conscious, as it can react and interact with its environment. We can never argue the necessity of biology for consciousness. But we can say "check it out, we have a working example, let's see how that happens".

Deriving (as opposed to fundamentally asserting) the necessity of biology for consciousness from an entity's ability to interact with the environment seems to be faulty, since you could just as well derive that a silicon robot should be conscious by the same criterion.

So, what makes a biological cell that is reacting and interacting with its environment different from a robot that is showing the same behavioral patterns? Nothing, according to that definition. So the theory has to be expanded to show us how to tell imitation from the real thing (it is called "artificial intelligence" after all :eek:).

Besides the fact that my explanation of consciousness is based on biological or at least physical causation, and that programmed robots ignore the causation part of the initial condition for consciousness and just write the consciousness on top to be run on a bunch of silicon mapping, there seems to be a testable and verifiable way of seeing if a robot is conscious in the same kind of sense that a human is conscious: that it attributes meaning to its environment. It is reacting in a meaningful and productive way TO ITSELF as well as to the environment. I think Searle has pretty much covered this one with his Chinese room argument, to which Dennett (or anyone else for that matter) has yet to reply decently. All I have seen in the literature is personal attacks and insults on Searle, demanding that he fall in line with the rest of the functionalist community. All I am saying is that my work on consciousness is an exposé of what we know to be conscious, and would need some work to expand into a proof or disproof of AI, even though AI breaks every rule I set up for a conscious being (as it has no causation) yet will act like one, as it is programmed to.

Like you, I have been wary of functionalism as a good starting ground for any hypothesis for consciousness. However, recently I read an argument with a functionalist flavor put forth by Chalmers that gives me pause. If you are interested, it might be appropriate to start another thread on the topic.

Absolutely. That would be great to see. As much as I am not too keen on Chalmers' work, he is an interesting writer to read.

OK, back to the building block discussion. Suppose for the sake of argument that we eventually isolate the motion of electrons as the most fundamental necessary and sufficient physical correlate for consciousness: whenever we see electrons moving about in such and such patterns, we are confident that there will be such and such conscious experience on the part of the system.


Not so much "these atoms moving like such means we will have this conscious experience"; more, I am saying that a certain formation of atoms will produce consciousness in the system. The nature of the conscious experience will be dictated by the biology: what kind of biology does this thing have in order to experience the environment with? All of that is higher-level stuff that we may or may not predict on an atomic level. All I am interested in is what combinations make consciousness possible: experiencing consciousness is another question altogether.

What combination of these could you throw together to show a priori that consciousness must be the result? I argue that no combination of these could be thrown together to show that consciousness must result. Rather, at this point, we would have to rework our ontology to grant an entirely new property to electrons, such that we would be able to see a priori that such and such configuration of electrons must result in consciousness.


Yes, the ontology that I follow in Place and Smart does not explicitly state this, but it is implied that you can go to the biology and find out what physical constituents made it possible to form. Have that, and you have your physical energy level of consciousness.

This new property would have to be either a fundamentally irreducible aspect of consciousness on the part of electrons, or it would have to be some kind of microphenomenological property of electrons, such that electrons by themselves are not conscious, but when combined in patterns just so, their microphenomenological properties combine to result in consciousness.

Exactly what I am saying. It's the combination that matters: certain combinations make one thing, other combinations make consciousness.
And if you know the combination that makes something biology, then you will know a priori that a certain number of atoms in this combination will make consciousness. It's like baking a microscopic cake.

The metaphysical question of "even if it is so in our universe, is it possible for consciousness NOT to result from this mix in another universe?" is to me a wonderful question to speculate on, but essentially one with no answer. How can we ever know whether consciousness of this sort is contingent here or a necessary factor of existence? That sort of thing keeps one awake at night.

Even if in another universe something other than the pure physical is needed to support consciousness, it means nothing to us here. I will argue that we have all the ingredients for consciousness right here in front of us; we are just not looking hard enough for them.
 
  • #162
Originally posted by Dark Wing
It's the combination that matters: certain combinations make one thing, other combinations make consciousness.
And if you know the combination that makes something biology, then you will know a priori that a certain number of atoms in this combination will make consciousness. It's like baking a microscopic cake.
So, if it is up to the combination, perhaps there is a critical distinction that needs to be made which I have never seen anyone make: perhaps there is no such THING as consciousness; perhaps there is a myriad of phenomena, each of which may be a 'conscious experience'.

So a reductive explanation of 'Consciousness' will fail, because there is no such thing as 'Consciousness'; there are instead attributes of consciousness. If you follow me...

I guess this is similar to saying there is no such thing as 'The Biological World', there are only creatures which may be said to be biological.

Does this make sense/Help?
 
  • #163
That's not a million miles from the Buddhist view.
 
  • #164
Is there anything in the physical Universe that doesn't receive and respond to something? Elementary particles receive and respond to the weak and strong forces. Larger systems "sense" and respond via gravity to each other's masses.

Perhaps we are being too narrow when we define consciousness as a process that "emerges" when a (biological) system becomes sufficiently complex. Perhaps consciousness could be said to be the sensing and responsiveness to ANY information, however minimal.

Perhaps we are being "elitist" to confer consciousness only to those biological systems with brains ...brains being "merely" a biological device that has evolved to process (receive and respond to) a LOT of information. Even one-celled creatures who, say, have an affinity to light, are sensing and responding to SOMETHING. This certainly constitutes an "awareness" of sorts, and possibly what could be considered a rudimentary consciousness.

Perhaps consciousness is on a continuum from very simple to very complex, and thus is FUNDAMENTAL to every part and parcel (particle and system) of the Universe. And if true, it would be a "reductive explanation of consciousness" that has SUCCEEDED.
 
  • #165
Originally posted by M. Gaspar
Is there anything in the physical Universe that doesn't receive and respond to something? Elementary particles receive and respond to the weak and strong forces. Larger systems "sense" and respond via gravity to each other's masses.

Perhaps we are being too narrow when we define consciousness as a process that "emerges" when a (biological) system becomes sufficiently complex. Perhaps consciousness could be said to be the sensing and responsiveness to ANY information, however minimal.

Perhaps we are being "elitist" to confer consciousness only to those biological systems with brains ...brains being "merely" a biological device that has evolved to process (receive and respond to) a LOT of information. Even one-celled creatures who, say, have an affinity to light, are sensing and responding to SOMETHING. This certainly constitutes an "awareness" of sorts, and possibly what could be considered a rudimentary consciousness.

Perhaps consciousness is on a continuum from very simple to very complex, and thus is FUNDAMENTAL to every part and parcel (particle and system) of the Universe. And if true, it would be a "reductive explanation of consciousness" that has SUCCEEDED.

A lot of philosophical considerations point to consciousness being a fundamental aspect of reality (this thread for example). But supposing that consciousness is on some level fundamental is actually the antithesis of a reductive explanation.

As for the biological view, I don't think it's elitist as much as it is pragmatic. We know for a fact that humans are conscious and we have good reason to believe that other animals are conscious as well. The further the systems we consider stray from being human, the less confidence we can have that these systems are conscious. So it is more a matter of starting in an area where we can be confident, learning what we can from that starting point, and then extrapolating to more general systems as our knowledge and theoretical frameworks progress. It may be true that an amoeba (or a rock) is conscious on some level, but for now that is just speculation.
 
  • #166
Originally posted by Another God
So, if it is up to the combination, perhaps there is a critical distinction that needs to be made which I have never seen anyone make: perhaps there is no such THING as consciousness; perhaps there is a myriad of phenomena, each of which may be a 'conscious experience'.

So a reductive explanation of 'Consciousness' will fail, because there is no such thing as 'Consciousness'; there are instead attributes of consciousness. If you follow me...

I guess this is similar to saying there is no such thing as 'The Biological World', there are only creatures which may be said to be biological.

Does this make sense/Help?

I don't know how I feel about that. You can perhaps say that there is no intrinsic property that differentiates a biological system from a non-biological one, but from the 1st person view at least, there seems to be an obvious intrinsic difference between a conscious system and a non-conscious system.

Besides, even if we accept that what we need to describe are attributes of consciousness, all the familiar arguments still apply as to why we could not explain these attributes reductively in the materialist framework.
 
  • #167
Originally posted by Dark Wing
Well, it depends. Yes, the materialistic stance is quite shallow, and no doubt it needs to be fleshed out, but the basis for it may still be there. Take a look at Place and Smart's work on identity theory (I know they are Australian, but hell, we do have some minds all the way down here). It simply states that a mind state IS a brain state.

But this is still unintelligible under the conventional materialist framework. I am not saying it is impossible in reality-- obviously that is not the case. I am just saying that if you analyze a system that turns out to be a human brain using a materialistic analysis, you will never deduce that this system has consciousness. It may very well be the case that a mind state is a brain state; but if this is so, then the implication is that there is something about brain states that is not recognized in materialism.

I think that we should at least start at a point where we know that consciousness is the case.

A very sensible approach indeed, but I don't think using brains as the starting point necessitates that we build the theory explicitly around biology. If anything I think that should come as a result of empirical research rather than a starting assumption. (I don't think we need this assumption to do productive research on consciousness.)

If it is so that biology is conscious, then we can figure out what the constitution of biology is, and then see what the essential ingredients of the physics/biology boundary are. We can then say that they are the essential building blocks of consciousness, as they make biology, and biology is conscious, as it can react and interact with its environment. We can never argue the necessity of biology for consciousness. But we can say "check it out, we have a working example, let's see how that happens".

Doesn't this research paradigm just boil down to determining what makes biology, and then just baldly asserting that those things that make biology also make consciousness? I may have misunderstood, but it sounds as if you are begging the question here.

Besides the facts that my explanation of consciousness is based on biological or at least physical causation, and that programmed robots ignore the causation part of the initial condition for consciousness and just write the consciousness on top to be run on a bunch of silicon mapping, there seems to be a testable and verifiable way of seeing if a robot is conscious in the same kind of sense that a human is conscious: that it attributes meaning to its environment. It is reacting in a meaningful and productive way TO ITSELF as well as to the environment.

In what way does a machine act any less causally than a life form?

If you suppose that there is a one to one mapping of brain states onto mind states, then in principle the entire behavioral proclivities of a person should be encoded in their neural firing patterns. In principle, these neural firing patterns could be emulated perfectly by a complex computer. So, in principle, you could build a computer (with a robot body and so on) that would act indistinguishably from its human counterpart; how then would you conclude that the human attributes meaning in such and such a way and that the computer/robot does not?

I am familiar with the Chinese Room argument and I suppose you might invoke it here. However, from the point of view of materialism, the human brain might as well be just as void of semantics as the computer which blindly runs instructions. It is just an input-output device, after all; just as it is eminently unclear how/at what point a computer running instructions would somehow become conscious, it is equally unclear how/at what point a human brain interpreting sensory input should somehow become conscious. We know the CR argument does not apply to human brains not from some special caveat in the argument that explicitly distinguishes how biological brains are different from all other cases; rather, we know the CR argument does not apply to human brains because we are human brains and we have 1st person evidence of our own consciousness to the contrary. It could very well be equally the case for an AI robot; it could be that although the CR argument indicates that the robot should not be conscious, in fact it is conscious, and it knows this from its own 1st person subjective experience. So the only way to refute the CR argument for any physical system under our current understanding is to be that system, and accordingly I don't think it can be relied upon to guide our intuition. If anything, it is simply another way of showing how our current understanding is wholly inadequate.

Obviously there is something somewhere along the line that introduces semantics (consciousness); whatever that 'thing' is, we know the human brain has it, but the robot might have it as well (and, strictly from the 3rd person view, it would certainly at least appear to have it, although appearance does not constitute a proof). To state outright that that special thing must be biology is, again, too much of an assumption for me. It could be the case, but it could equally not be the case.

Absolutely. That would be great to see. As much as I am not too keen on Chalmers' work, he is an interesting writer to read.

OK, good, I will put up a post on Chalmers' functionalism argument sometime soon.

Yes, the ontology that I follow in Place and Smart does not explicitly state this, but it is implied that you can go to the biology and find out what physical constituents made it possible to form. Have that, and you have your physical energy level of consciousness.

Even if it did turn out that some property of biology accounts for consciousness, this would in turn imply that biology possesses some fundamental property pertaining to consciousness that is entirely omitted in the current materialist framework.

The metaphysical question of "even if it is so in our universe, is it possible for consciousness NOT to result from this mix in another universe?" is to me a wonderful question to speculate on, but essentially one with no answer. How can we ever know whether consciousness of this sort is contingent here or a necessary factor of existence? That sort of thing keeps one awake at night.

The metaphysical argument is used more to highlight the notion that materialism alone is insufficient to explain consciousness. The argument simply stated says that it is conceivable that there be a metaphysical world that is physically identical to ours but in which a human brain is not conscious. The conclusion is that there is some non-physical property of brains that accounts for consciousness. (Or, if you prefer, it is conceivable that there is some metaphysical world where the CR argument serves as a sound refutation of consciousness in human brains; there is computation but no consciousness.)
 
  • #168
OK hypnagogue, I think I have lost you somewhere; I am a little confused, so let's go through this a little slower so I know what you are arguing.

What do you think the current materialistic stance is? You keep saying that you don't believe that by looking at the brain you can deduce consciousness within. And you claim that this means that the paradigm of materialism is lacking in explanation of anything that it is trying to prove/study/look at. Basically, materialism is redundant in that it cannot explain what it is trying to, as it does not even know what it is looking at.

Well, exactly. Because the word "consciousness" is completely misleading: there is no such thing. You can't deduce consciousness from looking at the brain, as there is no Consciousness there. But you can deduce movement and interaction of environmental stimulus, and you can see responses at the neuro-chemical level. So I guess all I am saying is that consciousness is life, is movement of matter and energy. And, if you take the identity theory side of materialism, then this is exactly what they are saying. Functionalism is looking at a reduction; eliminative materialism is basically looking at physics and claims there is nothing to reduce, and that the concept of deducing consciousness is a complete farce. It just depends on what you mean by the materialistic stance: there are many... I am obviously very confused, please re-explain for my ignorance.

Originally posted by hypnagogue
It may very well be the case that a mind state is a brain state; but if this is so, then the implication is that there is something about brain states that is not recognized in materialism.

Again, please explain to me what materialism is then. I have always taken it to be the study of the physical: I know the functionalist stance takes it from the top down, but again, I am not sure what you are saying here, I am sorry.

A very sensible approach indeed, but I don't think using brains as the starting point necessitates that we build the theory explicitly around biology. If anything I think that should come as a result of empirical research rather than a starting assumption. (I don't think we need this assumption to do productive research on consciousness.)

It does not necessitate anything, really; it just gives us a good ground to understand the case of consciousness that we know exists. It is all very well to claim all kinds of things about AI, but if we don't even know what we are dealing with when we say things like this, then it's all pure speculation and word games. Wittgenstein has a lot to say about the misleading terms of mind philosophy, as do the Churchlands, who put all of this down to "folk psychology", but that again is another topic.

Doesn't this research paradigm just boil down to determining what makes biology, and then just baldly asserting that those things that make biology also make consciousness? I may have misunderstood, but it sounds as if you are begging the question here.
No, it says that those things that are biology are consciousness. Consciousness is not made. It is just a term we have put on something that is. We think of it as something being made, we think of it as a by-product, we think of it as something other than simply movement of atoms, but it's not. It's just the physical world doing its thing.

In what way does a machine act any less causally than a life form?
If you suppose that there is a one to one mapping of brain states onto mind states, then in principle the entire behavioral proclivities of a person should be encoded in their neural firing patterns. In principle, these neural firing patterns could be emulated perfectly by a complex computer. So, in principle, you could build a computer (with a robot body and so on) that would act indistinguishably from its human counterpart; how then would you conclude that the human attributes meaning in such and such a way and that the computer/robot does not?
In general it does act less causally than a life form, as it has no attachment or ability to link into the world at all (hence the Chinese Room argument). BUT in the second case, where we build something PHYSICAL that actually fires and does things like a brain, then fine, you can attribute to it everything that a human has. Why? There is no such thing as a 1-1 mapping of the mind to the brain; there is only brain activity. You can't take the experience of brain activity and write a program for it, or even explain it without a concept of the neural activity, like the cognitivists do. You can't do it. We are experiencing neural activity. You can't scrape consciousness off like that and write it separately from causation.

Identity theorists and eliminative materialists all claim that if you make something like the brain, then you have a good reason to say it works like one. Functionalists like to claim that a pile of tin cans could perform the function of the mind. They are very different claims.

I am familiar with the Chinese Room argument and I suppose you might invoke it here. However, from the point of view of materialism, the human brain might as well be just as void of semantics as the computer which blindly runs instructions.
No. The difference is in the seeming. There HAS to be more to the story, as we actually do attribute meaning to our actions and the world around us NO MATTER HOW WRONG THOSE ATTRIBUTIONS MIGHT BE. A computer has no way of making those attributions. We are an inside agent attributing to the outside world; we are attributing meanings to the computer's actions: outside in instead of inside out. A computer is only dealing with symbols: it has no way of understanding them at all. What the materialist stance, in my opinion, is looking at is why that might be the case. Some choose to take it down to our constituents. That is the paradigm in which we are working here.

So the only way to refute the CR argument for any physical system under our current understanding is to be that system, and accordingly I don't think it can be relied upon to guide our intuition. If anything, it is simply another way of showing how our current understanding is wholly inadequate.

It is showing why the approach of functionalism has a lot of faults, and why we have to start relying on a real physical basis for what we are doing, by stopping these stupid analogies and looking at exactly what we are dealing with. Instead of saying "the brain is like a hydraulics system" or "a brain is like a computer", we have to give this a rest and say "a brain is like a brain, let's find out what that is", instead of running around and studying computers like they are going to suddenly turn over all the answers to the universe to us on their little silver and green hard drives. That's all Searle was trying to point out. It can be used as a guide to show that mainstream thought is heading wildly in the wrong direction, and that we should start seriously exploring the other ideas that are around before too much embarrassment is caused and Dennett falls flat on his face when Cog finally shows them that it's not possible.

Obviously there is something somewhere along the line that introduces semantics (consciousness); whatever that 'thing' is, we know the human brain has it, but the robot might have it as well (and, strictly from the 3rd person view, it would certainly at least appear to have it, although appearance does not constitute a proof). To state outright that that special thing must be biology is, again, too much of an assumption for me. It could be the case, but it could equally not be the case.
The human brain does not "have" it, it IS it. And you can argue that it's all the same in physics; that's fine: we have found a universal consciousness causation in which we are all linked. But to me, consciousness is a meaningless term, and is simply a re-description of a physical system brought on by a misunderstanding of how the body functions.

Even if it did turn out that some property of biology accounts for consciousness, this would in turn imply that biology possesses some fundamental property pertaining to consciousness that is entirely omitted in the current materialist framework.
Or that biology does not contain anything: it is just the structure that allows consciousness to be experienced. Maybe something (like a rock) can possess the fundamentals of consciousness, but not the means by which to experience it. So even then a robot may have the attributes to HAVE consciousness (it does respond, etc.), BUT it has no means of "experiencing" it, as it has no way of meaningfully connecting to the world.

The metaphysical argument is used more to highlight the notion that materialism alone is insufficient to explain consciousness. The argument simply stated says that it is conceivable that there be a metaphysical world that is physically identical to ours but in which a human brain is not conscious. The conclusion is that there is some non-physical property of brains that accounts for consciousness. (Or, if you prefer, it is conceivable that there is some metaphysical world where the CR argument serves as a sound refutation of consciousness in human brains; there is computation but no consciousness.)

Even if it is conceivable, so what? Just because you can conceive of something means nothing. I can conceive of a pink elephant in another universe; does that mean that it is relevant to what's happening here? In another universe, maybe there is a non-physical substance that causes mind. There is little evidence that that is the case here. We have found our non-physical substance: energy.
 
  • #169
Originally posted by M. Gaspar
Is there anything in the physical Universe that doesn't receive and respond to something? Elementary particles receive and respond to the weak and strong forces. Larger systems "sense" and respond via gravity to each other's masses.

Well, I was more talking about conditioned responses: being able to change behaviour patterns via conditioning. But if you want to use just basic movement in general, then why not?

Perhaps we are being too narrow when we define consciousness as a process that "emerges" when a (biological) system becomes sufficiently complex. Perhaps consciousness could be said to be the sensing and responsiveness to ANY information, however minimal.
Sure, as long as it can actually change its behaviour as well. Maybe this is more a quantum mechanics question.

Perhaps we are being "elitist" to confer consciousness only to those biological systems with brains ...brains being "merely" a biological device that has evolved to process (receive and respond to) a LOT of information. Even one-celled creatures who, say, have an affinity to light, are sensing and responding to SOMETHING. This certainly constitutes an "awareness" of sorts, and possibly what could be considered a rudimentary consciousness.

I would confer consciousness on anything that has the ability to react to its environment: not just a blind action, but something that can actually be conditioned away from its usual behaviour. Whether energy IS actually consciousness, or whether it is only certain structures of energy that allow a structure to interact meaningfully with the environment: that is something probably worth researching.

Perhaps consciousness is on a continuum from very simple to very complex, and thus is FUNDAMENTAL to every part and parcel (particle and system) of the Universe. And if true, it would be a "reductive explanation of consciousness" that has SUCCEEDED.
Sure, I am all for it being a structural difference: I just say that physics is the basis for it. It is the building blocks and starting point for all of this.
 
  • #170
Originally posted by Canute
That's not a million miles from the Buddhist view.

Sure. All things connected, consciousness is a continuum, and all life is one. Why not? Take physics as your basis and you hardly have anywhere else to go.
 
  • #171
Originally posted by Dark Wing
Sure. All things connected, consciousness is a continuum, and all life is one. Why not? Take physics as your basis and you hardly have anywhere else to go.
I don't think physicists would agree that universal consciousness follows from physics.
 
  • #172
Originally posted by Dark Wing
What do you think the current materialistic stance is? You keep saying that you don't believe that by looking at the brain you can deduce consciousness within. And you claim that this means that the paradigm of materialism is lacking in explanation of anything that it is trying to prove/study/look at. Basically, materialism is redundant in that it cannot explain what it is trying to, as it does not even know what it is looking at.

Well, exactly. Because the word "consciousness" is completely misleading: there is no such thing. You can't deduce consciousness from looking at the brain, as there is no Consciousness there. But you can deduce movement and interaction of environmental stimulus, and you can see responses at the neuro-chemical level. So I guess all I am saying is that consciousness is life, is movement of matter and energy. And, if you take the identity theory side of materialism, then this is exactly what they are saying. Functionalism is looking at a reduction; eliminative materialism is basically looking at physics and claims there is nothing to reduce, and that the concept of deducing consciousness is a complete farce.

I find your position here a little confusing in light of things you have said previously:

I think that we should at least start at a point where we know that consciousness is the case. (I am aware that people will argue that we are not conscious, and that we are all just robots, but I am going to presume consciousness on the basis of Searle's "seeming" argument.)

On the one hand you present consciousness with a very behaviorist kind of flavor, saying it is nothing more than the movement of matter and energy, and that really there is no such thing as consciousness; on the other hand you say that consciousness is the case, on the basis of Searle's "seeming" argument.

Consciousness seems to be a bad word to use in these discussions, since it always gets twisted around at some point. It's too ambiguous. The relevant component of consciousness that I am talking about is experience or feeling (or equally well seeming, I suppose). Despite confusions about what consciousness is and whether it really exists or not, can we agree that it is certainly the case that humans have 1st person subjective experiences? I am presuming you answer yes to this question; otherwise, you deny the manifestly true and we cannot proceed.

Now, to state the problem very simply and succinctly: how is it that "movement of matter and energy" can "seem" to be anything at all? Based on even our most complete understanding, there is nothing in matter and energy that should ever give rise to "seeming." That is the crux of the issue at hand. What in physics can account for "seeming," even in principle, the same way the structure of H2O molecules accounts for water? The answer would seem to be 'nothing at all.' Accordingly, there should be more to our descriptions of reality than there currently is, in order to fully account for experience/feelings.

again, please explain to me what materialism is then. I have always taken it to be the study of the physical: i know the functionalist stance takes it from top down, but again, i am not sure what you are saying here, i am sorry.

Materialism: the stance that only the physical exists, and the description of that physical ontology (a catalogue of properties / fundamental entities such as charge, mass, spacetime, and so on, as interrelated by the laws of physics).

No, it says that those things that are biology are consciousness. Consciousness is not made. It is just a term we have put on something that is. We think of it as something being made, we think of it as a by-product, we think of it as something other than simply movement of atoms, but it's not. It's just the physical world doing its thing.

Again, the way you have described things here, I don't think there is anything being done to objectively ascertain that biology is consciousness; it is just assumed from the beginning and then carried through to the end.

No. The difference is in the seeming. There HAS to be more to the story, as we actually do attribute meaning to our actions and the world around us NO MATTER HOW WRONG THOSE ATTRIBUTIONS MIGHT BE. A computer has no way of making those attributions. We are an inside agent attributing to the outside world; we are attributing meanings to the computer's actions: outside in instead of inside out. A computer is only dealing with symbols: it has no way of understanding them at all.

You could just as well say that a human brain is only dealing with symbols and has no way of understanding it at all. Of course, we know this is not the case, but it really should be the case if you follow the logic of the CR argument. The only reason we know that CR does not apply to the human brain is from 1st person experience. If this is not the case, explain to me what caveats exist in the CR argument such that CR does not apply to human brains. Are these really justified by the argument or are they ad hoc patch-ups to make it compatible with reality?

The human brain does not "have" it, it IS it. And you can argue that it's all the same in physics; that's fine: we have found a universal consciousness causation in which we are all linked. But to me, consciousness is a meaningless term, and is simply a re-description of a physical system brought on by a misunderstanding of how the body functions.

I disagree. No matter what you say about consciousness, it is really impossible to deny that experience or feelings exist. If you look over the materialist ontology (spacetime, mass, charge, matter, energy etc, and the laws of physics), from what in this ontology can it follow, even in principle, that experience or feelings should exist? I argue that experience cannot logically follow from any of these things, and thus should join them as an ontologically fundamental building block.

even if it is conceivable, so what? just because you can conceive of something means nothing. i can conceive of a pink elephant in another universe; does that mean it is relevant to what's happening here? in another universe, maybe there is a non-physical substance that causes mind. there is little evidence that that is the case here. we have found our non-physical substance: energy.

Conceivability ties into explanatory power, which ultimately ties into our understanding of the world. If we possess a good explanation of a certain phenomenon, then any rational agent who accepts our axioms and understands our logic will not even be able to conceive of an event contrary to that predicted by the explanation. (Here, again, the relevant axioms are the fundamental, nonreducible components of materialism: spacetime, mass/energy, laws of physics, etc.)

Example: suppose we have two competing theories about the properties of water. On the first theory, the properties of water are determined by the water god Wata. This is not a very good explanation for several reasons, one of which is that it leaves us free to rationally conceive of something to the contrary; if Wata determines the properties of water, then why didn't he decide to make water look red instead of clear/green/blue? (For that matter, why is it Wata and not Raja who determines the properties of water?) I can easily conceive under this theory that water should turn out to be red; it has not been adequately explained to me why it must be the case that water has its characteristic properties as observed in nature.

The second theory is the standard scientific one involving H2O molecules. On this theory, we start off with the characteristic materialist properties of H2O molecules-- their atomic structure and bonding propensities, and so on-- and from these, we show how a large collection of such molecules under the proper conditions must combine to form a macroscopic substance with the properties of water. Under this explanation, it is not even possible to rationally conceive that H2O molecules could combine to comprise a substance with properties different from water. The explanandum (thing to be explained) follows as a necessary consequence of the explanation, leaving no room for a rational imagination to contradict it.

If one can fully understand a materialist theory of consciousness and still rationally conceive of a metaphysical world physically identical to ours where human brains are not conscious (do not experience perceptual feelings), then the implication is that that materialist theory of consciousness is a pretty lousy one. Specifically, the axioms (materialist ontology) are insufficient; it has not shown how the explanandum (consciousness/seeming/experiencing/feeling) must be a necessary consequence of the explanation.
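As an aside, the modal skeleton of this conceivability argument can be written out informally. This is my own gloss, loosely following Chalmers' standard formulation, where P stands for the conjunction of all physical truths and Q for "someone is conscious":

```latex
\begin{align*}
(1)\quad & \Diamond(P \wedge \neg Q)
  && \text{a ``zombie world'' is conceivable, hence (arguably) metaphysically possible} \\
(2)\quad & \text{materialism} \;\Rightarrow\; \Box(P \rightarrow Q)
  && \text{if materialism is true, the physical facts necessitate the phenomenal facts} \\
(3)\quad & \Diamond(P \wedge \neg Q) \;\Rightarrow\; \neg\,\Box(P \rightarrow Q)
  && \text{by the modal duality } \Diamond X \equiv \neg\Box\neg X \\
\therefore\quad & \text{materialism, as stated, is false or incomplete}
  && \text{from (1)--(3)}
\end{align*}
```

Note that the contested step is premise (1), the move from conceivability to metaphysical possibility; denying that step is precisely the materialist response discussed below.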

If my explanations still seem confusing or incorrect to you, you may want to check out a couple of papers by Chalmers that essentially embody what I am trying to say here:

http://www.u.arizona.edu/~chalmers/papers/facing.html
http://www.u.arizona.edu/~chalmers/papers/nature.html
 
  • #173
Originally posted by hypnagogue
If one can fully understand a materialist theory of consciousness and still rationally conceive of a metaphysical world physically identical to ours where human brains are not conscious (do not experience perceptual feelings), then the implication is that that materialist theory of consciousness is a pretty lousy one. Specifically, the axioms (materialist ontology) are insufficient; it has not shown how the explanandum (consciousness/seeming/experiencing/feeling) must be a necessary consequence of the explanation.
Unless I am mistaken, I believe that this point is the crux of the whole discussion, and that Dark Wing's claim is that although it is possible to conceive of such a situation, we know from first-hand experience that it is not the case with 'myself'; and since you are biologically congruent to myself, it is not the case with you; and since apes are biologically analogous to us, it is not the case with them; and since dogs, mice, and fish are all related to us, it is also not the case with them.

The problem of other minds is a genuine problem, but if we accept current scientific theory (materialism, evolution, causality) then we are forced to accept that my mind (brain) is essentially no different to your brain, which in turn differs only by a matter of degree from every other mind in nature. As such, every single mind in nature most likely gives rise to 'experiences' akin to our own.

The obvious next step is to realize that with this constant throughout nature, it is feasible to claim that there is something about the brain which necessarily gives rise to experience.

While I understand your point, and in fact agree with it, the contingent evidence points to the conclusion that biological brains of this vague layout must give rise to experience. I don't know why; it's just what they do. (I think this is what Dark Wing is saying.)

Admittedly, staying on the problem-of-consciousness side is much easier. There are fewer jumps involved in the logic and it is safe. Following the line of reasoning that I just attempted to lay out involves two rather large assumptions/claims/hopes. The problem of other minds is ignorable because of common descent (we have to assume that 'seeming' comes in degrees, and is something that has evolved alongside us the whole way), and then there is the somewhat dodgy attempt to explain this constant by claiming that it simply must be a consequence of having a brain (or being biological, however far down we have to go).
 
  • #174
In other words, our ability to imagine a world where there could be brains without experience is simply a trick played on us by our ignorance.*

Just like someone who doesn't understand Newton's/Einstein's laws could imagine a world where the sun revolved around the earth. It seems entirely reasonable, but as soon as you understand the details of the system and the laws that dictate them, it no longer actually seems possible.

*That is, of course, if Dark Wing is correct, which is something that may not be known for...a very long time for all we know.
 
  • #175
Originally posted by Another God
In other words, our ability to imagine a world where there could be brains without experience is simply a trick played on us by our ignorance.

I never claimed (for example) that an actually existing brain in this world which is identical to mine could possibly not have the same conscious states that I have. By extension, I am not skeptical that you yourself are experiencing some kind of conscious states, or that any other normally functioning person does as well.

There is a distinction to be drawn here between nomological (or natural) possibility and metaphysical possibility. These are two very different things. For instance, it is metaphysically possible that the speed of light in some metaphysical world be different from c, although this is (to our best understanding) nomologically impossible (impossible in this actual world we find ourselves in). In general, those things which appear to be contingent facts of nature (such as the value of c) are nomologically fixed but are not metaphysically necessary in any strong logical sense.

My claim is not that the activity of the brain in nature cannot/does not/could potentially not account for consciousness in this world. Quite the contrary, I believe that a brain performing the appropriate activities under the appropriate circumstances will always be conscious.

My claim is that the brain as it is modeled by materialism cannot theoretically account for consciousness. This is a claim about our model of reality, not reality as it actually is. I claim this because it appears to be a logical impossibility to theoretically derive the existence of consciousness starting from materialistic assumptions. Given that I fully accept the natural existence of consciousness and its natural relationship with the natural brain, the only rational route is to re-examine our assumptions (materialism).

We have a fact of existence (consciousness) that is impossible to derive from materialism; hence, materialism must be an insufficient model of reality; hence, we must modify materialism by adding more assumptions/axioms/contingent entities than it currently possesses, such that with our revised model of reality we will be able to satisfactorily explain consciousness as we observe it to exist in nature. (On some level, this will involve making at least some aspects of consciousness, or those things that somehow combine to create consciousness, irreducible and fundamental.) That is, with our revised model, we should be able to derive the properties of consciousness as they are related to brain function as a necessary consequence of our starting assumptions. Our explanation should be good enough that it leaves no rational room for imagining a reality contrary to what is observed in nature, given that we accept the starting assumptions as true. This explanatory rigor characterizes the strength of our explanation of (e.g.) the properties of water in terms of H2O molecules, and it is precisely the kind of explanatory rigor that is impossible to build into an explanation of consciousness using just our current materialist assumptions.
 