Faulty expectations of a theory of consciousness.

In summary: The same cannot be said for consciousness. There might be a complete set of physical facts about consciousness, but it's impossible to conceive of those facts without also conceiving of a conscious agent.
  • #36
Originally posted by Mentat
Do we? This strikes at the very heart of the matter, Fliption...what if I can imagine the counter-action of very powerful, swirling, winds without there being a hurricane at all? What is it that causes a hurricane to arise from these physical processes?

Is not the rational answer "nothing, the hurricane doesn't 'arise' from these processes, it is those processes"? Isn't that exactly what I - along with Dennett and precious few others - have been saying for quite some time on this topic?

Then a hurricane is a poor analogy. The word "hurricane" is defined by humans as the very physical processes that you used to describe it, i.e. very powerful, swirling winds. So it is correct to say that a hurricane doesn't arise from these winds. It is the winds. By definition this is true. However, this is not the case with consciousness. Consciousness as it is being discussed here has not been defined as any physical process. It is defined as "what it is like to be". It is defined this way because this is what needs to be explained. So to simply equate consciousness to some physical process the way you do a hurricane leaves an explanatory gap. How do you get from brain processes to "what it's like to be"?

I was trying to show the difference between hurricanes and consciousness earlier, and my last sentence threw you off the point entirely. That point was on the idea you posted.
 
  • #37
Originally posted by Mentat
Do we? This strikes at the very heart of the matter, Fliption...what if I can imagine the counter-action of very powerful, swirling, winds without there being a hurricane at all? What is it that causes a hurricane to arise from these physical processes?

Is not the rational answer "nothing, the hurricane doesn't 'arise' from these processes, it is those processes"? Isn't that exactly what I - along with Dennett and precious few others - have been saying for quite some time on this topic?

What we have, initially, are two sets of phenomena; the macroscopic hurricane and the microscopic actions of atoms and molecules. The question of how to coherently connect the two is best phrased not with the term 'arising' but with 'accounting for': How can the microscopic actions of atoms and molecules account for the macroscopic phenomenon of a hurricane? What we need is a bridge principle to show how the explanation of latter completely satisfies our need to explain the former. The bridge principle is simple: the actions of those individual atoms and molecules, when taken as a whole, are the hurricane.

Why does this bridge principle work? Well, imagine that we are not concerned with hurricanes at all, but rather that we start off with a description of a set of atoms and molecules (from what turns out to be a hurricane) and we want to work out what this set of microscopic phenomena will 'look like' on the macroscopic scale. So we do a bunch of calculations using the laws of physics and, lo and behold, we have derived something that, on a macroscopic scale, looks exactly like a hurricane! Every salient feature of the hurricane that is in need of explaining has been completely derived from that set of microscopic phenomena. Given this result, our bridge principle stating that the hurricane is that set of dynamic atoms and molecules makes perfect sense, and appears to have been completely justified. So what we have is a successful reductive explanation.

Now, you propose a similar bridge principle for consciousness. You say that we can coherently connect conscious experience with physical neurons by just saying that subjective experience is the activity of the neurons.

Why does this bridge principle not work? Well, imagine that we are not concerned with consciousness at all, but rather that we start off with a description of a set of atoms and molecules (from what turns out to be a brain) and we want to work out what this set of microscopic phenomena will 'look like' on the macroscopic scale. So we do a bunch of calculations using the laws of physics and, lo and behold, we have derived something that, on a macroscopic scale, looks exactly like a brain, but looks nothing like subjective experience. Every salient feature of the objective brain that is in need of explaining has been completely derived from that set of microscopic phenomena. But no salient feature of subjective experience that is in need of explaining has been derived whatsoever from that set of microscopic phenomena. Given this result, our bridge principle stating that subjective experience is that set of dynamic atoms and molecules makes no sense at all, and indeed appears to have been invalidated. After all, if subjective experience is the activity of neurons, why has thoroughly analyzing the activity of neurons given us absolutely no indication of subjective experience? The end result is that we do not have a successful reductive explanation of subjective experience.

For instance, in the case of the hurricane, we see completely clearly why a bunch of molecules moving in such and such a way must account for a macroscopic wind current. But in the case of consciousness, we do not see at all why a bunch of molecules moving in such and such a way must account for a subjective experience of a certain color.

The way around this is to add certain fundamental postulates to our model of reality, such as "neurons doing such and such will always be accompanied by a subjective experience of such and such." Only after taking such postulates to be axiomatically true could we (trivially) see why molecules moving in such and such a way must account for a subjective experience of a certain color.

But in assuming such a fundamental, axiomatic existence for subjective experience, we have not reductively explained the existence of certain subjective experiences under certain circumstances at all. Rather, we have just taken it for granted that certain subjective experiences exist under certain circumstances. Using this approach we may be able to explain more complex features of subjective experience in terms of simpler ones, but we will still not be able to explain those most basic components of subjective experience in terms of anything else. So we will not have a truly reductive explanation of subjective experience.
 
  • #38
Good post, hypna.

I have a question relating to subjective experience. Since axiomatic principles would not reductively explain the certain instances of subjective experience … and why the neurons do what they do in general. Do you think it is possible that Evolutionary Psychology could someday explain how the fundamental, algorithmic co-option of neurons and subjective subconscious experience are correlated and directly parallel to each other? Or is (EP) not a matter of human-based subconsciousness in the realm of subjectivity?
 
  • #39
Mentat

I think Hypno answered your post very clearly so just a couple of additional points.


Originally posted by Mentat
It is a theory of what goes on in our neocortexes which gives rise to experience, memory, and creativity.
Not really, it isn't. It is a theory predicated on the ad hoc conjecture that what goes on in our neocortex gives rise to experience. No account is given of how this occurs, or even of whether it does.

That's the point though: Consciousness, if it is to be explained as a scientific phenomenon - in the manner of a hurricane - cannot "evolve" or "arise" from the functions of the neocortex, it must literally be those functions, as Dennett (I'm sure you're elated that I'm bringing him up again) already predicted in Consciousness Explained.
Ah, He who must not be named. You say 'must literally be those functions'. Why is that, and what evidence leads you (or Calvin) to say this?


It's been observed to be hexagonal. That's really the beauty of it, in my opinion: There is no why, and thus it is just like every other event in the Universe (as understood in the eyes of the scientist).
Hexagonal arrays of neurons have been observed. That is all that has been observed. There is no reason to assume that these have anything to do with causing consciousness, (even if they correlate with certain states of consciousness).



No, this is simply a very special process, and there have been rudimentary (at best) experiments that link certain memories to certain synchronously-firing arrays. As I said at the outset, this is merely a way of showing that it is possible, in principle, to explain consciousness; however impossible it may currently be, in practice.
But this does not show that it is possible. In fact it suggests it is not possible, since it is forced merely to assume consciousness. As you acknowledge here...

As to how Calvin knows that these patterns are thoughts: he doesn't. It's a simple postulate that may turn out to be true. Anyway, it's a necessary one, since the most discrete unit of memory must be established before an understanding of the re-stimulation of memories (integral to consciousness) can be created.

And without some sort of additional theory a dip in spacetime is just a dip in spacetime, no more. You see, you are again demanding more of a scientific theory of consciousness than you would of any other such scientific theory.
That is not the case. No scientific theory that successfully accounts for the existence of a phenomenon starts by just assuming its existence. That would be back to front.


It's like in quantum mechanics: Any form of energetic reaction (aka "observation") can collapse the wave-function of a particle, and thus produce seemingly absolute properties out of the typical chaos of that realm. Sure, it's not conceivable, and not really understood yet; but no one says it's impossible in principle to understand it because we can't make the connection of why energetic reactions should collapse the wave-function ITFP...right? It's just the way it is, and the scientific quest is to understand exactly how it is, not why it is that way and not some other way.
Can't argue with that, but here you are talking about a physical reaction that is scientifically observable. Consciousness is not scientifically observable.

Ok...hold on, isn't psychology a science?
It's a long-standing debate. Psychologists would like it to be, and go to some trouble to make it look like it is, but the jury is still out.

Psychologists often refer to our free will. Of course, we may have free will, and all the "thermostat" scientists still may be correct at the same time...we'd simply have to say that the ability to make true conscious choices does exist,
Science is very clear on this issue. The physical universe is taken to be 'causally complete', entirely explicable in terms of physical interactions. Consciousness cannot be causal in this view. For science, consciousness is a waste product, a whiff of wasted steam from the engine of a train driven by, well, er, steam, as Gilbert Ryle, Dennett's teacher, argued.

Ryle argued that we are making a mistake in thinking that consciousness is a real thing in need of a real explanation, saying that to do this is to make a category error of the same type as 'she came home in floods of tears and a sedan chair'. (I always liked that).

Even if consciousness were causal, science could not accept that free will exists, since it contradicts the doctrine of physical determinism.

Are you serious? I don't mean to be offensive in any way, I just think we must be discussing two different things here. I was talking about a physical, scientific, reductionist theory of consciousness. This would indeed give rise to a test of consciousness that would be a matter of measuring whether some particular process was occurring or not...that process being consciousness. That seems logical to me.
And me also. A scientific theory of consciousness would indeed give rise to a test for it. The question is whether such a theory is possible.

What exactly is it that you expected from a scientific theory of consciousness?
I would be content with a proof that it is a scientific entity. Some indication of the physical mechanism by which it is caused would also be required.

And if consciousness of a thought is simply thinking about that thought, then where is the problem?
This approach is taken by 'higher-order thoughts' theories (HOTs). They don't work since ultimately they disappear up their own backside in self-reference.

Just so long as you're not repeating that as a sort of mantra, and then setting out to prove it against all odds,
Yeah, that's the danger. I don't think I'm doing this.

I'm appreciative of your taking the time to educate me.
Thank you also. We're all educating each other as far as I'm concerned.

I just don't want anyone to irrationally assume that something "can't be done", without first looking at all the possibilities. I, myself, am not perfectly convinced that it can be done. I'm just taking that side because most people take the other one, and because the current theories intrigue me :smile:.
That seems a good approach. In fact it's very difficult to prove that consciousness cannot be explained scientifically (mainly because there is no scientific definition of it, so there's nothing to get one's teeth into). What is not so difficult is to show that all the ways of explaining it scientifically proposed so far won't work, and that certain classes of explanation cannot ever work.

Apart from a short break during the reign of Behaviourism, thinkers have been speculating about the causes of consciousness for centuries. (Much of the best work was done in the 19th century.) I feel that if a scientific explanation were possible then by now someone would have come up with something at least workable in principle.

It is easy to forget that so far we do not have an in-principle explanation for the existence of matter. Nobody has yet ventured a book called 'Matter Explained'.
 
  • #40
Originally posted by Fliption
Then a hurricane is a poor analogy. The word "hurricane" is defined by humans as the very physical processes that you used to describe it, i.e. very powerful, swirling winds. So it is correct to say that a hurricane doesn't arise from these winds. It is the winds. By definition this is true. However, this is not the case with consciousness. Consciousness as it is being discussed here has not been defined as any physical process. It is defined as "what it is like to be". It is defined this way because this is what needs to be explained. So to simply equate consciousness to some physical process the way you do a hurricane leaves an explanatory gap. How do you get from brain processes to "what it's like to be"?

I was trying to show the difference between hurricanes and consciousness earlier, and my last sentence threw you off the point entirely. That point was on the idea you posted.

Fliption, you make a very interesting point. You say that consciousness is defined as "what it is like to be"...perhaps that's the problem. Doesn't that make the assumption that it really is like something to be you, and that you don't just think it is? IOW (and I'll probably start a thread on this), doesn't this definition pre-suppose that there is a central self and that it's like something to be that central self?

The second assumption is perfectly sound, but the first one is up for a lot of debate.

(btw, the reason it must pre-suppose a central, indivisible, self is that, if there were no such "self", but were instead mere processes of the brain that could - with the right amount of complexity in computation - produce the illusion of "me" and the illusion that it is like something to be "me", then reductionism is possible...if, however, there is a central "self", and that "self" is sentient, then it is like something to be that one, indivisible, being, and that may not be reductively explainable).
 
  • #41
Originally posted by hypnagogue
Now, you propose a similar bridge principle for consciousness. You say that we can coherently connect conscious experience with physical neurons by just saying that subjective experience is the activity of the neurons.

Why does this bridge principle not work? Well, imagine that we are not concerned with consciousness at all, but rather that we start off with a description of a set of atoms and molecules (from what turns out to be a brain) and we want to work out what this set of microscopic phenomena will 'look like' on the macroscopic scale. So we do a bunch of calculations using the laws of physics and, lo and behold, we have derived something that, on a macroscopic scale, looks exactly like a brain, but looks nothing like subjective experience.

Kudos on this post, Hypna. My hat's off to you.

There is (of course) one thing I don't get: Do we know what a subjective experience looks like? And, if we do, do we know what it looks like from all perspectives? Really, what does a hurricane feel like to a hurricane? This question is, obviously, a non sequitur, since the hurricane is not conscious, and thus that extra perspective doesn't exist. However, it does exist in the case of brains. If we can work up from the molecular scale, and produce a functioning brain, then, to that brain, it will indeed "feel like" consciousness. But it will only feel like consciousness from that perspective.

Do you see what I'm getting at?
 
  • #42
Originally posted by Mentat
There is (of course) one thing I don't get: Do we know what a subjective experience looks like? And, if we do, do we know what it looks like from all perspectives? Really, what does a hurricane feel like to a hurricane? This question is, obviously, a non sequitur, since the hurricane is not conscious, and thus that extra perspective doesn't exist. However, it does exist in the case of brains. If we can work up from the molecular scale, and produce a functioning brain, then, to that brain, it will indeed "feel like" consciousness. But it will only feel like consciousness from that perspective.

Do you see what I'm getting at?

Yes, I do, and it is certainly a reflection of the central problem. We have no knowledge of subjective experience strictly from objective observation. What we do have is individual knowledge of subjective experience coming directly from our own first hand perceptions of it, and from this we can try to infer associations between objective observations (such as listening to another human's verbal reports) and subjective experience, based on the assumption that this person is indeed conscious in the same general sense that we are. But this approach entirely presupposes our own firsthand knowledge of subjective experience, since just going by a purely objective, 3rd person approach there should be no reason to believe or even postulate such a thing as subjective experience in the first place.

Of course, this is a massive problem for any attempt at a scientific (objective) account of consciousness. It seems to strongly suggest that there is some component contributing to the existence of consciousness that we cannot objectively observe.

I think the best we can do at present is to guess at the nature of this unobservable component, using observations of brain activity in conjunction with assumptions of the validity of verbal reports and the like, to infer what subjective experience does and does not 'look like' from the 3rd person perspective of observing a human brain. This approach entirely acknowledges that there is nothing in our current 3rd person understanding of physical reality that can suggest, a priori, that system A is conscious but B is not, but rather that our key intuition about the existence and nature of subjective experience comes from our own personal, first-hand knowledge of it. Rather than try to artificially break our epistemic limitations when we try to understand and explain consciousness, we should work within them and use them to structure our approach. That is, rather than try to write off the explanatory gap by supposing that consciousness is entirely explainable via physical reductionism, we should acknowledge that the explanatory gap reflects a deep and significant fact about the nature of reality, and then very carefully go about fleshing out exactly what that deep and significant fact might be.
 
  • #43
Hypno

I couldn't possibly agree with you more. It mystifies me why this approach is so rarely explored. It leads straight to the solution. Perhaps it is too tainted with Buddhism for most academics. But it has to be the only way forward.
 
  • #44
Originally posted by Mentat
with the right amount of complexity in computation - produce the illusion of "me" and the illusion that it is like something to be "me", then reductionism is possible...if, however, there is a central "self", and that "self" is sentient, then it is like something to be that one, indivisible, being, and that may not be reductively explainable).

Who is experiencing the illusion?
 
  • #45
Originally posted by Mentat
(btw, the reason it must pre-suppose a central, indivisible, self is that, if there were no such "self", but were instead mere processes of the brain that could - with the right amount of complexity in computation - produce the illusion of "me" and the illusion that it is like something to be "me", then reductionism is possible...if, however, there is a central "self", and that "self" is sentient, then it is like something to be that one, indivisible, being, and that may not be reductively explainable).
Not quite sure what you mean but this does not seem to be quite right.

The reason that 'what it is like' is so widely accepted as a definition, and can be accepted by Buddhists etc., is that it does not entail the existence of a 'self'. In most (all?) idealist accounts of consciousness 'self' is an illusion (or evolved epiphenomenon). Thus 'self' is not necessary to the existence of consciousness or experience.

In other words, 'what it is like' entails the existence of an experience, but (at the limit) it does not entail the existence of a self that is apart from the experience. Consciousness can exist in a state of selflessness.

Thus Buddhists, who in one way claim that the universe arises from consciousness, also sometimes assert that consciousness does not exist. There are subtle differences in these two applications of the term 'consciousness', related directly to the treatment of 'self'.
 
  • #46
Originally posted by Jeebus
Good post, hypna.

I have a question relating to subjective experience. Since axiomatic principles would not reductively explain the certain instances of subjective experience … and why the neurons do what they do in general. Do you think it is possible that Evolutionary Psychology could someday explain how the fundamental, algorithmic co-option of neurons and subjective subconscious experience are correlated and directly parallel to each other? Or is (EP) not a matter of human-based subconsciousness in the realm of subjectivity?

I think evolutionary psychology's stance on consciousness will follow from neurobiology and philosophy, not the other way around. After learning more about the nature of consciousness and what functions it serves, we would be able to make better sense of how it is evolutionarily advantageous. But conjecturing how consciousness might be evolutionarily advantageous without getting a better handle on it in its own right would seem to be the wrong approach, perhaps even question begging.

However, analyzing in depth the ontogeny of the human brain and its relationship with the development of consciousness could lead to some fruitful results, and I suppose that has a bit of EP flavor to it.
 
  • #47
Originally posted by Fliption
Who is experiencing the illusion?

No one in particular. As explained in another thread, the many sub-experiences (I just really like that term, for some reason) - which are the basic computations of incoming information by the brain - do not ever produce a complete, final draft, but they do process the illusion that there is such a thing, so that it looks like it in retrospect. I think this is actually a very useful tool for the brain to have developed, as it allows for the compactification of lots of information.
 
  • #48
Originally posted by Canute
Not quite sure what you mean but this does not seem to be quite right.

The reason that 'what it is like' is so widely accepted as a definition, and can be accepted by Buddhists etc., is that it does not entail the existence of a 'self'. In most (all?) idealist accounts of consciousness 'self' is an illusion (or evolved epiphenomenon). Thus 'self' is not necessary to the existence of consciousness or experience.

In other words, 'what it is like' entails the existence of an experience, but (at the limit) it does not entail the existence of a self that is apart from the experience. Consciousness can exist in a state of selflessness.

Thus Buddhists, who in one way claim that the universe arises from consciousness, also sometimes assert that consciousness does not exist. There are subtle differences in these two applications of the term 'consciousness', related directly to the treatment of 'self'.

I see what you are getting at, but the central "illusion" is not really of the self, but of the idea that there is a coherent gestalt arising from those little sub-experiences. The reason I mention the self is because, when one introspects on what it is like to just "be 'me'", one is succumbing to the same illusion as always, but in this case it is in reference to one's very "self". And, since it is often said that consciousness = a state in which it is "like something" to be "me", it appeared intrinsically related.
 
  • #49
Originally posted by Mentat
I see what you are getting at, but the central "illusion" is not really of the self, but of the idea that there is a coherent gestalt arising from those little sub-experiences. The reason I mention the self is because, when one introspects on what it is like to just "be 'me'", one is succumbing to the same illusion as always, but in this case it is in reference to one's very "self". And, since it is often said that consciousness = a state in which it is "like something" to be "me", it appeared intrinsically related.
This is a confusing issue. You're right that when one introspects, then in a way one is succumbing to the illusion of 'me'. 'Me' is where all introspection has to start. But when one introspects sufficiently successfully you find 'me' isn't really there, but somehow 'your' experience still is. I haven't got too far with this, but far enough to believe it.

This is why I feel in discussions of consciousness it's very important to distinguish between mind and consciousness, or at least be careful about how they are defined. Mind and brain may both arise from some fundamental state of consciousness.

(Is anyone else having trouble here? Posts keep disappearing. This post of Mentat's that I quoted isn't there any more, nor is the rest of that page. Is it just me?)
 
  • #50
Originally posted by Canute
This is a confusing issue. You're right that when one introspects, then in a way one is succumbing to the illusion of 'me'. 'Me' is where all introspection has to start. But when one introspects sufficiently successfully you find 'me' isn't really there, but somehow 'your' experience still is. I haven't got too far with this, but far enough to believe it.

What does that mean? I don't mean to be offensive, I'm just confused at how clear something has to be to a person before they "believe" it.

Anyway, if you can accept such an outlook - wherein there is no "self" and no final "draft" of "experience", but merely a collection of "sub-experiences" - then what more do you want from a theory of consciousness than that which Calvin, LeDoux, Edelman and Tononi, and Dennett have proposed?

This is why I feel in discussions of consciousness it's very important to distinguish between mind and consciousness, or at least be careful about how they are defined. Mind and brain may both arise from some fundamental state of consciousness.

(Is anyone else having trouble here? Posts keep disappearing. This post of Mentat's that I quoted isn't there any more, nor is the rest of that page. Is it just me?)

I can still see my post there...has this happened to you on other threads?
 
  • #51
Originally posted by Mentat
What does that mean? I don't mean to be offensive, I'm just confused at how clear something has to be to a person before they "believe" it.
Yes, I didn't put it very well. What I meant was that it makes rational and reasonable sense, but that on top of that my experience confirms it. It is not provable so it has to be experience that decides it in the end.

Anyway, if you can accept such an outlook - wherein there is no "self" and no final "draft" of "experience", but merely a collection of "sub-experiences" - then what more do you want from a theory of consciousness than that which Calvin, LeDoux, Edelman and Tononi, and Dennett have proposed?
But that doesn't follow. It is widely agreed that the writers you mention do not explain consciousness. I certainly agree that they don't.

Also saying that self is an illusion is not at all the same as saying that consciousness is an illusion. (That was my point).

I can still see my post there...has this happened to you on other threads?
Yes it has, it's driving me nuts. I keep losing track of discussions and posting out of sync. I think it's something to do with pages not updating but I can't pin it down.
 
  • #52
Originally posted by Canute
But that doesn't follow. It is widely agreed that the writers you mention do not explain consciousness. I certainly agree that they don't.

They don't explain how all of the information-processing of the brain sums up to a Final Draft of conscious experience, if that's what you mean...but they are not trying to. They have each shown, in their own way, that such a Final Draft is never really produced, but is an illusion (a "trick" that the brain plays on itself, in Dennett's terms) which is processed right along with the rest of the information. This is useful (for compactification, long-term memorization, and recall...as well as for the evolution of sentience) but misleading (in philosophical discussion, one can take the illusion of compactification to be the real thing, and can spend eternity trying to explain how the Final Draft can "arise" from information-processing, but will never find the answer, since the argument is based on a faulty premise).

Also saying that self is an illusion is not at all the same as saying that consciousness is an illusion. (That was my point).

Well, that's true.
 
  • #53
Mentat

I must admit I don't really understand your point about final drafts. However I'm not sure how it's relevant. 'Final drafts' is a term from Dennett that may or may not have some relevance to consciousness.

But heterophenomenology, the theory behind the term, cannot work as an explanation of consciousness since it excludes what we normally call consciousness, as has been pointed out by many of Dennett's colleagues, notably Stevan Harnad.

So to prove that the term 'final drafts' has any meaning in relation to consciousness one would first have to meet the well-rehearsed and so far wholly unanswered objections of just about everyone who isn't Daniel Dennett.

Somewhere online is an email discussion between Harnad and Dennett on this issue which pretty much settles the matter, but I've lost it. A search on the names together might uncover it.

I think you should change your mind. It really just cannot ever make sense to argue that consciousness is anything other than exactly what it appears to be, to you (or me), and what it has been like at other times, in your own experience and in your own words, as best you can tell or remember. If you are anything like me then what it seems to be like is a completely unified experience of what it is like to be conscious as me at this moment, and what it has been like in other remembered moments. That is what consciousness is; it is what we're supposed to be explaining.

This seems so completely obvious that I cannot understand how anyone could argue otherwise. I don't mean you, I mean the thousands of professional academics who get paid to think clearly and deeply about these issues and who agree with you. The arguments go back and forth endlessly in the literature.

Half of these tenured academics and professional researchers writing about consciousness seem to be off their rockers to me, but perhaps I'm off mine. However, I'm suspicious. The problem of consciousness has turned into a goldmine for academic philosophers and many others in the research and publication industry; I sometimes wonder if they're really trying to solve it.
 
  • #54
Originally posted by Canute
I think you should change your mind. It really just cannot ever make sense to argue that consciousness is anything other than exactly what it appears to be, to you (or me), and what it has been like at other times, in your own experience and in your own words, as best you can tell or remember. If you are anything like me, then what it seems to be like is a completely unified experience of what it is like to be conscious as me at this moment, and what it has been like in other remembered moments. That is what consciousness is; that is what we're supposed to be explaining.

That is certainly what it feels like. But then, to quote FZ+, "If reality were exactly the way it seemed, we wouldn't need science at all".

BTW, I want to clarify here and now that I don't believe any of the things I've stated about consciousness to be necessarily true. I just don't think they are necessarily false either, and I'm preferring them over the alternative specifically because it contradicts the belief that "seems obvious".

This seems so completely obvious that I cannot understand how anyone could argue otherwise. I don't mean you, I mean the thousands of professional academics who get paid to think clearly and deeply about these issues and who agree with you. The arguments go back and forth endlessly in the literature.

Half of these tenured academics and professional researchers writing about consciousness seem to be off their rockers to me, but perhaps I'm off mine. However, I'm suspicious. The problem of consciousness has turned into a goldmine for academic philosophers and many others in the research and publication industry; I sometimes wonder if they're really trying to solve it.

I sometimes wonder if they are trying to solve the wrong question. The point, IMO, is to understand how a being is conscious and/or sentient. It is not to answer the "hard problem", since that is a compilation of postulates that needn't exist at all for sentience/consciousness to exist.
 
  • #55
Originally posted by Mentat
That is certainly what it feels like. But then, to quote FZ+, "If reality were exactly the way it seemed, we wouldn't need science at all".
I have to disagree. If we didn't study the way reality seems then science wouldn't exist, as how things seem is all that science can study.

BTW, I want to clarify here and now that I don't believe any of the things I've stated about consciousness to be necessarily true. I just don't think they are necessarily false either, and I'm preferring them over the alternative specifically because it contradicts the belief that "seems obvious".
Fair enough. But obviousness is nevertheless a useful guide to the truth.

I sometimes wonder if they are trying to solve the wrong question.
Yeah, that as well.

The point, IMO, is to understand how a being is conscious and/or sentient. It is not to answer the "hard problem", since that is a compilation of postulates that needn't exist at all for sentience/consciousness to exist.
But the hard problem is indistinguishable from the problem of understanding how a being can be conscious and/or sentient. That is the hard problem.
 
  • #56
Originally posted by Canute
I have to disagree. If we didn't study the way reality seems then science wouldn't exist, as how things seem is all that science can study.

That cannot be the case. Quantum Mechanics, just for one example, does not study how things seem, but studies how they are regardless of the fact that they actually seem to be the exact opposite of what QM has shown.

Fair enough. But obviousness is nevertheless a useful guide to the truth.

Maybe.

Yeah, that as well.

I just wish more people would ponder that (as, I believe, Dennett has - and has arrived at the conclusion that they are indeed asking the wrong question).

But the hard problem is indistinguishable from the problem of understanding how a being can be conscious and/or sentient. That is the hard problem.

That's not exactly how it was presented to me. The "hard problem" as I've seen it described is the problem of showing how certain physical functions can produce consciousness.

If one can avoid the problem of how something "produces" consciousness, by showing that consciousness is synonymous with those physical functions, then one will make the "hard problem" moot while consciousness is still being examined.
 
  • #57
Originally posted by Mentat
That cannot be the case. Quantum Mechanics, just for one example, does not study how things seem, but studies how they are regardless of the fact that they actually seem to be the exact opposite of what QM has shown.

In the double slit experiments, it seems (appears) that light can either accumulate on one specific point of a barrier or be dispersed across this barrier. From these appearances we infer certain properties of light that might contradict our usual notions of how light seems to be. The point is that science is based on observation. Our knowledge of reality stemming from observation is by definition mediated, not direct, and in this sense it is built entirely from appearances. (Appearances here does not necessarily mean how things literally appear in subjective experience.)

That's not exactly how it was presented to me. The "hard problem" as I've seen it described is the problem of showing how certain physical functions can produce consciousness.

If one can avoid the problem of how something "produces" consciousness, by showing that consciousness is synonymous with those physical functions, then one will make the "hard problem" moot while consciousness is still being examined.

You really need to circumvent this objection of yours. It amounts to a strawman. Think in terms of 'accounting for,' 'making intelligible how,' or whatever-- not 'produces' or 'gives rise to.' The hard problem is an epistemic problem relating to how we can know or understand the processes which underlie the phenomenon of consciousness, not a problem of explaining literal 'products.'
 
  • #58
Originally posted by Mentat
That cannot be the case. Quantum Mechanics, just for one example, does not study how things seem, but studies how they are regardless of the fact that they actually seem to be the exact opposite of what QM has shown.
I don't want to be brutal, but it absolutely and certainly is the case, and no scientist or philosopher has ever disagreed. And what hypnagogue said. This is why it is said that this is a world of appearances, in which we cannot know the essence of things.

I just wish more people would ponder that (as, I believe, Dennett has - and has arrived at the conclusion that they are indeed asking the wrong question).
IMHO all metaphysical questions without exception are the wrong questions. But that would take all day to explain.

That's not exactly how it was presented to me. The "hard problem" as I've seen it described is the problem of showing how certain physical functions can produce consciousness.
Isn't this exactly the same as the problem of how "to understand how a being is conscious and/or sentient" as you put it?

If one can avoid the problem of how something "produces" consciousness, by showing that consciousness is synonymous with those physical functions, then one will make the "hard problem" moot while consciousness is still being examined.
If you can do that you will be an international academic superstar overnight. Great minds have been trying for decades, if not centuries, perhaps even millennia.

Your arguments are all dealt with in full in the literature if you want to get a more trustworthy refutation of them.
 
  • #59
Originally posted by hypnagogue
In the double slit experiments, it seems (appears) that light can either accumulate on one specific point of a barrier or be dispersed across this barrier. From these appearances we infer certain properties of light that might contradict our usual notions of how light seems to be. The point is that science is based on observation. Our knowledge of reality stemming from observation is by definition mediated, not direct, and in this sense it is built entirely from appearances. (Appearances here does not necessarily mean how things literally appear in subjective experience.)

And so what does "appearance" mean when referring to subjective experience itself? What if your subjective experience appears one way that it really isn't? (And don't say that it can't be, because it is the experience, since it is still perfectly logical to say that what "appears" to be a complete subjective thought is really a collection of "simple" thoughts, each being identical to an impression: a reductively explainable phenomenon with no synergistic reality.)

You really need to circumvent this objection of yours. It amounts to a strawman. Think in terms of 'accounting for,' 'making intelligible how,' or whatever-- not 'produces' or 'gives rise to.' The hard problem is an epistemic problem relating to how we can know or understand the processes which underlie the phenomenon of consciousness, not a problem of explaining literal 'products.'

And the "hard problem" itself, IMO, amounts to a strawman. What is it, exactly that you are trying to explain? You haven't defined subjective experience in any logical terms, you have only stated that you definitely have "it".

The "hard problem" is based on the assumption that there is a subjective experience, but I still haven't seen that term defined.
 
  • #60
Originally posted by Canute
Isn't this exactly the same as the problem of how "to understand how a being is conscious and/or sentient" as you put it?

Nearly. I just don't think that the subjective experience and the physical function are at all different from each other, which precludes the drawing of a bridge between the two, since there is only one thing to explain.

The hard problem lacks substance, IMO, simply because it raises straw-men at every turn. For example, if I say that I see a red ball, the "hard problem" philosophers will ask "What is the relationship between the stimulation of your visual cortex and the experience of a 'red ball'?". But they are asking a moot and empty question. My question in return is: "What exactly do you people expect to happen when a visual cortex processes a certain wavelength of light? How do you separate the processing of that wavelength from the experience of the color, when the experience of the color is the only method available to a visual cortex to process that wavelength?"
 
  • #61
Originally posted by Mentat
"What exactly do you people expect to happen when a visual cortex processes a certain wavelength of light? How do you separate the processing of that wavelength from the experience of the color, when the experience of the color is the only method available to a visual cortex to process that wavelength?"

Are you saying that computers that are programmed to distinguish colors are "experiencing colors"? Or are they really just mathematically comparing the wavelengths themselves?
 
  • #62
Originally posted by Fliption
Are you saying that computers that are programmed to distinguish colors are "experiencing colors"? Or are they really just mathematically comparing the wavelengths themselves?

What's the difference?
 
  • #63
Originally posted by Mentat
What's the difference?

You don't understand the difference between experiencing red versus green and the act of comparing two numbers?
 
  • #64
Originally posted by Fliption
You don't understand the difference between experiencing red versus green and the act of comparing two numbers?

Well, if one were to drop the "mathematically" part of your question, and just say that the computer is comparing two wavelengths of light...then what is the difference?
 
  • #65
Originally posted by Mentat
Well, if one were to minus the "mathematically" part of your question, and just say that the computer is comparing two wavelengths of light...then what is the difference?

Hmmm, but it is a crucial word. If you were charged with programming a computer to distinguish colors, how would you do it? You would probably have some sort of hardware that measured the wavelength; then the computer would compare the wavelengths mathematically, and even label them by looking them up in a table where each wavelength is associated with a color. Where does experiencing the color come in? The only way I know of to write this program is to use math.

This is sort of like explaining colors to a blind man. He can do math on wavelengths but he is not experiencing colors.
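Fliption's hypothetical color-labeling program can be sketched in a few lines. This is a minimal illustration of the table-lookup idea being described, not a claim about how any real vision system works; the function name and the nanometer boundaries are invented for the example (the bands are only approximate visible-spectrum values):

```python
# Minimal sketch of the color-labeling program Fliption describes:
# measure a wavelength, then map it to a color name via a lookup table.
# The band boundaries below are approximate, for illustration only.

COLOR_TABLE = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def label_wavelength(nm):
    """Return a color name for a wavelength in nanometers, or None if outside the visible bands."""
    for lo, hi, name in COLOR_TABLE:
        if lo <= nm < hi:
            return name
    return None

print(label_wavelength(650))  # a wavelength in the "red" band
print(label_wavelength(520))  # a wavelength in the "green" band
```

Note that the program only ever compares numbers against a table; nothing in it corresponds to "experiencing red", which is exactly the gap Fliption is pointing at.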
 
  • #66
Imagine a robot with two brain halves, both trying to control the same body and both trying to do what serves them best.

And also, an independent part of the body downloads texts from the internet, and another transforms the texts into simple movies shown to the robot, so that the robot can learn without doing very much.
 
  • #67
Originally posted by Fliption
Where does experiencing the color come in? The only way I know of that this program can be written is to use math.

This is sort of like explaining colors to a blind man. He can do math on wavelengths but he is not experiencing colors.
To have "experience" you will need to build a history (including values and goals). A goal can be: get as much energy as you can, or, if possible, try to expand your processor power. You may include in your program the possibility that the computer can look for certain wavelengths on its own initiative. If you program the computer so that it can activate an extra processor or get extra energy when it locates "three red cards or dots", then it's possible that it looks for red wavelengths first, because you gave it a goal or target. Run the program and see what happens. Make the computer remember its previous experience. Tests like these have been done, in relation to robots (looking for the power line) and in simulations of virtual populations and AI.

So what we see is that some values are coupled or associated, and judged to have a higher or lower priority level. The program gets more dimensions. Therefore we must include multiple dimensions and priority levels when we think about consciousness.

The blind man cannot experience colors with his eyes, but he can be motivated by telling him that for every red card he gets $10, and by telling him that some people can feel colors with their fingers (which, by the way, is true).
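The goal-driven program described above can be sketched as a toy agent that scans wavelengths, counts red detections, remembers its history, and gains "energy" on reaching its target. Every name, threshold, and reward value here is invented purely for illustration; it shows the mechanics being proposed, not a real experiment:

```python
# Toy sketch of the goal-driven agent described above: it senses wavelengths,
# counts "red" detections, records a history of experience, and is rewarded
# with extra energy when it locates three reds. All values are illustrative.

RED_BAND = (620, 750)  # approximate red wavelengths, in nanometers

class Agent:
    def __init__(self, goal_reds=3):
        self.energy = 10
        self.reds_found = 0
        self.goal_reds = goal_reds
        self.history = []  # the agent's remembered "experience"

    def sense(self, wavelength_nm):
        is_red = RED_BAND[0] <= wavelength_nm <= RED_BAND[1]
        self.history.append((wavelength_nm, is_red))
        if is_red:
            self.reds_found += 1
            if self.reds_found >= self.goal_reds:
                self.energy += 5      # goal met: grant extra energy
                self.reds_found = 0   # reset and keep looking

agent = Agent()
for nm in [400, 650, 700, 630, 500]:  # three red wavelengths in the stream
    agent.sense(nm)
print(agent.energy)  # reward was granted after the third red detection
```

As with the earlier lookup-table example, the agent has goals, memory, and priorities, yet each step is still just arithmetic and bookkeeping, which is what keeps the dispute in this thread alive.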
 
  • #68
Originally posted by Mentat
Nearly. I just don't think that the subjective experience and the physical function are at all different from each other, which precludes the drawing of a bridge between the two, since there is only one thing to explain.
Only nothing to explain, I think you mean? If consciousness is brain then there is nothing to talk about or explain.

The hard problem lacks substance, IMO, simply because it raises straw-men at every turn. For example, if I say that I see a red ball, the "hard problem" philosophers will ask "What is the relationship between the stimulation of your visual cortex and the experience of a 'red ball'?".
Quite right, and scientists ask exactly the same question. I think you may have misinterpreted the scientific view on this issue, confusing it with Dennett's.

But they are asking a moot and empty question. My question in return is: "What exactly do you people expect to happen when a visual cortex processes a certain wavelength of light? How do you separate the processing of that wavelength from the experience of the color, when the experience of the color is the only method available to a visual cortex to process that wavelength?"
So what exactly is this 'experience of colour' you are referring to here? Surely you meant to say 'brain state'. Henceforth 'experience' is a word you cannot use because you say it does not exist, it is identical to brain state. You can't have it both ways.
 
  • #69
Originally posted by Fliption
Hmmm, but it is a crucial word. If you were charged with programming a computer to distinguish colors, how would you do it? You would probably have some sort of hardware that measured the wavelength and then the computer would compare the wavelength's mathematically and even label them by comparing them to a table where each wavelength is associated with a color. Where does experiencing the color come in? The only way I know of that this program can be written is to use math.

And yet it is considered an "easy problem" (explainable in principle) to devise a machine (like a visual cortex) which distinguishes between wavelengths of light without the ability to classify them mathematically. Besides, what's wrong with saying that they strike with a different amount of force, and thus stimulate differently on that basis? (After all, wavelength is inversely proportional to frequency, which is directly proportional to energy.)
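The physical relation Mentat appeals to is the standard Planck relation: frequency is f = c / λ, and photon energy is E = h·f, so shorter wavelengths carry more energy per photon. A quick check (the function name is ours; the constants are the usual Planck constant and speed of light):

```python
# Mentat's physical point, made concrete: wavelength is inversely
# proportional to frequency (f = c / lambda), and frequency is directly
# proportional to photon energy (E = h * f), so E = h * c / lambda.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_m):
    """Energy of a single photon of the given wavelength, in joules."""
    return H * C / wavelength_m

red = photon_energy_joules(650e-9)    # a red wavelength
green = photon_energy_joules(520e-9)  # a green wavelength
print(green > red)  # green photons are more energetic than red ones -> True
```

So a detector really could, in principle, sort colors by the energy each photon delivers rather than by explicit numerical comparison, which is the substance of Mentat's "different amount of force" suggestion.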
 
  • #70
Originally posted by Canute
Only nothing to explain, I think you mean? If consciousness is brain then there is nothing to talk about or explain.

1) Consciousness is not the brain, it is a process of the brain.
2) The brain is not completely understood, by any stretch of the imagination (kind of a failed attempt at a pun), so there is plenty to explain.

Quite right, and scientists ask exactly the same question. I think you may have misinterpreted the scientific view on this issue, confusing it with Dennett's.

Scientists would not ask the question if subjective experience of a red ball were already defined as synonymous with such-and-such stimulation of the visual cortex. It's all about defining the term, and the Chalmereans haven't even done that yet (AFAIK), which leaves them right back at square one.

So what exactly is this 'experience of colour' you are referring to here? Surely you meant to say 'brain state'. Henceforth 'experience' is a word you cannot use because you say it does not exist, it is identical to brain state. You can't have it both ways.

It is identical to a particular brain state...wait, scratch that, it's identical to a particular brain process. If I can say "produce hormones" I can say "subjectively experience"...besides, I've at least defined the term. Can you say that for all the philosophers who challenge Dennett?
 