Royce's Theorem: Intelligence Is Not Computational

That's where the practical implications come in. If computing power can become "intelligent," then the line between human and machine is blurred to the point where intelligence (or consciousness) might be said to be inherent in the machines themselves. In summary, Penrose argues that consciousness cannot be computational even in principle, and that if a process is completely computational, the process is not intelligent and does not contain or require intelligence to perform or complete. He also suggests that if a process appears to contain intelligence, it is the intelligence of the inventor, designer, or programmer of the computational algorithm, not the intelligence of the process itself. He argues that if this is the case, then any attempt to create consciousness through computer programming is doomed.
  • #36
You say consciousness has not been "absolutely tied" to brain functioning. Have you any surveyable evidence of consciousness WITHOUT brain functioning? If you claim there is, or even might be, something like that then the burden of proof is surely on you!

Alternatively, brain functioning is a necessary prerequisite of consciousness, and therefore to be conscious requires that your brain function, and hence that your body burn energy. Thus "to be conscious" necessarily entails "to process energy," whatever philosophical categories you construct.
 
  • #37
StatusX said:
Evolution, a purely physical process, could do it. Nothing else that evolution has done, short of creating the first instance of life itself, is doubted to be purely physical. So why not intelligence, and to go a little farther, why not consciousness?

When you say the word "evolution" you are simply equating it to whatever process generated human intelligence, so it isn't saying much to turn around and claim that therefore evolution has created human intelligence. It is true by definition. The trick comes in when you then equate this definition of "evolution" (whatever process created humans) with the current scientific explanation for the existence of humans, and the result is "evolution is a completely physical process".

Since the word "physical" means absolutely nothing to me, this conclusion doesn't either. But I do understand the intent. The intent is to claim that we understand almost everything there is to understand about how humans arrived. But as long as there are serious philosophical issues around consciousness I would expect there to always be conflicting theories. To simply gloss over the problems of consciousness because it is a result of what is known to be "a physical process" of evolution is simply the same as assuming your conclusion.

And "why not consciousness"? Because, as Royce is saying, there are serious issues suggesting that consciousness cannot be the result of complexity. If people here were claiming that consciousness cannot be created by humans because it is too complex then I would be agreeing with you. But that's not what is being stated here.


That it isn't currently explained is indisputable. That it can't be duplicated by a machine, even in principle, is much less certain, and presumes you know something about consciousness that you have already claimed isn't currently known.

What Royce is trying to say is that there are serious reasons to believe that, in principle, consciousness cannot be a creation of computative operations and therefore be reductively explained. The correlation between brains and consciousness is indisputable. But this correlation tells us nothing about the relationship. Perhaps, given the presence of certain processes, a fundamental element of reality called consciousness manifests itself in a different way? So it may be conceivable that man could actually create a conscious machine. But this is very different from claiming that the computations themselves actually create consciousness. In this case, it would be like claiming that your internal plumbing actually creates the water instead of simply allowing it to flow into your home. Or that your radio is actually creating that unusual vocal sound.
 
  • #38
selfAdjoint said:
Alternatively, brain functioning is a necessary prerequisite of consciousness, and therefore to be conscious requires that your brain function, and hence that your body burn energy.

This statement and the one in your previous post are misleading. The discussion here is whether consciousness can be created by a reductive computative process. As I said in my previous post to StatusX, all we know is that there is a correlation between brains and consciousness. We don't know the nature of the relationships in the correlation (which is what Royce is saying, I believe). So your very first sentence is misleading. The way it's worded, it implies that consciousness is created by brain functions. A radio is a prerequisite for you to hear your favorite song, but that does not mean the creation of that song is dependent on the wiring of your radio. This implication is equating the existence of the song with the existence of the actual "play" event on the radio, even though you have no clue how in the world that box of wires can compose such a sound. And there is reason to believe that, in principle, it could not possibly do such a thing.

Thus "to be conscious" necessarily entails "to process energy" whatever philosophical categories you costruct

If I need gasoline to drive to my grandparents' house so that I can split wood, then we can say that gasoline entails having split wood. Would there be an implication here that gasoline somehow contributes some functional explanation as to how the process of splitting wood takes place? I would hope not, considering that gasoline has no purpose at all in the actual process of splitting wood. The same could be true for consciousness. Hey, we could just simply say that the big bang entails everything and leave it at that! No more explanations required!:-p

These just seem like semantic debates which miss much of the point of the original post.
 
  • #39
Royce said:
If a process is completely computational, the process is not intelligent and does not contain or take intelligence to perform or complete.

Royce, it seems to me that much of the debate here has been fueled by the use of the word "intelligence". It depends on how we define this word, but I would have initially pushed back on the idea that intelligence cannot be generated by computations.

So I'm not sure using the word is very useful. It doesn't seem we need it. There are serious philosophical issues around consciousness, and any philosophical issues that you can cite about "intelligence" seem to arise because of its dependence on consciousness. So why treat this word as if it's a different issue? Am I misunderstanding? Are there other things which you think cannot be generated by computations that aren't covered in the concept of consciousness?
 
  • #40
Fliption said:
This statement and the one in your previous post are misleading. The discussion here is whether consciousness can be created by a reductive computative process. As I said in my previous post to StatusX, all we know is that there is a correlation between brains and consciousness. We don't know the nature of the relationships in the correlation (which is what Royce is saying, I believe). So your very first sentence is misleading. The way it's worded, it implies that consciousness is created by brain functions. A radio is a prerequisite for you to hear your favorite song, but that does not mean the creation of that song is dependent on the wiring of your radio. This implication is equating the existence of the song with the existence of the actual "play" event on the radio, even though you have no clue how in the world that box of wires can compose such a sound. And there is reason to believe that, in principle, it could not possibly do such a thing.



If I need gasoline to drive to my grandparents' house so that I can split wood, then we can say that gasoline entails having split wood. Would there be an implication here that gasoline somehow contributes some functional explanation as to how the process of splitting wood takes place? I would hope not, considering that gasoline has no purpose at all in the actual process of splitting wood. The same could be true for consciousness. Hey, we could just simply say that the big bang entails everything and leave it at that! No more explanations required!:-p

These just seem like semantic debates which miss much of the point of the original post.


You could go to the concert or the station and hear the song without the radio. You cannot separate consciousness from the brain - or at least nobody has any evidence that you can. And your wood chopping example is exactly backwards; you could use the gas for some other chore, but in your example you could not get to the wood chopping chore without driving there. Review necessary and sufficient conditions.

In both examples it is you who are misleading, because you bring up situations where two things can be easily separated and liken them to consciousness and the brain, of which the outstanding fact is that they cannot be separated.
 
  • #41
Royce said:
I knew that introducing subjective experience at this point was a mistake.
I am trying simply to take it one step at a time; but that doesn't seem to be where this thread is going. IMHO all experience is ultimately subjective, as the actual experience happens in the mind/brain. I differentiated it because there is a difference between the physical, awake experience of life and the purely mental experience of dreams, imagination and meditation. As far as Chalmers is concerned, I liked his argument of subtracting the smallest possible part from a conscious entity until it becomes no longer conscious, to determine the point or minimum complexity needed for consciousness. This shows, according to him, the absurdity of consciousness being nothing more than a matter of complexity. Unless of course I have completely misread that passage.

Well like I said earlier, all your theorem reduces to is a definition of the ambiguous word intelligence. So to keep this discussion productive, you'll need to define intelligence in some other way that this can be compared to. For example, do you believe true intelligence requires subjective experience? Personally, I think intelligence refers to a purely behavioral characteristic, and if there is a corresponding phenomenal aspect, that is something else.

Now I don't know that you agree with me that intelligence is behavioral, since you mentioned "intelligence/consciousness" in your earlier posts as if they were equivalent. They are not, and it is perfectly coherent to imagine a being which possesses one of these attributes but not the other. So I have two questions that should get to the bottom of this argument: Is consciousness necessary for the behavioral quality of intelligence? and Can non-humans (or non-animals) be conscious?

And by the way, I don't know what passage you are referring to, but from what I've read of Chalmers, you are misinterpreting him. He argues that experience is a fundamental property of the universe, like space or energy, and arises wherever there is an information processing system, such as a brain, a computer, or a thermostat. The complexity of the experience is in direct correlation to the complexity of the information processing, and there is no cut-off point below which consciousness vanishes.
 
  • #42
Fliption said:
When you say the word "evolution" you are simply equating it to whatever process generated human intelligence, so it isn't saying much to turn around and claim that therefore evolution has created human intelligence. It is true by definition. The trick comes in when you then equate this definition of "evolution" (whatever process created humans) with the current scientific explanation for the existence of humans, and the result is "evolution is a completely physical process".

Since the word "physical" means absolutely nothing to me, this conclusion doesn't either. But I do understand the intent. The intent is to claim that we understand almost everything there is to understand about how humans arrived. But as long as there are serious philosophical issues around consciousness I would expect there to always be conflicting theories. To simply gloss over the problems of consciousness because it is a result of what is known to be "a physical process" of evolution is simply the same as assuming your conclusion.

I don't think I'm assuming anything. I start with the fact that evolution seems to be a perfectly reasonable and supported hypothesis for the development of complexity in life. This process is understood conceptually, even if many specific properties of the organisms themselves have not been exhaustively accounted for. But most importantly, the theory requires no extra ingredient; presumably, a universe in which the laws of physics as we understand them today are all there is would still develop advanced life. I used this to argue that intelligence does not require an extra ingredient.

As for consciousness, this is more subtle. I don't disagree that the laws of physics as we know them do not account for consciousness. What I'm saying is this: The arrangement of atoms in our brain has given rise to experience. I cannot accept that there was some supernatural event during the formation of our brain that endowed it with a property that another identical collection of atoms would not have. Therefore, I agree with Chalmers that the right information processing system is automatically instilled with subjective experience.

And "why not consciousness"? Because, as Royce is saying, there are serious issues suggesting that consciousness cannot be the result of complexity. If people here were claiming that consciousness cannot be created by humans because it is too complex then I would be agreeing with you. But that's not what is being stated here.

I think we are getting off track. The discussion was about intelligence, and I believe that increasing the complexity of machines will eventually lead to human intelligence and beyond. But as I said in my last post, I agree with Chalmers that consciousness is fundamental, not a result of complexity.

What Royce is trying to say is that there are serious reasons to believe that, in principle, consciousness cannot be a creation of computative operations and therefore be reductively explained. The correlation between brains and consciousness is indisputable. But this correlation tells us nothing about the relationship. Perhaps, given the presence of certain processes, a fundamental element of reality called consciousness manifests itself in a different way? So it may be conceivable that man could actually create a conscious machine. But this is very different from claiming that the computations themselves actually create consciousness. In this case, it would be like claiming that your internal plumbing actually creates the water instead of simply allowing it to flow into your home. Or that your radio is actually creating that unusual vocal sound.

This seems to be an argument of semantics. Whether or not the processes have "created" consciousness is immaterial. If those processes are always accompanied by experience, there is probably some kind of natural law relating them, and interpreting this law as anything more than a correlation would be misleading.
 
  • #43
To me intelligence is a part of consciousness. It involves thinking, or better, reasoning, understanding and awareness, so it is surely a part of consciousness as far as human consciousness is concerned, and I would think the same of non-human animals that are clearly conscious, aware and to some degree intelligent. Intelligence is probably as hard to define as consciousness. Penrose proposed using the term genuinely intelligent or creatively intelligent to distinguish it from so-called intelligent machines, which is common usage but to him and to me is not the proper use of the word intelligent in this sense.

I think the ability to reason and understand, to think unique, original, creative thoughts (at least to the one doing the thinking) are all part of being intelligent, as well as seeing, understanding and making new connections, relationships, implications and ramifications beyond a new thought or idea. This of course leaves out a lot of people (but not as many animals) and of course includes especially blonds and liberals. :biggrin: (Sorry about that. I just couldn't resist the opportunity to take a jab.)

I'm not sure how you mean intelligence is behavioral unless you consider the above behavioral. I think of intelligence more as a quality or attribute.
 
  • #44
selfAdjoint said:
You could go to the concert or the station and hear the song without the radio. You cannot separate consciousness from the brain - or at least nobody has any evidence that you can. And your wood chopping example is exactly backwards; you could use the gas for some other chore, but in your example you could not get to the wood chopping chore without driving there. Review necessary and sufficient conditions.

In both examples it is you who are misleading, because you bring up situations where two things can be easily separated and liken them to consciousness and the brain, of which the outstanding fact is that they cannot be separated.

Can we think of a parallel situation?

You are reading my words and ideas right now on your computer. Is your computer creating my words and ideas? No. Could my words and ideas be in your presence without computer technology and our computers? No.

Can physical principles alone at this time account for the presence of consciousness? No. Can consciousness be present in this universe without physicalness? As far as we know, no.

The proper conclusion is, the brain is required for consciousness to be present, but consciousness cannot yet be attributed to the brain. It is entirely possible that the brain "holds" consciousness here in the physical universe.
 
  • #45
selfAdjoint said:
You could go to the concert or the station and hear the song without the radio. You cannot separate consciousness from the brain - or at least nobody has any evidence that you can. And your wood chopping example is exactly backwards; you could use the gas for some other chore, but in your example you could not get to the wood chopping chore without driving there. Review necessary and sufficient conditions.

In both examples it is you who are misleading, because you bring up situations where two things can be easily separated and liken them to consciousness and the brain, of which the outstanding fact is that they cannot be separated.

That is exactly my point. You seem to have missed it. The relationship between a radio and a song is the same as consciousness and the brain. Yes, it is true, you can hear a song without a radio. The relationship between the radio and the song does not preclude you from listening to that song without a radio. So what is it about the relationship between consciousness and the brain that suggests to you that one cannot exist without the other? Or that one caused the other? The only thing you have to argue is that (unlike the radio) you don't know of an instance of consciousness without a brain. Well, how on Earth would you know if such a thing did exist? The only argument you have is actually an illustration of the problems of a theory of emergence.

You simply assumed that it was a known fact that consciousness and brains cannot be separated, whereas I was trying to illustrate that the relationship between the two suggests no such assumption...as the radio/song analogy shows.
 
  • #46
StatusX said:
I don't think I'm assuming anything. I start with the fact that evolution seems to be a perfectly reasonable and supported hypothesis for the development of complexity in life. This process is understood conceptually, even if many specific properties of the organisms themselves have not been exhaustively accounted for. But most importantly, the theory requires no extra ingredient; presumably, a universe in which the laws of physics as we understand them today are all there is would still develop advanced life. I used this to argue that intelligence does not require an extra ingredient.

I see your point and I agree. What I was trying to point out was that using the assumption that evolution is a complete theory to conclude that consciousness is physical is flawed, because consciousness isn't accounted for by any theory. It's simply assuming the conclusion.

I think we are getting off track. The discussion was about intelligence, and I believe that increasing the complexity of machines will eventually lead to human intelligence and beyond. But as I said in my last post, I agree with Chalmers that consciousness is fundamental, not a result of complexity.

I see. Then you and I completely agree. As I said earlier, I too would have argued that "intelligence" can be created by machines. But this is purely semantic. I think at the end of the day, Royce and I would likely agree.

This seems to be an argument of semantics. Whether or not the processes have "created" consciousness is immaterial. If those processes are always accompanied by experience, there is probably some kind of natural law relating them, and interpreting this law as anything more than a correlation would be misleading.

This I do not agree with. Whether consciousness is "created" is a very important distinction. If consciousness is fundamental and simply correlated with certain atom arrangements and processes, then we don't need to spend a lot of time trying to reductively explain the production of consciousness. We simply need to understand the correlation. But if consciousness is created, then we have to be able to reductively account for that creation. This is what I do not think is possible.

If this distinction were not important, then what on Earth is Chalmers all uptight about?
 
  • #47
Royce said:
To me intelligence is a part of consciousness. It involves thinking, or better, reasoning, understanding and awareness, so it is surely a part of consciousness as far as human consciousness is concerned, and I would think the same of non-human animals that are clearly conscious, aware and to some degree intelligent. Intelligence is probably as hard to define as consciousness. Penrose proposed using the term genuinely intelligent or creatively intelligent to distinguish it from so-called intelligent machines, which is common usage but to him and to me is not the proper use of the word intelligent in this sense.

Again, I'm not sure exactly what it is you are denying machines the ability to do. We might as well change your and Penrose's term to "human intelligence," which shows how this definition makes your theorem trivially true. At present, machines can add, diagnose diseases, carry on simple conversations, and manipulate blocks in simple environments, among many, many other complicated tasks. Some of these must overlap with what most of us consider intelligence. And their abilities will only expand with time. Are you saying a machine doesn't have free will? Neither do you, in the strictest sense that your actions are constrained by physical law. It can't come up with original ideas? I disagree, and as evidence, a computer was able to prove an ancient geometric theorem in a new and original way that had never occurred to any human. So what can't they do?

I'm not sure how you mean intelligence is behavioral unless you consider the above behavioral. I think of intelligence more as a quality or attribute.

It is behavioral (a functional property) as opposed to subjective. For example, when someone takes an IQ test, it is measuring the ability of that chunk of matter, conscious or not, to reason.

Fliption said:
This I do not agree with. Whether consciousness is "created" is a very important distinction. If consciousness is fundamental and simply correlated with certain atom arrangements and processes, then we don't need to spend a lot of time trying to reductively explain the production of consciousness. We simply need to understand the correlation. But if consciousness is created, then we have to be able to reductively account for that creation. This is what I do not think is possible.

If this distinction were not important, then what on Earth is Chalmers all uptight about?

Well I'm sure human consciousness can be reduced, but I think that pure qualia will be associated in some one-to-one way with some type of physical process. Once we know what this correlation is, the problem is solved for all intents and purposes.
 
  • #48
Les Sleeth said:
Can we think of a parallel situation?

You are reading my words and ideas right now on your computer. Is your computer creating my words and ideas? No. Could my words and ideas be in your presence without computer technology and our computers? No.

I don't have any qualms with the rest of your post, but this definitely isn't true. There are many ways to experience a person's words without a computer. There are still no known ways to experience consciousness without a brain. Fliption could be right, and the brain is only a radio-like conduit that allows us to channel consciousness from some other source, but it seems to me that explanations like that are a big time copout. If that is the case, then the actual source of consciousness becomes an unsolvable mystery. I guess that's just the rub with metaphysical questions, though. They're all unsolvable mysteries. I'd like to think that any phenomenon we can directly experience is not.
 
  • #49
loseyourname said:
I don't have any qualms with the rest of your post, but this definitely isn't true. There are many ways to experience a person's words without a computer. There are still no known ways to experience consciousness without a brain.

I assumed readers would see my point. I could have said, ". . . if you were someone who never left your computer since birth, never met other humans, had been fed by tubes, etc." to make the analogy fit better. My point was that there are reasons to not yet assume the brain is creating consciousness, and there's another explanation for how consciousness could be present in the brain.


loseyourname said:
Fliption could be right, and the brain is only a radio-like conduit that allows us to channel consciousness from some other source, but it seems to me that explanations like that are a big time copout. If that is the case, then the actual source of consciousness becomes an unsolvable mystery.

It's not a copout if it is true. Just because we want to scientifically figure out everything doesn't mean we can. It's horrible to contemplate, but there might just be truths beyond human experience and therefore which ultimately must remain a mystery. But so what? We still get to be consciousness; not knowing the source doesn't change that. In fact, maybe it would benefit us more if we made more effort to learn how to be consciousness than trying to figure out what causes it.


loseyourname said:
I guess that's just the rub with metaphysical questions, though. They're all unsolvable mysteries.

Physicalism is a metaphysical question. Metaphysics isn't synonymous with myth. It is just the meta-systems behind the specifics of what we see going on around us. Is there a physical meta-system? There must be, because we can't see all the causes of physical phenomena. Is everything we see the result of a physical meta-system? That's what we are arguing about.


loseyourname said:
I'd like to think that any phenomenon we can directly experience is not [an unsolvable mystery].

Are science researchers experienced with all the aspects of consciousness there are to know? If, for example, someone is adept with their intellect, does it mean they understand how to use their consciousness in every way it has been demonstrated it can be used?

Here's what I don't understand. How do people justify remaining blissfully ignorant of the achievements of others (and I'm not specifically referring to you)? How do people develop their consciousness in one way, ignore everything which isn't their "way," and then try to act like they know how to evaluate everything? When the only thing one studies is science and physicalness, for instance, that is all they are going to know about. It doesn't mean what science finds is all there is to know, or that it's the only way one can develop one's consciousness!

Consciousness has been studied deeply, and long, long before any brain researchers decided to take up the investigation. They were people who dedicated their entire lives to learning to directly experience that "subjective" aspect which mystifies everyone currently. As far as I can tell from looking at both sides, the neuroscience side understands the role of the brain best, and the direct experience side understands consciousness itself best. It is too bad the physicalists of the neuroscience side have already decided they know the metaphysical "truth," and so have closed off every bit of openness to any evidence except that which can be studied scientifically.
 
  • #50
At first glance the theorem may seem trivial and intuitively obvious; but it has implications and ramifications that directly address a number of threads and discussions going on here at PF as well as other places and times. There are three that I have in mind:

1. Artificial Intelligence, AI, can only be that, artificial, at best a simulation of genuine creative intelligence. Thus Mr. Data of Star Trek: The Next Generation could never be more than a data processor of great complexity and sophistication that simulated intelligence and consciousness. He could never be, in principle or in fact, an intelligent, conscious, sentient entity or being in his own right. Sentient, conscious robots or machines as we know them now are in principle impossible.

2. Intelligence and thus consciousness cannot be duplicated by merely increasing the size, computational complexity or sophistication of computational processes. Thus intelligence and consciousness cannot be emergent properties.

3. There is an aspect of intelligence and consciousness that is non-computational and thus non-reducible by physics, science and/or mathematics at this time, with the tools and scope of science as they are.

----------

There is something more to intelligence and consciousness than computational processes, something that is beyond science at this time, and to study it science will have to broaden its scope and limits to include at least subjective experience.

---------
The fact that electrochemical activity can be observed in the brain while people are thinking, problem solving and meditating proves only that there is a correlation between thinking and electrochemical activity in the brain. It does not prove that such activity is the sole cause and origin of intelligence and consciousness. There is also activity in the brains of people in comas. There is anecdotal evidence, documented, verified and published, that people who are clinically dead or verified to be deeply anesthetized have retained their awareness, consciousness and identity and could accurately report experiencing and seeing events going on around them and knowing and recognizing people that they had no way of knowing. This at least brings into question the necessity of a conscious, functioning brain being present for awareness, experience and consciousness.
There are also many reports of near death and out of body experiences that cannot be easily explained by traditional physical sciences. You may ignore or reject such evidence as not being scientific and nothing more than mystical hogwash; but how do you or science know until it is legitimately investigated? It is evidence that is repeatable and verifiable, and in order to get anywhere with intelligence/consciousness any and all evidence must be investigated with an open mind, even if it is outside the realm of physical science.
Yes, it is presently within the realm of metaphysics, but not necessarily totally mystical and not necessarily forever so. I do believe that it is knowable.
 
  • #51
Les Sleeth said:
I assumed readers would see my point. I could have said, ". . . if you were someone who never left your computer since birth, never met other humans, had been fed by tubes, etc." to make the analogy fit better. My point was that there are reasons to not yet assume the brain is creating consciousness, and there's another explanation for how consciousness could be present in the brain.

The issue was whether consciousness can be separated from the brain. You repeatedly respond with examples in which the two phenomena of the example can easily be separated, but you never respond with evidence that brain and consciousness can be experienced apart from each other.




It's not a copout if it is true. Just because we want to scientifically figure out everything doesn't mean we can. It's horrible to contemplate, but there might just be truths beyond human experience and therefore which ultimately must remain a mystery. But so what? We still get to be consciousness; not knowing the source doesn't change that. In fact, maybe it would benefit us more if we made more effort to learn how to be consciousness than trying to figure out what causes it.

What might be is infinitely ambiguous; it has no bearing on a simple question of what is.


Physicalism is a metaphysical question. Metaphysics isn't synonymous with myth. It is just the meta-systems behind the specifics of what we see going on around us. Is there a physical meta-system? There must be, because we can't see all the causes of physical phenomena. Is everything we see the result of a physical meta-system? That's what we are arguing about.

This is more arm waving. Mighta been could have been.


Are science researchers experienced with all the aspects of consciousness there are to know? If, for example, someone is adept with their intellect, does it mean they understand how to use their consciousness in every way it has been demonstrated it can be used?

We don't have to be experts on everything to observe regularities of nature. One of these is that consciousness and brain are never experienced apart.

Here's what I don't understand. How do people justify remaining blissfully ignorant of the achievements of others (and I'm not specifically referring to you)? How do people develop their consciousness in one way, ignore everything which isn't their "way," and then try to act like they know how to evaluate everything? When the only thing one studies is science and physicalness, for instance, that is all they are going to know about. It doesn't mean what science finds is all there is to know, or that it's the only way one can develop one's consciousness!

Every time scientists try to discuss consciousness with such people they get the type of argumentation you have been putting up. After a while it gets old. I am continually amazed by the patience of those who respond and try to educate posters with crazy attempts to replace relativity or quantum mechanics. I used to do that but it finally wore me out. They never stop coming. And neither do the psi crowd ever stop coming.

Consciousness has been studied deeply, and long, long before any brain researchers decided to take up the investigation. They were people who dedicated their entire lives to learning to directly experience that "subjective" aspect which mystifies everyone currently. As far as I can tell from looking at both sides, the neuroscience side understands the role of the brain best, and the direct experience side understands consciousness itself best. It is too bad the physicalists of the neuroscience side have already decided they know the metaphysical "truth," and so have closed off every bit of openness to any evidence except that which can be studied scientifically.

A couple of years back zoobyshoe, I think it was, provided a neurological explanation of satori. He noted that epileptics often experience an aura that is reportedly indistinguishable from what mystics call enlightenment. In the case of epilepsy this is evidently of physical, neurological origin, and he noted the use of breathing techniques among mystical meditators. By flushing your brain with too much or too little oxygen, or poisoning it with excess carbon dioxide, you can force it into a neurological spasm which mimics or reproduces the epileptic aura. Zooby had links to research, but I don't have them any more.

Rather than physicalists being required to explain what you claim, it is up to you to explain how this physical explanation is false.

Added in edit: Here is one of zoobie's posts. Also scroll down in that thread for more discussion.

https://www.physicsforums.com/showpost.php?p=288466&postcount=7
 
  • #52
selfAdjoint, the fact that a malfunctioning brain can give or have symptoms that resemble other known mental phenomena does not explain away or negate the validity of these experiences. It really only proves that the human mind/brain is capable of experiencing these aspects of mentality in more than one way, that the human brain is wired to experience these things and will experience them whether as symptoms of illness or as phenomena of alternate mental states. Again it is confusing a specific effect with a general cause.
 
  • #53
Royce said:
It really only proves that the human mind/brain is capable of experiencing these aspects of mentality in more than one way

No it doesn't. It SHOWS there is one way to produce them. You CLAIM there is another way, but you haven't even described it, much less shown that it exists. Note that your own testimony is a report, and reports of epileptics sound the same. So listening to both reports, even studying them carefully, does not in itself demonstrate two ways. And even you cannot compare your own internal experience with the inner experience of an epileptic to see if they are the same or not.
 
  • #54
I agree, but the fact that more than one person describes an experience arising from more than one cause supports the possibility that something is really happening, and the report of a known malfunctioning brain does not negate or disprove the reports of many others who report similar experiences with no known malfunction.

Got to go now, more later.
 
  • #55
Royce said:
1. Artificial Intelligence, AI, can only be that, artificial, at best a simulation of genuine creative intelligence. Thus Mr. Data of Star Trek: The Next Generation could never be more than a data processor of great complexity and sophistication that simulated intelligence and consciousness. He could never be, in principle or in fact, an intelligent, conscious, sentient entity or being in his own right. Sentient, conscious robots or machines as we know them now are in principle impossible.

This is exactly what I'm talking about. What exactly is it that you are denying the machines? What is the difference between genuine intelligence and artificial intelligence? You have yet to lay this out. I think you see the distinction as obvious, but it is not. Are you saying they can't experience knowledge? Maybe, but we'll probably never know what they experience. You assume they don't experience at all, but can you prove it? Or even give compelling evidence for it?

But I'll even assume, for the moment, that they aren't conscious. Are you saying there are specific actions that humans can perform that computers never will, such as writing a sonnet that other humans judge beautiful? I disagree, although it could take hundreds of years to get there since the human brain is so complicated and, at present, poorly understood. A computer would have to be able to mimic human emotional responses and, probably, "live a life." That is, it will have to get out there and experience the things humans experience and relate to (note that I don't necessarily mean experience in the strictest sense; it might only mean "record the emotional responses to certain occurrences and ideas"). But this is not impossible in principle.
 
  • #56
StatusX said:
This is exactly what I'm talking about. What exactly is it that you are denying the machines? What is the difference between genuine intelligence and artificial intelligence? You have yet to lay this out. I think you see the distinction as obvious, but it is not. Are you saying they can't experience knowledge? Maybe, but we'll probably never know what they experience. You assume they don't experience at all, but can you prove it? Or even give compelling evidence for it?

But I'll even assume, for the moment, that they aren't conscious. Are you saying there are specific actions that humans can perform that computers never will, such as writing a sonnet that other humans judge beautiful? I disagree, although it could take hundreds of years to get there since the human brain is so complicated and, at present, poorly understood. A computer would have to be able to mimic human emotional responses and, probably, "live a life." That is, it will have to get out there and experience the things humans experience and relate to (note that I don't necessarily mean experience in the strictest sense; it might only mean "record the emotional responses to certain occurrences and ideas"). But this is not impossible in principle.

I think I can argue this, from a point of view of human experience. I am a human. I have intelligence. I am conscious. I also have subjective experiences. I am aware of all of these qualities of what it is to be a human.

Human intelligence is quantitatively related to all the other qualities of what it is to be like a human.

The difference between human intelligence and AI is the evidence on a past post of mine.
https://www.physicsforums.com/showthread.php?t=34922&page=5&pp=15

The engineering program that eventually effectively produced biological organisms with asymmetrically optic molecules of a certain type, we cannot reproduce. Since only I, as a human, can have subjective experience, we can be most certain that an AI that we could manufacture with my intelligence would know what I know but would not experience it. Notwithstanding, if it could be manufactured with the same biological asymmetrical molecules a human is made from, it would have all the same qualities as I do.

The question is, does that let us assume that the arrangement of molecules, of whatever type they may be, makes us intelligent? The difference between a dead and a live human is most certainly intelligible.
 
  • #57
StatusX said:
This is exactly what I'm talking about. What exactly is it that you are denying the machines? What is the difference between genuine intelligence and artificial intelligence? You have yet to lay this out. I think you see the distinction as obvious, but it is not. Are you saying they can't experience knowledge? Maybe, but we'll probably never know what they experience. You assume they don't experience at all, but can you prove it? Or even give compelling evidence for it?

I am denying machines the ability to think, understand, to create, to make connections, to see beyond the initial thought and see implications, ramifications and consequences, to be aware that it is thinking as well as aware of what it is thinking, and to evaluate the validity and value of what it is thinking. Machines are data processors, period. Any intelligence involved is built into it by its designers and programmers. Machines follow step by step the instructions given to them according to their design and wiring. If the programmer or designer makes a mistake, the machine will make the same mistake over and over again and never be aware that it is making a mistake.
It will continue on until the mistake is caught and repaired by an intelligent designer or programmer. That is why every program written and sold has patches, fixes and updates that have to be downloaded and installed.

But I'll even assume, for the moment, that they aren't conscious. Are you saying there are specific actions that humans can perform that computers never will, such as writing a sonnet that other humans judge beautiful? I disagree, although it could take hundreds of years to get there since the human brain is so complicated and, at present, poorly understood. A computer would have to be able to mimic human emotional responses and, probably, "live a life." That is, it will have to get out there and experience the things humans experience and relate to (note that I don't necesarily mean experience in the strictest sense, it might only mean "record the emotional responses to certain occurences and ideas"), But this is not impossible in principle.

Your example is a classic case that I considered. Such action could only be the random connecting and stringing together of random symbols, evaluating the results according to the rules programmed into it by an intelligent programmer and proceeding on, like the proverbial 1000 monkeys pounding on 1000 typewriters typing out the complete works of Shakespeare. It is at best data processing following the steps and rules programmed into it. It is not creativity nor intelligence, and it can never be aware or conscious of what it is doing as are most higher animals, hopefully including humans.
The key word in your post is mimic. Machines can and will be able to better mimic or simulate human intelligence, emotional responses and consciousness, but they will not be able to duplicate human awareness and consciousness and genuine creative intelligence, because what a machine does is slavishly and dutifully follow, precisely and step by step, instructions given to it or designed into it. This is data processing, not thinking.
According to Roger Penrose there are aspects of consciousness that are not computational. If they can't be computed then they can't be programmed or designed into a machine as we know the term today. Neither he nor I is saying that it will never be possible to build a device that is or becomes truly intelligent and/or conscious. We are saying that it cannot be done today with the tools, science and mathematics that we have, as they are limited to computational studies only. Science, math and our concept of machines will have to change before we can even address intelligence, awareness and consciousness.
Machines are not intelligent or conscious; nor can they ever be, even in principle, as they are now, because what they do has nothing to do with consciousness or intelligence as we humans know it and think of it. If everything in the universe is conscious and interactive, then that is a whole different bag of worms and is not what I am addressing here; nor do I mean it in that sense, but only in our human sense.
 
  • #58
Les Sleeth said:
I assumed readers would see my point. I could have said, ". . . if you were someone who never left your computer since birth, never met other humans, had been fed by tubes, etc." to make the analogy fit better. My point was that there are reasons to not yet assume the brain is creating consciousness, and there's another explanation for how consciousness could be present in the brain.

You can make that argument for anything, though. We've always associated nuclear fission with the release of huge amounts of energy, but for all we know, fission just opens a rift to another dimension from which energy is channelled. There is always the possibility that effects we always experience in concert with certain assumed causes could actually be caused by something else that we cannot observe.

It's not a copout if it is true. Just because we want to scientifically figure out everything doesn't mean we can. It's horrible to contemplate, but there might just be truths beyond human experience and therefore which ultimately must remain a mystery.

I suppose it isn't a copout if it's true, but it is very unsatisfying. Given that you don't like physical explanations because you find them to be unsatisfying, why would you prefer a more mysterious, but equally unsatisfying, lack of physical explanation?

Physicalism is a metaphysical question.

Actually, it's a metaphysical answer, which is exactly the reason I am not a physicalist. I don't like metaphysical questions of any kind. I prefer questions that have answers, which is why I prefer to assume that phenomena we can directly experience have explanations that we can know, rather than giving up right off the bat.

Are science researchers experienced with all the aspects of consciousness there are to know? If, for example, someone is adept with their intellect, does it mean they understand how to use their consciousness in every way it has been demonstrated it can be used?

I don't know how to answer a question that general. I don't know all science researchers, nor do I even know very many. I'm sure some do, some don't.

As far as I can tell from looking at both sides, the neuroscience side understands the role of the brain best, and the direct experience side understands consciousness itself best. It is too bad the physicalists of the neuroscience side have already decided they know the metaphysical "truth," and so have closed off every bit of openness to any evidence except that which can be studied scientifically.

Can you not see that you've decided exactly the same thing, but in the opposite direction? You've already separated "consciousness itself" from "role of the brain." You are just as guilty, from what I can see.
 
  • #59
StatusX said:
I agree with Chalmers that consciousness is fundamental, not a result of complexity.

Well I'm sure human consciousness can be reduced

I'm sure it's just a difference in the way we use terminology but at the moment these two statements seem contradictory.
 
  • #60
Rader said:
The difference between human intelligence and AI is the evidence on a past post of mine.
https://www.physicsforums.com/showthread.php?t=34922&page=5&pp=15

The engineering program that eventually effectively produced biological organisms with asymmetrically optic molecules of a certain type, we cannot reproduce. Since only I, as a human, can have subjective experience, we can be most certain that an AI that we could manufacture with my intelligence would know what I know but would not experience it. Notwithstanding, if it could be manufactured with the same biological asymmetrical molecules a human is made from, it would have all the same qualities as I do.

I have no idea what "assymetrically optic" could be, and a google search turns up nothing. Please be more clear. If this is a mainstream theory you are referring to by an alternate name, fine, but if it is your own, just be careful to follow the guidelines for this forum.

The question is, does that let us assume that the arrangement of molecules, of whatever type they may be, makes us intelligent? The difference between a dead and a live human is most certainly intelligible.

I never understood this objection to physicalism. Obviously, a dead person is very physically different than a living one. That is, the molecules are arranged in a radically different way.
 
  • #61
loseyourname said:
Fliption could be right, and the brain is only a radio-like conduit that allows us to channel consciousness from some other source, but it seems to me that explanations like that are a big time copout.

If this theory were being suggested because the complexity of consciousness is just too great, then I would agree with you. The reason I do not believe this idea is a cop-out is because there are serious philosophical reasons to believe that the alternative ideas that selfAdjoint worships cannot be correct, in principle. On any other topic, I would agree with you completely.

I know from other posts that you do not see this "in principle" problem and I honestly cannot see why yet. I hope to understand it more very soon (Rosenberg discussion) because this problem seems so obvious to me. Even Dennett doesn't suggest this emergent approach. He realizes it cannot be done either, so he just denies that there's a problem to solve at all by defining things a bit differently.
 
  • #62
selfAdjoint said:
The issue was whether consciousness can be separated from the brain. You repeatedly respond with examples in which the two phenomena of the example can easily be separated, but you never respond with evidence that brain and consciousness can be experienced apart from each other.

I responded to this point once and I'll do it again since it still lingers.

It is true that the examples used, like radios and songs, can be separated. But they were selected for this analogy exactly because they can be separated! You're missing the point not to see this. The analogy is trying to illustrate that there is nothing in the relationship of a radio and a song that precludes them from being separated. (If they couldn't be separated then the analogy would do a very poor job of illustrating this!) The exact same relationship can be argued to exist for brains and consciousness. Therefore, given what we know about this relationship, we have no evidence to suggest anything about what created what in this relationship. All we know is that there is a correlation.

We know no more than a person ignorant of radios would know about the relationship between the radio and the song. This is the crucial point.

We don't have to be experts on everything to observe regularities of nature. One of these is that consciousness and brain are never experienced apart.

So you're saying that since you don't experience what a rock experiences, a rock doesn't have experiences? Explain to me how you could ever experience anything outside of your own brain. Exactly how would you expect brainless consciousness to manifest itself to you?
 
  • #63
Royce said:
I am denying machines the ability to think, understand, to create, to make connections, to see beyond the initial thought and see implications, ramifications and consequences, to be aware that it is thinking as well as aware of what it is thinking, and to evaluate the validity and value of what it is thinking. Machines are data processors, period. Any intelligence involved is built into it by its designers and programmers. Machines follow step by step the instructions given to them according to their design and wiring. If the programmer or designer makes a mistake, the machine will make the same mistake over and over again and never be aware that it is making a mistake.
It will continue on until the mistake is caught and repaired by an intelligent designer or programmer. That is why every program written and sold has patches, fixes and updates that have to be downloaded and installed.

Well, this is a much more useful formulation of your original theorem, and one I completely disagree with. I don't disagree that any intelligence would have to be built into it, but so what? A helicopter's ability to fly is built into it, but does that mean that it only "artificially flies" while a bird "genuinely flies"? They both fly, period.

But I think the deeper problem is that machines follow rules, and they cannot break these rules. In science fiction, when a robot encounters a logical paradox, its head explodes, whereas a human will just be temporarily confused or laugh. If we wanted to, we could trace their operation and predict all of their actions, so how could they be creative?

Here's how. A computer that could reasonably be considered creative would be so complex that its behavior couldn't be predicted, even in principle. Like I said, it would have to go out into the world to get sensory experience, and this experience could not be exactly predicted or duplicated in another machine. It would almost certainly involve random number generators. It would not be anything like a conventional if...then...else based program.

Now you might argue that no matter how complex, the rules at the bottom are set in stone and it can never break them. This is, of course, completely true. But when was the last time you were able to will one of your own neurons not to fire even though it was above the threshold? Human brains, at the most basic level, follow strict rules. These can never be broken, but they don't stop you from making decisions and listening to paradoxes without self-destructing. The reason is that the rules aren't the simple if-based rules most conventional programs use, but are based on weights and parallel connections among billions of neurons. We don't yet completely understand how this hugely complex system of interconnected neurons is able to do what it does (at present, I doubt we even know how thousands of neurons interact), but they do, and there is no evidence that everything we do is a result of anything but billions of neurons obeying strict underlying rules.
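A minimal sketch of the contrast being drawn here, in hypothetical Python (the function names, weights, and numbers are invented purely for illustration): a hand-coded if/then rule fails on anything its programmer didn't anticipate, while a toy weighted, threshold-style unit of the kind neural-network models use responds to whatever inputs it is given.

```python
# Illustrative only: a brittle, hand-coded rule, the programmer's
# intelligence frozen in place.
def rule_based(answer: str) -> str:
    if answer == "yes":
        return "proceed"
    elif answer == "no":
        return "stop"
    return "error"  # anything unanticipated simply fails

# A toy weighted, threshold-style unit: its behavior comes from the weights,
# not from an explicit rule for every case.
def neuron(inputs, weights, threshold=1.0):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(rule_based("maybe"))                        # -> error
print(neuron([0.2, 0.9, 0.4], [0.5, 1.1, 0.3]))   # 1.21 >= 1.0 -> 1
print(neuron([0.1, 0.1, 0.1], [0.5, 1.1, 0.3]))   # 0.19 <  1.0 -> 0
```

Real brains and real artificial networks are vastly more complicated than this, of course; the sketch only marks where the "rules" live in each case.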

As for a machine being aware of its own thoughts, what would be so hard about that? An intelligent computer would have access to a database of information that it could constantly update (its memory), and why couldn't it have a self-model in it? Its "thought" would be temporarily stored in this memory, and could be immediately accessed and evaluated just as we do. I don't pretend to be a psychologist or AI scientist, so I can't get much more specific than this, but it's all possible in principle.
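
Here is a deliberately crude sketch (again my own illustration, not a real AI design): the system's own recent "thoughts" sit in a memory it can re-read, so checking them is just one more data-processing step.

Code:
class SelfMonitoringAgent:
    def __init__(self):
        self.memory = []                          # its own recent "thoughts"
        self.self_model = {"errors_noticed": 0}   # a minimal model of itself

    def think(self, thought):
        self.memory.append(thought)               # every thought is recorded

    def reflect(self):
        # Re-read the latest thought and evaluate it with a simple check.
        if not self.memory:
            return "nothing to reflect on"
        last = self.memory[-1]
        if "contradiction" in last:
            self.self_model["errors_noticed"] += 1
            return "flagged my own thought: " + last
        return "last thought looks fine: " + last

agent = SelfMonitoringAgent()
agent.think("all swans are white")
agent.think("contradiction: some swans are black")
print(agent.reflect())   # flagged my own thought: contradiction: some swans are black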

One last thing. Consciousness may or may not be necessary for intelligence. The evidence for this is that we can never be 100% sure that other people are conscious, and, at least for the most part, they behave intelligently. I say a computer could do everything a human does. This includes talking about such things as how colors seem to have intrinsic qualities. I don't know if such a contemplation means the computer is conscious, but in my opinion, it would be.
 
  • #64
Fliption said:
I'm sure it's just a difference in the way we use terminology but at the moment these two statements seem contradictory.

Red, for example, is fundamental. But listening to a symphony or looking at a painting can be reduced to more basic constituents, the "fundamental particles" of experience, such as colors, sounds, the most basic emotions and feelings, etc.
 
  • #65
Fliption said:
If this theory were being suggested because the complexity of consciousness is just too great, then I would agree with you. The reason I do not believe this idea is a cop-out is because there are serious philosophical reasons to believe that the alternative ideas that Self Adjoint worships cannot be correct, in principle. On any other topic, I would agree with you completely.

I know from other posts that you do not see this "in principle" problem, and I honestly cannot see why yet. I hope to understand it more very soon (the Rosenberg discussion) because this problem seems so obvious to me. Even Dennett doesn't suggest this emergent approach. He realizes it cannot be done either, so he just denies that there's a problem to solve at all by defining things a bit differently.

I don't see this "in principle" thing because every argument claiming to prove that this is the case can be shown to be circular. I don't consider circular arguments to be serious philosophical objections. That is the only reason. I don't cling to the opposite view, but I do think it's the more fruitful view to be investigated simply because, from what I can tell, there is no way to investigate views like yours.
 
  • #66
Initially, this discussion was ostensibly about intelligence, but it seems most of the thread has been devoted to consciousness. This seems to have been a source of much confusion.

Intelligence is a functional concept. All it means to have intelligence is to exhibit the kind of behaviors that we agree to label 'intelligent.' Note that there is no 'Problem of Other Intelligences' analogous to the Problem of Other Minds; whereas we cannot assess the existence of subjective experience in any case but our own, we can assess intelligence in others just by observing behavior (e.g. engaging in conversation, reviewing one's life works, measuring the results of an IQ test, or perhaps someday analyzing the inner workings of one's neural circuits). We may disagree about whether the evidence suggests that someone is intelligent, but in the case of experience, there is not even any evidence to argue about in the first place.

This is not a coincidence. We cannot assess the existence of subjective experience in others because we can only investigate the external world via functional means, and there is more to subjective experience than just functional relationships; on the other hand, we can assess the existence of intelligence in others precisely because intelligence is nothing more than a particular class of functional relationships and propensities. As such, intelligence falls squarely among the 'easy' problems of consciousness, and there are no in principle concerns about our ability to account for intelligence in purely physical terms. (Of course, figuring out exactly how human intelligence works in practice-- let alone getting a solid idea of what we mean by 'intelligent' in the first place-- is no small matter.)

Certainly, none of the arguments Chalmers levels against physicalist explanations of consciousness apply to intelligence. Nor does Chalmers argue elsewhere that explaining intelligence is beyond the reach of physicalism. I'm not even sure if Chalmers uses the kind of argument against the idea that complexity causes consciousness that Royce credits him with. He does endorse thought experiments that involve gradually replacing neurons with silicon chips, but these thought experiments are designed to argue against the idea that consciousness is only a property of biological systems; they are not intended to have anything to do with complexity. I would appreciate if Royce could find an online reference to Chalmers' purported 'neural removal' argument against complexity-causing-consciousness, but even if it does exist, it will wind up being disanalogous with Royce's attempt to apply this argument to intelligence.

Of course, various subjective experiences accompany intelligent thought, such as the subjective feeling of understanding / knowing, or the raw feeling of what it is like to roll thoughts around in the mind while solving a problem. These experiential aspects fall under the category of the hard problem, but there is not any sense in which their allegiance with intelligent thought or action is relevant to the 'hardness' of the hard problem. The subjective feeling of trying to solve a math problem is not different in any important respect, as regards the hard problem, from the subjective experiencing of color or pain or hunger. In all these cases, what makes the phenomena fall under the umbrella of the hard problem is that they seem to have intrinsic properties above and beyond what can be accounted for by just structure and function, even in principle.

Once we conceptually isolate these experiential phenomena from their functional correlates, all that remains to be explained is structure and function. Once we are down to explaining structure and function, the concerns of the hard problem no longer apply. So there is no in principle reason for supposing that the functional phenomenon we call intelligence cannot be explained by functional explanations.
 
  • #67
Can any of you perform an intelligent function that does not involve p-consciousness, as in talking to yourself, forming some visual image, etc.?
 
  • #68
StatusX said:
Well, this is a much more useful formulation of your original theorem, and one I completely disagree with. I don't disagree that any intelligence would have to be built into it, but so what? A helicopter's ability to fly is built into it, but does that mean that it only "artificially flies" while a bird "genuinely flies"? They both fly, period.

No, helicopters don't fly of and by themselves as birds do. A helicopter is a machine, incapable of doing anything, much less flying, by its own volition, unlike a bird. This is a better example for my argument than for yours. It points out the difference between a machine and a bird with some degree of intelligence.

But I think the deeper problem is that machines follow rules, and they cannot break these rules. In science fiction, when a robot encounters a logical paradox, its head explodes, whereas a human will just be temporarily confused or laugh. If we wanted to, we could trace their operation and predict all of their actions, so how could they be creative?

Here's how. A computer that could reasonably be considered creative would be so complex that its behavior couldn't be predicted, even in principle. Like I said, it would have to go out into the world to get sensory experience, and this experience could not be exactly predicted or duplicated in another machine. It would almost certainly involve random number generators. It would not be anything like a conventional if...then...else based program.

Again you make my point: a machine cannot and does not experience anything, and why would anybody build a machine that 'behaves' unpredictably? And, admittedly a minor point, we cannot build a truly random number generator or machine.

Now you might argue that no matter how complex, the rules at the bottom are set in stone and it can never break them. This is, of course, completely true. But when was the last time you were able to will one of your own neurons to not fire even though it was above the threshold? Human brains, at the most basic level, follow strict rules. These can never be broken, but they don't stop you from making decisions and listening to paradoxes without self-destructing. The reason is that the rules aren't the simple if-based rules most conventional programs use, but are based on weights and parallel connections among billions of neurons. We don't yet completely understand how this hugely complex system of interconnected neurons is able to do what it does (at present, I doubt we even know how thousands of neurons interact), but it does, and there is no evidence that everything we do is a result of anything but billions of neurons obeying strict underlying rules.

Here again you are making invalid assumptions. It has been documented that such people as yogis can and do consciously will their body functions to change, to the point of virtual hibernation. Normal people can change some of their normally automatic body functions, like heart rate, blood pressure and temperature, simply by using biofeedback.

By simply using my imagination I can willfully bring my body to a state of a high degree of excitement, whether fight-or-flight or sexual, or, the reverse, will myself to relax to the point of going to sleep. We all can and do do these things.

As for a machine being aware of its own thoughts, what would be so hard about that? An intelligent computer would have access to a database of information that it could constantly update (its memory), and why couldn't it have a self-model in it? Its "thought" would be temporarily stored in this memory, and could be immediately accessed and evaluated just as we do. I don't pretend to be a psychologist or AI scientist, so I can't get much more specific than this, but it's all possible in principle.

That is my point. The operation you describe is again data processing according to a predetermined design. This does not constitute intelligence, awareness and consciousness.

One last thing. Consciousness may or may not be necessary for intelligence. The evidence for this is that we can never be 100% sure that other people are conscious, and, at least for the most part, they behave intelligently. I say a computer could do everything a human does. This includes talking about such things as how colors seem to have intrinsic qualities. I don't know if such a contemplation means the computer is conscious, but in my opinion, it would be.

But this is merely mimicking or simulating intelligence. It is not genuine, creative, will-driven intelligence. Machines have no volition of their own; they simply do exactly what they are designed to do, in the exact method they or their programs are designed to do it.

Again, an automatic assembly line can't suddenly become bored, stop making cars, and decide to make washing machines instead. If it could and if it did, then it would no longer be a machine as we use the term today. It would be an entity with volition, intelligence and consciousness, and would probably have to be granted rights and benefits, join the union and go on vacation just like everybody else.
 
  • #69
hypnagogue, I have been trying to remember where I picked up Chalmers' argument that I have been using. I thought at first that I read it in a link supplied in one of the many threads on consciousness here, and then I thought it might have been a reference by Penrose in The Emperor's New Mind, or even possibly both. Either way, I or Penrose may have taken it out of context and used it for our own purposes and (mis)application. I have not read any of Chalmers' books or articles myself and am only referencing a reference. I still think the idea is valid even if Chalmers did not mean to use it the way I have. Again, if I have misused it, I apologize. It was not intentional.

This thread was started intentionally about intelligence because I felt it was much easier to prove, or at least support, my position that machines, as we know the term, do not and cannot be intelligent or have intelligence, and that merely increasing complexity and size will not change that proposition. It will still be nothing more than blind data processing. And by extension, merely the evolution of the organic brain to increasing size, complexity and sophistication is not enough to explain the coming about of intelligence, and thus consciousness, which is what my understanding of the term emergence stipulates.
 
  • #70
StatusX said:
I have no idea what "assymetrically optic" could be, and a google search turns up nothing. Please be more clear. If this is a mainstream theory you are referring to by an alternate name, fine, but if it is your own, just be careful to follow the guidelines for this forum.

It's spelled "asymmetrically optic".
Here is a link below to the information, with all its detail.
http://www.fact-index.com/o/op/optical_isomerism.html

This is no theory; it is a biological fact discovered by Louis Pasteur, and it is the very essence that distinguishes biological organisms from inanimate material. Did you read the link to my post? It explains what I am talking about.
https://www.physicsforums.com/showthread.php?t=34922&page=5&pp=15

My argument here is based on known physical properties. What we do not know is whether all known physical properties produce all that we observe in humans, in this instance human intelligence. My second argument below suggests just that. Humans possess several qualities such that when you eliminate one, the others disappear also. Based on these facts, there is no reason to believe that if you built an AI without asymmetrically optic molecules, it would, when you were finished, possess all the qualities of a human.

I never understood this objection to physicalism. Obviously, a dead person is very physically different than a living one. That is, the molecules are arranged in a radically different way.

No, that is not the only difference, at least until rigor mortis sets in. The arrangement of molecules is the same; what is different is that we observe the body is dead and has no consciousness, and in this case no intelligence can be observed in it either. Something left the body that was in there when it had all these qualities; that's very easy to understand.
 