Is It Possible to Create a Human Computer Program?

  • #1
Math Jeans
Hello. I have been studying in my philosophy class a work by John Searle regarding whether it is possible to program a computer (hardware and software) to duplicate the functions of a human.

I wrote up a six-page paper against Searle and supported the prospect of a human computer, but I do not think that it would be a good idea to post that here.

I just wanted the opinions of other PFers: Do you think that it is possible to duplicate a human with a computer? (This is a question about possibility, not practicality, and you are not allowed to suggest giving the computer the anatomy of the brain by duplicating the firing of neurons. This is purely hardware and programming.)
 
  • #2
Assuming the brain consists of neurons in certain states with certain connections:

There are a finite number of neurons.
They have a finite number of states.
They have a finite number of interconnections.

And you have a big enough computer to simulate this configuration - then why not?
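Under those assumptions, the configuration can be sketched directly. Here is a minimal toy sketch (every number in it is invented for illustration, not a claim about real neurobiology): binary-state neurons, random sparse interconnections, and a threshold update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000                       # number of neurons (real brains have ~86 billion)
state = rng.integers(0, 2, N)  # finite states: each neuron is on (1) or off (0)

# Finite interconnections: weights[i, j] is the influence of neuron j on
# neuron i; only ~1% of the possible connections exist.
weights = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.01)

def step(state, weights, threshold=0.0):
    """One synchronous update: a neuron turns on if its weighted input
    exceeds the threshold."""
    return (weights @ state > threshold).astype(int)

for _ in range(10):
    state = step(state, weights)
```

The point is only that nothing in the finite description above resists being written down; the open question is whether writing it down is all there is to it.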
 
  • #3
John Searle was the inventor of the so-called 'Chinese room' thought experiment, right? Both Dennett and Carrier have spent considerable time refuting it, and I am inclined to agree with them.
 
  • #4
Also, Douglas Hofstadter has repeatedly demolished Searle's argument.

Roger Penrose might be considered on Searle's side.
 
  • #5
mgb_phys said:
Assuming the brain consists of neurons in certain states with certain connections:

There are a finite number of neurons.
They have a finite number of states.
They have a finite number of interconnections.

And you have a big enough computer to simulate this configuration - then why not?
But the computer would also need to have judgement clouded by emotions and prior experiences, irrational fears, and superstitions. It would change its decision based on what a loved one thinks. Also, lack of sleep, health, etc. would affect how well it functioned.

So while we might someday be able to simulate an "ideal" human brain, I don't think that we will ever be able to simulate a realistic human brain.
 
  • #6
Evo said:
But the computer would also need to have judgement clouded by emotions and prior experiences, irrational fears, and superstitions. It would change its decision based on what a loved one thinks. Also, lack of sleep, health, etc. would affect how well it functioned.

So while we might someday be able to simulate an "ideal" human brain, I don't think that we will ever be able to simulate a realistic human brain.

Aren't all those things created by the combination of states and interconnections? If so, then why couldn't a computer replicate those emotions and feel them just as strongly as we do?
 
  • #7
If you believe the brain is only neurons/states/connections then - yes.
If you believe there is some 'higher level of being' or some quantum effect then no.
 
  • #8
ganstaman said:
Aren't all those things created by the combination of states and interconnections? If so, then why couldn't a computer replicate those emotions and feel them just as strongly as we do?
Well, state is also influenced by the propagation of hormones and endorphins and whatnot, which travel outside the neat lines of the neuron interconnections. So, you will have to add to mgb_phys's 3-tuple a simulation of the endocrine system... :)

Evo also mentions "prior experiences", which is a big deal. You may be able to model a human brain with a graph of simulated neurons, but do you know how to construct the graph in the first place? In order to tell which neurons should have interconnections between them, you'd basically have to take a real human brain apart neuron by neuron and record what's connected to what...

None of this really prevents the simulation, though, it just complicates it.

Same with "quantum effects", which would also only complicate the modeling, since a quantum computer can't compute anything a classical computer can't; it just does some things faster. And that's okay, since your neuron simulation is likely to run very, very slowly already...

The one thing that would really cause a problem in the simulation is all the sources of error the wet, meaty, decaying human brain adds to the computation, as blood sloshes around in your head and brain cells die off. It would probably not be reasonably possible to model all these biological sources of error, at least so long as we're still thinking of the computation as "model a network of neurons" and not "model every atom in the human brain". The question of course then becomes whether these sources of error have an effect on behavior significant enough that you could in any way tell the difference...
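For illustration, both complications (a global chemical signal and biological noise) can be bolted onto the same kind of toy threshold-neuron update. Every parameter here is invented; the "hormone" term is just a single global bias, not a model of actual endocrinology.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 500
state = rng.integers(0, 2, N)
weights = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < 0.02)

def step(state, weights, hormone=0.0, noise_std=0.1):
    """Toy update with two additions beyond the bare neuron graph:
    - hormone: one global bias standing in for endocrine modulation,
      shifting every neuron's effective threshold at once;
    - noise_std: per-neuron Gaussian noise standing in for the wet,
      biological sources of error."""
    drive = weights @ state + hormone + rng.normal(0.0, noise_std, state.shape)
    return (drive > 0.0).astype(int)

calm = step(state, weights, hormone=-0.5)    # higher effective threshold
excited = step(state, weights, hormone=0.5)  # lower effective threshold
```

As the posts above suggest, neither addition blocks the simulation; they just enlarge the state you have to track beyond the neuron graph itself.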
 
  • #9
Evo said:
But the computer would also need to have judgement clouded by emotions and prior experiences, irrational fears, superstitions. It would change it's decision based on what a loved one thinks. Also, lack of sleep, health, etc... would effect how well it functioned.

Think of it this way: emotions are merely a classification of an action. They are what we use to describe the mood of an action in words.

You could also take into account the if/then factor in all actions as an extra boost to a separate argument.
 
  • #10
John Searle isn't the only one. Computational consciousness depends on functionalism, a concept Hilary Putnam came up with and has since decided is flawed. In his book "Representation and Reality", he attempts to prove that if computationalism is true, then panpsychism is also true, and thus that it's false. Mark Bishop and Tim Maudlin, among others, have also jumped on this line of logic, so Searle and Putnam are not the only ones.

Emotions are merely a classification of an action.
Emotions are experienced. They are qualia. For a computer to classify an action is simply behaviorism. Just because something acts as if it is hurt, in love, angry etc... doesn't mean it actually is.
 
  • #11
There have been several critics of the qualia concept, such as Daniel Dennett and Paul Churchland.
 
  • #12
Hi Moridin
There have been several critics of the qualia concept, such as Daniel Dennett and Paul Churchland.
True. They are essentially saying that once you've explained how and why all the neurons interact, you've explained everything there is to explain. In so explaining the interactions, you've done all that needs to be done to explain conscious phenomena.

I don't buy that. The counterargument is to point out that such explanations don't explain how qualia can arise. Why should a given set of light wavelengths appear a given color such as red as opposed to being blue or green? Why should coffee have a specific taste or smell?

One example (thought experiment) Dennett suggests is that such qualia might not be consistent, and he uses a Maxwell House coffee taste tester as an example. He asks: does this taste tester have the same experience over time? Does the taste change over the years, or does it stay the same? If it doesn't stay the same, then that gives rise to the idea that qualia are totally illusory.

Again, I don't buy this. The fact there is anything to explain at all about how coffee tastes or smells is an indication that there is something more to explain. And of course the zombie argument also points out that we are missing something if we don't try to explain qualia.

Dennett counters that zombies can't exist. etc...

Why should the interaction of switches produce any phenomena such as experience which is more than simply behavior? And if a computer's switches can create this, then of course any similar computational device can also, including for example Ned Block's Chinese brain or Searle's Chinese room - something intuitively we'd like to avoid but can't. And if these examples are valid, then we must ask ourselves, 'what is a computation and how do you define it'? That question is the biggest problem today with no good answer, despite attempts by a long laundry list of truly brilliant individuals.
 
  • #13
When a computer is capable of teaching a human child the first person meaning of the word "pain", so that the child goes on to use the word correctly in future cases, then we will be forced to believe the computer if it says it has pain; and so on for every other mental state*.

Imagine such a computer in an android body being struck by a car, screaming and writhing on the ground. Could you deny that it had pain? Maybe at first, but as they became more widespread, society would quickly judge it correct to say the machine has mental states.

Edit: If it is possible for me to say that other humans beside myself experience qualia, and to say this without doubt, then it will similarly one day be correct to make the same judgment about computers, when their behavior is in accord.
 
  • #14
Hi Math Jeans,
I have been studying in my philosophy class a work by John Searle regarding whether it is possible to program a computer (hardware and software) to duplicate the functions of a human.
How do you define a computer? Consider that the phenomena produced by any physical system can be determined by calculation, and any physical system is therefore a computational device of some sort. Consider also that computation is symbol manipulation, and those symbols (e.g., the position of a switch, the temperature of a rock, the shadow cast by a rock) can be interpreted in any way whatsoever. The symbols depend on people to define what they mean. Now try to explain what exactly a computer is.
 
  • #15
The problem with the Chinese Room argument is that the speed at which our brains process information is (by some estimates) 100,000,000,000 operations per second. Several critics point out that the man in the room would probably take millions of years to respond to a simple question, and would require "filing cabinets" of astronomical proportions.

http://plato.stanford.edu/entries/chinese-room/

"Steven Pinker (1997) also holds that Searle relies on untutored intuitions. Pinker endorses the Churchlands' (1990) counterexample of an analogous thought experiment of waving a magnet and not generating light, noting that this outcome would not disprove Maxwell's theory that light consists of electromagnetic waves. Pinker holds that the key issue is speed: "The thought experiment slows down the waves to a range to which we humans no longer see them as light. By trusting our intuitions in the thought experiment, we falsely conclude that rapid waves cannot be light either. Similarly, Searle has slowed down the mental computations to a range in which we humans no longer think of it as understanding (since understanding is ordinarily much faster)."

[...]

Thus several in this group of critics argue that speed affects our willingness to attribute intelligence and understanding to a slow system, such as that in the Chinese Room. The result may simply be that our intuitions regarding the Chinese Room are unreliable, and thus the man in the room, in implementing the program, may understand Chinese despite intuitions to the contrary (Maudlin and Pinker). Or it may be that the slowness marks a crucial difference between the simulation in the room and what a fast computer does, such that the man is not intelligent while the computer system is (Dennett)."
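The speed objection is easy to make concrete with back-of-the-envelope arithmetic, using the 10^11 ops/sec estimate quoted above and an assumed (invented) rate of one rule application per second for the man in the room:

```python
# All figures are the estimates quoted above, not measurements.
brain_ops_per_sec = 100_000_000_000  # ~1e11, the estimate in the post above
man_ops_per_sec = 1.0                # assume the man applies one rule per second

# How long the man needs to reproduce ONE second of brain-scale computation:
seconds = brain_ops_per_sec / man_ops_per_sec
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years")  # about 3,171 years of work per simulated second
```

So even a single conversational exchange puts the room deep into "millions of years" territory, which is exactly the range the critics above say breaks our intuitions.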
 
  • #16
Hi Crosson,
When a computer is capable of teaching a human child the first person meaning of the word "pain", so that the child goes on to use the word correctly in future cases, then we will be forced to believe the computer if it says it has pain; and so on for every other mental state*.

Imagine such a computer in an android body being struck by a car, screaming and writhing on the ground. Could you deny that it had pain? Maybe at first, but as they became more widespread, society would quickly judge it correct to say the machine has mental states.
This is a fairly typical 'common sense' argument, which I believe to be flawed. We already have these things; they're called computer games. We can create an image of a person acting as if it is in pain, or being struck by a car, or whatever, on a video screen. We can make the computer game tell you it feels pain, so the obvious next step is to point out that we can give that computer game a body, and if that's all it takes to convince someone, then we allegedly have a computer which feels and has qualia. Hopefully you can see how absurd this is. We have to do more than simply show behavior which mimics what a person does in order to explain conscious phenomena.
 
  • #17
Hi Moridin,
Regarding the argument that speed somehow affects the phenomena of consciousness: if we take an allegedly conscious computer (whatever a computer is) and begin to slow it down, is there some speed at which it loses consciousness? Does the device slowly lose consciousness? What criteria should we use to define how the speed of computation affects consciousness?

I don't think the point regarding speed of computation can be taken seriously.
 
  • #18
Q_Goest, what are your thoughts about Churchlands' counterexample?
 
  • #19
Hi Moridin,
I think you're referring to the analogy regarding our intuitions; if not, please explain.

Regarding the analogy,
Churchlands' (1990) counterexample of an analogous thought experiment of waving a magnet and not generating light, noting that this outcome would not disprove Maxwell's theory that light consists of electromagnetic waves.
I'm not in favor of using any analogy or thought experiment unless it aids in understanding a logical argument. This is where the Chinese room thought experiment fails.
... our intuitions fail us when considering such a complex system, and it is a fallacy to move from part to whole: " ... no neuron in my brain understands English, although my whole brain does."
(Same reference)

I'd agree that we can't dismiss that the Chinese room is consciously aware, a point made in the article you referenced which I've seen made a few times previously. It's a good point. This is strictly an intuition we have. We can't say for sure the system of the man, the room, and the instructions are not aware. This is something Searle would like to have you believe, but this is of course an intuition not based on any logical argument.

If computationalism is true, then the system of man, room, instructions, input and output, must also be aware. I don't see any way around it, unless we find some way of defining a computation that excludes Searle's Chinese room. Searle is appealing to our intuitions.

Regarding that last point (which is made by virtually everyone) that " ... no neuron in my brain understands English, although my whole brain does": that's an interesting and often-cited point. Even this point, however, has its critics. Steven Sevush and .. <someone else whose name escapes me> have proposed single-neuron theories of consciousness which actually look very appetizing to me. :)
 
  • #20
Q_Goest said:
How do you define a computer?

I think this is a pretty crucial question.

Personally I would define a computer as a mechanism which can be configured so as to solve generalized problems. So a Macintosh is a computer, a Turing machine is a computer, the "chinese room" mechanism with the man sitting in the middle is, as a whole, also a computer, and the human brain is a computer.

Of course, the way I've defined things, Math Jeans' question is answered by "cheating", sort of: Can a computer be a person? Yes, because I defined a person as a kind of computer. I think that this indicates a problem with the question more than with my definition, though. Overall I see the "can a computer be like a person?" question as just not very interesting, because I don't think we're all really sure what it is we're even arguing about. I think any discussion on this subject invariably degrades into several different arguments occurring simultaneously, with people arguing past each other because they're fundamentally talking about totally different things. (This could be helped if people would bother to start out by defining such terms as "computer" or "person" or "self-aware", so that it isn't possible to accidentally equivocate between the things these terms mean in the different overlapping discussions.)
 
  • #21
It might be that understanding brain structure and imitating it would yield a better intelligence than the Chinese Room structure. This is the premise of On Intelligence by Jeff Hawkins. Here's the link: onintelligence.org (you'll have to type it into the address bar manually, as I can't get the darn thing to work any other way). I'd say to hell with it, but this is the guy that founded Palm Computing and Handspring. He's well known in Silicon Valley and has applied his findings on brain structure to the design of computers with some success. He explains why computers are not intelligent. I think this is a worthwhile read.
 
  • #22
Hi Coin,
Personally I would define a computer as a mechanism which can be configured so as to solve generalized problems.
This sounds like a relatively common sense definition of the term “computer”. Thanks for the response.

Now consider that this common-sense computer is a device which has at its heart numerous switches with wires between them. The modern computer is a form of this, with each switch position being driven by some electrical voltage. The switch position is a "symbol" in that it does not represent anything intrinsic to physics. It does not equate to the temperature of the sun, for example, nor does it equate to how much money is in your savings account. In order for the switch position to have some equivalent meaning in the real world, humans must assign meaning to these positions. We do that by creating a machine with a symbol interface, such as a monitor with squiggles on the screen which represent something to us depending on our language, or some other assignment, such as the colors used to represent the temperature of a model of the sun, or the numbers representing dollars or yen in our bank account.

Searle, among others, points this out when he says, “… computation is defined syntactically in terms of symbol manipulation. But syntax and symbols are not defined in terms of physics. Though symbol tokens are always physical tokens, “symbol” and “same symbol” are not defined in terms of physical features. Syntax, in short, is not intrinsic to physics.”
(Ref: Searle, “Is the Brain a Digital Computer?”)

If you had a computer terminal into which you typed the question, "What is 12 times 23?" and it came back with 276, you might say this machine was computing something. If it came back with 0110 1011, you might recognize this as being the binary answer (I didn't try to actually put the right number in, so use your imagination) and you'd say yes, that's the same as 276, so it must have computed the answer. Or it might come back with @1U)>, in which case you might pull out your trusty Batman decoder, find that @1U)> equates to 276, and again you'd find this machine was computing something.
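That point can be made concrete: the machine's output counts as "276" only under some agreed reading of the symbols. A short sketch, where the substitution cipher is an invented mapping standing in for the "Batman decoder":

```python
result = 12 * 23  # the machine's internal state; here just a Python int

decimal = str(result)         # '276'
binary = format(result, "b")  # '100010100'

# An arbitrary "Batman decoder" substitution (an invented mapping, purely
# for illustration): the physics doesn't care which squiggles we choose.
cipher = {"0": "@", "1": "1", "2": "U", "3": ")", "4": ">",
          "5": "a", "6": "b", "7": "c", "8": "d", "9": "e"}
encoded = "".join(cipher[d] for d in decimal)  # 'Ucb'

# All three strings count as "the same number" only because we agree on
# how to read them back.
assert int(decimal) == int(binary, 2) == result
```

Nothing in the voltages distinguishes the three renderings; the interpretation lives in us, which is exactly Searle's "syntax is not intrinsic to physics" point.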

The problem we have with defining a computer is exactly this: everything is "computing" something, because everything can be seen to respond to input and provide output due to its causal structure. Everything has a causal structure governed by physical laws, so anything can be termed a computer if you'd like.

Again from Searle: “It follows that you could not discover that the brain or anything else was intrinsically a digital computer, although you could assign a computational interpretation to it as you could to anything else. The point is not that the claim “The brain is a digital computer” is false. Rather, it does not get up to that level of falsehood. It does not have a clear sense.”

I'm inclined to agree with that. Having been an engineer for 20 years, I look at nature as having a causal structure that can be broken up into local, independent elements (e.g., as FEA commonly uses). These elements interact in a way which is clearly determined by local interactions, just like a computer. However, it is unclear how we can define one part of nature as being a computer and another part as not being a computer. It simply doesn't make sense.
 
  • #23
Hi minorwork,
I'd agree that trying to emulate brain structure would get us closer to artificial consciousness; however, that's what functionalism says, and there I disagree that simply providing a function is sufficient. I think we also have to emulate the cells, the DNA, the complex chemistry, etc., at which point we might as well just make a baby. ;)

Also, your link doesn't work.
 
  • #24
I've done what I could with the link. No link. You'll have to enter http://onintelligence.org manually in the address bar. If it weren't such a darn good out-of-the-box theory backed by facts, I'd say screw it.
 
