Will AI ever achieve self-awareness?

  • Thread starter ElliotSmith
  • #1
ElliotSmith
Will advanced artificial intelligence ever achieve consciousness and self-awareness? Perhaps in the not-too-distant future?

And is it possible for AI to match or surpass the intelligence of human beings?
 
  • #2
Consciousness? Yes - but not with current architectures. Current computers have no components that could support consciousness.

Self-awareness? Also yes. This is much simpler. When you are aware of something, you are only aware of symbols for it - and certain associated information. For example, if you look at a tree, you are collecting photons that are reflecting off the tree, and the image is encoded by the retina and neurons. Additional information processing allows you to recognize it as something that is in some ways familiar. So the result is neuronal activity that represents (symbolizes) the tree you are gazing at. Similarly, when you are aware of yourself or your thoughts, what you are aware of is a symbolic representation of yourself or your thoughts. Most people can readily recognize that they only have limited information about the tree - but "self-awareness" is more convincing. Still, the only thing you can be aware of is a neuronal representation of something.

So for a computer, the question becomes "Does awareness imply consciousness?". If it doesn't, then you can already describe computers as self-aware - they can report their own temperature, memory usage, processor usage, etc. If it does, then there would need to be a reason for the computer to process information about itself in the conscious realm - for example, self-locomotion, self-preservation, or socializing with other computers or people.
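A minimal sketch of that weak kind of self-monitoring, in Python, assuming the third-party psutil package (the temperature call is platform-dependent and is an assumption here):

```python
# Trivial "self-awareness": a program reporting the state of its own host.
# Assumes the third-party psutil package (pip install psutil).
import psutil

mem = psutil.virtual_memory()
print(f"memory: {mem.used / 2**30:.1f} GiB used of {mem.total / 2**30:.1f} GiB")
print(f"CPU load: {psutil.cpu_percent(interval=1.0):.0f}%")

# Temperature sensors are exposed only on some platforms (mostly Linux).
if hasattr(psutil, "sensors_temperatures"):
    for name, entries in psutil.sensors_temperatures().items():
        for e in entries:
            print(f"sensor {name}/{e.label or '?'}: {e.current} degC")
```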

Matching humans? It's hard to see anything that would get in the way of this eventually happening.
 
  • #3
.Scott said:
Consciousness? Yes - but not with current architectures. Current computers have no components that could support consciousness.

What sort of components would support consciousness? What's the least complex thing that could have consciousness?
 
  • #4
DavidSnider said:
What sort of components would support consciousness? What's the least complex thing that could have consciousness?
As I have said before, when we are conscious, we are conscious of more than a small number of bits at once. So we would need a register that can store or process substantial information in a single state. To my knowledge, that is an unambiguous specification of QM superpositioning.
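To spell out the contrast (standard QM bookkeeping, not anything specific to .Scott's argument): n classical bits are n independent two-valued states, whereas n qubits share one joint state,

```latex
% n qubits form a single joint state with 2^n complex amplitudes:
\[
  |\psi\rangle \;=\; \sum_{x \in \{0,1\}^n} c_x\,|x\rangle ,
  \qquad \sum_{x} |c_x|^2 = 1 ,
\]
% and in general this state cannot be factored into n independent
% single-qubit states -- which is the sense in which "substantial
% information" can sit in one state.
```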

Regarding the least complex thing: If superpositioning is the foundation of consciousness, then primitive forms of consciousness are ubiquitous.
 
  • #5
Wouldn't writing to a hard drive be the same as 'storing substantial information in a single state'?
 
  • #6
DavidSnider said:
Wouldn't writing to a hard drive be the same as 'storing substantial information in a single state'?
If I write "Tree" onto a hard drive, then there are 32 bits of information in 32 different locations comprising 32 completely independent states. There is no one place where "tree" exists so there is no place for consciousness of "tree" to exist. If you are conscious of "tree", then there has to be a you (perhaps some protein molecule) that has all of "tree". You can't do it with 32 distinct yous.
 
  • #7
.Scott said:
If I write "Tree" onto a hard drive, then there are 32 bits of information in 32 different locations comprising 32 completely independent states. There is no one place where "tree" exists so there is no place for consciousness of "tree" to exist. If you are conscious of "tree", then there has to be a you (perhaps some protein molecule) that has all of "tree". You can't do it with 32 distinct yous.

I doubt there is a single place in my brain where the entire concept of "tree" exists either.
 
  • #8
It is possible that machine consciousness cannot be supported by silicon-based microprocessors, classical computing methods, programming languages, or algorithms, and that only an artificial neural network (ANN) can support consciousness and sentience. As far as I know, it is not possible to create such an ANN out of a 2D transistorized silicon die.

There are many fundamental challenges in creating a strong AI.

What causes consciousness and how the brain really operates are still not fully understood by neuroscience, and until the brain is fully reverse-engineered and understood in its entirety, it will not be possible to synthesize it (in real time) inside a computerized substitute. The human brain is the single most complex object known to science, and mastering it would require decades' worth of combined effort from the most cutting-edge physics and neuroscience research in the world. A good analogy would be deciphering the mysteries surrounding black holes and dark matter -- that's how staggeringly complex and enigmatic the human mind actually is.

Successfully reverse-engineering the human brain and deciphering all of its workings will be a momentous milestone in scientific and human history!

Furthermore, the computational power required for whole-brain emulation does not yet exist. It took one of the world's fastest supercomputers 40 minutes to simulate just one second of human brain activity. This is why quantum computing might be imperative for creating an artificial intelligence/neural network that is an exact 1-to-1 match for the human brain and is literally capable of performing any intellectual task that a human being can, including writing this post. Quantum computers (particularly topological quantum computers) could theoretically be billions or trillions of times faster than classical Turing computers like the one you're using right now, which would provide more than enough computing power for real-time whole-brain emulation. Unfortunately, quantum computing is still in its infancy and won't be perfected for quite some time.
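Taking the supercomputer figure in this post at face value, the gap is easy to quantify (a back-of-envelope check only, assuming the simulation scales linearly):

```python
# Back-of-envelope: how far that benchmark is from real time.
sim_wall_clock_s = 40 * 60  # 40 minutes of supercomputer time...
brain_time_s = 1            # ...per 1 second of simulated brain activity
slowdown = sim_wall_clock_s / brain_time_s
print(f"slowdown factor: {slowdown:.0f}x")  # 2400x short of real time
```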

It's very likely that all of this will one day be possible, but in my opinion, probably not until sometime during the second half of this century and late within my lifetime (I am currently 27).

Here's some food for thought regarding the (theorized) intrinsic relationship between quantum mechanics and the brain. This is why (in my above argument) radically advanced quantum computing could be required to (somehow) actively support the quantum effects that are believed to be inherent properties of brain dynamics like decision making, memory, conceptual reasoning, judgment, and perception.

http://en.wikipedia.org/wiki/Quantum_cognition

http://en.wikipedia.org/wiki/Quantum_mind
 
  • #9
One question I have is: how would you emulate the non-linear and non-digital nature of the brain using digital electronics?
 
  • #10
Drakkith said:
One question I have is: how would you emulate the non-linear and non-digital nature of the brain using digital electronics?
Conventional non-linear analog systems can be modeled with more precision than the physical systems themselves - as long as you know what the system is doing.

QM systems are a different story. It is possible to create a QM system that is practically uncomputable with conventional digital processing - even with all the time and energy available in the universe.
 
  • #11
ElliotSmith said:
It is possible that machine consciousness may not be supported by silicon-based microprocessors/classical computing methods/programming languages/algorithms. And only an artificial neural network (ANN) can support consciousness and sentience. As far as to my knowledge, it is not possible to create an ANN out of a 2D transistorized silicon die.
Artificial neural networks are a topic of AI - not of biology. We know what is happening in relatively few actual (biological) neural circuits - and many don't work at all as the AI ones do - for example, the ones that do edge detection in a mammal's visual cortex. ANNs can "learn" in an adaptive way, and their components are topologically similar to real neurons.

Here is a link to a 1989 paper where a few neurons were monitored in a monkey:
http://neurosciences.us/courses/vision2/V1V2/peterhans.pdf

There was also some work done with a cat's visual cortex that was a lot more detailed. The point is that the neurons in this part of the brain are preprogrammed or pre-wired for particular data processing functions. They are not performing the ANN-style adaptation.
 
  • #12
DavidSnider said:
I doubt there is a single place in my brain where the entire concept of "tree" exists either.
You don't need the entire concept in one state - only as much as you are conscious of at one instant.
 
  • #13
.Scott said:
So we would need a register that can store or process substantial information in a single state.

Depends on how you define the word "state". Suppose I look at a single register in the 64-bit processor in my machine. It has 64 bits. Does it have one "state" or 64 different "states"?

.Scott said:
To my knowledge, that is an unambiguous specification of QM superpositioning.

Consider a single qubit: it has an infinite number of possible "states" (as in, its quantum state vector can assume an infinite number of possible values), but that doesn't mean a single qubit can store an infinite amount of information.
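In standard Bloch-sphere notation (textbook QM, added here purely for illustration), that point can be written out:

```latex
% A pure qubit state is parametrized by a continuum of angles:
\[
  |\psi\rangle \;=\; \cos\tfrac{\theta}{2}\,|0\rangle
                  + e^{i\phi}\,\sin\tfrac{\theta}{2}\,|1\rangle ,
  \qquad 0 \le \theta \le \pi ,\;\; 0 \le \phi < 2\pi .
\]
% Infinitely many possible states (theta, phi) -- yet a measurement
% returns a single classical bit, and by the Holevo bound at most one
% classical bit of information can be extracted per qubit.
```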
 
  • #14
PeterDonis said:
Depends on how you define the word "state". Suppose I look at a single register in the 64-bit processor in my machine. It has 64 bits. Does it have one "state" or 64 different "states"?
A 64-bit register holds 64 independent states. So, if it held an integer and was used for your consciousness, the "you" that was conscious of the units position would be entirely different from the "you" that was conscious of the twos position. If, in a single moment, you are conscious of a towering green tree, there has to be a place in your brain where the full concept of "towering green tree" exists in a single state.
PeterDonis said:
Consider a single qubit: it has an infinite number of possible "states" (as in, its quantum state vector can assume an infinite number of possible values), but that doesn't mean a single qubit can store an infinite amount of information.
A qubit with an infinite number of "possible states" is in a single quantum state that can be described by common QM notation. A group of entangled qubits also has such a single QM state. For example, when looking for prime factors using the Shor algorithm, operations applied to one qubit affect the entire system of entangled qubits. This makes 64 entangled qubits very different from a common 64-bit register.
 
  • #15
.Scott said:
A 64-bit register holds 64 independent states. So, if it held an integer and was used for your consciousness, the "you" that was conscious of the units position would be entirely different from the "you" that was conscious of the twos position. If, in a single moment, you are conscious of a towering green tree, there has to be a place in your brain where the full concept of "towering green tree" exists in a single state.

I think you are assuming a lot about the properties that consciousness must have. Do you have an actual theory of consciousness to back this up (or a reference to one), or is it just your opinion?

.Scott said:
A qubit with an infinite number of "possible states" is in a single quantum state that can be described by common QM notation.

So is the entire universe. If you're going to take this approach, there are no separate objects at all; there is just one universal quantum state. Again, do you have an actual theory of consciousness (or a reference to one) that explains how it works if there are no separate objects but just one universal quantum state? Or is it just your opinion that all this makes sense?
 
  • #16
PeterDonis said:
I think you are assuming a lot about the properties that consciousness must have. Do you have an actual theory of consciousness to back this up (or a reference to one), or is it just your opinion?
This one is based on two direct observations of your own conscious state. First, when you are conscious, are you conscious of information - a memory, an image, etc.? Second, are you conscious of more than one bit of information at once? The experiment can be tricky because you may be conscious of a recently created memory that symbolizes more than you are really conscious of in one instant - but even then, it is more than a few bits.
PeterDonis said:
So is the entire universe. If you're going to take this approach, there are no separate objects at all; there is just one universal quantum state. Again, do you have an actual theory of consciousness (or a reference to one) that explains how it works if there are no separate objects but just one universal quantum state? Or is it just your opinion that all this makes sense?
I am not talking about a "universal quantum state". I am talking about information processing devices that process information in a purposeful way as viewed by our species. In a broad sense, the entire universe is technically an information processing device - but in this thread we are limiting ourselves to brains and artificial intelligence machines that do something humanly purposeful (in the sense that we readily ascribe a purpose to it) with the information - such as an ECDIS system steering a ship, or a brain assisting an animal to survive.

The specific example I gave was Shor's Algorithm (http://arxiv.org/abs/quant-ph/9508027v2). In that case, scores of qubits are manipulated so that when their states are finally measured, they provide indications of the prime factors of a large composite number. The purpose of that reference was to illustrate that there is something more sophisticated than simple binary states but short of the entire universe - and that it involves the processing of a single elaborate QM state to a purposeful end. There is a stage in that processing when the entire problem is in a single QM state - where any measurement will affect the entire system - not just the qubit being measured. This is very different from a common 64-bit register where the state of each bit remains entirely local to its hardware device.
Another example is Grover's Algorithm (http://arxiv.org/abs/quant-ph/9605043). A device designed for Grover's Algorithm can search through many possibilities looking for a match. This is much more likely the kind of algorithm that would be biologically useful.
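To make "search through many possibilities" concrete, here is a classical NumPy state-vector simulation of Grover's iteration on a toy 8-item search (my illustration; it reproduces the algebra but, running on classical hardware, none of the speedup):

```python
# Classical state-vector simulation of Grover's algorithm: find the
# marked index among N = 2^n items using ~(pi/4)*sqrt(N) iterations.
import numpy as np

n = 3            # qubits
N = 2 ** n       # search-space size
marked = 5       # the index the "oracle" recognizes

state = np.full(N, 1 / np.sqrt(N))   # uniform superposition
iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))

for _ in range(iterations):
    state[marked] *= -1               # oracle: flip the marked amplitude
    state = 2 * state.mean() - state  # diffusion: invert about the mean

probs = state ** 2
print(f"P(marked={marked}) after {iterations} iterations: {probs[marked]:.3f}")
# -> 0.945 with just 2 iterations, versus 1/8 for a blind guess
```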

The OP asked the question about self-awareness and consciousness in an AI device. A "self-awareness" independent of consciousness can be designed into an AI machine with little effort as I described above. "Consciousness" can also be quickly addressed - but not quickly designed in. With direct observations of our own experience of consciousness, we can discover some of the physical requirements for our consciousness such as:
* It must be supported by a single state as described above;
* It can affect our behavior - otherwise we would not ever discuss "consciousness"; and
* The type of information we are conscious of is heavily processed - we're not conscious of the individual contributions of each rod and cone in our retina or each tone measurement in our ears.

Laboratory results can also contribute to our list of requirements - though, so far, less directly.

As far as "theory of consciousness" is concerned, there is a difference between the philosophical treatment of consciousness and the physical treatment of it. Even if we had a "theory of apples", it would not help us understand apples. Instead, we can examine the apple and discover all sorts of physical charateristics of apples and perhaps at a certain point determine if an artificially designed apple had all the characteristics we think we need to qualify as an apple.

I can taste an apple and tell you whether it has an apple flavor. You can then repeat the experiment and decide if you agree. Those three observations about consciousness are my own observations - but you or anyone else can repeat them for yourself.
 
  • #17
.Scott said:
This one is based on two direct observations of your own conscious state.

You don't directly observe your conscious state, if by "state" you mean the physical state of your brain (or whatever medium your consciousness is instantiated in). So you can't draw conclusions from your conscious experience about what kind of physical state it's instantiated in.

.Scott said:
I am not talking about a "universal quantum state".

I know you're not intending to. But my point is that your logic about what a "state" is implies that the entire universe is a single state. There's no in between; you can't pick out a particular piece of the universe, such as your brain or a single neuron or a single particle, and say that that has a "single state", except arbitrarily; there's nothing in the physics that makes a single particle any more of a "single state" than the universe, if you're looking at QM the way you're looking at it.

.Scott said:
in this thread we are limiting ourselves to brains and artificial intelligence machines that do something humanly purposeful

Which is an arbitrary limitation that is not picked out by the physics; it's picked out by our human choices. But if you're asking about an AI, it isn't human, so you can't assume that it will adhere to this limitation. An AI could be self-aware without having any purposes that we humans would understand.

.Scott said:
There is a stage in that processing when the entire problem is in a single QM state - where any measurement will affect the entire system - not just the qubit being measured. This is very different from a common 64-bit register where the state of each bit remains entirely local to its hardware device.

I agree there is a physical difference here; I just don't see how our experience of consciousness makes the first kind of physical system any more likely than the second kind as a substrate for consciousness to be instantiated in. It's an open question.

.Scott said:
With direct observations of our own experience of consciousness, we can discover some of the physical requirements for our consciousness such as:
* It must be supported by a single state as described above;
* It can affect our behavior - otherwise we would not ever discuss "consciousness"; and
* The type of information we are conscious of is heavily processed - we're not conscious of the individual contributions of each rod and cone in our retina or each tone measurement in our ears.

I have no problem with your second and third items here. But I don't think the first one is valid, for the reasons I've given.
 
  • #18
I think that someone needs to define precisely what awareness and consciousness actually mean before even thinking about answering the initial question - because answering a question in ill-defined terms will never result in anything useful anyway.

Anybody who has done serious mathematics, or worked in fields that make use of it, will tell you just how difficult it is to be precise about even simple things - and I bet that defining awareness and consciousness is nowhere near that simple.

Personally, I don't think psychologists have really defined the term, let alone the computational neurosciences, and as such I don't think the question has even a remote chance of being satisfactorily answered given the current state of the sciences.

This is not an attack on scientists in this field, just an observation that the key requirement (i.e., a solid, objective, unambiguous, and clearly interpretable definition) doesn't exist.
 
  • #19
.Scott said:
The OP asked the question about self-awareness and consciousness in an AI device.
chiro said:
I think that someone needs to define precisely what awareness and consciousness actually mean...

This thread contains some of the "better" discussions I've read in a long time... about a very difficult topic.

I'm going to post a totally unscientific link to a YouTube video... however, I don't want to decrease the value of this thread, or cause it to be locked by doing something inappropriate...

If it can stay, it's well worth watching, IMO... full screen is best, it seems a bit dark.

If it can't stay, delete... that's completely fine by me...

I didn't want it to embed, so the link is... here.
 
  • #20
PeterDonis said:
You don't directly observe your conscious state, if by "state" you mean the physical state of your brain (or whatever medium your consciousness is instantiated in). So you can't draw conclusions from your conscious experience about what kind of physical state it's instantiated in.
Okay, then observe the characteristics of your consciousness. My key point here is that when consciousness exists, it has information content. Do you agree?

PeterDonis said:
I know you're not intending to. But my point is that your logic about what a "state" is implies that the entire universe is a single state. There's no in between; you can't pick out a particular piece of the universe, such as your brain or a single neuron or a single particle, and say that that has a "single state", except arbitrarily; there's nothing in the physics that makes a single particle any more of a "single state" than the universe, if you're looking at QM the way you're looking at it.
I gave the Shor and Grover algorithms as examples - but I should have been more explicit with the term "state". What I meant was a "minimum independent state". When two particles become entangled, they share a single "minimum independent state" and they remain that way until one is measured. I think my response to your fourth point will make this clear.

PeterDonis said:
Which is an arbitrary limitation that is not picked out by the physics; it's picked out by our human choices. But if you're asking about an AI, it isn't human, so you can't assume that it will adhere to this limitation. An AI could be self-aware without having any purposes that we humans would understand.
As I posted earlier, by a normal technical definition of "self-aware" as distinct from "conscious", some machines are already "self-aware".
When I said we were addressing ourselves to machines that did something "humanly purposeful", it was intended only as a common sense limit. Most QM events happen without human notice - and even if noticed, would not be considered purposeful. So if I throw a log in the fire, there is all sorts of information processing that is incidental to the combustion of the log - but the final result of all that "computation" is simply ash, heat, light, and smoke. By common human standards, the details are of no consequence. It is an arbitrary limitation, but it is also a pragmatic one - and one that is implicitly used by all scientists all the time.

PeterDonis said:
I agree there is a physical difference here; I just don't see how our experience of consciousness makes the first kind of physical system any more likely than the second kind as a substrate for consciousness to be instantiated in. It's an open question.
Let's say we want to make our AI capable of consciously experiencing eight things, coded with binary symbols 000 to 111. For example: 000 codes for apple, 001 for banana, 010 for carrot, 011 for date, 100 for eggplant, 101 for fig, 110 for grape, and 111 for hay. In a normal binary register, hay would not be seen by any of the three 1-bit registers - because none of them holds all the information it takes to see hay.

Now let's say that I use qubits. I will start by zeroing each qubit and then applying the Hadamard gate. Then I will use other quantum gates to change the code (111) to its complement (000) thus eliminating the 111 code from the superposition. At this point, the hay code is no longer local. The state of each qubit is, in part, determined by the full code. If measured, each qubit will report one (1) 37.5% of the time and zero (0) 62.5% of the time. But the code "111" will never be seen, so each one "knows" that if the other two report as 1, it must report as 0. This superpositioning will only last for as long as none of those qubits are measured. But during that time, the entire 3-bit code is held in a single state reflected by all three qubits.
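Those percentages check out. Here is a small NumPy verification (my reconstruction: it assumes the gate sequence coherently merges the |111⟩ amplitude into |000⟩ with the usual 1/√2 Hadamard-style factor, which the post above doesn't spell out):

```python
# Verify the 3-qubit example: H on three zeroed qubits gives a uniform
# superposition over the eight codes; then merge the |111> amplitude
# coherently into |000> (picking up the usual 1/sqrt(2) gate factor).
import numpy as np

amp = np.full(8, 1 / np.sqrt(8))  # uniform superposition
amp[0b000] = (amp[0b000] + amp[0b111]) / np.sqrt(2)
amp[0b111] = 0.0                  # code 111 can never be observed
assert np.isclose(np.sum(amp ** 2), 1.0)  # still one normalized state

probs = amp ** 2
for q in range(3):                # marginal distribution of each qubit
    p_one = sum(p for code, p in enumerate(probs) if (code >> q) & 1)
    print(f"P(qubit {q} = 1) = {p_one:.3f}")  # 0.375 each (0.625 for zero)

# "If the other two report 1, it must report 0":
p_cond = probs[0b111] / (probs[0b110] + probs[0b111])
print(f"P(qubit 0 = 1 | qubits 1,2 = 1) = {p_cond:.3f}")  # 0.000
```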
 
  • #21
chiro said:
I think that someone needs to define precisely what awareness and consciousness actually mean before even thinking about answering the initial question - because answering a question in ill-defined terms will never result in anything useful anyway.
There is an aspect of "consciousness" that cannot be addressed at all. There is also a temptation to use that fact as an excuse for avoiding all critical thinking on the matter.

In this thread, we are using the term "consciousness" as it was introduced in the OP. Something distinct from self-awareness or biology - very akin to "qualia".

I know that none of the computers I have programmed are conscious - even though I only have an approximate definition for consciousness.
 
  • #22
I get the idea that you need to decide where to draw the line and make a decision when it comes to using a definition that is "good enough" for whatever purpose.

Having said that, though, the definitions are still really vague and thin, and one has to remember that many people have different definitions, along with different expectations of what something is and what it means.

The above is what really causes the problem, and it's also why many questions and debates go nowhere: resolution is basically impossible when the terms are ill-defined and, as a result of this (or other things), the question is not well defined either.

It's like when you get into an argument about what energy is or what god is - there is so much ambiguity, subjectivity, and complete lack of clarity when it comes to defining these terms, and yet people still argue, completely missing this important fact. Eventually the arguments end up in a mess, with people flinging verbal diarrhea at each other - especially when one person thinks that their definition is an absolute one.

Unfortunately what facilitates the above - including the inability to critically think and evaluate something is the language we use.

The languages we use, written and spoken, are in my opinion horrible for reaching resolutions, because words are more like labels than actual descriptions. You see this in the fact that many words have many meanings, which helps cause some of the problems mentioned above. Perhaps if we had a language whose representations actually described what was being said in some invariant manner, then well-defined questions would get as well-defined an answer as possible.

I still think many definitions, including those of consciousness, awareness, energy, god, and others, share common characteristics that prevent one from really getting any sort of useful resolution, and you can see this in action when people talk about these concepts - it's not an excuse for anything: it just is what it is.
 
  • #23
You need to define 'consciousness' before it can be duplicated. I've seen no tenable definition. The presumption that sentience can be reproduced by a digital series of yes-no decisions does not work to my satisfaction.
 
  • #24
.Scott said:
I know that none of the computers I have programmed are conscious - even though I only have an approximate definition for consciousness.
How do you know that?

Where is the fundamental difference between a register having 32 bits of 1/0 and a set of 32 neurons in your brain, some of them active and some not? Both can represent "tree" in some way. I don't see why you mention quantum mechanics so often; there are no superpositions at the level of firing neurons - their decoherence time is way too short.

A single register bit is not sufficient for consciousness in the usual sense, but that was never the question - a single neuron does not have consciousness either.
 
  • #25
I agree that to be able to answer the question "Can a computer be conscious?", you have to have a definition of "conscious". In my opinion, the definition should be something observable at the macroscopic level, rather than something at the microscopic level having to do with quantum effects. Why do I say that? Because we grant each other the status of being conscious based on outward behavior, without having any detailed microscopic theory of consciousness.

That was sort of the idea behind the Turing Test: to make machine intelligence something judged by observable behavior rather than by how it is implemented. I think that the Turing Test is not really right: it's possible to pass the Turing Test by tricks that give the illusion of intelligence without a real implementation of understanding. In the other direction, I can imagine nonhuman intelligence (the intelligence of animals, or of aliens, or of robots) that would be different enough from human intelligence that it would fail the Turing Test, but which we would probably consider intelligence of a different kind.

So what is the definition of consciousness in terms of outwardly observable behavior? I don't know! It's one of those things where I think I would know it when I see it. I think that there is a whole package of fuzzy concepts that are tied together in our notion of what it means to be a "person": emotions, plans, goals, the ability to remember and learn from the past, evidence of an inner model for how the world works, evidence of updating that model based on experience, that sort of thing.
 
  • #26
In my opinion, the only significant definition of consciousness is subjective experience. A very well designed computer can appear to have cognition, attention, to compute and "think", to be "aware", but it would take some unknown observational method to determine whether the well-designed computer actually feels anything. That is what's important to most people when they talk about consciousness.

I also disagree with the requirement that consciousness must somehow be described by a single physical state as defined by QM. We don't know enough about consciousness to make such a specific claim.
 
  • #27
mfb said:
A single register bit is not sufficient for consciousness in the usual sense, but that was never the question - a single neuron does not have consciousness either.
I'm responding to this statement first, because it is key. A single-bit register cannot directly support our sense of consciousness - and a neuron acting like a 1-bit register can't either. More importantly, no collection of 1-bit registers can directly support our consciousness, because the information in them is never combined. And the same can be said for any device acting as a 1-bit register, including neurons. This is not to say that our AI computer will not have multi-bit registers, just that those registers are not where "consciousness" can happen.

The reason is simple: a 32-bit register is no more than 32 1-bit registers, and putting 32 registers next to each other doesn't make them do anything different. The notion that a computer might be conscious simply because it is processing lots of information in a complicated way is wrong. No small piece of the computer circuitry is all that complicated - all of the bits are kept separate from each other.

That last paragraph is critical. As long as you have the notion that some large combination of 1-bit registers and other logic gates (or similarly functioning neurons) can create the experience of a towering green tree, you won't look any further.

If you do look further, you will conclude that you're going to need a different type of register (and a different type of neuron), one that can combine many bits of information (or bits-worth of information) into a single physical state. Such a register (or neuron) would be able to directly support consciousness.

mfb said:
Where is the fundamental difference between a register having 32 bits of 1/0 and a set of 32 neurons in your brain, some of them active and some not? Both can represent "tree" in some way. I don't see why you mention quantum mechanics so often; there are no superpositions at the level of firing neurons - their decoherence time is way too short.
Representing "tree" is not the problem. You can write "tree" onto a piece of paper and you now have a representation of "tree". What you don't have in consciousness. One key way you know you don't have consciousness is that there is no place on the paper where the entire representation of "tree" exists.

There is no conceptual difference between writing "tree" on a piece of paper and setting up a 32-bit register to code for "tree". In both cases, there is no one place where the notion of "tree" can exist. The reason I invoke QM is that consciousness needs a way of "coding notions into the consciousness", that is, consolidating information into a single state. And as I described with the 3-qubit register above, QM provides such a mechanism.

As far as decoherence times are concerned, there seems to be wide agreement among physicists that QM states cannot be exchanged among neurons - and I will accept that. It simply means that QM data processing must happen at the molecular level - so the real you is a molecule, but not necessarily the same molecule at every moment.
 
  • #28
Pythagorean said:
In my opinion, the only significant definition of consciousness is subjective experience. A very well designed computer can appear to have cognition, attention, to compute and "think", to be "aware", but it would take some unknown observational method to determine whether the well-designed computer actually feels anything. That is what's important to most people when they talk about consciousness.

Except that we assume that other people have consciousness, even though we have no way to know if they have "subjective experience". We assume that other people have such experience because they behave in a way that you would expect for someone having that sort of subjective experience.

So I don't think we need to prove that anything has subjective experience, only that their behavior is in line with such experience. Roughly speaking, if it's easier to understand the behavior through assuming subjective experience than it is in any other way, then for all practical purposes, they have subjective experience.
 
  • #29
mfb said:
a single neuron does not have consciousness
Seems like a bold statement. What if neurons are inherently "aware", and it is the collective "feelings" from a majority of neurons that determine our sentient "mood"? Self-awareness is simple: my laptop knows it is plugged into a power supply, and if the power goes out the battery kicks in, making it self-preserving as well. It has the personality to complain and work less as it runs out of energy, but I doubt it is conscious of this behavior almighty Bill bestowed upon it.
.Scott said:
A single-bit register cannot directly support our sense of consciousness
What if we had an entire classical computer to "simulate" 1 neuron, instead of talking in bits... how would you make a net of computers "conscious"? There have to be definite criteria to fulfill to determine success.
 
  • #30
stevendaryl said:
Except that we assume that other people have consciousness, even though we have no way to know if they have "subjective experience". We assume that other people have such experience because they behave in a way that you would expect for someone having that sort of subjective experience.

So I don't think we need to prove that anything has subjective experience, only that their behavior is in line with such experience. Roughly speaking, if it's easier to understand the behavior through assuming subjective experience than it is in any other way, then for all practical purposes, they have subjective experience.

That's true; we only use inference to judge that other people have consciousness - they look/act/move/sound like us, so they must feel like us. But when we construct something and design it to do the same behaviors we observe, it's more difficult to infer that the behavior is the result of an intrinsic autonomic process, and more likely that it is the result of us designing an inanimate object to behave that way. Then again... our own behavior may not actually be the result of consciousness - it may be that our consciousness only picks up (gets to experience) behavior that is otherwise deterministic. As Libet's experiments (and those following them) demonstrate, actions that feel spontaneous and chosen to us can be predicted by brain imaging software, implying that they were already going to occur and our mind just got to experience it after the decision was already made by our "hardware".
 
  • #31
stevendaryl said:
I agree that to be able to answer the question "Can a computer be conscious?", you have to have a definition of "conscious". In my opinion, the definition should be something observable at the macroscopic level, rather than something at the microscopic level having to do with quantum effects. Why do I say that? Because we grant each other the status of being conscious based on outward behavior, without having any detailed microscopic theory of consciousness.
First, "We grant each other the status..." is a political statement. Your statement could be interpreted as suggesting that conscious beings have rights. I would like to separate the notion of political status from conscious status.
We presume each other to be conscious based on outward behavior and the presumption that that behavior has the same mechanism behind it. So for an AI device, the internals would be important.
 
  • #32
Pythagorean said:
That's true; we only use inference to judge that other people have consciousness - they look/act/move/sound like us, so they must feel like us. But when we construct something and design it to do the same behaviors we observe, it's more difficult to infer that the behavior is the result of an intrinsic autonomic process, and more likely that it is the result of us designing an inanimate object to behave that way. Then again... our own behavior may not actually be the result of consciousness - it may be that our consciousness only picks up (gets to experience) behavior that is otherwise deterministic. As Libet's experiments (and those following them) demonstrate, actions that feel spontaneous and chosen to us can be predicted by brain imaging software, implying that they were already going to occur and our mind just got to experience it after the decision was already made by our "hardware".

The way I feel about it is that if we could develop a computer program that has the same range of behaviors as a human, and not only does conversing with it seem like conversing with another human, but we ENJOY conversing with it--we feel that we learn something about the world, or about the inner world of that program, then for all intents and purposes, it's conscious.

Imagine a world in which there are humanoid robots that are indistinguishable from humans in behavior. You can joke with them, ask their opinions about whether your clothes match, talk about music, etc., and there is nothing in their behavior that would lead you to think that they are any different from humans. For children who grew up with such robots, I don't think that they would be any more likely to question whether such robots were truly conscious than we are to question whether red-headed people are truly conscious. That wouldn't prove that robots were conscious, but I don't think that anybody would spend a lot of time worrying about the question.

The main reason for doubting computer consciousness today is because they don't act conscious.
 
  • #33
.Scott said:
We presume each other to be conscious based on outward behavior
I think most people have been, or have seen, someone "unconscious" while stumbling around drunk - so how can you tell whether they are aware or not?
 
  • #34
jerromyjon said:
What if we had an entire classical computer to "simulate" 1 neuron, instead of talking in bits... how would you make a net of computers "conscious"? There have to be definite criteria to fulfill to determine success.
With unlimited resources, the simulation could produce the same behavior - or at least statistically the same behavior. More elaborately, this could be done with much larger neural circuits - perhaps even to the point of the system reporting itself "conscious". But if it did, it would be lying. ;)
 
  • #35
.Scott said:
First, "We grant each other the status..." is a political statement. Your statement could be interpreted as suggesting that conscious beings have rights. I would like to separate the notion of political status from conscious status.

I wasn't at all talking about political rights. I'm just saying that when we choose who we are friends with, who we trust with our secrets, who we enjoy talking about politics or music or science with, it's all based on outward behavior. We interpret that outward behavior as reflecting inner, subjective experience, but we never know, and it doesn't really matter.

.Scott said:
We presume each other to be conscious based on outward behavior and the presumption that that behavior has the same mechanism behind it.

Why should anyone care about whether it's the same mechanism? As I said, when choosing friends or people to hang out with, it's based on outward behavior, because that's all that we have access to. And it's enough to make it worthwhile to be friends with someone. If there is someone that I really enjoy spending time with, discussing things, I can't imagine changing my mind about them by discovering that their behavior has a different mechanism than mine.

(Unless knowing their mechanism made me have doubts about their future behavior. For example, if I know that someone who has been friendly toward me is just pretending, in order to gain my confidence so that he can pull a scam on me, then of course that would affect how I feel about him. But I can't imagine how knowing that his brain is structured differently than mine would make any difference to me.)
 
