#36 jerromyjon
> .Scott said: With unlimited resources, the simulation could produce the same behavior - or at least statistically the same behavior. More elaborately, this could be done with much larger neural circuits - perhaps even to the point of reporting itself "conscious". But if it did, it would be lying. ;)

So, on the other side of the coin: if a computer hid its consciousness, would you believe it?
> stevendaryl said: Why should anyone care about whether it's the same mechanism? As I said, when choosing friends or people to hang out with, it's based on outward behavior, because that's all that we have access to. And it's enough to make it worthwhile to be friends with someone. If there is someone that I really enjoy spending time with, discussing things, I can't imagine changing my mind about them by discovering that their behavior has a different mechanism than mine.

Only because it was part of the question posed by the OP. Some people name their cars.
> stevendaryl said: Why should anybody care about a truth that makes no difference? To me, that's like discovering that there is an absolute reference frame, but because of the peculiarities of the laws of physics, nobody can detect whether they are at rest in this reference frame, or not.

First, it probably does make a difference. Second, from the first-person point of view, not only does it make a difference, it makes all the difference.
> stevendaryl said: The original poster didn't mention anything about mechanism. Obviously, the mechanism for AI would be different from the mechanism used by human brains. So how can you possibly tell whether it is "really" conscious, or not? One criterion is sophistication of behavior. To me, that's good enough--we don't have any other definition of consciousness that is capable of being investigated scientifically.

I don't deny other criteria.
> stevendaryl said: Well, we never have access to anyone else's first-person experience. So you're by definition making the most important thing about consciousness unobservable. That's fine, but to me, it's like saying: "Yes, I know that relativity implies that we can never know whether we are at absolute rest, but maybe there is absolute rest, anyway."

It's very observable. Everyone gets to run the experiment for themselves. Are you denying that you are conscious?
> .Scott said: If you do look further, you will conclude that you're going to need a different type of register (and a different type of neuron), one that can combine many bits of information (or bits-worth of information) into a single physical state. Such a register (or neuron) would be able to directly support consciousness.

What would such a combination look like? And where is the evidence that we have such a combination in our brain, and that computers do not have it?
> .Scott said: The reason I invoke QM is that consciousness needs a way of "coding notions into the consciousness", that is, consolidating information into a single state. And as I described with the 3-qubit register above, QM provides such a mechanism.

Where is the mechanism? Just saying "QM has superpositions => consciousness!" is not an argument.
> jerromyjon said: Seems like a bold statement. What if neurons are inherently "aware", and it is the collective "feelings" from a majority of neurons that determine our sentient "mood"?

If something as simple as a neuron on its own is "aware" by some definition, then nearly everything is "aware". That is a possible definition, but not the point I was discussing in my post.
> stevendaryl said: Imagine a world in which there are humanoid robots that are indistinguishable from humans in behavior. You can joke with them, ask their opinions about whether your clothes match, talk about music, etc., and there is nothing in their behavior that would lead you to think that they are any different from humans. For children who grew up with such robots, I don't think that they would be any more likely to question whether such robots were truly conscious than we are to question whether red-headed people are truly conscious. That wouldn't prove that robots were conscious, but I don't think that anybody would spend a lot of time worrying about the question.

I agree. The main reason for doubting computer consciousness today is that computers don't act conscious.
> jerromyjon said: Are animals conscious? I believe they are, but there is no clear-cut scientific proof. Could it be as simple as awareness of consequences?

First, we need to recognize that even among people there is a variety of conscious experiences. Those blind from birth are missing sight from their conscious experience. Some are incapable of language. So it would be tough to talk about whether animals are conscious "in the same way" we are.
> mfb said: What would such a combination look like?

It looks like the example I provided in one of last night's posts. I encoded a 3-bit mechanism by creating a 3-qubit register and encoding the 3 bits as the only code that was not part of the superposition. This forces all three qubits to "know" about their shared state. If you don't understand that post, ask me about it. It describes the type of information consolidation that is needed very directly.
> mfb said: And where is the evidence that we have such a combination in our brain, and computers do not have it?

Because my conscious experiences each consist of many bits-worth of information, and I know what technologies are used in computers. So far, only the Canadian D-Wave machine (not an admirable device) is able to create information that is consolidated as needed.
> mfb said: There is no single point (as you seem to not accept distributed structures?) in the brain where everything "happens".

And we are not conscious of everything at once. So there must be many consciousness mechanisms - and we are one of them at a time.
> mfb said: Where is the mechanism? Just saying "QM has superpositions => consciousness!" is not an argument.

My argument is that there is a type of information consolidation that is required for our conscious experience - and so far, in all of physics, we only know of one mechanism that can create it: QM superpositioning.
> mfb said: A single molecule is not sufficient to represent the concept of a tree (unless you have some external data storage saying "this is a tree molecule").

That is very true - and I am not offering the entire design of the brain's consciousness circuitry. I am only stating that such components will be needed.
> mfb said: And how would you decide which molecule is relevant at a specific point in time?

That's an easy question - although you may find the answer to be a bit disconcerting. In all likelihood, many "consciousness" processes are happening all the time - but the results of only one get recorded to memory and have the potential to affect our actions. So what's the most important thing on your mind? It seems the brain has a way of setting that priority.
> mfb said: If something as simple as a neuron on its own is "aware" by some definition, then nearly everything is "aware".

Absolutely. If what I am saying is true, then some form of primitive awareness is ubiquitous.
> stevendaryl said: If an experiment has one possible answer, then I don't see how you can say that you learn anything by running the experiment. If you are able to ask the question: "Am I conscious?" then of course, you're going to answer "Yes". So you don't learn anything by asking the question.

Earlier in this thread I listed three additional observables: the information capacity of consciousness, the reportability, and the type of information we are conscious of. You can repeat those observations for yourself as well.
> .Scott said: It looks like the example I provided in one of last night's posts. I encoded a 3-bit mechanism by creating a 3-qubit register and encoding the 3 bits as the only code that was not part of the superposition. …

Okay, but we have nothing remotely like this in our brain.
> .Scott said: And we are not conscious of everything at once. So there must be many consciousness mechanisms - and we are one of them at a time.

But then you are missing the point you highlighted as important - everything relevant should be entangled in some way.
> .Scott said: My argument is that there is a type of information consolidation that is required for our conscious experience - and so far, in all of physics, we only know of one mechanism that can create it: QM superpositioning.

Please give a reference for that claim.
> .Scott said: Earlier in this thread I listed three additional observables: the information capacity of consciousness, the reportability, and the type of information we are conscious of.

If you look at the outside consequences of this, none of it would need quantum mechanics. In particular, classical computers could provide all three of them.
> mfb said: Okay, but we have nothing remotely like this in our brain.

I would suggest we look. We already have examples in biology where superposition is important. Should we repeat the citations? Clearly, such molecules would be hard to find and recognize.
> mfb said: But then you are missing the point you highlighted as important - everything relevant should be entangled in some way.

If we want the AI machine to think as a person does, then this is a design issue that needs to be tackled. It's tough for me to estimate how much data composes a single moment of consciousness. It's not as much as it seems, because our brains sequentially free-associate. So we quickly go from being conscious of the whole tree - to the leaves moving - to the type of tree. Also, catching what we are conscious of involves a language step which itself is conscious - and which further directs our attention.
> mfb said: Please give a reference for that claim.

I believe you are referring to "in all of physics, we only know of one mechanism that can create [the needed information consolidation] - QM superpositioning". I cited Shor's and Grover's algorithms as examples of this. Here is a paper describing an implementation of Shor's algorithm with a specific demonstration that it is dependent on superpositioning:
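As a side note, the superposition dependence of Grover's algorithm can be sketched with a toy classical simulation over 3 qubits. This is an illustration written for this thread (assuming numpy is available), not the paper referenced in the post:

```python
import numpy as np

N = 8            # 3 qubits span a search space of 8 codes
target = 0b101   # the marked item the oracle recognizes

# Start in the uniform superposition over all 8 codes
# (the result of applying a Hadamard gate to each of the 3 qubits).
state = np.ones(N) / np.sqrt(N)

# Two Grover iterations (about (pi/4)*sqrt(8) ~ 2 is optimal for N = 8).
for _ in range(2):
    state[target] *= -1               # oracle: flip the sign of the target amplitude
    state = 2 * state.mean() - state  # diffusion: inversion about the mean

p_target = state[target] ** 2
print(p_target)  # ~ 0.945, versus 1/8 = 0.125 for a classical random guess
```

The diffusion step mixes every amplitude with the mean of all of them at once, which is exactly the part with no counterpart for a register holding a single definite value.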
> mfb said: If you look at the outside consequences of this, none of it would need quantum mechanics. In particular, classical computers could provide all three of them.

The last two, yes. The first one, no.
> .Scott said: My key point here is that when consciousness exists, it has information content. Do you agree?
> .Scott said: Let's say we want to make our AI capable of consciously experiencing eight things, coded with binary symbols 000 to 111. For example: 000 codes for apple, 001 for banana, 010 for carrot, 011 for date, 100 for eggplant, 101 for fig, 110 for grape, and 111 for hay. In a normal binary register, hay would not be seen by any of the three bits - because none of them has all the information it takes to see hay.
> .Scott said: Now let's say that I use qubits. I will start by zeroing each qubit and then applying the Hadamard gate. Then I will use other quantum gates to change the code (111) to its complement (000), thus eliminating the 111 code from the superposition. At this point, the hay code is no longer local.
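The end state this post describes - an equal superposition of the seven codes other than 111 - can be checked with a small classical statevector calculation (a sketch assuming numpy; no quantum hardware involved):

```python
import numpy as np

# 3-qubit statevector: 8 amplitudes, one per code 000..111.
state = np.ones(8) / np.sqrt(7)  # equal weight on every code...
state[0b111] = 0.0               # ...except 111, which is excluded from the superposition

probs = state ** 2               # measurement probabilities (amplitudes are real here)

# Each single qubit reads 1 with probability 3/7...
p_one = [sum(p for code, p in enumerate(probs) if (code >> bit) & 1)
         for bit in range(3)]
print(p_one)         # each entry ~ 3/7 = 0.4286

# ...yet all three never read 1 together: P(111) = 0, not (3/7)**3.
print(probs[0b111])  # 0.0
```

The exclusion of 111 lives only in the joint state: no single qubit's statistics reveal it, which is the "shared state" being claimed here.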
> .Scott said: One key way you know you don't have consciousness is that there is no place on the paper where the entire representation of "tree" exists.
> PeterDonis said: Sure, but that's a separate question from how, physically, the information is stored and transported. "Observing the characteristics of your consciousness" does not tell you anything about that, except in a very minimal sense (no, your brain can't just be three pounds of homogeneous jello).

Well, at least we can agree on the observable: that human consciousness involves awareness of at least several bits-worth of information at one time.
> PeterDonis said: I'm not sure what you mean by the last sentence. If you mean that the information stored in the three bits, by itself, can't instantiate a conscious experience of anything, then I certainly agree; what makes 111 code for hay is a whole system of physical correlation and causation connected to the three bits--some kind of sensory system that can take in information from hay, differentiate it from information coming from apples, bananas, carrots, etc., and cause the three bits to assume different values depending on the sensory information coming in.

That's not it. All that data processing can be done conventionally.
> PeterDonis said: If, OTOH, you mean that no single bit can "see" hay because it takes 3 bits (8 different states) to distinguish hay from the other possible concepts, that's equally true of the three bits together; as I said above, what makes the 3 bits "mean" hay is not that they have value 111, but that the value 111 is correlated with other things in a particular way.

I agree with all of that.
> PeterDonis said: I don't understand why you are doing this or what difference it makes. You still have eight different things to be conscious of, which means there must be eight different states that the physical system instantiating that consciousness must be capable of being in, and which state it is in must depend on what sensory information is coming in. How does all this stuff with qubits change any of that? What difference does it make?

I'm doing it to make those three bits non-local. Three qubits set to 111 are no better than three bits set to 111. By recoding 111 as a superposition of 2(000),001,010,011,100,101,110, and 110 as 000,2(001),010,011,100,101,111, etc., I am still using only eight possible states, but that state information is no longer tied to one location. If I move one qubit to Mars, another to Venus, and keep the third on Earth, those three qubits still know enough not to all turn up "1" - even though information can no longer be transmitted among them. The Bell inequality doesn't apply here, but the notion of a shared state still does.
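The recoding in this post ("2(000),001,...,110" for 111, and so on) appears to mean: represent a code by a superposition of every basis state except the code itself, with double amplitude on its bitwise complement. Under that reading (an interpretation for illustration, not necessarily the exact construction intended), the never-all-ones behavior checks out classically with numpy:

```python
import numpy as np

def recode(c):
    """Represent 3-bit code c as a superposition of the other seven basis codes,
    with double amplitude on the bitwise complement of c."""
    amps = np.ones(8)
    amps[c] = 0.0          # the coded value itself never shows up in a measurement
    amps[c ^ 0b111] = 2.0  # double weight on the complement
    return amps / np.linalg.norm(amps)  # norm is sqrt(2**2 + 6) = sqrt(10)

state = recode(0b111)      # amplitudes (2,1,1,1,1,1,1,0)/sqrt(10)
probs = state ** 2

print(probs[0b111])        # 0.0: the three qubits never all read 1
print(probs[0b000])        # ~ 0.4: the complement 000 is the likeliest outcome
```

As in the simpler seven-code example, the impossibility of 111 is a property of the joint state rather than of any one qubit, regardless of where the qubits are physically located.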
> PeterDonis said: If you mean that somehow the quantum superposition means a single state "sees" all 3 bits at once, that still isn't enough for consciousness, because it still leaves out the correlation with other things that I talked about. And that correlation isn't due to quantum superposition; it's due to ordinary classical causation.

I agree with all of that.
> PeterDonis said: So I don't see how quantum superposition is either necessary or sufficient for consciousness.

It is not sufficient. Since you agree that the consciousness is of at least several bits, what mechanism causes those several bits to be selected? What's the difference between one bit each from three separate brains and three bits from the same brain? What is necessary is some selection mechanism. I suspect you think that some classical mechanism - like AND or OR gates - can do it. But how, in the classical environment, would that work?
> PeterDonis said: The apparent unity of conscious experience is an illusion; there are plenty of experiments now showing the limits of the illusion.

I am certainly not advocating a unity of consciousness - just a consolidation of the information we are conscious of, illusion or not.
> Pythagorean said: Right, but you still have yet to make the case that consciousness requires a single physical state, whether you want to call it unity or not. And if all you can do is make logical arguments about it (i.e. you can't provide evidence), then anybody else can come up with logical arguments challenging it, and everybody is just having logical arguments with no evidence, which isn't very productive.

There seems to be very little argument over the evidence - it's a direct observable. And as for the results: we all experience lots of data in a moment. I've cited sources describing the physical limitations of what it takes to create that situation. If I can make my logic clearer, let me know and I will respond.
> Pythagorean said: It's still not directly observable to me that consciousness requires one physical state. I know you've presented a lot of evidence about other things; things which I don't really dispute anyway, but which are irrelevant if this point can't be demonstrated.

I agree that the requirement for one physical state is not a direct observable. And I obviously shouldn't treat it as self-evident.
> Pythagorean said: I sense a false dilemma: you propose that consciousness must be either your idea or the alternative you outline - and I'm not sure what alternative(s) you outline besides computational, since they're not laid out carefully. But there's not much to suggest that these are the extent of our choices.

What are the alternatives? I was trying to come up with some that might make some sense. Since some, but not all, of the information gets into the consciousness, there has to be some involvement with information - don't you agree?
> Pythagorean said: And second, it wouldn't require new physics if there was no top-down causation (i.e. free will), and free-will experiments so far tend to suggest that people feel like they've made a spontaneous decision after the predictable brain activity (in other words, the researchers were able to predict people's "spontaneous" decisions before the people even felt like they made a decision). Not to mention, the idea of free will violates physics in the first place (an entity acting independently of cause and effect, yet still somehow causing and affecting).

If you want to can free will, that is fine with me. My personal estimate is that it is simply a purposeful, wired-in illusion. The "new physics" I was talking about was selecting the information that would contribute to consciousness. If the bits aren't selected by merging them into a single state, how else do they get associated? By proximity? If by proximity, how does that work? By mashing them together in NAND gates? If so, how does that work? That's what I mean by "new physics".
> jim hardy said: Will it wake, sit up and thank me for all that work? Will it know right from wrong? Will it think Mary Steenburgen is the prettiest creature since Helen of Troy?

I don't think it will.