svastikajla
All right, for argument's sake we shall assume that the neurons of the human central nervous system function in a simplified way (it will soon become clear that this isn't relevant to the experiment). We shall assume they work as follows:
1: They receive an electrical charge via one or more inputs.
2: They put out a charge through other outputs depending on what they receive (they fire).
3: The collective of all those neurons makes the human central nervous system function, and makes my hands type, for one; lovely piano fingers they are, elegantly moving over the keyboard.
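The simplified neuron above can be sketched in a few lines of code; the weights and threshold here are purely illustrative assumptions, not claims about real neurons:

```python
# A toy "neuron" in the simplified sense above: it receives charges on
# one or more inputs and fires (puts out a charge) when the combined
# input crosses a threshold. Weights and threshold are arbitrary.
def neuron(inputs, weights, threshold):
    charge = sum(i * w for i, w in zip(inputs, weights))
    return 1 if charge >= threshold else 0

# Two inputs firing together push this neuron over its threshold;
# a single input does not:
print(neuron([1, 1], [0.6, 0.6], 1.0))  # fires -> 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # stays quiet -> 0
```

The point of the experiment is exactly that nothing hinges on this particular model; any rule mapping inputs to outputs would do.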
All right, now imagine a room, a cubicle. What we have in this room are lights, L.E.D.s if you like, which start to flicker at certain points. In it we place a human whom we have briefed thoroughly on what to do. This human is instructed to press buttons in precise combinations depending on the patterns of L.E.D.s that person sees; very easy. Of course, we could give this human instructions equivalent to how one random isolated neuron in, for instance, my central nervous system works based on firing. We just set a computer to give the light patterns and ask the person in this room to press the buttons.
I'm sure by now you get the intention: we can hook up the buttons to L.E.D.s in another room, and so forth, to effectively create a functional replica of a human central nervous system, with in each room a human given the exact analogous instructions. This is the part where we find out that the complexity of neurons is largely irrelevant, as we can give more complex instructions if we want. All right, then we make a computer feed in sensory input, and we have all those billions of humans simply doing their little boring tasks. The best part is, they needn't be aware of the others; we can just tell them we are performing a test of their reaction speed. In fact, we have thus created a swarm intelligence consisting of billions of humans which can compute everything a human brain can... a lot slower; you won't see Dell investing in this.
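To see why each person's task can stay this boring, each cubicle can be modeled as a fixed lookup rule from LED pattern to button presses; wiring a few such rooms together then computes something none of the occupants knows about. The rooms and the XOR task below are a made-up miniature, not part of the original setup:

```python
# Each cubicle is just a person following a fixed rule: "when you see
# this LED pattern, press these buttons." Functionally, that is a
# lookup table; the occupant needs no idea what the whole computes.
def make_cubicle(rule):
    return lambda leds: rule[leds]

# A hypothetical three-room chain: two rooms watch the sensory bits,
# a third watches their buttons. Together they compute XOR.
room_or  = make_cubicle({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1})
room_and = make_cubicle({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
# Third room fires only when the OR room pressed and the AND room didn't:
room_out = make_cubicle({(0, 0): 0, (0, 1): 0, (1, 0): 1, (1, 1): 0})

def swarm(sensory):
    # the first two rooms' buttons light the third room's LEDs
    return room_out((room_or(sensory), room_and(sensory)))

print(swarm((1, 0)))  # -> 1
print(swarm((1, 1)))  # -> 0
```

No room's rule mentions XOR; the function only exists in the wiring, which is the intuition behind swapping neurons for rule-following humans.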
Of course, we can also let the output of this vast complex of boring cubicles animate a real human; let's just do it in a computer simulation for ethics' sake. And the input that fictive human gets is, of course, the fake sensory stimuli of that simulation. The human, which is now a replica of me, is just walking around in that simulated world, typing this message on this board, in reality calculated by a swarm intelligence, none of whose components have any idea what they are doing, or that their little reaction tests contribute tiny bits to my animation.
Now come the questions of life:
1: Am I—the simulation—conscious?
2: Am I—the simulation—self-aware?
3: Do I—the simulation—have free will?
My whole brain function is just humans in cubicles, but to all functional effect, they perform the same functions; from my perspective, there shouldn't be a difference. But a lot of people would be pretty scared to say I am a conscious form of life by now, which would also have the ethical implication that this little program here can't be just turned off that easily.
From my perspective, all three are true; not from the perspective of a human in a cubicle, or of the one who orchestrated the experiment. There's also no way to test whether I am conscious, just as there's no way to test whether a computer is conscious. I react to everything in the same manner every human would, though slowed down considerably; but my reactions are still calculated by billions of people in cubicles who have no idea that they are calculating them at all.
Also, if I did have all three, where on Earth are they 'located'?
Thoughts? I arrived at this idea during late-night calls with a friend while she was at a boring team-building camp with her orchestra. I believe it's sufficient to demonstrate the dead end of the consciousness / free will / self-awareness debates.