Argument that all computable universes are "real"
I haven't been on this forum in a while, and I'm sure this kind of thing has been talked about many times, but I thought I'd bring it up again so I could discuss it with you guys. Here's my argument.
Start with a human being named John. He is conscious, experiencing the world around him. In a certain sense, his experiences are all he knows to be real, although he believes there is a real world out there. Let's assume he is right. Now let's consider slowly replacing the neurons in John's brain with computer chips that carry out exactly the same function. Would his experiences change? There doesn't seem to be any essential difference between protein-based and silicon-based information processors, so let's assume his experiences do not change. Now we feed input to his new brain that reflects a made-up, computer-simulated world. Again, if the simulation were good enough, he would not know the difference. At this stage we basically have one big program running on a computer. Now let's consider slowing down this computer. Since his perception of time is tied to processes in his brain, he would presumably not notice any difference.
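To make the slowing-down step concrete, here is a minimal sketch in Python (the `step` function is a made-up stand-in for the brain-plus-world simulation): a deterministic program computes exactly the same sequence of states whether we run it flat out or pause between updates, so nothing inside the simulation can register the wall-clock rate.

```python
import time

def step(state):
    # Made-up stand-in for one tick of the brain/world simulation;
    # any deterministic update rule would do.
    return (state * 1103515245 + 12345) % (2**31)

def run(state, ticks, delay=0.0):
    """Run the simulation for `ticks` steps, sleeping `delay` seconds
    between steps. The delay changes wall-clock time, never the
    computed trajectory."""
    history = [state]
    for _ in range(ticks):
        time.sleep(delay)
        state = step(state)
        history.append(state)
    return history

fast = run(42, ticks=10, delay=0.0)
slow = run(42, ticks=10, delay=0.1)
assert fast == slow  # identical state sequence at any speed
```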
Now, this computer is essentially just performing calculations, and if it's going slowly enough, these calculations could just as easily be done by a person. So let's slowly replace this computer with a human operator. At first the computer could output some calculation that needs to be done; the operator would do it and feed the result back in. Eventually, all the calculations could be done by a human operator, perhaps using a hand-operated Turing machine with a gigantic piece of tape. It would probably take billions of years to run just a second of simulation time, so we would need many generations of operators, but this doesn't matter: John's experience should be the same. Now let's assume these operators get really good at what they're doing, and rather than perform every step, they start noticing patterns and can skip steps. Does John's experience change? Does it skip ahead when the operator skips steps? What if the operator still goes through these steps in his head, so that they at least occur somewhere? What if he doesn't? What if at some point he gives up? There is still a well-defined sequence of states that would occur if he had continued, although they never actually get written on the tape. Does John ever experience these states?
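The hand-operated Turing machine in this thought experiment can itself be written down in a few lines. In the sketch below (a standard single-tape machine; the particular transition table is just a toy chosen for illustration), the full sequence of configurations is fixed by the transition table and the initial tape; running the generator merely reads off states that are already mathematically determined, which is exactly the situation when the operator gives up partway through.

```python
from collections import defaultdict

def turing_machine(transitions, tape, state="start", max_steps=100):
    """Yield successive configurations (state, head, tape) of a
    single-tape Turing machine. The whole trajectory is determined
    by `transitions` and the initial tape, whether or not anyone
    enumerates it."""
    tape = defaultdict(lambda: "_", enumerate(tape))
    head = 0
    for _ in range(max_steps):
        yield state, head, dict(tape)
        if state == "halt":
            return
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1

# Toy rule table: flip bits until the first blank, then halt.
transitions = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

for config in turing_machine(transitions, "0110"):
    print(config)
```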
At a certain point it starts to seem like the actual writing on the tape is not the important thing; what's important is the program being run. And it's not important how it's being run, on what machine, or how fast. All that's important is the mathematical structure of the program itself. Thus the mere platonic existence of a program that implements a conscious observer seems to be sufficient for such an observer to experience things. There remains the important question of what precisely defines a conscious observer in a program, but certainly anything resembling human beings in this universe should qualify, and possibly things much simpler.
I’ll lay out the argument in a series of steps, and you can tell me at what point you think it breaks down:
1. A human brain can be slowly replaced by a computer program, and the subjective experience would not significantly change.
2. The input to this computer could be yet another computer program, so that the entire system is a program running on a computer, and still, there would be conscious experience.
3. Once we have such a program with conscious experience, the system used to implement it is not important; even a person manually operating a Turing machine would suffice.
4. Since the implementation is not important, one might argue all that is important is the structure of the program itself. Even if it’s not actually running somewhere, there is still a subjective experience.
Potential objection: What if the operator makes a mistake, or even purposely modifies something against the rules of the program? It seems John would experience this, even though it doesn't follow from the mere platonic existence of the program. All I can think of in response is that we can still imagine that the operator-machine-program system is itself part of a more complicated program. John might wonder whether he is in the simpler program (where rules are never broken) or the more complicated, imperfect simulation. A priori, the former seems more likely by something like Occam's razor.
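The Occam's razor step at the end can be given a standard formal reading: weight each candidate program by 2^(-L), where L is its length in bits, roughly in the spirit of a Solomonoff prior. A toy sketch under that assumption (the bit lengths below are invented purely for illustration):

```python
# Toy length-based prior: each hypothesis gets weight 2**(-length),
# so the simpler (shorter) program dominates a priori.
# These bit lengths are invented for illustration only.
hypotheses = {
    "rules never broken (simple program)": 100,   # bits
    "operator sometimes errs (patched program)": 125,
}

weights = {h: 2.0 ** -length for h, length in hypotheses.items()}
total = sum(weights.values())
for h, w in weights.items():
    print(f"{h}: {w / total:.3g}")
```

On this reading, John should bet heavily that he is in the unbroken program, since every extra bit of description halves the prior weight.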