FallenApple
This question is kind of out there, so I'm not sure where to put it. But it is physics, somewhat, given that information itself carries energy.
So, is it physically possible for a computer program running on logical statements alone to be as intelligent as a human? This would take energy, perhaps a lot of it.
I ask this question because current artificial intelligence, as primitive as it is, has not been designed solely from logical if/else statements. Such systems work around this problem by using data from the real world (billions to trillions of bits) to train probabilistic models to do a conceptually simple task such as recognizing faces.
This is in some sense less deterministic than code running on if statements alone, at least subjectively. If the same task were implemented as a bunch of if statements, it could take so much code (an if statement for every situation and combination of situations, with effectively unbounded possibilities to account for) that it would be impossible to cram it into the size of the human brain without it creating a black hole. Could this be the case? If so, then deterministic intelligence is impossible.
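The contrast I have in mind, between enumerated rules and learned parameters, can be sketched in a toy form. This is purely illustrative (the task, a two-input OR, is a hypothetical stand-in): the rule-based version needs one explicit branch per input combination, so for n-bit inputs it needs up to 2**n branches, while the data-driven version stores only a few learned weights.

```python
# Rule-based: one explicit branch per input combination. For n-bit
# inputs this needs up to 2**n branches - the explosion described above.
def rule_based_or(x0: int, x1: int) -> int:
    if x0 == 0 and x1 == 0:
        return 0
    elif x0 == 0 and x1 == 1:
        return 1
    elif x0 == 1 and x1 == 0:
        return 1
    else:
        return 1

# Data-driven: a perceptron learns the same task from examples instead
# of enumerated branches; the "program" is three numbers (two weights
# and a bias), not one rule per case.
def train_perceptron(data, epochs=20, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), y in data:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = y - pred
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w0, w1, b = train_perceptron(or_data)

def predict(x0: int, x1: int) -> int:
    return 1 if w0 * x0 + w1 * x1 + b > 0 else 0

print([predict(x0, x1) for (x0, x1), _ in or_data])  # [0, 1, 1, 1]
```

Of course, OR is trivially small either way; the point is only that the learned representation does not grow one branch per case.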
An AI could require too much data to handle if it were to do anything meaningful in terms of intelligence. If it requires so much data just to qualify as sentient and intelligent that the information density would collapse into a black hole before reaching that point, this would restrict not only deterministic intelligence but also our ability to create it with current science.
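The black-hole limit on information density I am alluding to is usually formalized as the Bekenstein bound, I ≤ 2πRE / (ħc ln 2) bits for a region of radius R containing energy E. A minimal sketch, assuming brain-like numbers purely for illustration (R ≈ 0.1 m, m ≈ 1.5 kg, and taking E as the rest-mass energy mc²):

```python
import math

# Bekenstein bound: maximum information (in bits) storable in a sphere
# of radius R containing total energy E:
#   I <= 2 * pi * R * E / (hbar * c * ln 2)
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def bekenstein_bound_bits(radius_m: float, mass_kg: float) -> float:
    """Upper bound on bits in a sphere of given radius and rest mass."""
    energy = mass_kg * C**2  # treat rest-mass energy as E (assumption)
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# Illustrative brain-like numbers: R ~ 0.1 m, m ~ 1.5 kg
bits = bekenstein_bound_bits(0.1, 1.5)
print(f"{bits:.2e}")  # on the order of 1e42 bits
```

If those numbers are roughly right, the bound is around 10^42 bits, so the quantitative question is whether intelligence could actually require anywhere near that much information in that small a volume.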
Has this issue ever been addressed seriously in physics?