Nice video on the subject
Fooality said: One thing I wonder about is the capacity to abstract generalized intelligence from the physical world. One thing that defines AI is the vast training sets, which humans don't really use. But we do have vast training sets in terms of a continuous stream of experiences from birth, and somehow we are able to use that experience to rapidly learn new abstract things. Scary as it sounds, the bridge to real AI, or a demonstration that Google has it, may have to come from agents processing such streams of world experience: droids!

Actually, the distinguishing feature of AlphaZero is that it had no training data at all, nor any input of human expertise. However, in its current form it is not at all a general intelligence. Instead, it is a general capability to self-learn extreme expertise within a closed system, all on its own. I agree that much of what people consider general intelligence is tied to interacting with the world and other people, especially via language. At some point this will have to be tackled to achieve any form of true AI.
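To make "no training data at all" concrete: the only games the system ever sees are the ones it generates by playing itself, starting from a randomly initialised network. A toy sketch of that loop is below - the dict "network", self_play_game and train are stand-ins made up purely to show the shape of the process, not anything resembling DeepMind's actual code:

```python
import random

def initial_network():
    """Stand-in for a randomly initialised policy/value network.
    Here it is just a dict of position -> value estimate, empty at the
    start, i.e. the system begins with zero chess knowledge."""
    return {}

def self_play_game(net):
    """Stand-in for 'network plus tree search plays a full game against
    itself'.  The real thing produces (position, search probabilities,
    result) tuples; this toy version returns made-up positions and a
    random result."""
    positions = ["position_%d" % random.randrange(100) for _ in range(40)]
    result = random.choice([-1.0, 0.0, +1.0])   # loss / draw / win
    return [(pos, result) for pos in positions]

def train(net, examples, lr=0.1):
    """Stand-in for gradient descent: nudge each stored value estimate
    toward the outcome actually observed in self-play."""
    for pos, outcome in examples:
        net[pos] = net.get(pos, 0.0) + lr * (outcome - net.get(pos, 0.0))

# This loop is the *only* source of data: no human games, no opening book,
# no "keep your queen" heuristics.
net = initial_network()
for _ in range(1000):
    examples = self_play_game(net)
    train(net, examples)
```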
PAllen said: Actually, the distinguishing feature of AlphaZero is that it had no training data at all, nor any input of human expertise. However, in its current form it is not at all a general intelligence. Instead, it is a general capability to self-learn extreme expertise within a closed system, all on its own. I agree that much of what people consider general intelligence is tied to interacting with the world and other people, especially via language. At some point this will have to be tackled to achieve any form of true AI.
Fooality said: But in a sense I think it does have training data, in terms of the games (as I understand it) it plays against itself.

That still means it had to develop everything itself. It didn't have even the most basic knowledge ("keeping the queen is good"). I wonder how the first games were played. Completely randomly, until one side happened to be able to checkmate the other within a few moves? A few games until it discovers that it is advisable to capture the opponent's pieces?
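Out of curiosity, it is easy to check what completely random games look like, for example with the python-chess library (assuming it is installed; this is just a side experiment, not a claim about how AlphaZero itself starts out):

```python
import random
import chess  # pip install python-chess

lengths = []
results = {"1-0": 0, "0-1": 0, "1/2-1/2": 0}

for _ in range(20):
    board = chess.Board()
    # Play uniformly random legal moves until some end condition triggers
    # (checkmate, stalemate, repetition, 75-move rule, ...).
    while not board.is_game_over():
        board.push(random.choice(list(board.legal_moves)))
    lengths.append(board.fullmove_number)
    results[board.result()] += 1

print("average length:", sum(lengths) / len(lengths), "moves")
print("results:", results)
```

Running it gives a rough feel for how long aimless games last before some draw rule or a lucky checkmate ends them.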
mfb said: That still means it had to develop everything itself. It didn't have even the most basic knowledge ("keeping the queen is good"). I wonder how the first games were played. Completely randomly, until one side happened to be able to checkmate the other within a few moves? A few games until it discovers that it is advisable to capture the opponent's pieces?
mfb said: I would be surprised if it can understand the opponent - I would expect it to play purely based on the current board (and the RNG output).
It plays unconventionally - I can imagine that human opponents get lost quickly. The AI will take the opportunity to deliver checkmate if one is there, but simply improving its material and tactical advantage more and more is a very reliable strategy as well.

A checkmate can be done quickly, that is a good point - probably not too many random moves then.
Andy Resnick said: I finally got a chance to read the arXiv report, which is fascinating. My question is: is there some way to 'peek under the hood' to see the process by which AlphaZero optimized the move probabilities based on the Monte Carlo tree search, and whether, in selecting and optimizing the parameters and value estimates, it arrived at an overall strategic process that is measurably distinct from 'human' approaches to play? Could AlphaZero pass a 'chess version' of the Turing test?
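On the 'peek under the hood' part: as far as I understand the paper, the move probabilities that actually get played come from the search's visit counts (roughly proportional to the counts raised to a power 1/temperature), so logging those counts per candidate move is the most direct window into where the search spent its effort. A toy illustration with made-up numbers, not output from the real engine:

```python
# After the tree search, each candidate move has accumulated a visit count.
# The played policy is just these counts, normalised and optionally
# sharpened by a temperature tau.  The numbers below are invented.
visit_counts = {"e2e4": 620, "d2d4": 250, "g1f3": 90, "c2c4": 40}

def search_policy(counts, tau=1.0):
    weights = {move: n ** (1.0 / tau) for move, n in counts.items()}
    total = sum(weights.values())
    return {move: w / total for move, w in weights.items()}

for move, p in sorted(search_policy(visit_counts).items(), key=lambda x: -x[1]):
    print(f"{move}: {p:.2f}")
```

Whether a log like that would reveal a strategy measurably distinct from human play, or let it pass a chess Turing test, is of course a much harder question.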
Devils said: and the authors believe their new approach is right.

With the success in Go and chess, the approach can't be too bad...
BWV said: Is the key issue that the neural networks can do a better job than humans of trimming the tree of potential moves?

It would be interesting to see how well AlphaZero performs if the number of states it can go through is limited to human-like levels. I'm not aware of such a competition.
mfb said: It would be interesting to see how well AlphaZero performs if the number of states it can go through is limited to human-like levels. I'm not aware of such a competition.

Problem is, nobody knows how many positions humans consider, because humans cannot accurately report on both conscious and unconscious thought - a milder version of a major problem with neural networks. If you believe what is reported, Capablanca (world chess champion, with a reasonable claim to being the greatest natural chess prodigy) answered the question “how many moves do you consider?” with “I only consider one move - the best one.” Of course that was tongue in cheek, but really nobody, including the grandmaster, knows all that goes into choosing a move.
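If anyone ever got access to such an engine, the experiment is at least easy to state: cap the number of search simulations, all the way down to zero, where the move comes straight from the raw policy network - about as close to "I only consider one move" as a machine gets. A sketch with a completely made-up interface (policy_prior and the fake search below are placeholders, not any real engine API):

```python
import random

def policy_prior(position):
    """Stand-in for the network's policy head: priors over candidate moves."""
    return {"e2e4": 0.45, "d2d4": 0.30, "g1f3": 0.15, "c2c4": 0.10}

def choose_move(position, simulations):
    prior = policy_prior(position)
    if simulations == 0:
        # "Capablanca mode": no search at all, just the move the network
        # likes best in the current position.
        return max(prior, key=prior.get)
    # Otherwise spend a capped budget refining the prior.  The real search
    # is MCTS; here it is faked with weighted sampling, purely to show
    # where a human-like limit on positions considered would be applied.
    visits = dict.fromkeys(prior, 0)
    for _ in range(simulations):
        move = random.choices(list(prior), weights=list(prior.values()))[0]
        visits[move] += 1
    return max(visits, key=visits.get)

print(choose_move("startpos", simulations=0))    # zero positions searched
print(choose_move("startpos", simulations=800))  # engine-like budget
```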
Fooality said: One thing I wonder about is the capacity to abstract generalized intelligence from the physical world. One thing that defines AI is the vast training sets, which humans don't really use. But we do have vast training sets in terms of a continuous stream of experiences from birth, and somehow we are able to use that experience to rapidly learn new abstract things. Scary as it sounds, the bridge to real AI, or a demonstration that Google has it, may have to come from agents processing such streams of world experience: droids!

I keep wondering whether this technology would be applicable to solving mathematics problems, perhaps defining the game rules by some formal system. What I find hard to imagine is how to formulate the state of a partial proof in a form that can be the input of an artificial neural net.
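One naive way to make the "input of a neural net" part concrete - only a sketch of the idea, not a claim about how it should actually be done - is to flatten the hypotheses and goal of the current proof state into a fixed-length bag-of-symbols vector over some chosen vocabulary:

```python
# A deliberately crude encoding: count occurrences of each vocabulary symbol
# in the hypotheses and the goal.  Real systems use much richer encodings;
# the point is only that *some* fixed-size representation is possible.
VOCAB = ["forall", "->", "and", "or", "not", "=", "+", "*",
         "0", "S", "nat", "n", "IH", ":"]

def encode_proof_state(hypotheses, goal):
    tokens = []
    for statement in hypotheses + [goal]:
        tokens.extend(statement.split())
    return [tokens.count(symbol) for symbol in VOCAB]

# A partial proof state, written informally (made-up example).
hyps = ["n : nat", "IH : n + 0 = n"]
goal = "S n + 0 = S n"
print(encode_proof_state(hyps, goal))
```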
PAllen said: Problem is, nobody knows how many positions humans consider, because humans cannot accurately report on both conscious and unconscious thought

To illustrate this point: I remember a game from when I used to play weekend chess tournaments. I was rated about 1800, so a decent player. I was losing to a slightly weaker opponent, having blown a big advantage, when, to my horror, I noticed my opponent had a checkmate in one - which, obviously, he hadn't seen.
Hendrik Boom said: I keep wondering whether this technology would be applicable to solving mathematics problems, perhaps defining the game rules by some formal system. What I find hard to imagine is how to formulate the state of a partial proof in a form that can be the input of an artificial neural net.
Fooality said: Yeah, good question. I know mathematical proofs were one of the first things they tried to unleash computers on in the 1950s, and they met their first failures in making machines think. There's more to it than just formal logic, it seems.

I'm hoping initially to be able to automate somewhat the choice of proof tactics in a proof assistant, not to have AI-generated insight.
Thinking about the Bridges of Königsberg problem solved by Euler: you have this question that seems to involve all this complexity, but you discard a lot of data to get down to the simplest representation, and in that context you break down the notion of travel until the negative result is obvious - is proven. And it's proven to us because in that simple form we can understand it; we don't have the cognitive power to brute-force it.
How does Euler's brain know not to think about the complete path, but rather just a single node (in his newly created graph theory), to find the solution for all complete paths? It's hard to imagine a neural net doing this without a priori knowledge that paths are composed of all the places visited - again, real physical-world knowledge.
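And once that abstraction has been made, the remaining check is almost nothing: the whole city collapses to four land masses, seven bridges, and a count of how many land masses touch an odd number of bridges (a walk crossing every bridge exactly once requires that count to be 0 or 2):

```python
# The Königsberg bridges, reduced the way Euler reduced them: four land
# masses, seven bridges, and nothing else about the city retained.
bridges = [
    ("A", "B"), ("A", "B"),            # two bridges between A and B
    ("A", "C"), ("A", "C"),            # two bridges between A and C
    ("A", "D"), ("B", "D"), ("C", "D"),
]

degree = {}
for u, v in bridges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

odd = [node for node, d in degree.items() if d % 2 == 1]
print(degree)                      # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
print("all-bridges walk possible:", len(odd) in (0, 2))   # False
```

The hard part is exactly the step this code takes for granted: knowing that this is the representation worth writing down in the first place.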
Hendrik Boom said: I'm hoping initially to be able to automate somewhat the choice of proof tactics in a proof assistant, not to have AI-generated insight.
Fooality said: Oh, you're actually doing it? Cool, good luck. If you can get the training data, I don't see why not.

Sorry, I don't actually have the resources to do this, so I spend my time wondering how it might be done instead.
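For what it's worth, here is one way the "choose the next tactic" part could be wondered about in code: a small policy over a fixed menu of tactic names, keyed on a crude encoding of the goal, with the training signal coming from replaying existing proof scripts. Everything below is hypothetical - the tactic list, the encoding and the scoring are placeholders, not a real proof-assistant interface:

```python
TACTICS = ["intro", "induction", "rewrite", "simpl", "reflexivity", "auto"]

def encode_goal(goal):
    """Crude bag-of-symbols key for a goal, just to group similar goals."""
    return tuple(sorted(goal.split()))

class TacticPolicy:
    def __init__(self):
        self.scores = {}                    # (goal key, tactic) -> score

    def suggest(self, goal):
        key = encode_goal(goal)
        return max(TACTICS, key=lambda t: self.scores.get((key, t), 0.0))

    def update(self, goal, tactic, worked):
        """Reward tactics that closed or advanced a goal when replaying
        existing proof scripts; penalise the ones that failed."""
        key = encode_goal(goal)
        delta = 1.0 if worked else -1.0
        self.scores[(key, tactic)] = self.scores.get((key, tactic), 0.0) + delta

policy = TacticPolicy()
policy.update("S n + 0 = S n", "simpl", worked=True)
policy.update("S n + 0 = S n", "reflexivity", worked=False)
print(policy.suggest("S n + 0 = S n"))      # -> simpl
```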