AlphaGo Beats Top Player at Go - Share Your Thoughts

  • Thread starter Buzz Bloom
In summary, Google's AI system has defeated a top human player at the game of Go, an ancient Eastern contest of strategy and intuition that has bedeviled AI experts for decades. In Go, a 2,500-year-old game that is exponentially more complex than chess, human grandmasters had maintained an edge over even the most agile computing systems. The sheer number of possible moves, as well as the many strategic considerations, makes Go an exceedingly difficult game for a computer to win.
  • #36
DiracPool said:
I've been involved in the AI field since the late 1980s and have seen many promising technologies come and go. You name it, I've seen it: PDP (parallel distributed processing), "fuzzy logic," simulated annealing, attractor neural networks, reinforcement learning, etc. Each one of them promised the same as the article stated above...

AlphaGo is a reinforcement learning algorithm, and a standard one at that; the novelty is that a neural net with lots of free parameters is used as the function approximator for one of the standard functions in the algorithm. The underlying technology is RL (1980s) and ANNs (1970s for backpropagation).

Apart from faster computers, the main advance for backpropagation since the 1970s is the discovery that setting the initial weights in a certain way allows backpropagation to reach a good solution much more quickly.
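A minimal sketch of that combination (not AlphaGo itself, which couples deep networks with Monte Carlo tree search): classic TD(0) reinforcement learning with a small backprop-trained network as the value-function approximator. The environment, layer sizes, and learning rate below are invented for illustration; the scaled weight initialization is the kind of initialization trick alluded to above.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer value network V(s), with scaled ("Xavier"-style)
# weight initialization.
n_in, n_hid = 16, 32
W1 = rng.normal(0.0, np.sqrt(1.0 / n_in), (n_hid, n_in))
W2 = rng.normal(0.0, np.sqrt(1.0 / n_hid), (1, n_hid))

def value(s):
    """Forward pass: return V(s) and the hidden activations."""
    h = np.tanh(W1 @ s)
    return (W2 @ h).item(), h

def td0_update(s, r, s_next, done, alpha=0.01, gamma=0.99):
    """One TD(0) step: nudge V(s) toward r + gamma * V(s')."""
    global W1, W2
    v, h = value(s)
    v_next = 0.0 if done else value(s_next)[0]
    delta = r + gamma * v_next - v            # TD error
    grad_pre = W2[0] * (1.0 - h**2)           # backprop through the tanh layer
    W2 += alpha * delta * h[None, :]          # semi-gradient updates
    W1 += alpha * delta * np.outer(grad_pre, s)

# Toy usage on made-up transitions, just to show the update running.
for _ in range(1000):
    s, s_next = rng.normal(size=n_in), rng.normal(size=n_in)
    td0_update(s, r=float(s.sum() > 0), s_next=s_next, done=False)
```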
 
  • #37
phinds said:
Well, we're going to have to agree to disagree on this.
The only thing I can think you mean is as a deterrent to making Terminators.
Greg Bernhardt said:
Well there you have it, The Terminator is not far off.
Yep, the good one should be coming back to protect John soon.
 
  • Likes: atyy
  • #38
Planobilly said:
I don't have my own special definition of the meaning of intuition. I just don't think one can call this AI program intuitive based on the standard definition of the word.
We like to imbue objects, both animate and inanimate, with human characteristics, which makes for interesting cartoons. Not so much in serious discussions of technology.
Hi @Planobilly:

It was not my intention to imbue AI with human qualities. I think if we have a disagreement, it is not about concepts, or the limits of AI, it is about the use of vocabulary.

Chess AIs mostly use processes that have a descriptive similarity to human "rational" processing. In AI's early history, that was the general method of choice: sequential and rule-based. When a move choice was made, it was generally possible to describe why.

AlphaGo makes much greater use of "non-rational" processes that are similar to human pattern recognition, in that the details of how a complex pattern gets recognized are not observable, and there is no specific rational explanation for why a particular complex pattern is classified as belonging to a particular category. Therefore, by analogy (or metaphor), it seems natural to describe the behavior as non-rational, or intuitive. There is no implication that the chess AI's "rationality" is the same as a human's, nor that AlphaGo's "intuition" is the same as a human's.

Regards,
Buzz
 
  • #39
jerromyjon said:
The only thing I can think you mean is as a deterrent to making Terminators.
I have no idea what you are talking about. I think Terminators are a joke: they are useless, and they are not going to happen.
 
  • #40
D H said:
Edward Lasker was a chess grandmaster who later found Go to be a superior game.
Hi @D H:

I would like to add that Edward Lasker invented a relatively unknown checkers variation that I have never heard called anything other than "Laskers". Laskers is much more complicated than checkers, but still less complicated than chess. I was unable to find a reference to this checkers variation on the internet. The variation involves the following changes (a minimal code sketch of the stack mechanics follows the list):
1. The two sides of each checker are distinguishable. One side is called the "plain" side; the other is the "king" side.
2. A capture move captures only the top checker of a stack, and this checker is not removed from the board. It is put on the bottom of the capturing stack.
3. The color on top of a stack determines which player can move the stack.
4. A stack with a plain side on top (a "plain stack") moves like a non-king in checkers. When a plain stack reaches the last rank, the top checker is turned over, and the stack becomes a king stack, which moves like a king in checkers.
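Here is a minimal sketch of rules 2-4 above in code, with board geometry and move generation omitted; the class and method names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Checker:
    color: str          # "white" or "black"
    king: bool = False  # False = "plain" side up

@dataclass
class Stack:
    checkers: list = field(default_factory=list)  # checkers[0] is the top

    def owner(self) -> str:
        # Rule 3: the color on top determines who may move the stack.
        return self.checkers[0].color

    def capture(self, victim: "Stack") -> None:
        # Rule 2: take only the victim's top checker; it stays on the
        # board, going to the bottom of the capturing stack.
        self.checkers.append(victim.checkers.pop(0))

    def promote(self) -> None:
        # Rule 4: on reaching the last rank, flip the top checker over.
        self.checkers[0].king = True
```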

Regards,
Buzz
 
  • #41
Hi Buzz,

Buzz Bloom said:
It was not my intention to imbue AI with human qualities. I think if we have a disagreement, it is not about concepts, or the limits of AI, it is about the use of vocabulary.

Yes, I am 100% in agreement with that statement.

I also "intuitively" think (lol) AI will ultimately advance to the point where it has the capability too match human abilities in many areas and exceed them in other areas.
The reason for my assumption is that there are truly huge amounts of money to be made from applying a functional AI system to problems like weather forecasting. How long that will take, I have no idea. The fact that it has not been done yet only indicates to me that it is not so easy to do.

AI has the potential to be the "machine" that can provide answers to truly complex issues. I, for one, am glad to see Google investing money in this technology.

Cheers,

Billy
 
  • #42
Planobilly said:
I also "intuitively" think (lol) AI will ultimately advance to the point where it has the capability too match human abilities in many areas and exceed them in other areas.
Hi @Planobilly:

I recognize and sometimes envy the optimism of your "intuition". I "rationally" (not lol) have pessimistic thoughts, not about AI limits, but about the likelihood that before AI can achieve these benefits, the consequences of negative aspects in our culture (global warming, pollution, overuse of critical resources, overpopulation, extreme wealth inequality, etc.) will destroy the necessary physical and social infrastructure that supports technological progress.

Regards,
Buzz
 
  • Likes: billy_joule
  • #43
Buzz Bloom said:
Hi @Planobilly:

I recognize and sometimes envy the optimism of your "intuition". I "rationally" (not lol) have pessimistic thoughts, not about AI limits, but about the likelihood that before AI can achieve these benefits, the consequences of negative aspects in our culture (global warming, pollution, overuse of critical resources, overpopulation, extreme wealth inequality, etc.) will destroy the necessary physical and social infrastructure that supports technological progress.

Regards,
Buzz
Well, if you want things to worry about, you could add the "AI tipping point" to the list. That's when the machines become smart enough to design/build better machines. Some people believe that will happen and it will have a snowball effect on AI. Whether that's a good thing or a bad thing for humanity is very much an open question, but worriers worry about it.
 
  • #44
Hi Buzz,

Buzz Bloom said:
(global warming, pollution, overuse of critical resources, overpopulation, extreme wealth inequality, etc.)

I also am painfully aware of the above. Perhaps we humans are pre-programmed to destroy ourselves, perhaps not. There is a very wide range in what we humans are involved in: in one place, people are driving robotic cars around on another planet, and in another, people are lopping off heads. Strange world we live in.

Better for Google to develop AI than involve themselves in developing the next new way to destroy things!
As far as the "machines" taking over, based on my computer, I don't think we have much to worry about...lol My machine appears to be about as smart as a retarded cockroach...lol
Cheers,

Billy
 
  • Likes: Buzz Bloom
  • #45
Planobilly said:
As far as the "machines" taking over, based on my computer, I don't think we have much to worry about...
Just think what an AI could do to your computer and every other computer on the planet. Your "retarded cockroach" of a computer, even with you operating it, stands no chance...
 
  • #46
phinds said:
Well, if you want things to worry about, you could add the "AI tipping point" to the list. That's when the machines become smart enough to design/build better machines. Some people believe that will happen and it will have a snowball effect on AI. Whether that's a good thing or a bad thing for humanity is very much an open question, but worriers worry about it.

This is a good point, and I think for the purposes of my post here readers can reference this related thread:

https://www.physicsforums.com/threa...ence-for-human-evolution.854382/#post-5371222

I have a clear view as to where I think "biologically inspired" machine intelligence, if you will, is heading. The moniker "artificial intelligence" sounds cool, but it carries the baggage of 40 years of failure, so I don't like to speak of AI, strong or not-so-strong, etc., for fear of guilt by association.

That said, I feel I can speak to where machine intelligence is heading because I'm part of the effort to forward this advancement. I'm not currently doing this in an official capacity, but I'm confident I'll be accepted into a major program here come next fall.

So now that you're fully aware of my lack of qualifications, I will give you my fully qualified predictions for the next 100 years:

1) Humans and biological creatures will be around as long as "the robots" can viably keep them around. I don't think the robots are going to want to kill the flora and the fauna or the humans and the centipedes any more than we (most of us) want to. If we program them correctly, they will look at us like grandma and grandpa, and want to keep us around as long as possible, despite our aging and obsolete architecture.

2) Within 74 years, we (biological humans) will be sending swarms of "robo-nauts" out into the cosmos, chasing the tails of the Pioneer and Voyager probes. These will be "interstellar" robo-organisms which may, in transit, build third-tier intergalactic offspring. How will they survive the long transit? Well, they have a number of options we humans don't have. First, they don't need food or any of the "soft" emotional needs that humans do. Ostensibly, they can recharge their batteries from some sort of momentum/interstellar-dust kind of thing. Or maybe island-hop for natural resources on the nearest asteroid? Please don't ruin my vision with extraneous details...

Second, they don't need any of the fancy cryogenic "put the human to sleep" technology, which is a laugh. Don't get me started on the myriad complications that can arise from this on long-distance travels. Suffice it to say that this is not going to be the future of "Earthling" interstellar travel. In fact, I can (almost) guarantee you that we biological sacks of Earth chemicals will never make it past Mars, so we had better grab Mars while we still have the chance.

The future is going to be robotic implementations of the biological mechanism in our brains that generates creative human cognition. Unless we destroy ourselves first, I think this is an inevitability. And I don't think it's a bad thing at all. We want our children to do better than us and be stronger than us; this is built into our DNA (metaphorically speaking). Why would we not want to build children in our likeness, though not necessarily in our carbon-ness, that excel and exceed our capabilities? This is, of course, in the spirit of what the core of the human intellect is...
 
  • #47
DiracPool said:
2) Within 74 years, we (biological humans) will be sending swarms of "robo-nauts" out into the cosmos.
74? Not 73? :biggrin:
 
  • #48
PAllen said:
74? Not 73? :biggrin:

Actually, it's 73 years 8 months (August). I just rounded up. But who's counting... :rolleyes:
 
  • #49
AlphaGo won the first game against Lee Sedol, 9p! This is enormously more of an accomplishment than the first computer victory over Kasparov.
 
  • Likes: fluidistic and Greg Bernhardt
  • #50
Hi Paul:

I downloaded the score of the game as an SGF file. I found several free SGF readers available online, but I could find no information about the sites' reliability. Can you please recommend a site from which I can download a safe SGF reader?

Regards,
Buzz
 
  • #51
Buzz Bloom said:
Hi Paul:

I downloaded the score of the game as an SGF file. I found several free SGF readers available online, but I could find no information about the sites' reliability. Can you please recommend a site from which I can download a safe SGF reader?

Regards,
Buzz
No, because I don't have one. My background in Go is the following:

- read 1/2 of one beginner's book
- played about 15 games total, in person, 5 online

I went through a phase of being obsessed with the different rule sets from the point of view of their impact on possibly programming Go (which I never actually did). However, there are many websites where you can play through this game (I have done so several times already, despite my minimal playing strength):

https://gogameguru.com/alphago-defeats-lee-sedol-game-1/
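As an aside, for anyone wary of unknown SGF readers: the format is plain text, and a few lines of Python can pull the moves out of a game record directly. A minimal sketch follows (the file name is an assumption; SGF stores moves as properties like ;B[pd] and ;W[dp]).

```python
import re

def sgf_moves(path):
    """Extract (color, column, row) move triples from an SGF file."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # The letters a-s index the 19 columns and rows of the board;
    # pass moves like ;B[] are simply skipped by this pattern.
    return re.findall(r";([BW])\[([a-s])([a-s])\]", text)

# Hypothetical file name for the downloaded game record:
for n, (color, col, row) in enumerate(sgf_moves("alphago-lee-game1.sgf"), 1):
    print(n, color, col + row)
```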
 
  • Likes: jerromyjon
  • #55
PAllen said:
AlphaGo won the first game against Lee Sedol, 9p! This is enormously more of an accomplishment than the first computer victory over Kasparov.

How big is AlphaGo (the one playing Lee Sedol)? Is it a desktop, or some distributed thing?
 
  • #56
atyy said:
How big is AlphaGo (the one playing Lee Sedol)? Is it a desktop, or some distributed thing?
It is distributed, with an enormous number of cores in total. I don't have detailed figures, but in pure cycles and memory it dwarfs the computer that first beat Kasparov. What remains remarkable to me is that even a few years ago, AI optimists thought beating a top Go player was decades away, irrespective of compute power. It wasn't that many years ago that Janice Kim (3-dan professional) beat the top Go program with a 20-stone handicap!
 
  • #57
PAllen said:
It is distributed, with an enormous number of cores in total. I don't have detailed figures, but in pure cycles and memory it dwarfs the computer that first beat Kasparov. What remains remarkable to me is that even a few years ago, AI optimists thought beating a top Go player was decades away, irrespective of compute power. It wasn't that many years ago that Janice Kim (3-dan professional) beat the top Go program with a 20-stone handicap!

Lee Sedol was just careless. He figured the thing out in game 4 :P
 
  • #58
Here is a nice summary of the final result (4-1 for AlphaGo): https://gogameguru.com/alphago-defeats-lee-sedol-4-1/

It seems to me that this program is now broadly in the same category relative to top human players as chess programs (perhaps more like the era when Kramnik could still win one game out of four against a program). However, the following qualitative points apply to both:

1) Expert humans have identifiable superiorities to the program.
2) The program has identifiable superiorities to expert humans.
3) The absence of lapses in concentration and of errors (by the programs), combined with superior strength in common situations, makes a direct match-up lopsided.
4) A centaur (human + computer combination) is reliably superior to the computer alone.

We cannot say that human experts have no demonstrable understanding that computers lack until a centaur is no stronger than the computer alone, and that state is reached only when the human contributes nothing (i.e., any human choice different from the machine's is likely worse).
 
  • Likes: Monsterboy
  • #59
PAllen said:
Here is a nice summary of the final result (4-1 for AlphaGo): https://gogameguru.com/alphago-defeats-lee-sedol-4-1/

It seems to me that this program is now broadly in the same category relative to top human players as chess programs (perhaps more like the era when Kramnik could still win one game out of four against a program). However, the following qualitative points apply to both:

1) Expert humans have identifiable superiorities to the program.
2) The program has identifiable superiorities to expert humans.
3) The absence of lapses in concentration and of errors (by the programs), combined with superior strength in common situations, makes a direct match-up lopsided.
4) A centaur (human + computer combination) is reliably superior to the computer alone.

We cannot say that human experts have no demonstrable understanding that computers lack until a centaur is no stronger than the computer alone, and that state is reached only when the human contributes nothing (i.e., any human choice different from the machine's is likely worse).
In chess, centaurs are weaker than the program alone. See https://www.chess.com/news/stockfish-outlasts-nakamura-3634 where Nakamura, helped by Rybka, lost to (a weakened version of) Stockfish.
There has been huge progress in terms of Elo for the top programs in recent years, mostly due to fishtest (http://tests.stockfishchess.org/tests), an open-source platform where anyone can test ideas in Stockfish's code. In order for a patch to be committed, the idea is tested by self-play over thousands of games to determine whether it is an improvement. The hardware comes from volunteers, people just like you and me. The overall result is a gain of about 50 Elo per year over the last 4 years or so, and closed-source programs like Komodo have also benefited (by trying out the ideas).
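For reference, here is a minimal sketch of the arithmetic behind scoring such self-play matches: converting a match score into an Elo difference under the standard logistic model, with a crude error bar. This is illustrative only (the game counts are made up), not fishtest's actual sequential testing machinery.

```python
import math

def elo_from_score(score: float) -> float:
    """Elo difference implied by an expected score in (0, 1)."""
    return -400.0 * math.log10(1.0 / score - 1.0)

def elo_with_error(wins: int, losses: int, draws: int):
    """Point estimate plus a rough 95% interval from the binomial SE."""
    n = wins + losses + draws
    score = (wins + 0.5 * draws) / n
    se = math.sqrt(score * (1.0 - score) / n)
    lo = elo_from_score(max(score - 2.0 * se, 1e-6))
    hi = elo_from_score(min(score + 2.0 * se, 1.0 - 1e-6))
    return elo_from_score(score), lo, hi

# Made-up example: 1200 wins, 1000 losses, 7800 draws in 10,000 games
# works out to roughly +7 Elo for the tested patch.
print(elo_with_error(1200, 1000, 7800))
```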
Programs are so superior to humans that a grandmaster intervening in the program's play only weakens it.
Sure, it's easy to cherry-pick a position where a program makes a mistake and claim that it's easy for a human to recognize it, or to find a position where the program misunderstands things, as with a fortress of blocking pawns where you add queens and other good pieces for one side: the computer will generally give an insane evaluation despite a clear draw. But the reality is that these positions are so rare that they almost never occur in a match or over hundreds of games.
 
  • #60
fluidistic said:
In chess, centaurs are weaker than the program alone. See https://www.chess.com/news/stockfish-outlasts-nakamura-3634 where Nakamura, helped by Rybka, lost to (a weakened version of) Stockfish.
That's not a good example, because Nakamura is not an experienced centaur. The domain of postal (correspondence) chess, where engine use is now officially allowed and expected, proves on a regular basis that anyone relying only on today's latest program is slaughtered by players who combine their own intelligence with a program. Not a single such tournament has been won by someone just playing the machine's moves (and there are always people trying that, with the latest and greatest engines).
fluidistic said:
There has been huge progress in terms of Elo for the top programs in recent years, mostly due to fishtest (http://tests.stockfishchess.org/tests), an open-source platform where anyone can test ideas in Stockfish's code. In order for a patch to be committed, the idea is tested by self-play over thousands of games to determine whether it is an improvement. The hardware comes from volunteers, people just like you and me. The overall result is a gain of about 50 Elo per year over the last 4 years or so, and closed-source programs like Komodo have also benefited (by trying out the ideas).
Programs are so superior to humans that a grandmaster intervening in the program's play only weakens it.
This is just wrong. See above.
 
  • #61
PAllen said:
That's not a good example, because Nakamura is not an experienced centaur. The domain of postal (correspondence) chess, where engine use is now officially allowed and expected, proves on a regular basis that anyone relying only on today's latest program is slaughtered by players who combine their own intelligence with a program. Not a single such tournament has been won by someone just playing the machine's moves (and there are always people trying that, with the latest and greatest engines).

This is just wrong. See above.
I stand corrected about correspondence chess. Even players under 2000 Elo can indeed beat the strongest programs under such time controls, with the liberty to use any program, etc.
I do maintain my claim about the progress in Elo of the programs; I don't see what's wrong with it (yet, at least).

Edit: I am not sure how such weak chess players manage to beat the strongest programs. One guess I have is that they use MultiPV to see the best moves according to a strong engine, and with another computer they investigate each of these lines and pick the best. In fact, no chess knowledge is required to do that; a script could do it.
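That script might look something like the following minimal sketch using the python-chess library with a local UCI engine (the engine path, search depths, and candidate count are assumptions): ask the engine for its top candidate lines via MultiPV, then re-search each candidate more deeply before choosing.

```python
import chess
import chess.engine

ENGINE_PATH = "stockfish"  # assumed to be on the PATH

def pick_move(board, candidates=3, shallow=18, deep=30):
    with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
        # Stage 1: top-N candidate moves from a MultiPV search.
        infos = engine.analyse(board, chess.engine.Limit(depth=shallow),
                               multipv=candidates)
        moves = [info["pv"][0] for info in infos]
        # Stage 2: re-search each candidate more deeply and keep the best.
        best_move, best_score = None, None
        for move in moves:
            board.push(move)
            info = engine.analyse(board, chess.engine.Limit(depth=deep))
            # Negate: after the push, the score is from the opponent's view.
            score = -info["score"].relative.score(mate_score=100000)
            board.pop()
            if best_score is None or score > best_score:
                best_move, best_score = move, score
        return best_move

print(pick_move(chess.Board()))
```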
 
  • #62
fluidistic said:
I stand corrected about correspondence chess. Even players under 2000 Elo can indeed beat the strongest programs under such time controls, with the liberty to use any program, etc.
I do maintain my claim about the progress in Elo of the programs; I don't see what's wrong with it (yet, at least).

Edit: I am not sure how such weak chess players manage to beat the strongest programs. One guess I have is that they use MultiPV to see the best moves according to a strong engine, and with another computer they investigate each of these lines and pick the best. In fact, no chess knowledge is required to do that; a script could do it.
The winning correspondence players don't just do this. An example where expert (but not world-class) knowledge helps is the early endgame phase. With tablebases, computers have perfect knowledge of endings with up to 6 pieces. However, they have no knowledge of the types of rook endings with, e.g., a one- or two-pawn advantage that are drawn when there are more than 6 pieces on the board (pawns and kings included). Thus, a computer with a pawn advantage will not know how to avoid such endings (allowing a centaur to draw), and a computer at the disadvantage may unnecessarily lose to a centaur by failing to seek such a position. You need a lot less than grandmaster knowledge to push programs in the right direction in such cases.

Rather than being exotically rare, such endgames are perhaps the most common type.
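For the curious, this is what that perfect small-ending knowledge looks like in code: a minimal sketch probing Syzygy tablebases through the python-chess library (the tablebase directory and the example position are assumptions).

```python
import chess
import chess.syzygy

# Hypothetical local directory containing the Syzygy tablebase files.
with chess.syzygy.open_tablebase("/path/to/syzygy") as tb:
    board = chess.Board("8/8/8/8/3k4/8/3PK3/8 w - - 0 1")  # a K+P vs K position
    wdl = tb.probe_wdl(board)  # 2 = win, 0 = draw, -2 = loss (side to move)
    dtz = tb.probe_dtz(board)  # distance to a zeroing move (50-move rule)
    print(wdl, dtz)
```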
 
  • Likes: fluidistic
  • #63
Here are some interesting failures of neural networks.

http://www.slate.com/articles/techn...ence_can_t_recognize_these_simple_images.html

http://gizmodo.com/this-neural-networks-hilariously-bad-image-descriptions-1730844528

On the linguistic front, here is today's example from Google Translate. I first tried a much longer translation cycle using an English sentence, and it failed horribly. So I decided to try what I thought would be much easier: a simple English -> Chinese -> English round trip. I could have come up with a sentence that would be easy for the software to parse, but that's not the point; I'm trying to come up with a sentence that we sloppy humans might read and understand.

English:

It seems highly improbable that humans are the only intelligent life in the universe, since we must assume that the evolution of life elsewhere occurs the same way, solving the same types of problem, as it does here on our home planet.

Chinese:

人类是宇宙中唯一的智慧生命似乎是不可能的,因为我们必须假设其他地方的生命的演变是同样的方式,解决相同类型的问题,就像在我们的家庭星球上。

English (round trip):

Humanity is the only intelligent life in the universe that seems impossible because we have to assume that the evolution of life elsewhere is the same way that the same type of problem is solved, just as on our home planet.
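A tiny harness for repeating this round-trip experiment might look like the sketch below; translate() is a hypothetical stand-in for whatever machine-translation service you call, since I won't vouch for any particular API here.

```python
def translate(text: str, src: str, dst: str) -> str:
    """Hypothetical MT call; wire this up to your translation service."""
    raise NotImplementedError

def round_trip(text: str, pivot: str = "zh-CN") -> str:
    """English -> pivot language -> English, for eyeballing semantic drift."""
    return translate(translate(text, "en", pivot), pivot, "en")

sentence = ("It seems highly improbable that humans are the only intelligent "
            "life in the universe, since we must assume that the evolution of "
            "life elsewhere occurs the same way as it does here.")
# print(round_trip(sentence))  # compare the output against the original by eye
```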
 
  • #64
If anyone here is interested in more detailed discussion of machine learning, with an emphasis on future AGI (they also discuss AlphaGo in several places, I believe), check out the conference recently hosted by Max Tegmark. Here's an article explaining more about it: Beneficial AI conference develops ‘Asilomar AI principles’ to guide future AI research. The Future of Life Institute also has a YouTube channel here where more presentations from the conference can be viewed. There were some fantastic talks by high-level contributors to the field like Yoshua Bengio, Yann LeCun, and Jürgen Schmidhuber.
 