Google’s DeepMind AI beats humans at the massively complex game Go

fforfalcon.blogspot.com

Google acquired the British artificial intelligence startup DeepMind just over two years ago, but at the time it wasn't clear what the secretive company was working on. Most of DeepMind's work has stayed under the radar, but Google has now announced that its research has reached a significant AI milestone: a new program called AlphaGo can beat a professional human player at the game of Go, something no computer had managed before.

We're all familiar with chess-playing computers; Deep Blue famously beat Garry Kasparov nearly 20 years ago. Go, which was created more than 2,500 years ago in China, is considered a far harder challenge for AI because its overwhelming complexity makes it an "intuitive" game. The goal in Go is to place your stones on the board to surround territory and capture the opponent's stones until you control more of the board than they do. It's a game of pattern recognition and skill with no luck involved, which makes it an ideal test problem for artificial intelligence.

The complexity of Go comes from the huge number of board configurations. Chess has only 32 pieces and 64 squares, and each piece can move only in certain ways, so a computer can brute-force the potential board configurations and plan many moves in advance. Go is played with identical stones placed on a 19-by-19 grid of 361 intersections. The number of possible board configurations is vast, greater than the number of atoms in the observable universe, so it's impossible for a computer to simply brute-force the search space. You need a computer that can learn to play the game the way a human does, and that's what AlphaGo does.
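The scale claim above is easy to sanity-check. Each of Go's 361 intersections can be empty, black, or white, which gives 3^361 as a crude upper bound on configurations (most are illegal positions, but even the legal count is estimated around 10^170). A quick back-of-the-envelope comparison against the commonly cited estimate of 10^80 atoms in the observable universe:

```python
# Upper bound on Go board configurations: 3 states per intersection,
# 361 intersections. Most of these are illegal, but the bound is
# enough to show the scale of the search space.
go_upper_bound = 3 ** 361
atoms_in_universe = 10 ** 80  # common rough estimate

print(len(str(go_upper_bound)))           # 173 decimal digits (~10^172)
print(go_upper_bound > atoms_in_universe)  # True
```

Even pruned down to legal positions, the number dwarfs anything a brute-force search could enumerate, which is why chess-style exhaustive lookahead doesn't transfer to Go.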

Most serious Go players can't explain exactly why certain moves are the right ones; hence the game's intuitive reputation. Until recently, most researchers expected it would take decades for a computer to best a human at Go. Then AlphaGo defeated the European Go champion Fan Hui five games to zero in a recent match. This coming March, AlphaGo will take on Lee Sedol, one of the best players in the world.

Google isn't the only AI company that has been interested in cracking Go, and now that it has been cracked, many of the same techniques could be applied to other problems. DeepMind's researchers developed general AI methods, so they're not locked into only playing Go; that would not make for a very useful AI. There are two basic learning networks inside AlphaGo: one learns to predict likely upcoming moves, and the other evaluates how favorable a given arrangement of stones is. Rather than simulating entire games across the uncountable board configurations, it thinks only a few moves ahead, much like a human player would.
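The two-network division of labor can be sketched in a few lines. This is a toy illustration, not DeepMind's implementation (which pairs deep convolutional networks with Monte Carlo tree search); `policy_net` and `value_net` are hypothetical stubs standing in for the trained networks:

```python
import random

def policy_net(board):
    """Stand-in for the move-prediction network: returns each legal
    move with a probability (uniform here, as a stub)."""
    legal = [i for i, stone in enumerate(board) if stone == 0]
    return {move: 1.0 / len(legal) for move in legal}

def value_net(board):
    """Stand-in for the position-evaluation network: scores a board
    for the player to move (random here, as a stub)."""
    return random.uniform(-1.0, 1.0)

def choose_move(board):
    """Combine the two networks: weight each candidate position's
    estimated value by how likely the policy network thinks the move
    is, instead of searching the whole game tree."""
    scores = {}
    for move, prob in policy_net(board).items():
        nxt = list(board)
        nxt[move] = 1  # place our stone on that intersection
        scores[move] = prob * value_net(nxt)
    return max(scores, key=scores.get)

# A tiny 3x3 "board" (0 = empty) just to exercise the sketch.
board = [0] * 9
print(choose_move(board) in range(9))  # True
```

The point of the structure, not the stubs, is what matters: the policy network narrows the moves worth considering and the value network judges the resulting positions, so the program never has to enumerate the full game tree.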

With a different data set, the same algorithms could tackle big problems like medical diagnosis and climate modeling. For now, DeepMind is focused on the match with Lee Sedol. AlphaGo can play through millions of games per day to improve its understanding of the game. That might help it win more matches, but playing games is just the beginning.

About Awais Ahmad
