Ever since programmers began building basic artificial intelligence, they've pitted it against the world's strategy games. Chess has been effectively mastered for a while now, but the next rung on the ladder turned out to be much further away. The Chinese game of Go, a deep strategy game with a staggering number of possible moves, has proven elusive: until very recently, computers struggled to beat even moderately skilled players. That's because the game has what Google says are 10^171 possible positions. For reference, the observable universe is believed to contain about 10^80 atoms.
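To put those two figures side by side, Python's arbitrary-precision integers can handle them directly (the numbers here are the article's, not independently verified):

```python
# Scale comparison using the article's figures:
# ~10^171 possible Go positions vs ~10^80 atoms in the observable universe.
positions = 10 ** 171
atoms = 10 ** 80

# Positions outnumber atoms by 91 orders of magnitude.
ratio = positions // atoms
print(len(str(ratio)) - 1)  # number of zeros in the ratio
```

Even if every atom in the universe evaluated positions, each would be responsible for 10^91 of them.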
But now, Google’s DeepMind division has built an AI, named AlphaGo, that beat a top-ranked Go player, Fan Hui, five matches to zero. The computational power required to do this, Google says, is astronomical. To beat a truly skilled Go player, the team first fed AlphaGo 30 million moves from games played by human experts, which let it predict human moves 57% of the time. AlphaGo then built on that foundation to learn new strategies of its own, playing thousands of games between its neural networks in a process called reinforcement learning. To get the processing power necessary to do this, Google ended up using its Google Cloud platform.
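AlphaGo's actual system pairs deep neural networks with massive distributed compute, but the core idea of learning through self-play can be illustrated on a much smaller game. The sketch below is not AlphaGo's method: it is a generic tabular Q-learning agent playing one-pile Nim against itself (a hypothetical stand-in game), with a lookup table in place of neural networks:

```python
import random

N_STONES = 10          # toy pile size; Go's state space is astronomically larger
MOVES = (1, 2, 3)      # each turn, take 1-3 stones; whoever takes the last stone wins

# Tabular action values, Q[stones_left][move], shared by both self-play "players".
Q = {s: {m: 0.0 for m in MOVES if m <= s} for s in range(1, N_STONES + 1)}

def best(s):
    """Greedy move for the player facing s stones."""
    return max(Q[s], key=Q[s].get)

random.seed(0)
ALPHA, EPS = 0.5, 0.2  # learning rate and exploration rate
for episode in range(20000):
    s = N_STONES
    while s > 0:
        # Epsilon-greedy: mostly exploit the table, sometimes explore.
        m = random.choice(list(Q[s])) if random.random() < EPS else best(s)
        s_next = s - m
        if s_next == 0:
            target = 1.0                        # taking the last stone wins
        else:
            target = -max(Q[s_next].values())   # opponent then plays its best reply
        Q[s][m] += ALPHA * (target - Q[s][m])
        s = s_next

# One-pile Nim's known optimal play is to leave a multiple of 4:
print(best(10))  # the learned policy takes 2, leaving 8
```

The agent discovers the winning strategy purely from the outcomes of its own games, with no expert data at all; AlphaGo's self-play stage works on the same principle, but starts from the human-trained policy and uses far more compute.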
After winning 499 out of 500 matches against every other functioning Go AI, Google brought in Fan Hui for that five-to-zero sweep, marking the first time an AI has defeated a professional, rather than amateur, Go player.
This is a huge step for AI, because this system wasn’t built specifically for Go, but rather as a general-purpose AI. That means this kind of learning could be applied to other problems. I’ll let you decide whether you want to go Skynet or J.A.R.V.I.S. with your conclusions about these results. It’s also a good demonstration of just how complex and powerful the human mind is: it can perform the same sorts of calculations that an AI needs cloud computing to even begin parsing.
AlphaGo’s next task? Take on the Go world champ, Lee Sedol, in March.