AlphaGo, the human Go slayer, has just been demolished by new AI

19 Oct 2017


After leaving human Go players in its wake, AlphaGo has now been outclassed by its own sibling in a new series of games.

AlphaGo, the artificial intelligence (AI) developed by Google subsidiary DeepMind, was heralded as a major step forward in computer intelligence, given that Go is so complex that an AI was not expected to beat a top human player for at least a few more decades.

But now, just a few months after the AI triumphed in a series of matches against the best human player on the planet, DeepMind has revealed a new version that is even better: AlphaGo Zero.

According to The Verge, the new AI started out with less knowledge of the ancient game, working from just the basic rules, whereas the original AlphaGo had the advantage of being trained on 100,000 human games.

To train, AlphaGo Zero played millions of games against itself each day and, within the space of three days, had surpassed the version of AlphaGo that defeated world champion Lee Sedol, beating it by 100 games to nil.

After just 40 days, AlphaGo Zero did the unthinkable and achieved a win rate of almost 90pc against the strongest previous version of AlphaGo.
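The specifics of DeepMind's system are set out in its Nature paper, but the self-play idea itself is simple: the program generates its own training data by playing against itself and reinforcing whatever led to a win. The sketch below illustrates that loop in Python for a made-up toy game; ToyGame and PolicyTable are hypothetical stand-ins for AlphaGo Zero's real neural-network-guided tree search, not DeepMind's code.

```python
# A minimal, purely illustrative sketch of self-play training, assuming a made-up
# ToyGame and a simple win-rate table. This is NOT DeepMind's AlphaGo Zero, which
# combines a deep neural network with Monte Carlo tree search, but it shows the
# core loop: play yourself, record the outcome, learn only from your own games.
import random
from collections import defaultdict


class ToyGame:
    """A trivial two-player game: five moves are made, then a winner is drawn."""

    def __init__(self):
        self.moves_left = 5
        self.to_play = 0          # player 0 or player 1
        self.result = None

    def legal_moves(self):
        return [0, 1]             # every position offers the same two moves

    def play(self, move):
        self.moves_left -= 1
        self.to_play = 1 - self.to_play
        if self.moves_left == 0:  # arbitrary terminal rule, for illustration only
            self.result = random.choice([0, 1])

    def winner(self):
        return self.result


class PolicyTable:
    """Tracks how often each move preceded a win; a stand-in for a trained network."""

    def __init__(self):
        self.wins = defaultdict(int)
        self.visits = defaultdict(int)

    def select(self, game):
        # Prefer untried moves, otherwise pick the best empirical win rate.
        def score(move):
            if self.visits[move] == 0:
                return float("inf")
            return self.wins[move] / self.visits[move]
        return max(game.legal_moves(), key=score)

    def update(self, history, winner):
        for player, move in history:
            self.visits[move] += 1
            if player == winner:
                self.wins[move] += 1


def self_play_episode(policy):
    """Play one game against oneself, recording which player made which move."""
    game, history = ToyGame(), []
    while game.winner() is None:
        move = policy.select(game)
        history.append((game.to_play, move))
        game.play(move)
    return history, game.winner()


policy = PolicyTable()
for _ in range(10_000):                # AlphaGo Zero played millions of games per day
    history, winner = self_play_episode(policy)
    policy.update(history, winner)     # no human games are ever used
```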

Speaking about how the AI learned to become arguably the best player of all time, lead researcher David Silver said it inevitably rediscovered moves that humans had developed over the millennia of the game’s existence, and then built upon them.

“It started off playing very naively like a human beginner, [but] over time it played games which were hard to differentiate from human professionals,” Silver said.

“It found these human moves, it tried them, then ultimately it found something it prefers.”

Future uses

In a paper published in Nature documenting the results, the DeepMind team explained that AlphaGo Zero is not only a better player than the original but also far more efficient.

While the original required 48 of Google’s specially built AI processors (tensor processing units) to run, AlphaGo Zero needs just four. It is also entirely self-taught, which greatly reduces the workload on its developers.

The algorithms involved could prove crucial to major scientific research projects, with DeepMind co-founder Demis Hassabis suggesting that they could help in the search for a room-temperature superconductor, a so-far hypothetical material that could conduct electricity with zero energy loss.

While current superconductors need to be kept at extremely cold temperatures to function, a room-temperature superconductor could allow the technology to become far more widespread, bringing with it vastly more efficient power systems.

Colm Gorey was a senior journalist with Silicon Republic

editorial@siliconrepublic.com