
AlphaGo

AlphaGo mastered the ancient game of Go, defeated a Go world champion, and inspired a new era of AI systems.

Making history

Our artificial intelligence (AI) system, AlphaGo, learned to master the ancient Chinese game of Go — a profoundly complex board game of strategy, creativity, and ingenuity.

AlphaGo defeated a human Go world champion a decade before experts thought possible, inspired players around the world to discover new approaches, and, arguably, became the strongest Go player in history.

It proved that AI systems can learn how to solve the most challenging problems in highly complex domains.


The challenge

Go was long considered a grand challenge for AI. The game is a googol times more complex than chess — with an astonishing 10 to the power of 170 possible board configurations. That’s more than the number of atoms in the known universe.

Despite decades of work, the strongest Go computer programs only played at the level of human amateurs. Standard AI methods struggled to cope with the sheer number of possible moves and lacked the creativity and intuition of human players.

Our approach

We created AlphaGo, an AI system that combines deep neural networks with advanced search algorithms.

One neural network — known as the “policy network” — selects the next move to play. The other neural network — the “value network” — predicts the winner of the game.
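To make the two roles concrete, here is a minimal sketch in Python. The class names, layer sizes, and board encoding are illustrative assumptions, not AlphaGo's actual architecture (the real system used deep convolutional networks over much richer 19x19 board features): the policy network turns a position into a probability for every possible move, and the value network turns the same position into a single estimate of who will win.

```python
import numpy as np

BOARD_POINTS = 19 * 19  # points on a full-size Go board

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TinyPolicyNet:
    """Toy stand-in: board features -> a probability for each of 361 moves plus 'pass'."""
    def __init__(self, hidden=128, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (BOARD_POINTS, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, BOARD_POINTS + 1))  # +1 for "pass"

    def move_probabilities(self, board):
        hidden = np.tanh(board @ self.w1)
        return softmax(hidden @ self.w2)

class TinyValueNet:
    """Toy stand-in: board features -> estimated chance that the current player wins."""
    def __init__(self, hidden=128, seed=1):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (BOARD_POINTS, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))

    def win_probability(self, board):
        hidden = np.tanh(board @ self.w1)
        return float(1.0 / (1.0 + np.exp(-(hidden @ self.w2)[0])))  # squash to (0, 1)

# Encode a position as a flat vector: +1 for a black stone, -1 for white, 0 for empty.
position = np.zeros(BOARD_POINTS)
position[3 * 19 + 3] = 1.0   # a black stone on a 4-4 point

policy, value = TinyPolicyNet(), TinyValueNet()
print(policy.move_probabilities(position).argmax())  # index of the currently favoured move
print(value.win_probability(position))               # estimated win chance for the side to move
```

In the full system these two predictions guide the search: the policy network narrows down the moves worth exploring, and the value network scores the resulting positions without having to play every game out to the end.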

Initially, we introduced AlphaGo to numerous amateur games of Go so the system could learn how humans play the game. Then we instructed AlphaGo to play against different versions of itself thousands of times, each time learning from its mistakes — a method known as reinforcement learning. Over time, AlphaGo steadily became a stronger player.
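As a rough sketch of that two-phase recipe (the function names, the toy "game", and the update rule below are illustrative assumptions rather than AlphaGo's actual training code), a softmax policy is first nudged towards moves taken from a dataset of human games, and is then improved by playing frozen earlier copies of itself, reinforcing the moves from games it won:

```python
import numpy as np

rng = np.random.default_rng(0)
N_MOVES = 362  # 361 board points plus "pass"; the toy game below is one decision per side

def move_probs(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def log_prob_gradient(logits, move):
    """Gradient of log p(move) for a softmax policy: one-hot(move) minus the probabilities."""
    grad = -move_probs(logits)
    grad[move] += 1.0
    return grad

def play_toy_game(logits, opponent_logits):
    """Stand-in for a full game of Go: each side samples one move and an arbitrary
    rule decides the winner. Only the shape of the training loop matters here."""
    my_move = rng.choice(N_MOVES, p=move_probs(logits))
    opp_move = rng.choice(N_MOVES, p=move_probs(opponent_logits))
    return my_move, bool(my_move > opp_move)

logits = np.zeros(N_MOVES)
lr = 0.01

# Phase 1 - imitation: nudge the policy towards moves seen in a (fake) set of human games.
human_moves = rng.integers(200, N_MOVES, size=1000)
for move in human_moves:
    logits += lr * log_prob_gradient(logits, move)

# Phase 2 - self-play reinforcement learning: play frozen earlier copies of the policy
# and scale each update by the game's outcome (win = +1, loss = -1).
opponents = [logits.copy()]
for step in range(5000):
    opponent = opponents[rng.integers(len(opponents))]
    move, won = play_toy_game(logits, opponent)
    reward = 1.0 if won else -1.0
    logits += lr * reward * log_prob_gradient(logits, move)
    if step % 1000 == 999:
        opponents.append(logits.copy())  # snapshot the current policy as a future opponent

print("most preferred move after training:", int(logits.argmax()))
```

AlphaGo itself ran this kind of loop over deep networks at vastly larger scale, and the self-play games also provided the positions and outcomes used to train the value network.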

I thought AlphaGo was based on probability calculation and that it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative.

Lee Sedol
Winner of 18 world Go titles

The matches

In October 2015, AlphaGo played its first match against the reigning three-time European Champion, Fan Hui. AlphaGo won the first-ever match between an AI system and a Go professional, 5-0.

AlphaGo then competed against legendary Go player Lee Sedol — winner of 18 world titles, and widely considered the greatest player of that decade. AlphaGo's 4-1 victory in Seoul, South Korea, in March 2016 was watched by over 200 million people worldwide. This landmark achievement was a decade ahead of its time.


Inventing winning moves

This match earned AlphaGo a 9 dan professional ranking — the first time a computer Go player had received the highest possible certification.

During the games, AlphaGo played several inventive winning moves. In game two, it played Move 37 — a move that had a 1 in 10,000 chance of being used. This pivotal and creative move helped AlphaGo win the game and upended centuries of traditional wisdom.

Then in game four, Lee Sedol played Move 78, which had a 1 in 10,000 chance of being played. Known as “God’s Touch”, the move was just as unlikely and inventive as the one AlphaGo had played two games earlier — and it helped Lee Sedol win the game.

Players of all levels have examined these moves ever since.

Technical legacy

AlphaGo’s victory inspired a new era of AI systems.

It was conclusive proof that the underlying neural networks could be applied to other complex domains, while the use of reinforcement learning showed how machines can learn to solve incredibly hard problems for themselves, simply through trial and error.

Its techniques for looking ahead and planning are also still used in today’s AI systems.

The next generation

These ideas allowed us to develop stronger versions of AlphaGo, and the system continued to play competitively, including defeating the world champion.

Now, its successors — AlphaZero, MuZero, and AlphaDev — are building upon AlphaGo’s legacy to help solve increasingly complex challenges that impact our everyday lives.