Picture this: you sit down at a board marked with a simple grid of black lines. You have a bowl of black stones, your opponent has white. The rules? Place a stone. Surround more territory than your opponent. That’s it. Yet from these laughably simple rules emerges one of the most complex strategic games humans have ever created—and one that perfectly mirrors how programming works.

Photo of a Go board with black and white stones, by Elena Popova on Unsplash

Go, played in China for well over 2,500 years (legend stretches its origins back 4,000 years), proves something remarkable: the most powerful systems often start with the smallest, simplest pieces. Just like how a few lines of code can crash a website, launch a spacecraft, or recommend your next favourite song, a single stone placed on a Go board can completely reshape the entire game.

When Simple Rules Create Infinite Complexity

Think about your smartphone for a moment. Everything it does—from sending texts to playing videos—boils down to millions of simple on-off switches called transistors. Each one follows the same basic rule: if electricity comes in, let it through or block it. That’s binary thinking in action, just like we explored with finger counting. Yet from these billions of identical on-off decisions comes the complexity of modern computing.
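If the on/off idea feels abstract, here is a minimal sketch in Python (chosen here purely for illustration) that prints a few ordinary numbers as the patterns of on/off switches a computer actually stores:

```python
# Print a handful of ordinary numbers as binary: each 1 is a switch that's on,
# each 0 a switch that's off. The specific numbers are arbitrary examples.
for n in [1, 2, 3, 42, 255]:
    print(n, "->", format(n, "b"))
```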

Go works the same way. You place stones on intersections. You capture stones by surrounding them. Groups of stones live or die together. These rules fit on a single page, yet the number of possible Go games exceeds the number of atoms in the observable universe. Sound familiar? It’s the same principle that makes coding so powerful: simple instructions combined in countless ways create infinite possibilities.
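You can sanity-check that claim with a quick back-of-the-envelope calculation. The sketch below only bounds board configurations (full games are vastly more numerous still), but even that loose bound already dwarfs the atom count:

```python
# Each of the 19 x 19 = 361 intersections is empty, black, or white, so 3**361
# is a loose upper bound on board configurations (most aren't even legal positions).
board_configurations = 3 ** 361
atoms_in_observable_universe = 10 ** 80   # commonly cited rough estimate

print(board_configurations > atoms_in_observable_universe)  # True
print(len(str(board_configurations)))                       # 173 -- a 173-digit number
```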

When you write a loop in code—"repeat this action while something is true"—you’re using a rule as simple as "place a stone on the board." But that loop might process a million photos, generate a thousand random numbers, or check every possible move in a game. The rule stays simple; the results become magnificently complex.
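Here is what that looks like in practice: a small Python sketch of the "generate a thousand random numbers" case, with the dice-roll detail invented purely for illustration.

```python
import random

# One simple rule -- "while we have fewer than a thousand numbers, add another" --
# and that's the entire loop. The rule never gets any more complicated.
numbers = []
while len(numbers) < 1000:
    numbers.append(random.randint(1, 6))   # a simulated dice roll

print(len(numbers))                 # 1000
print(sum(numbers) / len(numbers))  # the average lands near 3.5
```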

The Butterfly Effect on a Board

Here’s where Go becomes a perfect coding teacher. In programming, we often talk about “debugging”—fixing small errors that create massive problems. Change one character in a program, like switching a plus sign to a minus sign, and suddenly your banking app starts giving money away instead of charging fees. Small changes, enormous consequences.

Go players experience this constantly. Moving one stone from here to there might seem tiny, but it can shift the entire game. That stone creates new possibilities for connection, changes which territories are safe, and opens up attacks that weren’t possible before. Just like how changing a single condition in an if-then statement can make your program behave completely differently.
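To make that concrete, here is a toy Python sketch (the account, the fee, and the thresholds are all invented) showing how one flipped character in the arithmetic, or in an if-condition, inverts the behaviour completely:

```python
# A toy example, not real banking code.
balance = 100.0
fee = 2.5

balance = balance - fee        # intended: charge the fee
# balance = balance + fee      # one keystroke later, the "bank" pays the customer

if balance < 0:                # intended: warn only on overdraft
    print("Overdraft!")
# if balance > 0:              # flip the comparison and every healthy account gets flagged

print(balance)                 # 97.5
```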

This is why both Go masters and experienced programmers develop something called “pattern recognition.” They learn to see how small decisions ripple outward. A Go player looks at a position and instantly recognises: “This stone placement will create problems twenty moves from now.” A programmer reads code and thinks: “This function call will cause memory issues when the program grows larger.”

Teaching Machines to Think

For decades, Go represented the final frontier for artificial intelligence. Computers mastered chess in the 1990s, but Go seemed impossible. Why? The game tree—the branching possibilities of every potential move—was simply too vast for brute-force calculation.

Then came AlphaGo, and everything changed. Instead of trying to calculate every possible move (impossible even for supercomputers), the researchers at DeepMind taught their AI to think more like a human Go player. They used something called "deep learning"—basically, pattern recognition on steroids. That learned sense of which moves look promising let AlphaGo’s search ignore almost all of the game tree and follow only the strongest-looking branches.

Here’s the magical part: they fed AlphaGo millions of positions from human Go games, letting it discover patterns we never explicitly taught it, and then let it sharpen those patterns by playing against itself. The AI learned to recognise "good" positions versus "bad" ones, just like how you might learn to recognise a catchy song versus an annoying one. No one can explain exactly what makes a melody catchy, but somehow we know it when we hear it.

AlphaGo developed intuition. It would make moves that human experts initially dismissed as mistakes, only to realise twenty moves later: "Oh, that was brilliant." The AI had seen patterns invisible to human eyes, connections we’d never noticed despite millennia of playing the game.

Emergent Strategy and Creative Problem-Solving

This brings us to one of the most exciting concepts in both Go and programming: emergence. Remember how we talked about transistors creating smartphones? Or how individual ants, each following simple rules, somehow create sophisticated colonies? This is emergence—complex behaviours arising from simple interactions.

In Go, you’re not just fighting for territory. You’re creating ladders of stones that chase opponents across the board. You’re building "eyes"—enclosed empty points that, once a group has two of them, make it impossible to capture. You’re trading territory in one corner for influence in the centre. None of these concepts exist in the basic rules, yet they emerge naturally from gameplay.

Programming works identically. You start with basic commands: store this number, compare these values, repeat this action. But combine them thoughtfully, and suddenly you’re creating weather prediction systems, social networks, or music recommendation algorithms. The creativity doesn’t come from the individual commands—it emerges from how you combine them.
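As a tiny illustration (nothing like a real recommendation engine, just a sketch of the principle with made-up song names), here are those three basics combining into something none of them does alone:

```python
# Only the basics: store a value, compare values, repeat an action.
# Combined, they become a (very) tiny recommendation rule:
# suggest whatever was played most often.
plays = ["song_a", "song_b", "song_a", "song_c", "song_a", "song_b"]

counts = {}                                  # store
for song in plays:                           # repeat
    counts[song] = counts.get(song, 0) + 1

favourite = None
for song, count in counts.items():           # repeat
    if favourite is None or count > counts[favourite]:   # compare
        favourite = song

print("You might like:", favourite)          # song_a
```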

When AlphaGo played its famous "Move 37" against world champion Lee Sedol, commentators were baffled. The move seemed to violate everything human players understood about good strategy. Yet it worked perfectly, setting up an advantage that wouldn’t become apparent for dozens of moves. This is emergent strategy in action—solutions that arise from the interaction of simple rules, visible only when you step back and see the bigger picture.

Thinking Several Moves Ahead

Both Go and programming teach you to think in layers. When you place a stone, you’re not just claiming territory now—you’re setting up future possibilities, blocking enemy plans, and creating options for yourself down the line. When you write a function, you’re not just solving today’s problem—you’re creating a tool that future you (or your teammates) can reuse and build upon.
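In code, that forward thinking can be as small as the sketch below (the function name and numbers are invented for illustration): solve today’s problem, but leave the door open for tomorrow’s.

```python
# Today's problem only needs a 10% discount, but leaving the rate as a
# parameter is the "thinking ahead" move -- future code reuses the same tool.
def apply_discount(price: float, rate: float = 0.10) -> float:
    """Return the price after a percentage discount."""
    return price * (1 - rate)

print(apply_discount(50.0))         # today's problem: 10% off -> 45.0
print(apply_discount(50.0, 0.25))   # next month's sale reuses the tool -> 37.5
```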

This kind of thinking takes practice. Novice Go players focus on immediate captures and obvious threats. Beginners in programming write code that solves the problem right now, without considering what happens when requirements change. But as you develop, you start seeing the connections: how this decision affects that possibility, how this function could be extended, how this stone placement creates opportunities three moves from now.

The beautiful thing is, you don’t need to calculate every possibility—impossible anyway, in Go or in programming. Instead, you develop instincts. You learn to recognise patterns that usually work well. You build a library of techniques and strategies. Most importantly, you learn when to trust your intuition and when to step back and calculate more carefully.

Next time you see someone playing Go, or next time you write a simple program, remember: you’re witnessing one of the most remarkable phenomena in the universe. Simple rules, followed consistently, creating complexity beyond imagination. It’s happening in your brain right now, in the computer running this text, in the ancient game spreading across boards in parks and community centres worldwide. Small pieces, infinite possibilities—that’s not just how Go works, or how computers work. That’s how thinking works.