“Game of Life”: the game that wasn't a game
Do you remember Flash games? The ones that ran in the browser before Adobe decided to kill everything in 2020? I do. There were sites – Miniclip, Newgrounds – that were a kind of uncurated digital playground, pages with black backgrounds and popups everywhere, where you could spend hours without really understanding what you were doing. You complain about brainrot? Maybe you don't remember the nineties web and that girl with the wart singing the polka... Anyway, it was one of those unremarkable afternoons. I don't remember the exact site – one of those places with incomprehensible URLs like “geocities.com/~someone/games” and graphics that hurt your eyes. I stumbled onto something strange. The Adobe Flash logo hadn't even finished loading, there were no instructions, no “Play” button. Just a grid of black and white cells changing, generation after generation, apparently at random.
I waited. I thought it was still loading. Nothing. The grid kept changing. I tried clicking on the cells. Nothing. I tried pressing keys on the keyboard. Nothing. I watched for a few minutes, waiting for something to happen – a game over, a score, an objective. Nothing. It wasn't a game. There was nothing to “play.” It was like watching rain fall, but digital. Hundreds and hundreds of pixels kept appearing and disappearing. I got bored, closed the tab. Years later – I don't remember how many, a lot – I happened to read an article on Wikipedia. The title was “Conway's Game of Life.” And the penny dropped.
What I had seen that day wasn't a game, or at least not in the traditional sense. It was a simulation. And that simulation, with four rules that even a child could understand, was doing something that none of those rules explicitly anticipated: producing complexity. Order from chaos. Structures that emerged, grew, interacted. Patterns that moved across the grid as if alive. And then – and this is where I had my epiphany – those structures could simulate an electronic circuit. Any electronic circuit. Theoretically, any computation that a Turing machine can perform. Four rules, binary cells. In essence: a universal computing machine.
Welcome to the story of how the English mathematician John Horton Conway, trying to build the simplest possible toy, accidentally built one of the most powerful demonstrations of how complexity can emerge from nothing. Dear creationists – yes, this one's for you too.
Von Neumann had a question
Before Conway, there was Von Neumann. John von Neumann – Bond, James... okay, I'll stop – was already asking, back in the 1940s, a question that sounds almost philosophical: can a machine build a copy of itself? It wasn't an abstract question. Von Neumann had already demonstrated theoretically that it was possible. His model – a two-dimensional “cellular automaton” – proved the principle. It worked like this: a configuration of cells on a grid contains within itself the “instructions” (encoded as the arrangement of active and inactive cells) to replicate itself. The structure reads these instructions, manipulates the surrounding cells, and generates an identical copy of itself in another area of the grid. The copy contains the same instructions, so it can repeat the process indefinitely. It's every engineer's dream (or nightmare, depending on your perspective): a machine that reproduces without external intervention.
The problem was the monstrous complexity of the system. Von Neumann's model required 29 different states per cell – twenty-nine – and a set of rules that filled pages and pages of algebra. It was functional, demonstrably correct, but it was a monster. Nobody could really grasp it at a glance, let alone implement it and study it in practice. It was like having the perfect recipe for a dish, but with 300 rare ingredients and 50 steps requiring laboratory equipment.
In 1962, Conway – then at Cambridge, working on group theory and other things that sound complicated – decided to do something apparently simple. He looked for the most minimal possible version of Von Neumann's idea. A system of rules poor enough to be understandable by anyone, but rich enough to allow complex behaviour and, eventually, self-reproduction. It took him years. Not weeks, not months. Years. From 1962 to 1970. Eight years of proposals, tests, failures, adjustments. Every ruleset was analysed: too ordered? Everything converges to fixed configurations and the system dies. Too chaotic? Total noise, no structures. Conway was looking for a precise critical point: enough stability to allow persistent forms, enough instability to allow unpredictable and interesting behaviour.
He was obsessed with this balance. He tested it on graph paper (computers weren't yet fast enough to do it quickly), with groups of students, by hand, generation after generation. Painstaking work. Or the work of a madman, depending on how you look at it.
By 1970 he had found what he was looking for. He called it the “Game of Life.” Martin Gardner, who had a monthly column in Scientific American called “Mathematical Games,” presented it in October of that year. And within weeks it became one of the most famous objects in the entire history of recreational mathematics and computer science.
The four rules (and why each one matters)
The system is embarrassingly simple. You have an infinite two-dimensional grid (in practice: very large). Each cell can be in one of two states: alive (black) or dead (white). Each cell has eight neighbours – the four cardinal directions plus the four diagonals. At each generation, all cells simultaneously update their state following four rules:
A live cell with fewer than 2 live neighbours dies – isolation. There isn't enough interaction to sustain life. It's loneliness that kills.
A live cell with 2 or 3 live neighbours survives – stability. Local density is just right. There's enough support, but not too much competition. It's the point of equilibrium.
A live cell with more than 3 live neighbours dies – overpopulation. Too much competition for resources (it's a metaphor, but it works). Too much crowding suffocates.
A dead cell with exactly 3 live neighbours comes to life – reproduction. Three live cells create the conditions to generate new life. Not 2, not 4. Exactly 3.
That's it. Nothing else. No exceptions, no special conditions, no “if this cell is particular then...”. Four rules, applied uniformly to every cell, every generation, forever.
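The four rules translate almost line for line into code. Here's a minimal sketch in Python (illustrative only – not the author's implementation), representing the grid as a set of live-cell coordinates so the board is effectively unbounded:

```python
from collections import Counter

def step(live):
    """Apply Conway's four rules to advance one generation.

    `live` is a set of (x, y) coordinates of live cells; storing only
    the live cells keeps the grid effectively infinite.
    """
    # Count, for every cell adjacent to a live cell, its live neighbours.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        # Rule 4: exactly 3 neighbours -> birth (covers dead cells).
        # Rules 1-3: a live cell survives only with 2 or 3 neighbours;
        # fewer than 2 (isolation) or more than 3 (overpopulation) kill it.
        if n == 3 or (n == 2 and cell in live)
    }
```

A quick sanity check: a 2×2 block is a fixed point of `step`, and a three-cell blinker returns to its starting shape after two generations.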
Now stop for a moment and think about this: where is the complexity in these rules? Where does it say that structures must emerge? Where does it say that patterns can exist that move, oscillate, interact in non-trivial ways? Nowhere. The rules only talk about individual cells and their immediate neighbours. Nothing more. And yet complexity emerges. It emerges necessarily, as an inevitable consequence of that subtle balance Conway spent eight years searching for. It isn't programmed into the rules. It's an emergent property of the system. And this is the point that made the penny drop for me, years after that grid: complexity doesn't need to be designed. It can simply happen, if the conditions are right.
The taxonomy
In the first year after publication in Scientific American, readers – programmers, mathematicians, students, enthusiasts – flooded the magazine with discoveries. It had become a viral phenomenon, in an era when “viral” still meant photocopies and letters sent by post. And very quickly a natural classification of structures emerged.
Still lifes – completely stable patterns that never change. The simplest is the “block”: a 2×2 square. In the block, every cell has exactly 3 neighbours – the other three cells of the square. Each one survives because it has exactly 3 live neighbours. The pattern doesn't change, doesn't move. It's just there, motionless, forever. Other examples: the “beehive,” the “loaf” – stable forms that once formed remain identical.
Oscillators – patterns that change but return to their initial configuration after a finite number of generations. The simplest is the “blinker”: three cells in a horizontal line. In the next generation they become three cells in a vertical line. Then back to horizontal. Then vertical. Period 2, infinite oscillation. Other more complex examples: the “toad” (period 2), the “beacon” (period 2), the “pulsar” (period 3, one of the most visually beautiful).
And then there's the one. The glider, my favourite – illustrated in detail on MathWorld.
Five cells, arranged in a specific configuration that looks almost like a wonky little triangle. And this thing – this small five-cell structure – moves. Not in the sense that the cells physically shift around the grid (the cells are fixed, remember). In the sense that the pattern propagates through space, one cell at a time, diagonally downward to the right (or in any direction, depending on the initial orientation). After four generations, the glider has returned to its original configuration, but shifted one position diagonally. And then it continues. Forever. It crosses the grid indefinitely, unless it meets an obstacle.
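That claim – period 4, one diagonal cell of displacement – is easy to verify mechanically. A self-contained Python sketch (the coordinate convention, x for column and y for row growing downward, is an arbitrary choice):

```python
from collections import Counter

def step(live):
    # Standard Life update: count neighbours, apply birth/survival rules.
    n = Counter((x + dx, y + dy) for x, y in live
                for dx in (-1, 0, 1) for dy in (-1, 0, 1) if dx or dy)
    return {c for c, k in n.items() if k == 3 or (k == 2 and c in live)}

# The glider:  .O.
#              ..O
#              OOO
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

g = glider
for _ in range(4):          # one full glider period
    g = step(g)

# After 4 generations the same five-cell shape reappears, shifted one
# cell diagonally: one cell per 4 generations, i.e. speed c/4.
assert g == {(x + 1, y + 1) for x, y in glider}
```

The final assertion is exactly the statement in the text: the glider advances one diagonal cell every four generations.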
And here something starts to change in the way people thought about the system. Because a glider isn't just a pretty pattern to watch. It's a signal. It's something that carries information from point A to point B. It has a direction, it has a speed (c/4, where c is the maximum possible speed in the Game of Life, which is one cell per generation), it has persistence.
And if you have a glider, the next question is obvious: can you create something that generates more gliders? The answer arrived in 1970, a few months after the original publication. Bill Gosper – an MIT programmer, one of the first hackers in history – found the “glider gun”: a configuration of 36 cells that, every 30 generations, spits out a new glider. A periodic signal generator – a source that emits signals at a fixed rate, in a precise direction. In a 2D grid with binary cells and four elementary rules. This is where the story is going.
The heart: four rules, one Turing machine
TL;DR: The Game of Life is Turing-complete. This means that, in principle, you can perform any computation that a Turing machine can perform, inside a 2D grid with binary cells and four rules. No processor. No integrated circuits. Just cells being born and dying according to Conway's four rules.
To understand why the Game of Life is Turing-complete, you need to take a step back on what “Turing-complete” means. Alan Turing, in 1936 (at 24 years old – the age at which I was still playing at being a Wikipedia editor), defined an abstract model of computation: a machine that reads an infinite tape of cells, writes on it, and moves forward or backward, following a finite set of deterministic rules. If a system can simulate any Turing machine – that is, if you can configure it to perform any computation that is computable – that system is Turing-complete. Which means, in practice, that it's universal from a computational standpoint. There is nothing a Turing machine can do that this system cannot do (given enough space and time).
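The definition is easier to grasp with a toy instance. Here's a minimal Turing machine interpreter in Python, running a hypothetical machine that inverts every bit on its tape and halts at the first blank (both the interpreter and the machine are illustrative, not from any source):

```python
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine until it enters the 'halt' state.

    `rules` maps (state, symbol) -> (symbol_to_write, head_move, next_state);
    the tape is a dict from position to symbol, unbounded in both directions.
    """
    tape = dict(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return tape

# A toy machine: scan right, flipping every bit, halt at the first blank.
invert = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

tape = run_turing_machine(invert, {0: "1", 1: "0", 2: "1", 3: "1"})
result = "".join(tape[i] for i in range(4))  # "1011" inverted -> "0100"
```

A system is Turing-complete if it can simulate any such rule table on any tape – which is what the glider machinery described below achieves, at staggering cost in space and time.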
Now back to the Game of Life. We have the glider: a signal that moves. We have the glider gun: a periodic source of signals. But is this enough to build a computer? No. To have a logic circuit you need the fundamental logic gates – AND, OR, NOT. All basic boolean operations. Everything else – addition, multiplication, comparisons, conditional jumps, arbitrary algorithms – is built by combining logic gates.
Logic gates in the Game of Life are implemented by exploiting interactions between gliders. When two gliders intersect, the result depends on their relative configuration, the precise timing of the encounter, the direction of approach. Some combinations cause the two gliders to completely annihilate each other (output: no glider). Others produce new gliders in specific directions (output: one or more gliders). By changing the geometry of the encounter – the exact position of the glider guns that generate them, the timing, the distances – you can build configurations that behave like AND, OR, and NOT gates. The incoming gliders represent the input bits (0 or 1, depending on whether the glider is present or not). The outgoing gliders represent the result of the logical operation.
If you have logic gates, you have combinational circuits. If you have combinational circuits and a memory mechanism (implemented with glider loops and oscillating patterns), you have sequential circuits. And if you have arbitrary sequential circuits, you have a Turing machine.
This isn't theory. It's been done. In 2000, Paul Rendell built a functioning Turing machine entirely within the Game of Life – with tape, read/write head, states, transitions. In 2002, Paul Chapman built a programmable universal computer in Life. And Brice Due's “OTCA metapixel” (2006) – a huge configuration that behaves exactly like a single Life cell – made it possible to take the concept further still and run the Game of Life... inside the Game of Life.
These implementations are, obviously, infinitely slower than a real processor. A single clock cycle requires hundreds or thousands of generations. A simple addition takes billions of steps. But they work. The computation happens, correct, deterministic, verifiable.
But back to the main point. What does all this mean? It means that the grid I was staring at all those years ago – that thing I didn't understand, that looked like organised noise – had more theoretical computational power than any processor I've ever used. Not in terms of speed (that would be ridiculous) but in terms of what can be done.
The biology that isn't biology (but almost)
Conway never claimed that the Game of Life literally simulated biological life. The rules have nothing to do with DNA, cells, metabolism, evolution. There's no natural selection, no adaptation. It's a purely deterministic system where the same initial conditions always produce the same result. Zero stochasticity, zero mutations, zero genetics.
And yet the field of “artificial life” owes an enormous debt to the Game of Life. Because the GoL demonstrated experimentally a principle that before 1970 was more philosophical intuition than concrete proof: biological complexity doesn't require an intelligent designer. It can emerge from simple rules, applied uniformly, with nothing more than local interactions between identical elements.
Self-organisation – structures that emerge without central coordination.
Competition for space – the patterns that survive are those that satisfy the conditions of survival (the four rules).
Emergence of hierarchical structures – from the single glider (elementary pattern) to the glider gun (generator) to logic circuits (systems of patterns that interact in a coordinated way).
It's not biological evolution in the Darwinian sense. But it's the same underlying principle: from simplicity, complexity emerges, without that complexity needing to be explicitly encoded in the fundamental rules.
The risk here is always falling into superficial analogies that don't hold up to analysis. The GoL doesn't simulate real ecosystems. The “cells” aren't biological cells. There's no metabolism, no sexual reproduction, no genetic variability. The parallel should be taken for what it is: an illustrative case of a more general principle, not a replica of real life. But it remains true that when you watch a glider gun fire gliders indefinitely, or when you see complex patterns emerging from random initial configurations, it's hard not to think: “this looks alive.” It isn't, of course. But the boundary between “looks alive” and “is alive” is more blurred than we like to admit.
Wolfram and the search for universality
If Conway showed that complexity emerges from simplicity in one specific case, Stephen Wolfram tried to do something more ambitious: systematically map all possible behaviour of simple cellular systems.
Wolfram – physicist, mathematician, creator of Mathematica (yes, that Mathematica) – published in the 1980s a series of papers on one-dimensional “cellular automata,” even simpler versions of the Game of Life. Imagine not a 2D grid, but a single row of cells. Each cell has only two neighbours (left and right) instead of eight. Each cell can be 0 or 1. And a cell's behaviour in the next generation depends only on its own state and those of its two neighbours.
How many possible rules are there for such a system? 256. Exactly 256, because there are 8 possible configurations of three cells (2³), and for each you must decide whether the central cell will be 0 or 1 in the next generation (2⁸ = 256 total combinations).
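That same counting argument is what gives each rule its name: write the eight output bits as a binary number and you get an integer from 0 to 255. A sketch in Python, decoding Rule 110 as an example (the function names are mine, not standard):

```python
def make_rule(number):
    """Update function for elementary cellular automaton rule `number` (0-255)."""
    def rule(left, centre, right):
        # The three neighbour bits form an index 0-7; bit `index` of the
        # rule number is the cell's state in the next generation.
        index = left * 4 + centre * 2 + right
        return (number >> index) & 1
    return rule

def evolve(cells, rule):
    """One generation of a 1D automaton, with wrap-around at the edges."""
    n = len(cells)
    return [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

rule110 = make_rule(110)
# 110 = 0b01101110, so e.g. neighbourhood 111 -> 0, 110 -> 1, 001 -> 1.
row = [0] * 30 + [1]
for _ in range(10):          # structure spreads leftward from the seed
    row = evolve(row, rule110)
```

Changing the single integer passed to `make_rule` gives you any of the 256 automata Wolfram catalogued.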
Wolfram numbered them all – Rule 0, Rule 1, Rule 2... Rule 255 – and tested them systematically, generation after generation, starting from different initial configurations. And he discovered that, despite their apparent diversity, all 256 automata naturally grouped into four categories of behaviour:
Class I – convergence to a uniform state. Everything dies or everything becomes the same. Total order, extremely boring.
Class II – simple periodic behaviour. Oscillators, stable patterns that repeat. Interesting order, but predictable.
Class III – complete chaos. Pseudo-random noise, no persistent structures. Unpredictable but not interesting.
Class IV – the interesting point. Complex non-periodic behaviour. Structures that emerge, interact, produce patterns that are neither ordered nor chaotic. It's the zone between order and chaos where interesting things happen.
Class IV is the one that matters. It's the critical point – the same balance Conway spent eight years chasing in the Game of Life. And Matthew Cook (working with Wolfram) formally proved that Rule 110 – a single one-dimensional ruleset among the 256 possible – is Turing-complete (the proof was published in 2004).
Rule 110. Three bits of input, one bit of output, eight total rules. Simpler than the Game of Life. And universal.
Wolfram went further. In his controversial book A New Kind of Science (2002, over 1,200 pages that he wrote entirely himself, which already says something about the personality), he launched a much larger thesis: that the universe itself might fundamentally be a cellular automaton. That physical reality – the behaviour of particles, fields, forces, gravity – might be the result of simple rules applied uniformly to a discrete grid of “cells” at a sub-Planck scale.
It's a bold thesis. The scientific community received it with significant scepticism – it isn't easily falsifiable in the traditional sense, it requires enormous conceptual leaps, and Wolfram doesn't exactly have a reputation for modesty (understatement). But it hasn't been disproved. And the fact that systems as simple as Rule 110 are sufficient to produce universal behaviour is proof that the principle works: from simplicity, any level of computational complexity can emerge.
If the universe really is a cellular automaton, then God (or the Flying Spaghetti Monster) is a programmer who wrote very simple rules and then pressed “Enter.” Everything else – stars, galaxies, you reading this – is emergence. All consequence, no explicit design.
The cultural legacy
There's something strange about the cultural history of the Game of Life. There's nothing to win, nothing to lose, no objectives. It isn't a tool – it produces no practically useful results in any sense. It solves no real problems. It's a pure intellectual object. A puzzle with no solution because it has no question. And yet millions of people have implemented it. In every imaginable programming language. Python, Java, C, Rust, JavaScript, Haskell, Brainfuck (yes, really). On every platform. Arduino, Raspberry Pi, FPGA, GPU with CUDA. In every format. Terminal with ASCII art, graphical interfaces, physical LEDs, E-ink screens. On programmable calculators. On Game Boy. On two-euro microcontrollers.
It has become the “Hello World” of simulation. The first program you write when you want to understand emergence, cellular automata, complexity. And every time someone re-implements it – and they do it purely for pleasure – they repeat an act that Conway performed in 1970: taking an abstract idea and turning it into something concrete, tangible, visible.
I did it myself, years later – I implemented it in bash. There was no practical reason for it. I did it because I wanted to truly understand it, build it with my own hands, see how it worked. And this pattern repeats throughout hacker and open source culture. If you want to play the game of life – which sounds like it means something else entirely – you can download the script from here.
The Game of Life has been ported to systems Conway would never have imagined. Someone implemented it in Excel with formulas. Someone built it with real electronic circuits. Someone constructed it with quantum cellular automata. Someone used it to generate music (every live cell is a note). Someone made it three-dimensional. Pure pleasure in building something that works, that does what it should do, that is elegant in its simplicity. It's the practical demonstration that mathematical beauty exists.
Back to the grid
That Flash grid I was staring at all those years ago is still in my memory with a strange clarity, like the feeling of looking at something that made no sense. Today, though, I know it made more sense than I could have imagined. Four rules, no objective, no designer saying “now do this, now do that.” And the result was – and is – one of the most elegant demonstrations that complexity doesn't need an author. That it can emerge from nothing.
Von Neumann had asked: can a machine reproduce itself? Conway had searched for the simplest possible system that showed interesting behaviour. And what he found was something much larger: proof that universal computation can emerge from binary cells and four elementary rules. And everything else – the gliders, the guns, the logic circuits, Turing-completeness, the hypnotic beauty of the patterns that emerge – is consequence. Pure, inevitable consequence.
And perhaps this is why that Flash grid stayed with me for years, even without understanding it. Because at some unconscious level I could sense that there was something fundamental inside it. Something that spoke to how the universe works – not literally, perhaps, but as a metaphor. As a demonstration that simple rules, applied consistently, produce everything we see around us. That the complexity of the world – ourselves included – might simply be a consequence of rules we don't yet know how to read.
To quote an old UAAR slogan: “The bad news is that God doesn't exist. The good news is that you don't need him.”
Sources and further reading
Foundational papers and books – Gardner, M. (1970). “Mathematical Games: The Fantastic Combinations of John Conway's New Solitaire Game ‘Life’”. Scientific American, 223(4), 120-123. – Von Neumann, J. (1966). Theory of Self-Reproducing Automata. University of Illinois Press. – Wolfram, S. (1983). “Statistical Mechanics of Cellular Automata”. Reviews of Modern Physics, 55(3), 601-644. – Berlekamp, E. R., Conway, J. H., & Guy, R. K. (1982). Winning Ways for Your Mathematical Plays, Volume 2: Games in Particular. Academic Press.
Turing-completeness and implementations – Rendell, P. (2000). “A Turing Machine in Conway's Game of Life”. – Chapman, P. (2002). “Life Universal Computer”. – Due, B. (2006). “OTCA Metapixel”. – Cook, M. (2004). “Universality in Elementary Cellular Automata”. Complex Systems, 15(1), 1-40.
Online resources – LifeWiki: https://conwaylife.com/wiki/ (the definitive resource, cataloguing thousands of patterns) – Wikipedia: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life – Gosper's Glider Gun: https://en.wikipedia.org/wiki/Gun_(cellular_automaton) – Rule 110: https://en.wikipedia.org/wiki/Rule_110 – Turing completeness: https://en.wikipedia.org/wiki/Turing_completeness
Pattern explorers – Online simulators: https://playgameoflife.com/ – Golly (dedicated software): http://golly.sourceforge.net/
Interviews and biographical material – Numberphile – John Conway interview series
#GameOfLife #Conway #CellularAutomata #TuringComplete #Complexity #EmergentBehaviour #Mathematics #Wolfram #ComputerScience #Hacker