End of the Digital Age: From Babbage to Turing and Beyond
Book: Quantum Supremacy: How the Quantum Computer Revolution Will Change Everything
Author: Michio Kaku
ISBN: 978-0385548366
Chapter 2 of Quantum Supremacy takes us through the entire history of computing in about 30 pages. Kaku starts from the ancient Greeks and goes all the way to Alan Turing and the birth of artificial intelligence. For engineers who work with computers every day, this chapter is a solid reminder of how we got here and what fundamental limits still apply to everything we build.
The World’s Oldest Computer
Kaku opens the chapter with the Antikythera mechanism, a device pulled from the bottom of the Aegean Sea in 1901. It was crafted around 150-100 BCE, had at least 37 bronze gears, and could predict eclipses and track planetary motion. Essentially the world’s first analog computer.
The thing that stuck with me: the purpose of the first computer was simulation. Not accounting, not warfare. The ancient Greeks wanted to hold the cosmos in their hands and understand how it works. Kaku uses this to bridge into quantum computing, which he frames as the ultimate simulation machine, continuing a 2,000-year journey.
Babbage and Lady Ada
After the fall of Rome, not much happened for a long time. Kaku picks up the story in the 1800s with Charles Babbage, the “Father of the Computer.” Babbage wanted to build a mechanical computer that could produce accurate navigational charts and mathematical tables. Humans kept making errors in these calculations, and errors in navigation charts meant shipwrecks. Real economic pressure to automate.
His unfinished machine would have had 25,000 parts, weighed four tons, and could manipulate one thousand 50-digit numbers. That amount of memory was not matched by another machine until 1960. When the London Science Museum finally built one of his designs from the original blueprints, it worked exactly as predicted.
The more interesting part of the Babbage story, though, is Ada Lovelace. She wrote what historians consider the first published computer program: a set of instructions for computing Bernoulli numbers. She also saw something Babbage missed: computers are not just number crunchers. Numbers can represent letters, musical notes, symbols. If you have a machine that manipulates numbers, and those numbers represent other things, you have a general-purpose machine.
From an engineering perspective, this insight still sits at the core of everything we do. Every Docker container, every Kubernetes pod, every CI/CD pipeline is ultimately built on this idea that computation is about symbol manipulation, not just arithmetic.
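Lovelace's insight can be shown in a few lines. This is my own toy sketch, not from the book, and the encodings are arbitrary choices: the same list of numbers reads as text under one interpretation and as musical notes under another.

```python
# Lovelace's insight in miniature: the same numbers mean different
# things depending on how we choose to interpret them.

values = [72, 101, 108, 108, 111]

# Interpreted as character codes, the numbers spell a word.
as_text = "".join(chr(v) for v in values)
print(as_text)  # Hello

# Interpreted as MIDI note numbers, the same list is a melody.
note_names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
as_notes = [f"{note_names[v % 12]}{v // 12 - 1}" for v in values]
print(as_notes)  # ['C5', 'F7', 'C8', 'C8', 'D#8']
```

The machine never changes; only the interpretation does. That is the whole trick behind general-purpose computing.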
Gödel Breaks Mathematics
Kaku then takes a detour into pure mathematics. For 2,000 years, mathematicians assumed that every true statement could be proven. In 1900, David Hilbert listed the most important unsolved problems and challenged the world to prove that mathematics is complete and consistent.
In 1931, Kurt Gödel proved it was impossible. There are true statements in mathematics that simply cannot be proven from a given set of axioms. This shattered 2,000 years of mathematical thinking.
Why does this matter for engineers? It sets the stage for understanding fundamental limits of computation. If mathematics itself is incomplete, then there are things no computer can ever compute. Not a hardware limitation or a software bug. A fundamental property of logic itself.
Turing and the Universal Machine
This is where the chapter gets really good for anyone who writes software. Alan Turing took Gödel’s abstract result and made it concrete. In 1936, he introduced the universal Turing machine, a simple device with an infinite tape of 0s and 1s and just six operations: read, write, move left, move right, change a number, and stop.
Six operations. That is it. Turing showed that this simple machine can perform any computation that any computer can perform. Every supercomputer at the Pentagon, every phone in your pocket, every cloud instance you spin up on AWS is a Turing machine. The architecture changes, the speed changes, but the fundamental capability stays the same.
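Those few primitive operations really are enough to compute. Here is a minimal sketch in Python; the tape representation and the rule table (a toy machine that increments a binary number) are my own illustration, not from the book:

```python
# A minimal Turing machine: a tape, a head, and a transition table
# built only from the primitives the chapter lists (read, write,
# move left/right, change state, stop).

def run(tape, rules, state="start"):
    tape = dict(enumerate(tape))  # sparse tape; "_" is a blank cell
    head = 0
    while state != "stop":
        symbol = tape.get(head, "_")                  # read
        write, move, state = rules[(state, symbol)]   # look up rule
        tape[head] = write                            # write
        head += {"L": -1, "R": 1, "N": 0}[move]       # move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Toy machine: walk to the rightmost digit, then carry leftward,
# which adds 1 to a binary number.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "N", "stop"),
    ("carry", "_"): ("1", "N", "stop"),
}

print(run("1011", rules))  # 1100  (11 + 1 = 12)
```

Everything a modern CPU does reduces, in principle, to loops like this one, just astronomically faster and with a cleverer instruction set.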
Turing then used his machine to recast Gödel’s result in a more concrete, intuitive way. He defined “computable” as something a Turing machine can solve in finite time. If a problem takes infinite time, it is not computable. And yes, there are problems that are not computable. No matter how much hardware you throw at them, they will never finish.
For anyone who dealt with the halting problem in computer science classes, this is the origin. Still relevant today when we think about what problems are tractable and what problems need entirely different approaches, like quantum computing.
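The classic argument behind the halting problem fits in a few lines. The `make_paradox` helper below is a hypothetical illustration of the diagonalization trick, not real program analysis: handed any candidate halting checker, it builds a program that does the opposite of whatever the checker predicts.

```python
# Why no universal halts() checker can exist: any candidate checker
# can be turned against itself. (Illustrative sketch only.)

def make_paradox(halts):
    """Build a program that contradicts the given halting checker."""
    def paradox():
        if halts(paradox):    # checker says "halts"...
            while True:       # ...so loop forever
                pass
        # checker says "loops forever"... so halt immediately
    return paradox

# A checker that always answers "loops" is wrong: this program halts.
q = make_paradox(lambda prog: False)
print(q())  # None -- it halted, contradicting the checker

# A checker that always answers "halts" fails symmetrically:
# make_paradox(lambda prog: True)() would loop forever.
```

No amount of cleverness in `halts` escapes this: the constructed program always does the opposite of the prediction, so no correct general checker can exist.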
Computers Win Wars
The chapter then covers Turing’s work at Bletchley Park during World War II. The Nazis had the Enigma machine for encrypting military communications. Turing and his team built the bombe, an electromechanical device for cracking these codes. They also worked on Colossus, which historians consider the world’s first programmable digital electronic computer. It used 2,400 vacuum tubes instead of mechanical gears, operating near the speed of light compared to the slow cogs of previous machines.
Kaku makes an important distinction here between analog and digital. Analog signals degrade with each copy, like photocopying a photocopy. Digital signals, as 0s and 1s, can be copied with almost no loss. This is why digital won. Same principle behind why we moved from analog tape backups to digital, why we use checksums, why data integrity matters in distributed systems.
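The copy-degradation point can be made concrete with a small simulation. This is my own toy sketch, not from the book: each generation of copying adds noise, and only the digital chain, which snaps every value back to 0 or 1, survives intact, which a checksum can then verify.

```python
# Digital copies survive noise that destroys analog ones.
import hashlib
import random

random.seed(0)  # deterministic noise for the demo

def noisy_copy(signal, noise=0.05):
    """One generation of copying with small additive noise."""
    return [s + random.uniform(-noise, noise) for s in signal]

original = [0.0, 1.0, 1.0, 0.0, 1.0]

# Analog chain: copy the raw values 100 times; error accumulates.
analog = original
for _ in range(100):
    analog = noisy_copy(analog)

# Digital chain: after every copy, threshold back to a clean 0 or 1.
digital = original
for _ in range(100):
    digital = [round(v) for v in noisy_copy(digital)]

print(max(abs(a - b) for a, b in zip(analog, original)))  # drifted
print(digital == [round(v) for v in original])            # True

# A checksum (here SHA-256) detects any corruption in a digital copy.
payload = bytes(digital)
print(hashlib.sha256(payload).hexdigest()[:16])
```

The thresholding step is the whole story: as long as noise per copy stays under half the gap between 0 and 1, the digital signal regenerates perfectly, generation after generation.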
The codebreaking effort shortened the war by about two years and saved an estimated 14 million lives. Because of secrecy laws though, Turing got no recognition. In 1952, after a burglary investigation revealed he was gay, the British government prosecuted him. He was given a choice between prison and chemical castration. He chose the latter. Two years later, he was found dead from cyanide poisoning.
Turing and AI
Before his death, Turing tackled artificial intelligence. In 1950, he proposed the Turing test: put a human and a machine in separate rooms, ask them written questions, and see if you can tell which is which. This replaced centuries of philosophical debate about consciousness with a simple, reproducible test.
Turing predicted that by about 2000, computers with enough memory would fool an average interrogator 30% of the time after five minutes of questioning. He was roughly in the right ballpark, though the real breakthroughs in AI came later than he expected.
Engineering Takeaway
What I take from this chapter is a sense of perspective. We work with incredibly powerful tools every day, but the theoretical foundations were laid by a handful of people working with pen and paper. Babbage imagined the hardware. Lovelace imagined the software. Gödel found the limits. Turing unified everything into a mathematical framework that still holds today.
The quiet message of this chapter is that we are now hitting the limits of what Turing machines can do efficiently. Moore’s Law is slowing down. Classical computation has fundamental constraints. Kaku is setting the stage for quantum computing as the next step beyond the Turing machine. Not replacing it, but extending what is computable in practice.
For those of us who build systems for a living, understanding these foundations is not just academic. It helps you reason about what is possible, what is hard, and what might need a fundamentally different approach.
This is part of a chapter-by-chapter review of “Quantum Supremacy” by Michio Kaku.