Editors’ Note: This is the second in a series of four essays, written by Jack Copeland, spotlighting Alan Turing, who is considered the father of modern computing and whose work breaking German codes changed the course of World War II.
In 1936, at the age of just 23, Turing invented the fundamental logical principles of the modern computer—almost by accident. A shy, boyish-looking genius, the young Turing had recently been elected a fellow of King’s College, an unusual honor for such a young researcher. King’s College, in the heart of Cambridge, lies a few steps along narrow medieval lanes from Trinity College, where in the seventeenth century Isaac Newton revolutionized our understanding of the universe. King’s College was Turing’s intellectual home, and he remained a fellow there until the early 1950s, when he was given a specially created Readership in the Theory of Computing at Manchester University, a new position in the new field that Turing was instrumental in creating: computer science.
At King’s, the young Turing worked alone in a spartan room at the top of an ancient stone building beside the River Cam (scarcely more than a narrow stream winding through the old masonry, and a productive source of damp and chill). It was all quite the opposite of a modern research facility—Cambridge’s scholars had been doing their thinking in comfortless stone buildings, reminiscent of cathedrals or monasteries, ever since the university had begun to thrive in the Middle Ages. Turing was engaged in highly abstract work in the foundations of mathematics. No one could have guessed that anything of any practical value would emerge from this research, let alone a machine that would change all our lives.
As everyone who can operate a personal computer knows, the way to make the machine perform the job you want—word-processing, say—is to locate the appropriate program in memory and click on it. That’s the so-called ‘stored program’ concept and it was Turing’s invention in 1936. (This was just on paper—no engineering at this stage.) Turing’s fabulous idea, dreamed up by pure thought, was of a single processor—a single slab of hardware—that, by making use of instructions stored inside its memory, could change itself from a machine dedicated to one specific task into a machine dedicated to a completely different task—from calculator to word-processor to chess opponent, for example. Turing called his invention the ‘universal computing machine’; now we call it simply the universal Turing machine. If Isaac Newton had known about it, he would probably have wished that he had thought of it first. Nowadays, though, when nearly everyone owns a physical realization of Turing’s universal machine, his idea of a one-stop-shop computing machine is apt to seem as obvious as the wheel and the arch. But in 1936, when engineers thought in terms of building different machines for different purposes, Turing’s idea of a single universal machine was revolutionary.
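The stored-program idea can be sketched in a few lines of modern code. The following is a minimal illustration, not Turing's 1936 notation: one fixed piece of "hardware" (the interpreter) whose behaviour is determined entirely by a program held in the same memory as its data. The instruction set here is invented for the example.

```python
# One fixed interpreter; its behaviour is set entirely by the stored program.
def run(memory, pc=0):
    """Execute instructions stored in `memory` until HALT; return the result."""
    acc = 0
    while True:
        op, arg = memory[pc]
        if op == "LOAD":      # load a constant into the accumulator
            acc = arg
        elif op == "ADD":     # add the value stored at address `arg`
            acc += memory[arg]
        elif op == "STORE":   # write the accumulator to address `arg`
            memory[arg] = acc
        elif op == "HALT":
            return acc
        pc += 1

# Instructions and data share one memory: addresses 0-2 hold the program,
# address 5 holds a data value. Changing the stored program, not the
# hardware, changes what the machine does.
adder = [("LOAD", 2), ("ADD", 5), ("HALT", None), 0, 0, 40]
print(run(adder))  # 42
```

The same `run` function becomes a calculator, a text processor, or anything else purely by loading a different program into memory, which is the essence of Turing's universal machine.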
Turing was interested right from the start in building his universal machine, but he did not know of any viable technology for doing so. The leading digital technology of the time, widely used in telephone exchanges, punched-card calculating equipment and elsewhere, was much too bulky—and, most importantly, much too slow—to be viable for engineering a universal Turing machine. It was at Bletchley Park that Turing learned of the technology he needed in order to realize his world-changing idea, but not until the war had ended, in 1945, did he embark on the process of building a universal machine. Electronics was the way to do it. Tommy Flowers, designer of Bletchley Park’s Tunny-cracking ‘Colossus’ computers, which broke a code used by Hitler and his generals, had pioneered digital electronics during the 1930s. Flowers told me that, at the outbreak of the war with Germany, he was possibly the only person in Britain who realized that electronics could be used for large-scale, high-speed digital computing—and so, when he was sent to assist the codebreakers at Bletchley Park, he turned out to be the right man in the right place at the right time. As soon as Flowers was introduced to the Tunny cipher system, in 1942, he realized that digital electronics was the technology needed for making mincemeat of the new German cipher.
Flowers’ giant electronic Colossi were certainly computers—the world’s first large-scale electronic digital computers, in fact—but they lacked the universal Turing machine’s ultra-flexibility. Flowers had created the Colossi for just one very specific purpose, cracking Tunny. Even long multiplication, which was not needed in Tunny-breaking, lay beyond Colossus’s computational range. Nor did the Colossi incorporate Turing’s stored-program concept. In order to reprogram these machines for a new job, the operators had literally to rewire the computer, setting switches, and re-routing cables by pulling out plugs and pushing them into new sockets. It seems unbearably primitive from today’s perspective, when we take Turing’s glorious stored-program world for granted.
As soon as Turing saw Flowers’ racks of electronic equipment, he knew this was the way to build a miraculously fast universal machine. It was, though, more than four years after Flowers created Colossus that the first universal Turing machine in hardware ran the first stored program, and the modern computer age began. Electronic stored-program universal digital computers became ever smaller and ever faster, until now they fit into school satchels or jacket pockets, linking each of us to the whole world via the Internet. The first of these wonderful machines was named, aptly enough, the ‘Baby’ computer. Two brilliant electronics engineers, Freddie Williams and Tom Kilburn, built the Baby at Manchester University, in the Computing Machine Laboratory recently established there by Turing’s friend and colleague Max Newman, who had presided over Bletchley Park’s giant installation of nine Colossi, the world’s first electronic computing facility. Kilburn had learned from Turing how to design a computer, during a course of highly specialized lectures that Turing gave in London during 1946-47, on what we now call computer architecture. Kilburn was a good pupil: when he started the course, he didn’t know the first thing about computers, but by the end of it he was expertly applying the ideas that he had learned from Turing, and was soon drawing up preliminary schematics for the Manchester Baby.
Once the Baby had come to life and was working in a rudimentary way, Turing rolled up his sleeves to help Kilburn and Williams turn it into a fully functioning computer. The original Baby had no input mechanism apart from a bank of manual switches. These were used to insert a program into memory one bit at a time—not much use for real computing. If you made a single mistake, you had to wipe the memory and go right back to the beginning. The arrangements that Williams and Kilburn had included for output were equally crude: the user had to try to read patterns of zeroes and ones (dots and dashes) as they appeared on what was essentially a TV screen. Turing used codebreaking technology from Bletchley Park to get the computer working properly, designing an input-output system based on the same punched teleprinter tape that ran through Colossus. He also designed a programming system for the computer, and wrote the world’s first programming manual. Thanks to Turing, the first electronic universal computing machine was open for business.
In the weeks before he moved to Manchester, in 1948, Turing wrote what was, with hindsight, the first manifesto of Artificial Intelligence. His provocative title was simply ‘Intelligent Machinery’. While the rest of the world was just beginning to wake up to the idea that computers were the new way to do high-speed arithmetic, Turing was talking very seriously about ‘programming a computer to behave like a brain’. Among other shatteringly original proposals, ‘Intelligent Machinery’ contained a short outline of what we now call genetic algorithms—algorithms based on the survival-of-the-fittest principle of Darwinian evolution—as well as describing the striking idea of building a computer out of artificial human nerve cells (an approach now called ‘connectionism’).
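A toy genetic algorithm in the spirit Turing outlined can be written in a few lines: candidate solutions "survive" according to a fitness measure, and the fittest are mutated to form the next generation. The target string, population size, and mutation scheme below are all illustrative choices, not anything from ‘Intelligent Machinery’ itself.

```python
import random

random.seed(0)
TARGET = "MACHINE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    """Number of positions where candidate s matches the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Replace one randomly chosen letter with a random letter."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

# Start from a population of random strings.
population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(50)]

for _ in range(300):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:10]   # survival of the fittest
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]

best = max(population, key=fitness)
print(best)
```

Because the fittest candidates are carried over unchanged each generation, the best fitness never decreases, and the population climbs steadily toward the target.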
Strangely enough, Turing’s anti-Enigma bombe (code-breaking machine) was the first step on the road to modern AI. The bombe worked by searching at high speed for the correct settings of the Enigma machine—and once it had found the right settings, the random-looking letters of the encrypted message turned into plain German. The bombe was a spectacularly successful example of the mechanization of thought processes: Turing’s extraordinary machine performed a job, codebreaking, which requires intelligence when human beings do it. The fundamental idea behind the bombe, and one of Turing’s key discoveries at Bletchley Park, was what modern AI researchers call heuristic search. The use of heuristics—shortcuts or rules of thumb that cut down the amount of searching required to find the answer—is still a fundamental technique in AI today. The difficulty Turing had confronted in designing the bombe was that the Enigma had far too many possible settings for the bombe to search blindly through them, until it happened to stumble on the right answer—the war might be over before it produced a result. Turing’s brilliant idea was to use heuristics to speed up the search.
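The principle can be illustrated with a toy example; the cipher below is invented and is nothing like Enigma, and the code is not the bombe's actual logic. The heuristic mirrors Turing's crib-based idea: a cheap test on a short fragment of known plaintext (a "crib") rejects almost every candidate setting without a full trial.

```python
from itertools import product

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encipher(text, setting):
    """Invented toy cipher: shift each letter by the corresponding setting letter."""
    shifts = [ALPHABET.index(c) for c in setting]
    return "".join(ALPHABET[(ALPHABET.index(ch) + shifts[i % 3]) % 26]
                   for i, ch in enumerate(text))

SECRET = "KEY"
ciphertext = encipher("WEATHERREPORT", SECRET)
crib = "WEA"   # fragment of plaintext the codebreakers can guess

tested_fully = 0
for setting in ("".join(p) for p in product(ALPHABET, repeat=3)):
    # Heuristic: test the cheap crib first; reject on a mismatch and move on.
    if encipher(crib, setting) != ciphertext[:3]:
        continue
    tested_fully += 1
    if encipher("WEATHERREPORT", setting) == ciphertext:
        print(setting, "- full trials needed:", tested_fully)
        break
```

Blind search would trial all 17,576 three-letter settings in full; the crib check discards nearly all of them after a few cheap comparisons, which is the essence of heuristic search.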
Thanks to the bombe, Turing glimpsed the possibility of achieving machine intelligence of a broader nature by means of guided search. This idea fascinated him for the rest of his life. He was soon talking enthusiastically to his fellow codebreakers about applying the new concept of heuristic search to mechanizing the thought processes used in playing chess. He also wanted to mechanize the process of learning itself. At Bletchley Park, he actually circulated a typescript on machine intelligence—now lost, this was undoubtedly the earliest paper in the field of AI. Once the war was over, Turing began making his radical ideas public. In February 1947, in an ornately grand lecture theatre in a Palladian mansion near London’s Piccadilly, Turing gave what was, so far as I know, the first public lecture ever to mention computer intelligence. He offered his audience a breath-taking glimpse of a new field, predicting the advent of machines that act intelligently, learn from experience, and routinely beat average human opponents at chess. Speaking more than a year before the Manchester Baby ran the first computer program, his far-seeing predictions must have baffled many in his audience. In the lecture Turing even anticipated some aspects of the Internet, saying ‘It would be quite possible to arrange to control a distant computer by means of a telephone line’.
In 1948, Turing wrote the world’s first AI program, with mathematician David Champernowne. It was a heuristic chess-playing program, named ‘Turochamp’. With Turing and Champernowne using paper and pencil to simulate the program’s play by hand, Turochamp quickly proved its ability to beat human players by defeating Champernowne’s wife. Turing planned to run Turochamp on the Manchester computer, but the serious-minded Tom Kilburn put his foot down. He wasn’t going to have his precious computer used for such nonsense.
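At the heart of a program like Turochamp is a heuristic evaluation function that scores a position. The sketch below uses conventional material values often associated with Turochamp (pawn 1, knight 3, bishop 3.5, rook 5, queen 10); the real program also weighed features such as mobility and king safety, so this material-only count is illustrative, not a reconstruction.

```python
# Hypothetical material-count evaluator; piece values are assumptions.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3.5, "R": 5, "Q": 10, "K": 0}

def material_score(white_pieces, black_pieces):
    """Score a position by material; positive favours White, negative Black."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# White is a knight up but a pawn down:
print(material_score("KQRRBNNPPPPPPP", "KQRRBNPPPPPPPP"))  # 2.0
```

A program then searches a few moves ahead, choosing the move that leads to the best-scoring reachable position, which is exactly the kind of calculation Turing and Champernowne carried out by hand when Turochamp played.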
Questions for Discussion:
Is your brain just a computer?
Is Artificial Intelligence dangerous? Could future Artificial Intelligences become the dominant species on earth?
What does the phenomenon of Artificial Intelligence have to tell us about the origins of life on earth?
As the capabilities of computers grow, what distinguishes computers from the human mind? Is consciousness the only difference?
Do you think you could really fall in love with an Artificial Intelligence?
Is it possible to create conditions today that foster creative genius like Turing’s, conditions that could lead to advances as world-changing as the invention of the computer?
What, if any, is the relationship between computer gaming today and artificial intelligence?
Have games advanced our understanding of AI?
What was the name and date of the first electronic stored-program computer to run in the United States?
What was the name of the first electronic stored-program computer to go on sale?
The US computer called ENIAC is often said to have been the first large-scale electronic computer, but in fact the British Colossus was first.
How long had the Colossus computers been in operation before the ENIAC first worked?
What role did the German genius Konrad Zuse play in the development of the modern computer?
What role did the giant of American science John von Neumann play in the development of the stored-program computer?
What is the point of trying to produce artificial intelligence when human intelligence is so easy to produce?
Could a computer (or creature) that is not conscious nevertheless think and be intelligent?
If offered the opportunity, would you elect to live for eternity as a brain-upload, that is, as a computer-based Artificial Intelligence made by harvesting your memories and copying the software of your brain? Is this form of life after death even possible?
Can the human mind achieve things that even the most intelligent computers will never be able to do?