THE FIRST HACKER AND HIS IMAGINARY MACHINE by Howard Rheingold
Computation, when it was finally invented, a century after Babbage, did not come in the form of some new gadget in an inventor's workshop or a scientist's laboratory. The very possibility of building digital computers was given to the world in the form of an esoteric paper in a mathematics journal in 1936. Nobody realized at the time that this peculiar discovery in the obscure field of metamathematics would eventually lead to a world-changing technology, although the young author, Alan Mathison Turing, knew he was on the track of machines that could simulate the human thought processes.

That mathematics paper was a pivotal point in the cultural history of Western civilization. The first move in the intellectual game that resulted in digital computers was also the last move in another game that had gone on for millennia. In Egypt and Babylonia, where systems for measuring land and forecasting the course of the stars originated, only the priests and their chosen craftsmen were privileged to know the esoteric arts of reckoning. During the flowering of Greek civilization in the fifth and sixth centuries B.C., these protosciences were shaped into the mental tools known as axiomatic systems.

In an axiomatic system you start with premises that are known to be true, and rules that are known to be valid, in order to produce new statements that are guaranteed to be true. Conclusions can be reached by manipulating symbols according to sets of rules. Euclidean geometry is the classic example of the kind of generally useful tools made possible by formal axiomatic systems.

An axiomatic system is a tool for augmenting human thought. Except for rare "lightning" calculators, people are not able to add two six-figure numbers in their heads. Give virtually all people over the age of ten a piece of paper and a pencil, however, and they'll tell you the answer in less than a minute. The magic ingredient that makes a schoolchild into a calculating machine is the kind of step-by-step recipe for performing a calculation that is known as an algorithm. We know such algorithms work because they are based on the formal system known as arithmetic, which we know to be true.
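
To make the idea concrete, here is a minimal sketch, in Python, of that schoolchild's recipe: add the two numbers column by column from the right, carrying whenever a column overflows. The function name is ours, chosen only for illustration.

# A minimal sketch of the column-addition algorithm a schoolchild follows:
# work from the rightmost digit to the leftmost, carrying whenever a
# column's sum exceeds nine.
def add_by_columns(a, b):
    digits_a = [int(d) for d in str(a)][::-1]   # least significant digit first
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        total = da + db + carry
        result.append(total % 10)    # the digit written in this column
        carry = total // 10          # the carry passed to the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(add_by_columns(457689, 238754))   # 696443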

What Turing's paper did, and what made digital computers possible, resulted from the millennia-long effort to reduce the various formal systems to one basic system that underlies them all. Science--our civilization's preeminent system for gathering and validating knowledge--was built on mathematics, which was in turn a logical formalization of the primitive number theories of the Babylonians and the Greeks. Computation was the unexpected result of the attempt to prove that mathematical truths could be reduced to logical truths.

At the same time that our civilization's methods for predicting and understanding the universe grew powerful as the result of these intellectual systems (i.e., science, mathematics, and logic), a few people continued to ask whether these same systems could be reduced to their basic components. If all sciences, when they become advanced enough, can be reduced to mathematical equations, is it possible to reduce mathematics to the most fundamental level of logic?


Since our certainty in the completeness and consistency of our knowledge system could depend on whether such a reduction was possible, it was very disconcerting to Western thinkers when evidence began to appear that there were exceptions, anomalies, paradoxes--holes in the structure of mathematics that might prevent any such grand reduction of formal systems. Those two intellectual quests--the effort to reduce mathematics to a fundamental, formal symbol system, and the attempt to patch up the paradoxes that cropped up during the pursuit of that grand reduction--led directly but unexpectedly to computation.

In the first decades of the twentieth century, mathematicians and logicians were trying to formalize mathematics. David Hilbert and John von Neumann set down the rules of formalism in the 1920s (as we shall see in the next chapter). Before Hilbert and von Neumann, Alfred North Whitehead and Bertrand Russell demonstrated in their Principia Mathematica that some aspects of human reasoning could be formally described, thus linking this awakened interest in mathematical logic to the ideas of the long-forgotten originator of the field, George Boole. The idea of formal systems was of particular interest, because it appeared to bridge the abstractions of mathematics and the mysteries of human thought.

A formal system is a rigidly defined kind of game that specifies rules for manipulating tokens. The qualifications for making a formal system are very much like the rules for any other game. To tell someone how to play a game -- and for the set of rules to qualify as a formal system -- the same three aspects of the game must be communicated: the nature of the tokens, a description of the starting position (or the starting layout of the "board"), and a listing of what moves are allowed in any given position. Chess, checkers, mathematics, and logic are examples of formal systems that satisfy these criteria. By the 1930s, the effort to reduce mathematics to logically secure foundations brought about several attempts to treat arithmetic -- the branch of mathematics concerned with operations on numbers -- as a formal system.

In 1936, at the age of twenty-four, Alan M. Turing established himself as one of the greatest mathematical prodigies of all time when he pointed out to his colleagues that it was possible to perform computations in number theory by means of a machine -- a machine that embodied the rules of a formal system. Although the machine itself didn't exist as a working model, Turing emphasized from the beginning that such machines could actually be built. His finding was a milestone in the effort to formalize mathematics and, at the same time, a watershed in the history of computation.

In his brilliant solution to one of the key metamathematical problems posed by the formalists, Alan Turing described in precise mathematical terms how an automatic formal system with extremely simple rules of operation could have very powerful capabilities. An automatic formal system is a physical device which automatically manipulates the tokens of a formal system according to the system's rules. Turing's theoretical machine was both an example of his theory of computation and a proof that a certain kind of computing machine could, in fact, be constructed.


When he brought mathematics and logic together in the form of a machine, Turing made symbol-processing systems possible. He proposed that the vast majority of intellectual problems could be converted to the form "find a number n such that . . . " Even more important than this provocative statement connecting the abstractions of intellect with the more concrete realm of numbers -- an implication that still inspires the efforts of artificial intelligence researchers -- was Turing's recognition that the numbers were more important as symbols in this case than as elements of mathematical calculations.

One of Turing's greatest insights was his understanding, from the very beginning, of something that the majority of the computer priesthood has yet to understand -- the fact that numbers are only one possible way of interpreting the internal states of an automatic formal system. Babbage's "patterns of action" were now formalized with mathematical rigor. Turing's "states" provided the crucial metaphor for bridging the power of human cognition and the capabilities of machines.

What, Turing asked, does a human symbol processor do when performing a calculation? He decided that mental calculations consist of operations for transforming the input numbers into a series of intermediate states which progress from one to the next according to a fixed set of rules, until an answer is found. Sometimes, people use pencil and paper to keep track of the states of their calculations. The rules of mathematics require more rigid definitions than those provided by the fuzzily described human states of mind discussed by metaphysicians, so Turing concentrated on defining these states in a way that was so clear and unambiguous that the description could be used to command the operations of a machine.

Turing started with a precise description of a formal system, in the form of "instruction tables" describing which moves to make for every possible configuration of states in the system. He then proved that the description of these instructions, the steps of a formal axiomatic system like logic, and the machine states that make up the "moves" in an automatic formal system are all equivalent to one another. Such matters as formal systems and Turing machines sound very far away from what computers actually do, but in fact they underlie the entire technology of digital computers -- which wasn't to come into existence until over a decade after Alan Turing published his epochal paper.

The process of computation was graphically depicted in Turing's paper when he asked the reader to consider a device that can read and write simple symbols on a paper tape that is divided into squares. The "reading/writing head" can move in either direction along the tape, one square at a time, and a control unit that directs the actions of the head can interpret simple instructions about reading and writing symbols in squares. The single square that is "scanned" or "read" at each stage is known as the active square. Imagine that new sections can be added at either end of the existing tape, so it is potentially infinite.

Suppose the symbols are "X" and "O". Suppose that the device can erase either symbol when it reads it in the active square and replace it with the other symbol (i.e., erase an X and replace it with an O, and vice versa). The device also has the ability to move left or right, one square at a time, according to instructions interpreted by the control unit. The instructions cause a symbol to be erased, written, or left the same, depending on which symbol is read.
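
Before the game itself is described, it may help to picture the machine's parts in program form. The following is a purely illustrative sketch, in Python, of the tape (blank, that is "O", on every square not yet written), the head that marks the active square, and the primitive actions the control unit can order; all of the names are ours, not Turing's.

from collections import defaultdict

# An illustrative model of Turing's device: a two-symbol tape that reads "O"
# on every square we have not written, a head marking the active square, and
# the primitive actions available to the control unit.
tape = defaultdict(lambda: "O")    # potentially infinite in both directions
head = 0                           # index of the active square

def read():                        # report the symbol on the active square
    return tape[head]

def write(symbol):                 # erase the active square and write X or O
    tape[head] = symbol

def move(direction):               # shift the head one square, "L" or "R"
    global head
    head += 1 if direction == "R" else -1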


Any number of games can be constructed using these rules, but they would not all necessarily be meaningful. One of the first things Turing demonstrated was that some of the games constructed under these rules can be very sophisticated, considering how crude and automaton-like the primitive operations seem to be. The following example illustrates how this game can be used to perform a simple calculation.

The rules of the game to be played by this Turing machine are simple: Given a starting position in the form of a section of tape with some Xs and Os on it, and a starting square indicated, the device performs the actions dictated by a list of instructions, following them one at a time until it reaches an instruction that forces it to stop. (If there is no explicit instruction in the table of instructions for a particular tape configuration, there is nothing that the machine can do when it reaches that configuration, so it has to stop.)

Each instruction specifies a particular action to be performed if there is a certain symbol on the active square at the time it is read. There are four different actions; they are the only legal moves of this game. They are:

    Replace O with X.
    Replace X with O.
    Go one square to the right.
    Go one square to the left.

An example of an instruction is: "If there is an X on the active square replace it with O." This instruction causes the machine to perform the second action listed above. In order to create a "game," we need to make a list that specifies the number of the instruction that is being followed at every step as well as the number of the instruction that is to be followed next. That is like saying "The machine is now following (for example) instruction seven, and the instruction to be followed next is (for example) instruction eight."

Here is a series of instructions, given both in coded form and in a more English-like translation. Taken together, these instructions constitute an "instruction table" or a "program" that tells a Turing machine how to play a certain kind of game:

1XO2 (Instruction #1: if an X is on the active square, replace it with O, then execute instruction #2.)
2OR3 (Instruction #2: if an O is on the active square, go right one square and then execute instruction #3.)
3XR3 (Instruction #3: if an X is on the active square, go right one square and then execute instruction #3;
3OR4  but if an O is on the active square, go right one square and then execute instruction #4.)
4XR4 (Instruction #4: if an X is on the active square, go right one square and then execute instruction #4;
4OX5  but if an O is on the active square, replace it with X and then execute instruction #5.)
5XR5 (Instruction #5: if an X is on the active square, go right one square and then execute instruction #5;
5OX6  but if an O is on the active square, replace it with X and then execute instruction #6.)
6XL6 (Instruction #6: if an X is on the active square, go left one square and then execute instruction #6;
6OL7  but if an O is on the active square, go left one square and then execute instruction #7.)
7XL8 (Instruction #7: if an X is on the active square, go left one square and then execute instruction #8.)
8XL8 (Instruction #8: if an X is on the active square, go left one square and then execute instruction #8;
8OR1  but if an O is on the active square, go right one square and then execute instruction #1.)

Note that if there is an O on the active square in instruction #1 or #7, or if there is an X on the active square in instruction #2, the machine will stop.


In order to play the game (run the program) specified by the list of instructions, one more thing must be provided: a starting tape configuration. For our example, let us consider a tape with two Xs on it, bounded on both sides by an infinite string of Os. The changing states of a single tape are depicted here as a series of tape segments, one above the other. The active square for each is denoted by a capital X or O. When the machine is started it will try to execute the first available instruction, instruction #1. The following series of actions will then occur:

Instruction   Tape                  What the Machine Does

#1            ...ooXxooooooo...     One (of two) Xs is erased.
#2            ...ooOxooooooo...
#3            ...oooXooooooo...     Tape is scanned to the right.
#3            ...oooxOoooooo...
#4            ...oooxoOooooo...
#5            ...oooxoXooooo...     Two Xs are written.
#5            ...oooxoxOoooo...
#6            ...oooxoxXoooo...
#6            ...oooxoXxoooo...     Scanner returns to the other original X.
#6            ...oooxOxxoooo...
#7            ...oooXoxxoooo...
#8            ...ooOxoxxoooo...
#1            ...oooXoxxoooo...
#2            ...oooOoxxoooo...     This X is erased.
#3            ...ooooOxxoooo...     Scanner moves to the right of the two Xs
#4            ...oooooXxoooo...     that were written earlier.
#4            ...oooooxXoooo...
#4            ...oooooxxOooo...
#5            ...oooooxxXooo...     Two more Xs are written.
#5            ...oooooxxxOoo...
#6            ...oooooxxxXoo...
#6            ...oooooxxXxoo...     Scanner looks for any more original Xs.
#6            ...oooooxXxxoo...
#6            ...oooooXxxxoo...
#6            ...ooooOxxxxoo...
#7            ...oooOoxxxxoo...     The machine stops because there is no
                                    instruction for #7 if an O is being scanned.

This game may seem rather mechanical. The fact that it is mechanical was one of the points Turing was trying to make. If you look at the starting position, note that there are two adjacent Xs. Then look at the final position and note that there are four Xs. If you were to use the same instructions, but start with a tape that had five Xs, you would wind up with ten Xs. This list of instructions is the specification for a calculating procedure that doubles the input and displays the output. It can, in fact, be done by a machine.
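
For readers who want to watch the doubling game run mechanically, here is a minimal sketch of it in Python. The instruction table is the one listed above, written as a dictionary keyed on the instruction number and the symbol read; the machine stops, just as described, when no entry matches. The function and variable names are ours, chosen for illustration.

from collections import defaultdict

# The doubling table given above, keyed on (instruction number, symbol read).
# Each entry names an action and the next instruction: "X" or "O" means
# "write that symbol on the active square"; "L" or "R" means "move one
# square". A missing entry makes the machine stop.
PROGRAM = {
    (1, "X"): ("O", 2), (2, "O"): ("R", 3),
    (3, "X"): ("R", 3), (3, "O"): ("R", 4),
    (4, "X"): ("R", 4), (4, "O"): ("X", 5),
    (5, "X"): ("R", 5), (5, "O"): ("X", 6),
    (6, "X"): ("L", 6), (6, "O"): ("L", 7),
    (7, "X"): ("L", 8),
    (8, "X"): ("L", 8), (8, "O"): ("R", 1),
}

def run_turing_machine(program, starting_xs):
    tape = defaultdict(lambda: "O")          # blank tape, "O" in both directions
    for i in range(starting_xs):             # starting position: a block of Xs
        tape[i] = "X"
    head, instruction = 0, 1
    while (instruction, tape[head]) in program:
        action, instruction = program[(instruction, tape[head])]
        if action in ("X", "O"):
            tape[head] = action              # erase/write the active square
        else:
            head += 1 if action == "R" else -1   # move the head
    return sum(1 for symbol in tape.values() if symbol == "X")

print(run_turing_machine(PROGRAM, 2))   # 4
print(run_turing_machine(PROGRAM, 5))   # 10

Starting with two Xs leaves four on the tape; starting with five leaves ten -- exactly the doubling behavior described above.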


In essence, every Turing machine moves marks from one position on a tape to another position on a tape, in the way the procedure outlined above moved Xs and Os from square to square. These days, the marks can be electronic impulses in microcircuits, and the tape can be an array of memory locations in a memory chip, but the essential idea is the same. Turing proved that his hypothetical machine is an automated version of a formal system specified by the starting position (the pattern of Os and Xs on the tape at the beginning of the computation) and the rules (the instructions given by the instruction tables). The moves of the game are the changing states of the machine that correspond to the specified steps of the computation.

Turing then proved that for any formal system, there exists a Turing machine that can be programmed to imitate it. This kind of general formal system, with the ability to imitate any other formal system, was what Turing was getting at. Such systems are now known as "universal Turing machines." The theory was first stated in a paper with the forbidding title "On Computable Numbers, with an Application to the Entscheidungsproblem."

The Turing Machine was a hypothetical device Turing invented on the way to settling a critical question about the foundations of mathematics as a formalized means of thinking. He showed that his device could solve infinitely many problems, but that there are some problems that cannot be solved because there is no way of predicting in advance whether or when the machine is going to stop. Here is where the parting of the ways between metamathematics and computation occurred.

Our simple example of a doubling program took only twenty-six steps. But there is no way of knowing whether or not other programs (which can be direct translations of theorems in number theory) will ever stop. By proving this, Turing made an equivalent point about all mechanical systems (i.e., systems in which the procedures are definite enough to be carried out by a machine).

Turing and his colleagues ended the long search for a logically certain basis underlying formal systems by making the shocking discovery that there are a number of important features of formal systems about which we can never be certain. Formal systems, by their very nature, have certain inherent limitations. At this point, the theory of computation became something more than an important branch of metamathematics, as the properties of formal systems faded into the background and the properties of machines emerged in a wholly unexpected and dramatic manner. The way the universal Turing machine imitates other Turing machines is as automatic as the way our doubling machine multiplies the input by two. Assuming that the control unit of the device is capable of interpreting simple instructions -- something that had been a matter for toolmakers, not mathematicians, since Babbage's time -- it is possible to encode a more complex list of instructions describing various Turing machines and put them onto the input tape, along with the starting position.


Just as the instructions followed by the machine can be stated in English (or German or French, etc.), or in an abbreviated form like "7XL8," they can be encoded in an even more primitive form. A code can be devised, using the same Xs and Os, that can uniquely represent every instruction and instruction table (program). Both the instructions and the data can be put onto the same tape. A universal Turing machine can then scan that coded tape and perform the function specified in the code (doubling the number on the data portion of the tape, in our example).
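
As a loose illustration of that idea -- the coding scheme, separator, and names below are ours, not Turing's, and for readability the sketch keeps the 1XO2-style shorthand used earlier rather than reducing everything all the way to Xs and Os -- the instruction table and the starting squares can share one "tape," with a single fixed interpreter standing in for the universal machine:

from collections import defaultdict

# "Program and data on the same tape": the coded instruction table and the
# starting squares are packed into one string, and one fixed interpreter
# parses and obeys whatever table it is handed.
CODED_TAPE = ("1XO2 2OR3 3XR3 3OR4 4XR4 4OX5 "
              "5XR5 5OX6 6XL6 6OL7 7XL8 8XL8 8OR1 | XX")

def universal_machine(coded_tape):
    program_part, data_part = coded_tape.split("|")
    table = {}
    for word in program_part.split():             # e.g. "4OX5"
        state, symbol, action, nxt = word[0], word[1], word[2], word[3:]
        table[(int(state), symbol)] = (action, int(nxt))
    tape = defaultdict(lambda: "O")
    for i, square in enumerate(data_part.strip()):
        tape[i] = square
    head, state = 0, 1
    while (state, tape[head]) in table:
        action, state = table[(state, tape[head])]
        if action in ("X", "O"):
            tape[head] = action                   # write on the active square
        else:
            head += 1 if action == "R" else -1    # move the head
    return sum(1 for symbol in tape.values() if symbol == "X")

print(universal_machine(CODED_TAPE))   # 4 -- the doubling table, run as data

Swap in a different coded table and the same fixed interpreter becomes a different machine, which is exactly the point of the paragraphs that follow.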

This code can be interpreted by a machine, a machine that automatically manipulates the tokens, given a list of instructions and a starting configuration. When the machine stops, you read the tape and you get the output of the program. In this case, you put the number you want to double in the starting configuration, and then let the machine metaphorically clank away one square at a time, erasing and writing Os or Xs. When the machine stops, you count the Xs in the final tape configuration.

The list of instructions is what turns the universal Turing machine into the doubling machine. Mechanically, there is no difference between the two machines. The particular instructions described by the code are what the universal Turing machine operates upon. If you can describe, in similarly codable instructions, a machine for tripling, or extracting square roots, or performing differential equations, then your basic, dumb old universal Turing machine can imitate your tripling machine or square root machine.

That ability to imitate other machines is what led to computers. The numbers (or Xs and Os) on the tape aren't that important. They are only symbols for states of a process -- markers in a "doubling game." The list of instructions (the program) is what enables the machine to double the input number. The instructions, not the symbols that keep track of the way they are carried out -- the rules, not the markers -- are what make the Turing machine work. Universal Turing machines are primarily symbol manipulators. And digital computers are universal Turing machines.

It isn't easy to think of the rules of a game as a kind of machine. The task is somewhat easier if you think about "mechanical processes" that are so clearly and specifically defined that a machine can perform them by referring to an instruction table. All universal Turing machines are functionally identical devices for following the program specified by an instruction table. The instruction tables can differ, and they can turn the universal Turing machine into many different kinds of machine. For this reason, the programs are sometimes called "virtual machines."

The distinction between a universal Turing machine and the many different Turing machines it is able to imitate is a direct analogy to digital computers. Like universal Turing machines, all digital computers are functionally identical. At the most basic level, every digital computer operates in the way our doubling machine did with the squares and Os and Xs. Instead of building a different physical machine to solve different problems, it is more practical to describe to an instruction-following machine different virtual machines (programs) that use this one-square-at-a-time mechanical instruction-following process to solve complicated problems through a pattern of simple operations.


Following instructions is the nature of digital computers. The difference between a computer calculator and a computer typewriter, for example, lies in the instructions the computer follows -- the coded description it is given of the virtual machine it is meant to imitate in order to perform a task. Since computers understand "bits" that can correspond to O and X, or 0 and 1, or "on" and "off," you can use these symbols to write descriptions that turn the general machine into the specific machine you want. That's what programmers do. They think of machines people might want to use, and figure out ways to describe those machines to general machines -- computers, that is.

It would be too time-consuming to achieve anything significant in programming if programmers had to spend all their time thinking of ways to describe machines in strings of Os and Xs. The O and X code is similar to what is now called machine language, and only a relatively small number of programmers are able to write programs in it. Assembly language, a close relative of machine language except that it uses recognizable words instead of strings of Xs and Os, is a lot more manageable, so that's what most programmers use when they write video games or word processors. Assembly language makes it easier to manipulate the information in the "squares" -- the memory cells of the computer -- by using words instead of numbers. A translation program called an assembler converts assembly language into machine language.
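
The mnemonics and numeric opcodes below are an invented toy, not any real processor's instruction set, but they sketch the assembler's job: each recognizable word in the source is translated into the number the machine actually stores.

# A toy assembler (the mnemonics and opcodes are invented for illustration):
# each assembly-language line becomes a numeric machine-language pair.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "PRINT": 4}

def assemble(source_lines):
    machine_code = []
    for line in source_lines:
        parts = line.split()
        opcode = OPCODES[parts[0]]                        # mnemonic -> number
        operand = int(parts[1]) if len(parts) > 1 else 0  # optional argument
        machine_code.append((opcode, operand))
    return machine_code

print(assemble(["LOAD 2", "ADD 3", "STORE 7", "PRINT"]))
# [(1, 2), (2, 3), (3, 7), (4, 0)]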

Every different microprocessor (the actual silicon chip hardware at the core of every modern computer) has a list of around a hundred primitive machine language operations -- known as "firmware" -- wired into it. When the assembler follows the instructions in the assembly language programs, using machine language to talk to the microprocessor, the virtual machine meets the actual machine, and the computer is able to accomplish the specified task for the human who started the whole process.

Since you have to accomplish tasks in assembly language by telling the computer very specifically where to find the information you want, when to move it into an "active square" called an accumulator, and where to store it when it is processed, writing anything complicated in assembly language can be a chore -- like writing a book with semaphore flags, or measuring a city with a yardstick.

For example, to add two numbers in assembly language you have to specify what the first number is and assign it to the accumulator, then you have to specify the second number and instruct the machine to add it to the number already in the accumulator. Then you have to specify where to store the answer, and issue step-by-step instructions on how to send the answer to your printer or monitor.
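
A sketch of that accumulator discipline, using the same sort of invented toy instructions (none of this is real assembly language for any particular chip), might look like this:

# A toy accumulator machine (invented for illustration): every value passes
# through the single working register -- the accumulator -- which is why
# even 2 + 3 takes several explicit steps in assembly language.
def run_accumulator(program, memory_size=16):
    memory = [0] * memory_size
    accumulator = 0
    for mnemonic, operand in program:
        if mnemonic == "LOAD":        # put a value into the accumulator
            accumulator = operand
        elif mnemonic == "ADD":       # add a value to what is already there
            accumulator += operand
        elif mnemonic == "STORE":     # copy the accumulator into a memory cell
            memory[operand] = accumulator
        elif mnemonic == "PRINT":     # send the accumulator to the output
            print(accumulator)
    return memory

# Add 2 and 3: load, add, store the answer at address 7, print it.
run_accumulator([("LOAD", 2), ("ADD", 3), ("STORE", 7), ("PRINT", 0)])   # prints 5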

Obviously, it is easier to do the whole thing in a procedure like the one in BASIC: You simply type something on the keyboard, like "PRINT 2 + 3," and some part of the software takes care of accumulators and memory addresses. Your printer prints out "5," or it is displayed on your monitor, and the computer doesn't bother you with details about its internal operations.

At the core of every computer language is something very much like the doubling machine. Since it is possible to describe machines that describe machines, under the rules of the universal Turing machine game, it is possible to write a machine language program that describes a machine that can translate assembly language into machine language. Having done that, this new tool can be used to create yet another level of communication that is even more manageable than assembly language, by making a code-language that is still closer to English.


That last virtual machine -- the English-like one -- is called a high-level programming language. High-level doesn't mean that a language is intellectually lofty, only that it is a virtual machine interpreted by a lower-level machine, which in turn may be interpreted by an even lower-level machine, until you get to the lowest level of on and off impulses that translate the Os and Xs into electronically readable form. BASIC and FORTRAN and other languages that programmers work with are actually virtual machines that are described to the computer by other virtual machines equivalent to the assemblers mentioned above, known as interpreters and compilers.

The first compiler, however, was not to be written until 1953, seventeen years after Turing's theoretical paper was published in 1936. The emergence of the digital computer, based on the principles of Turing's machine, was stimulated by World War II, which was still three years in the future. In 1936, Claude Shannon had yet to discover that the algebra invented by George Boole to formalize logical operations was identical with the mathematics used to describe switching circuits. John von Neumann and his colleagues had yet to devise the concept of stored programming. Norbert Wiener hadn't formalized the description of feedback circuits in control systems. Several crucial electronic developments were yet to come.

Although only a half-dozen metamathematicians thought about such things during the 1930s, the notion of machines whose functions depend on the descriptions of how they operate happened to have one real-world application that suddenly became very important toward the end of the decade. In 1940, the British government developed an intense interest in Turing's theories.

WWII

A top-secret project code-named "Ultra," under the direction of an intelligence officer code-named "Intrepid," had captured and brought to London the secret German cipher machine known as "Enigma." The machine enabled the Nazi high command to send orders to field commanders in the form of an uncrackable code. Even though they had the machine in their hands, British intelligence was still baffled by the encoding mechanism. Even the best of the old-style cryptographers couldn't suggest a solution.

The British high command recruited brilliant mathematicians, engineers, and logicians, inadvertently creating one of the seminal research groups in the field that was to be known as artificial intelligence. Among them was Donald Michie, then only twenty-two, who was later to become the leading British machine intelligence researcher. Another very young colleague who later distinguished himself was I. J. Good, a prankster who once wrote Her Majesty the Queen suggesting that he be made peer of the realm, because then his friends would be forced to remark, "Good Lord, here comes Lord Good," when they saw him coming.

The place known as Bletchley Park is far less famous than Omaha Beach, but many historians contend that the European war was won, in large part, in a closely guarded Victorian mansion in Buckinghamshire, England, by the group of thinkers who succeeded in breaking the German code. The brilliant, young, unorthodox code-crackers were housed near Bletchley Park while they performed their role in the top-secret operation. One of the code breakers was twenty-eight-year-old Alan Turing.

Turing was eccentric, fun-loving, disheveled, painfully honest, erratic, introspective, prodigiously and elegantly brilliant, and somewhat inept socially. Turing was an early model of the similarly maladroit and analogously otherworldly computer hackers who were to come later: He was a sloppy dresser and a passionate chessplayer, fond of children's radio programs and dedicated to long-distance running. (Sometimes he even timed himself with an alarm clock tied around his waist.) Even one of his few intimate friends described his speech as "a shrill stammer and crowing laugh which told upon the nerves even of his friends."

He never quite got the hang of automobiles, which was probably safer, considering the way Turing's mind wandered far away from the realities of the roadway. He preferred the battered bicycle of the Cambridge don. The bicycle and his habit of running twenty or thirty miles to attend a meeting were the objects of sundry anecdotes about "the Prof," as Turing was known around Bletchley. He was once detained by the local constable for bicycling around in a gas mask, which Turing claimed alleviated his hay fever.


Turing and his colleagues at Bletchley Park ended up solving the Enigma enigma by devising a series of machines known as "bombes," "the Robinsons," and a culminating contraption known as "Colossus." Their purpose? To imitate "Enigma," of course!

The Bletchley Park devices were by no means universal machines by Turing's 1936 definition, but they did use important aspects of Turing's ideas. Using high-speed devices for feeding instructions encoded on paper tapes, and electrical circuitry for performing simple but tedious logical operations upon coded messages, the decoding machines began operating in 1943. The machines enabled the British to crack Enigma's code, in part by imitating crucial functions of the enemy coding machine.

The fact that these young academicians had broken the code was a secret of unparalleled importance, perhaps the most closely kept secret of the war, because the ability of the Bletchley machines to continue to successfully decode German messages depended upon the Nazi high command's continuing ignorance that their unbreakable code had been cracked.

Despite the importance of this work, early wartime bureaucracy and the thickets of secrecy surrounding the project threatened to cancel the incredible strategic advantage the 1943 Enigma breakthrough had handed the Allies. Turing appealed directly to Winston Churchill, who gave the project top priority. The codes continued to be cracked throughout the duration of the war, and in 1944 and 1945 the valuable information was disguised in the form of other kinds of intelligence, then relayed to British commanders in the Atlantic.

The tide of the critical U-boat conflict was turned, and the invasion of Europe became possible, largely because of Turing's success with the naval version of the Enigma. The Germans never caught on, and Turing's esoteric work in metamathematics turned out to have dramatically practical applications after all. Because of the growing strategic significance of advanced cryptanalysis methods in the cold war era, the project continued to be held secret for decades after the war. After 1945, a very few people knew that Turing had done something important for the war effort but nobody knew exactly what it was, because he still wasn't allowed to allude to it.

His role at Bletchley wasn't Turing's only wartime contribution. He was sent over to America, at a time when it was indeed dangerous to take a North Atlantic cruise, to share crucial aspects of British cryptanalytic progress with American intelligence and to lend his intelligence to several American war-related scientific projects.

It was during this American visit that Turing picked up practical knowledge of electronics. Turing had first become acquainted with what were then called "electronic valves" when he investigated the possibility of using the exotic vacuum-tube devices coming out of radar research to speed up the massive information-processing tasks needed by the Bletchley code-breakers. In America, Turing was involved in another hypersecret project, this time involving voice encryption -- what the spy novels call "scramblers." Because of this work on the device that was code-named "Delilah," Turing learned his electronics from some of the best in the business -- the engineers at Bell Laboratories in New York (including one named Claude Shannon, a prodigy of a different kind, who will enter the story again).


By the end of the war, the knowledge that electronic technology could be used to speed up logical switching circuits, and the possibility of building working models of Turing's universal machines, led His Majesty's government to once again support an automatic calculating device. This time, it was not called the "Analytical Engine," but the "Automatic Computing Engine" -- or ACE, as it became known. At the end of World War II, despite the work in America of Mauchly and Eckert (ENIAC's inventors), the British were in an excellent position to win the race to build the first true electronic digital computer. But unfortunately for Alan Turing, postwar computer research in Britain was not pursued as aggressively or on as large a scale as the American effort.

Turing, of course, was in the thick of the postwar computer development effort, but not at the center, and certainly not in control. As it turned out, his heroic and secret war work helped to make him the victim of scientific politics, not their master. His reports on the hardware and software design for ACE were ambitious, and if the machine he originally envisioned had been constructed as soon as it was designed, it would have put ENIAC to shame.

While a succession of other men took over the direction of the computer projects at the National Physical Laboratory and at the University of Manchester, Turing hovered at the periphery of political power while he put his mind to the actual construction of one of his long-imaginary universal machines. In this he was hampered by the attitude prevalent among his peers that upper-middle-class Cambridge theoreticians simply did not get their hands dirty with "engineering." But rigid conformity to social standards was not Alan's strong point. He forged ahead with what he knew was important -- the development of a science of software.

Programming

Turing's ideas about the proper approach to computer design stressed the need to build computing capabilities into the program, not the hardware. He was particularly interested in the programming operations -- or "coding," as it was already coming to be called -- by which truly interesting mathematical operations, and possibly "thinking" itself, eventually might be simulated by an electronic computer. And while Turing's first attempt at writing programming languages would be considered crude by today's standards, his ideas were far more advanced than the state of the hardware then available.

While his colleagues and the American team scrambled to put together the most elementary models of electronic digital computers, Turing was already looking far beyond the clumsy contraptions constructed in the late forties and early fifties. His public talks and private conversations indicated a strong belief that the cost of electronic technology would drop while its power as a medium for computation would increase in the coming decades. He also believed that the capabilities of these devices would quickly extend beyond their original purposes.

Programs for doubling numbers or extracting square roots or breaking codes are handy tools, but Turing was aware that calculation was only one of the kinds of formal systems that could be imitated by a computational device. In particular, he saw how the simple "instruction tables" of his theoretical machines could become elements of a powerful grammar that the machines could use to modify their own operations.

One innovation of Turing's stemmed from the fact that computers based on Boolean logic operate only on input that is in the form of binary numbers (i.e., numbers expressed in powers of two, using only two symbols), while humans are used to writing numbers in the decimal system (in which numbers are expressed in powers of ten, using ten symbols). Turing was involved in the writing of instruction tables that automatically converted human-written decimals to machine-readable binary digits. If basic operations like addition, multiplication, and decimal-to-binary conversion could be fed to the machine in terms of instruction tables, Turing saw that it would be possible to build up hierarchies of such tables. The programmer would no longer have to worry about writing each and every operational instruction, step by repetitive step, and would thus be freed to write programs for more complex operations.
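
The decimal-to-binary conversion that such a table packages up is itself a small, fully mechanical recipe: divide repeatedly by two and read off the remainders. Here is a minimal Python sketch of it; the function name is ours, chosen for illustration.

# A minimal sketch of decimal-to-binary conversion by repeated division by
# two -- the kind of fully mechanical chore that, once written as an
# instruction table, never has to be spelled out step by step again.
def decimal_to_binary(n):
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))   # the remainder is the next binary digit
        n //= 2
    return "".join(reversed(bits))

print(decimal_to_binary(13))   # 1101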


Turing wrote a proposal shortly after the end of the war in which he discussed both the hardware and "coding" principles of his long-hypothetical machines. He foresaw that the creation of these instruction tables would become a particularly critical part of the entire process, for he recognized that the ultimate capabilities of computers would not always be strictly limited by engineering considerations, but by considerations of what was not yet known as "software."

Turing not only anticipated the fact that software engineering would end up more difficult and time-consuming than hardware engineering, but anticipated the importance of what came to be known as "debugging":

Instruction tables will have to be made up by mathematicians with computing experience and perhaps a certain puzzle-solving ability. There will probably be a good deal of work of this kind to be done, for every known process has got to be translated into instruction table form at some stage. This work will go on whilst the machine is being built, in order to avoid some delay between the delivery of the machine and the production of the results. Delay there must be, due to the virtually inevitable snags, for up to a point it is better to let the snags be there than to spend such time in design that there are none (how many decades would this course take?). This process of constructing instruction tables should be very fascinating. There need be no real danger of it ever becoming a drudge, for any processes that are quite mechanical may be turned over to the machine itself.

Except for the almost equally advanced ideas of a German inventor by the name of Konrad Zuse, which were long unknown to British and American scientists, Turing's postwar writings about the logical complexities and mathematical challenges inherent in the construction of instruction tables were the first significant steps in the art and science of computer programming. Turing was fascinated with the intricacies of creating coded instruction tables, but he was also interested in what might be done with a truly sophisticated programming language. His original metamathematical formalism had stemmed from his attempt to connect the process of human thought to the structure of formal systems, and Turing was still intrigued by the possibility that automatic formal systems -- computers -- might one day emulate aspects of human reasoning.

The most profound questions Turing raised concerning the capabilities of universal machines were centered around this hypothesized future ability of computing engines to simulate human thought. If machinery might someday help in creating its own programming, would machinery ever be capable, even in principle, of performing activities that resembled human thought? His 1936 paper was published in a mathematical journal, but it eventually created the foundation of a whole new field of investigation beyond the horizons of mathematics -- computer science. In 1950, Turing published another article that was to have profound impact; the piece, more simply titled "Computing Machinery and Intelligence," was published in the philosophical journal Mind. In relatively few words, using tools no more esoteric than common sense, and absolutely no mathematical formulas, Turing laid the groundwork for the boldest subspecialty of computer science -- the field of artificial intelligence.

Despite the simplicity of Turing's hypothetical machine, the formal description in the mathematics journal makes very heavy reading. The 1950 article, however, is worth reading by anyone interested in the issue of artificial intelligence. The very first sentence still sounds as direct and provocative as Turing undoubtedly intended it to be: "I propose to consider the question 'Can machines think?' "

In typical Turing style, he began his consideration of deep AI issues by describing -- a game! He called this one "The Imitation Game," but history knows it as the "Turing Test." Let us begin, he wrote, by putting aside the question of machine intelligence and consider a game played by three people -- a man, a woman, and an interrogator of either gender, who is located in a room apart from the other two. The object of the game is for the interrogator to ask questions of the people in the other room, and eventually to identify which one is the man and which is the woman -- on the basis of the answers alone. To hide the players' appearance, voice, and other sensory clues, the interrogation takes place over a teletype.


Turing then asks us to substitute a machine for one of the unknown players and give the game a new object: This time, the interrogator is to guess, on the basis of the teletyped conversation, which inhabitant of the other room is a human being and which one is a machine. In describing how such a conversation might go, Turing quoted a brief "specimen" of such a dialog:

    Q: Please write me a sonnet on the subject of the Forth Bridge.
    A: Count me out on this one. I could never write poetry.
    Q: Add 34957 to 70764.
    A: (pause about 30 seconds and then give as answer) 105621.
    Q: Do you play chess?
    A: Yes.
    Q: I have K at my K1, and no other pieces. You only have K at K6 and R at R1. It is your move. What do you play?
    A: (After a pause of 15 seconds) R-R8 mate.

Note that if this dialog is with a machine, it is able to do faulty arithmetic (34957 + 70764 does not equal 105621) and play decent chess at the same time.

Having established his imitation game as the criterion for determining whether or not a machine is intelligent, and before proceeding to consider various objections to the idea of artificial intelligence, Turing explained his own beliefs in the matter:

. . . I believe that in about fifty years' time it will be possible to program computers, ... to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and educated opinion will have altered so much that one will be able to speak of machines thinking without expecting it to be contradicted.

In the rest of the paper, Turing presented, then countered, a number of principal objections to the possibility of artificial intelligence. The titles Turing gave these objections reveal his whimsical streak: "The Theological Objection," "The 'Heads in the Sand' Objection," "The Mathematical Objection," "Lady Lovelace's Objection," "The Argument from Consciousness," "Arguments from the Continuity in the Nervous System," "The Argument from Informality of Behavior," and "The Argument from Extrasensory Perception."

In this paper, Turing made evident his knowledge of his intellectual antecedents in this field by countering the objection raised by Ada in her commentary, in which she stated the problem that is still cited by most people in an argument about the possibility of machine intelligence: "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform." Turing pointed out that Ada might have spoken differently if she had seen, as he had, evidence that electronic equipment could be made to exhibit a primitive form of "learning," by which programs would be able to eventually master tasks that had never been specifically programmed, but which emerged from trial-and-error techniques that had been preprogrammed.

Turing's work in computing, mathematics, and other fields was cut short by his tragic death in June, 1954, at the age of forty-one. Besides being a genius, Turing was also a homosexual. During the early 1950s, following the defection of two homosexual spies to the Soviet Union, Great Britain was an especially harsh environment for anyone caught engaging in prohibited sexual acts -- especially for someone who had something even more secret than radar or the atomic bomb in his head. Turing was arrested and convicted of "gross indecency," and sentenced to probation on the condition that he submit to humiliating and physically debilitating female hormone injections. Turing's war record was still too secret to even be mentioned in his defense.

Turing put up with the hormones and the public disgrace, and quietly began to break ground for another cycle of brilliant work in the mathematical foundations of biology -- work that might have had even more momentous consequences, if it had been completed, than his work with computable numbers. For nearly two years after his arrest, during which time the homophobic and "national security" pressures grew even stronger, Turing worked with the ironic knowledge that he was being destroyed by the very government his wartime work had been instrumental in preserving. In June, 1954, Alan Turing lay down on his bed, took a bite from an apple, dipped it in cyanide, and bit again.


Like Ada's, Alan Turing's unconventionality was part of his undoing, and like her he saw the software possibilities that stretched far beyond the limits of the computing machinery available at the time. Like her, he died too young.

Other brilliant mathematicians on other wartime research projects were aware of Turing's work, particularly in the United States, where scientists were suddenly emerging into the nuclear age as figures of power. Military-sponsored research-and-development teams on both sides of the Atlantic continued to work on digital computers of their own. A few of these independent research efforts grew out of ballistics work. Others were connected with the effort to build the first nuclear fission and fusion bombs.

Over a hundred years had passed between Babbage and Turing. When an equally, perhaps even more gifted thinker happened upon the same ideas Turing had been pursuing, it was no accident of history that Turing's theoretical insights were converted to workable machinery. A theory of computation is one very important step -- but you simply cannot perform very sophisticated computations in a decently short interval if you are restricted to a box that chugs along a tape, erasing Os and writing Xs. The next step in both software and hardware history was precipitated by the thinking of another unique, probably indispensable figure in the history of programming -- John von Neumann.

Turing had worked with von Neumann before the war, at Princeton's Institute for Advanced Study. Von Neumann wanted the young genius to stay on with him, as his protégé and assistant, but Turing returned to Cambridge. Von Neumann's profound understanding of the implications of Turing's work later became a significant factor in the convergence of different lines of research that led to the invention of the first digital computers.

It isn't often that the human race produces a polymath like von Neumann, then sets him to work in the middle of the biggest crisis in human history. Von Neumann was far more than an embellisher of Turing's ideas -- he built the bridge between the abstractions of mathematicians and the practical concerns of the people who were trying to create the first generation of electronic computers. He was a key member of the team that designed the software for the first electronic computer and created the model for the physical architecture of computers. He also added elegance and power to Turing's first steps toward creating a true programming language.
