Computing machinery and intelligence A.M. Turing, MIND 1950
This is most certainly a classic paper. We’ve all heard of the ‘Turing Test,’ but have you actually read the paper where Alan Turing defines it? I confess I hadn’t until recently, and there’s a whole lot more to it than I was expecting. Yes, it describes the Turing Test (Turing didn’t call it that at the time; his name for it was the ‘imitation game’), but it also presents his thoughts on whether or not a digital computer can ever pass the test, and some remarkable observations on what it might take to build a learning machine — all the more remarkable given that Turing was writing in 1950! Plus, the writing style is a delight.
The Imitation Game
I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.
To avoid the discussion degenerating into something to be settled by an opinion poll, Turing quickly refines it to a more testable proposition. The imitation game is first introduced with a man (player A) and a woman (player B) in one room, and an interrogator in another. By posing questions (in typewritten form) to the two participants, the interrogator must try to determine which of the two is the woman. The man tries to fool the interrogator into thinking he is the woman.
We now ask the question, ‘What will happen when a machine takes the part of A in this game?’
Can the interrogator tell the difference between human and machine? This question replaces the original ‘Can machines think?’
May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.
The definition of machine is subsequently narrowed to mean a digital computer. “We are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well.” After an introduction to the concept of a universal machine, we are permitted to focus our attention on one particular digital computer, C. We can modify it to have adequate storage, we can suitably increase its speed of action, and we can provide it with an appropriate programme. Can C be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?
Turing’s personal belief
It will simplify matters for the reader if I explain first my own beliefs in the matter. Consider first the more accurate form of the question. I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹ bits (125MB!), to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion.
In the original paper, Turing now proceeds to address a series of imagined objections. I’ll come back to those briefly at the end, because I want to focus first on Turing’s vision for learning machines.
Towards learning machines
…[consider] an atomic pile of less than critical size: an injected idea is to correspond to a neutron entering the pile from without. Each such neutron will cause a certain disturbance which eventually dies away. If, however, the size of the pile is sufficiently increased, the disturbance caused by such an incoming neutron will very likely go on and on increasing until the whole pile is destroyed. Is there a corresponding phenomenon for minds, and is there one for machines?
If you present an idea to most minds, you get less than one idea back. But a smallish proportion of minds are super-critical, and an idea presented to such a mind may give rise to a whole ‘theory’ with secondary, tertiary, and more remote ideas… can a machine be made to be super-critical? It’s not the storage or the speed that will be the problem, argues Turing, but the programming.
Think about an adult mind: how did it arrive at its current state?
We may notice three components, (a) the initial state of the mind, say at birth, (b) the education to which it has been subjected, (c) other experience, not to be described as education, to which it has been subjected. Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one that simulates the child’s! … We have thus divided our problem into two parts. The child-programme and the education process.
We won’t find a good child machine at the first attempt, but Turing suggests we may be able to find one through a process of experimentation and evolution. Let the structure of the child machine be the equivalent of hereditary material, changes to the machine be mutations, and the role of natural selection be played by the judgement of the experimenter.
One may hope, however, that this process will be more expeditious than evolution. The survival of the fittest is a slow method for measuring advantages. The experimenter, by the exercise of intelligence, should be able to speed it up.
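Turing’s scheme maps neatly onto what we would now call a hill-climbing or evolutionary search. Here is a minimal toy sketch of that idea (my gloss, not Turing’s construction): the child machine’s structure is stood in for by a bit-string, mutations flip bits, and the experimenter’s judgement is modelled by a hypothetical `fitness` function that the experimenter would supply.

```python
import random

def fitness(machine):
    # Hypothetical stand-in for the experimenter's judgement:
    # here we simply prefer machines with more 1-bits.
    return sum(machine)

def evolve(length=20, generations=200, seed=0):
    rng = random.Random(seed)
    # The structure of the child machine plays the part of hereditary material.
    parent = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = parent[:]
        child[rng.randrange(length)] ^= 1        # changes play the part of mutations
        if fitness(child) >= fitness(parent):    # selection by the experimenter's judgement
            parent = child
    return parent

best = evolve()
```

Because the experimenter rejects bad mutations immediately rather than waiting for ‘survival of the fittest’ to play out, the search converges far faster than blind selection would — exactly the speed-up Turing anticipates.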
For the core of the education process, Turing suggests a form of reinforcement learning: “the machine has to be so constructed that events which shortly preceded the occurrence of a punishment-signal are unlikely to be repeated, whereas a reward-signal increased the probability of repetition of the events which led up to it.” And, “these definitions do not pre-suppose any feelings on the part of the machine.”
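This reward/punishment rule can be sketched in a few lines (a modern rendering, not Turing’s mechanism): the machine chooses actions with learned probabilities, a reward-signal makes the event that preceded it more likely to be repeated, and a punishment-signal makes it less likely. The teacher function here is hypothetical.

```python
import random

def train(actions, teacher, episodes=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    weights = {a: 1.0 for a in actions}
    for _ in range(episodes):
        # Choose an action in proportion to its current weight.
        r = rng.uniform(0, sum(weights.values()))
        for action in actions:
            r -= weights[action]
            if r <= 0:
                break
        if teacher(action):
            weights[action] *= (1 + lr)   # reward-signal: more likely to repeat
        else:
            weights[action] *= (1 - lr)   # punishment-signal: less likely to repeat
    return weights

# Hypothetical teacher who rewards only the utterance 'b'.
w = train(['a', 'b', 'c'], teacher=lambda a: a == 'b')
```

Note that, just as Turing says, nothing here presupposes any feelings on the part of the machine — the ‘reward’ is simply a multiplicative update.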
But reinforcement learning on its own will not be enough. Think how long it would take a pupil to repeat ‘Casabianca’ if only rewards and punishments could be given for utterances. We need some additional channels of communication, for example, a symbolic language.
Opinions may vary as to the complexity which is suitable in the child machine. One might try to make it as simple as possible consistently with the general principles. Alternatively one might have a complete system of logical inference ‘built in’.
It will be most important, Turing says, to regulate the order in which the rules of the logical system concerned are to be applied. For herein lies the difference between a brilliant and a footling reasoner (not the difference between a sound and a fallacious one).
Does it matter if we understand the inner workings of such a machine?
An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour. This should apply most strongly to the later education of a machine arising from a child-machine of well-tried design (or programme).
It will also help if a learning machine includes a random element. “A random element is rather useful when we are searching for a solution of some problem.”
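Turing’s own illustration of this point in the paper is a small search problem: find a number between 50 and 200 which is equal to the square of the sum of its digits. One can plod systematically from 51 upwards, or choose candidates at random. The sketch below implements the random method:

```python
import random

def is_solution(n):
    # Turing's example: n must equal the square of the sum of its digits.
    return n == sum(int(d) for d in str(n)) ** 2

def random_search(lo=50, hi=200, seed=0):
    # Sample candidates at random until one works.
    rng = random.Random(seed)
    while True:
        candidate = rng.randint(lo, hi)
        if is_solution(candidate):
            return candidate

result = random_search()  # 81 is the only such number in [50, 200]
```

The random method avoids keeping track of what has already been tried, and — as Turing notes of the evolutionary analogy — it is often good enough when the search space contains many acceptable solutions.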
Objections to thinking machines
Turing considers (and dismisses) 9 possible objections to the idea that a machine may someday be successful in the imitation game. We don’t have the space here to go into them, although they are full of character and worth reading if you have the inclination. In brief they are:
- The theological objection – thinking is a function of the immortal soul, and only God can create a soul. (“I am not very impressed with theological arguments. Such arguments have often been found unsatisfactory in the past.” E.g., Galileo).
- The ‘heads in the sand’ objection – the consequences would be too dreadful, so let us hope and believe it’s not possible. (“… not sufficiently substantial to require refutation”).
- The mathematical objection – Gödel’s theorem, and results from Church, Kleene, Rosser, and Turing show that there are limitations to the powers of discrete state machines. (“… but it has only been stated, without any sort of proof, that no such limitations apply to the human intellect. But I do not think this view can be dismissed quite so lightly.”)
- The argument from consciousness – “Yes, but can a machine really feel?” (“This argument appears to be a denial of the validity of our test.”)
- Arguments from various disabilities – you will never be able to make a machine do X (e.g., enjoy strawberries and cream). (Mostly limited imagination based on the machines people have seen so far).
- Lady Lovelace’s objection – the machine can do whatever we know how to order it to perform (with an implied ‘only’ before the ‘do’). A variant of this objection is that a machine can never take you by surprise. (“Machines take me by surprise with great frequency”).
- Argument from continuity in the nervous system – the nervous system is not a discrete-state machine (“… the interrogator will not be able to take any advantage of this difference.”)
- Argument from informality of behaviour – you can’t describe an explicit set of rules for all situations. (“ ‘… there are no such rules so men cannot be machines.’ The undistributed middle is glaring.”)
- Argument from extra-sensory perception (“If telepathy is admitted… then putting the competitors into a ‘telepathy-proof room’ would satisfy all the requirements.”)
We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English… Again I do not know what the right answer is, but I think both approaches should be tried.