Can Machines Think?
What Alan Turing's seminal paper Computing Machinery and Intelligence tells us about the nature of thinking.
In 1950, Alan Turing, then a well-established mathematician and logician recognized by many as one of the forefathers of the nascent Computer Science discipline, published one of his seminal papers, Computing Machinery and Intelligence. In it, Turing formulated his most famous thought experiment, the Imitation Game. It has since become known as the Turing Test and is widely regarded as the quintessential artificial intelligence test.
But Turing never intended his imitation game to be taken as an actual, operationalizable test. He was after a deeper philosophical discussion, one in search of a definition of “thinking.” And by designing his test the way he did, and by stating what a positive result would imply, he told us his philosophical stance on this question.
recently published a deep dive into the Turing Test at . I recommend you read it before moving on. It is a nice complement to this post and goes deeper into the history and the story behind the test, and the many attempts to solve it.
In this post, I invite you to analyze Turing’s imitation game from a philosophical perspective. First, we’ll go over the original test as Turing devised it. Then, we’ll tackle what “passing” the test implies and look at some criticism. Finally, we’ll briefly discuss how close we are to claiming something like “We’ve cracked the Turing Test.”
The Imitation Game
Turing was a very good logician, so it is only natural that he tackled this problem with the utmost formality. He opens the paper with the following statement:
I propose to consider the question, “Can machines think?” This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. […] Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. - Alan Turing, 1950.
Thus, instead of trying to answer this abstract question, Turing proposes a different question, which he considers equivalent in the sense that matters to us. He calls it “the imitation game,” which works as follows.
Consider two humans and a computer. One of the humans and the computer are located behind a closed door and act as guests, while the second human acts as host or judge. The judge's role is to determine which of the two guests is the human and which is the computer. To achieve this, the judge can ask whatever questions they want via a chat interface —a technical question like “Factorize this huge number”, a subjective question like “What is your favorite movie?,” or anything in between. If the computer consistently succeeds in tricking the judge, we say it “wins” the imitation game.
Notice that winning the game for the computer is not simply a matter of being more intelligent, knowledgeable, or even more “human-like” than the human. The objective of the computer is to be indistinguishable from the human, which means the judge should, on average, be no better at detecting who is who than chance. In contrast, for humans to win, they must be consistently detected as such. So the game is asymmetric in this sense.
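To make the setup and the win condition concrete, here is a minimal sketch in Python of how one could simulate the protocol and check the “no better than chance” criterion. Everything in it (the Responder interface, the fixed question list instead of a free-flowing conversation, the simple binomial check) is my own simplification for illustration, not part of Turing's paper.

```python
import math
import random

# A minimal sketch of the imitation-game protocol described above. Every name
# here (Responder, play_round, chance_level_p_value) is my own illustration,
# not anything Turing specified.

class Responder:
    """Either the human or the machine; the judge only ever sees text replies."""
    def reply(self, question: str) -> str:
        raise NotImplementedError

def play_round(judge, human: Responder, machine: Responder, questions) -> bool:
    """One session: the judge chats with both guests (hidden behind labels
    A and B, in random order) and then guesses which label is the machine.
    Returns True if the judge identified the machine correctly."""
    guests = [("human", human), ("machine", machine)]
    random.shuffle(guests)  # hide which guest is behind which label
    transcripts = {
        label: [(q, guest.reply(q)) for q in questions]
        for label, (_, guest) in zip("AB", guests)
    }
    guess = judge(transcripts)  # the judge returns "A" or "B"
    machine_label = "A" if guests[0][0] == "machine" else "B"
    return guess == machine_label

def chance_level_p_value(correct: int, total: int) -> float:
    """Probability of the judge getting at least `correct` out of `total`
    rounds right by guessing at random (a one-sided binomial test)."""
    return sum(math.comb(total, k) for k in range(correct, total + 1)) / 2 ** total

# The machine "wins" if, over many rounds, the judge's accuracy is not
# statistically better than the 50% expected from pure chance. For example,
# 58 correct guesses out of 100 rounds gives p ≈ 0.07, which is not strong
# evidence that the judge can actually tell who is who.
```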
Turing claims that, when taken to its extreme, this imitation game will require the computer to display the full range of cognitive capabilities we consider to make up the human thinking process. Note that this includes purposefully pretending to be less capable at some tasks to avoid being detected —e.g., deliberately failing to solve a 9x9 sudoku instantly, a trivial feat for any modern computer but one no human could pull off in a few seconds.
If we agree with Turing, we must concede the following statement:
Any machine that can consistently win the imitation game is doing something indistinguishable from what humans call “thinking.”
An important note is that this says nothing about whether that machine is sentient or has any motivations or subjective experience. Turing is not claiming that such a machine would “feel” something. He claims that whatever this process we call “thinking” is about, it can’t be anything beyond what this machine is doing.
What does “thinking” mean?
Assuming Turing’s claim is true implies subscribing to a specific definition of “thinking.” To discover that definition, let’s try to falsify the consequent.
Suppose we accept that a computer can consistently beat the imitation game but still claim it is not thinking. But how do we know? By the definition of the imitation game, whatever the computer does is indistinguishable from what a human would do in the same situation.
The only way to argue that this machine is not thinking is to claim that thinking is not just what can be observed: that a perfect imitation of thinking is not equivalent to actual thinking. There would have to be something fundamental to the definition of “thinking” that cannot be observed from the outside.
Turing is thus arguing for a functionalist definition of “thinking”: what we mean when we say “someone is thinking” can be captured entirely in the function of thinking. What thinking is, is nothing more than what thinking does, according to Turing, at least.
There are many places where a functionalist definition seems accurate. For example, what is the definition of “to move?” Regardless of how something moves —whether using wheels, legs, combustion, or air; self-propelled or by external action; with purpose or randomly; etc.— if it moves, it moves. There is nothing more to “moving” than its function —to change the location of something. In other words, you cannot perfectly imitate motion while not actually moving.
Turing argues that the same happens with “thinking.” There is nothing more to thinking than its function.
However, even if you agree that thinking is purely a functional concept, it is still unclear if the imitation game completely captures that functionality. Turing claims that even if we can’t define thinking as a concrete set of enumerable qualities, we can undoubtedly discover what not thinking is. Whenever we accurately distinguish the computer from the human, we detect an instance of not thinking. If we can’t find any, we must agree that what has happened is an instance of thinking.
In summary, to agree with Turing means we agree with the following two claims:
Thinking is a functional concept. This means there is nothing to the definition of “thinking” beyond its function, whatever that function is. Equivalently, whatever system —an information processing machine or a biological brain— that performs the function that characterizes thinking must be considered to be thinking.
The only way to consistently beat the imitation game is by performing this function. This means that any clever trick you could come up with to build a computer program that imitates thinking without fully thinking can, in principle, be detected by a sufficiently competent judge (or judges).
Thus, to falsify Turing’s claim, we have to attack either of those two arguments. Let’s look at some of the main criticisms.
Criticism
Turing considers nine different counterarguments in his paper, from theological to philosophical to practical. He focuses on what he saw, at his time, as the main opposing viewpoints. Instead of reviewing them in detail, I want to focus on two main broad attack strategies, one for each claim. My purpose is not to convince you either side is correct but to give you as much ammunition as possible to think it through yourself.
Attacking the first claim —that thinking is a functional concept— requires us to pose an alternative definition of thinking. We could claim that machines cannot think because the essence of thinking somehow eludes machines, or because it is special to biological entities, animals, or even humans. However indistinguishable from human thinking the computer's behavior may seem, it will never amount to actual thinking because it lacks the essence of thinking.
This essentialist definition translates to: “Machines cannot think because thinking is not something that machines can do.” This is straight circular reasoning, but you’d be surprised at how many seemingly good rebuttals are just this in disguise.
The most famous argument against the functionalist definition of thinking is John Searle's Chinese room thought experiment. Searle posits a setup in which an information processing system —a man who, by following a book of rules, manipulates Chinese symbols to produce sensible replies to questions posed in Chinese— exhibits behavior indistinguishable from “understanding” Chinese. Still, he claims that no part of that system —neither the man nor the book— understands Chinese, and crucially, neither does the system as a whole. Searle goes much deeper, and I want to give his argument the respect it deserves, so I'll leave this discussion for a future post.
In any case, attacking the functionalist definition of thinking is a very difficult position to adopt because if there’s something essential to thinking that can’t be distinguished from the outside —in the sense that no experiment one could make could differentiate a sufficiently good “imitation of thinking” from “actual thinking”— then we can’t understand what thinking is with the tools of science. All science can do is design experiments and observe behaviors from the outside.
Attacking the second claim is somewhat easier. Even if we accept the functional definition of thinking, who’s to say the imitation game captures all the complexities necessary to ensure there’s no way to win it but by thinking? Here we can point out that humans are gullible —as the many recent stories of people claiming some AI program is sentient show us.
However, this is an attack on concrete implementations of the experiment, not on its essence. Most people are gullible in certain circumstances, but even though several people have claimed over the years that some computer program is conscious or sentient, humanity as a whole remains convinced we aren't there yet. So far, in principle, we have uncovered the computer every single time.
Another attack focuses on the anthropomorphic bias of the imitation game. Surely there are ways a thinking entity —an extraterrestrial being, for example— could fail the imitation game, not because of a lack of cognitive abilities, but because its background knowledge and reasoning modes would be so different from ours that any judge would immediately recognize the human. And while this is a strong argument, it doesn't undermine Turing's original claim.
Turing claims that anything that can pass the imitation game can think, but he doesn’t claim that anything that can think should be able to pass the imitation game. In formal terms, the imitation game is a sufficient but not necessary condition to claim someone (or something) is capable of thinking, at least at the cognitive level humans can. So, while the anthropomorphic bias argument does highlight a relevant limitation of Turing’s imitation game, it doesn’t technically falsify the claims.
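In case the logical form helps, here is a rough rendering of that relationship in standard notation (my paraphrase, not Turing's own formalism):

```latex
% Winning the imitation game is sufficient, but not necessary, for thinking:
\[
  \forall x\,\bigl(\mathrm{Wins}(x) \Rightarrow \mathrm{Thinks}(x)\bigr)
  \quad\text{whereas}\quad
  \neg\,\forall x\,\bigl(\mathrm{Thinks}(x) \Rightarrow \mathrm{Wins}(x)\bigr)
\]
```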
Cracking the Turing Test
We started this post by emphasizing that Turing didn't expect his imitation game to be implemented as an objective test of “intelligence” for computers. Yet, it has been interpreted as such by myriad computer experts and non-experts alike over the years, and many attempts have been made to operationalize it. It doesn't help that Turing himself predicted that by the year 2000 there would be computers able to fool an average judge roughly 30% of the time after five minutes of questioning. Many took this as an objective milestone for claiming we had reached strong AI.
Any concrete implementation of the imitation game runs into the practical issues of human gullibility and biases, which makes it almost impossible to select judges who are guaranteed not to fall for cheap tricks. These issues alone explain all the occasions before 2022 when someone claimed they beat the Turing Test.
However, starting in 2023, for the first time, we have technology that is eerily close to what many people would consider a worthy contender for the imitation game: large language models (LLMs). Some of the wildest claims about modern LLMs like GPT-4 seem to imply that the most powerful of these models are capable of human-level reasoning, at least in some domains.
But can GPT-4 pass the Turing Test? Again, this is hard to evaluate objectively because there are so many implementation details to get right. But I, and most other experts, don’t believe we are there yet. For all their incredible skills, LLMs fail in predictable ways, allowing any sufficiently careful and well-informed judge to detect them. So no, my bet is current artificial intelligence can’t yet beat the imitation game, at least in the spirit originally proposed by Turing.
In any case, there’s no GPT-4 out there consistently tricking humans into believing it is one of us.
Or is it? And how would we know?
I'm deeply thankful to and for their helpful feedback.
I think there is a line of criticism that can be added. Turing focused on transmissible knowledge. Think of qualia, the inner feeling attached to sensations (like pain), to perceptions (like the blue of the sky), or to something more abstract (like the feeling of being right). Those things are not really transmissible. The reader understands what they mean because they have experienced those qualia firsthand. I think it has to do with our links to the outside world. Meaning: I can have a model of the world in my mind, but I also interact directly with the world. This last part is lacking for machines.
Whether it can be formalized (I doubt it) or measured (much easier to imagine testing with robots), and how much it is linked to what we call intelligence, are open questions...
The weakness of the Turing test is its behavioristic approach. I presume that was done to provide a basis for empirical assessment.
A non-behavioristic Turing test would look like this:
"A machine is intelligent if you put it out on a lonely island - without any means - and it comes up with a civilization after a few generations."
Humans have done it.
"The proof is in the pudding."
P. S. Intentionally left a few terms unclear, like "a few generations".