Can Machines Think?
What Alan Turing's seminal paper Computing Machinery and Intelligence tells us about the nature of thinking.

In 1950, Alan Turing, then a well-established mathematician and logician recognized by many as one of the forefathers of the nascent Computer Science discipline, published one of his seminal papers, Computing Machinery and Intelligence. In it, Turing formulated his most famous thought experiment, the Imitation Game. It has since been known as the Turing Test, widely regarded as the quintessential artificial intelligence test.
But Turing never intended his imitation game to be taken as an actual, operationalizable test. He was after a deeper philosophical discussion, looking for a definition of "thinking." And by designing his test as he did, and by stating what a positive result would imply, he revealed his philosophical stance on this question.
A deep dive into the Turing Test was recently published, and I recommend you read it before moving on. It is a nice complement to this post and goes deeper into the history and story behind the test, and the many attempts to solve it.
In this post, I invite you to analyze Turing's imitation game from a philosophical perspective. First, we'll go over the original test as Turing devised it. Then, we'll tackle what "passing" the test implies and look at some criticism. Finally, we'll briefly discuss how close we are to claiming something like "We've cracked the Turing Test."
The Imitation Game
Turing was a very good logician, so it is only natural that he tackled this problem with the utmost formality. He opens the paper with the following statement:
I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think". The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. […] Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words. - Alan Turing, 1950.
Thus, instead of trying to answer this abstract question, Turing proposes a different question, which he considers equivalent in the sense that matters to us. He calls it "the imitation game," which works as follows.
Consider two humans and a computer. One of the humans and the computer are located behind a closed door and act as guests, while the second human acts as host or judge. The role of the judge is to determine which of the two guests is human and which is a computer. To achieve this, the judge can ask whatever question they want via a chat interface: a technical question like "Factorize this huge number", a subjective question like "What is your favorite movie?", or anything in between. If the computer consistently succeeds in tricking the judge, we say it "won" the imitation game.
Notice that winning the game is not, for the computer, simply a matter of being more intelligent, knowledgeable, or even more "human-like" than the human. The objective of the computer is to be indistinguishable from the human, which means the judge should, on average, be no better than chance at detecting who is who. In contrast, for the humans to win, the human guest must be consistently identified as such. So the game is asymmetric in this sense.
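To make this chance-level criterion concrete, here is a minimal Python sketch of how a series of rounds could be scored. It is an illustration under assumptions of my own, not anything from Turing's paper: the judge is modeled as a biased coin with a made-up accuracy, and the 5% significance level is a conventional statistical choice.

```python
import math
import random

# A minimal sketch of the imitation game's "win" criterion as a statistical
# test. Everything here is an illustrative assumption: real trials would be
# full chat sessions, and `judge_accuracy` stands in for a judge's actual
# skill at spotting the machine.

def p_value_vs_chance(correct: int, trials: int) -> float:
    """Probability of the judge guessing right at least `correct` times
    out of `trials` if they were flipping a fair coin (p = 0.5)."""
    return sum(math.comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

def run_trial(judge_accuracy: float) -> bool:
    """One round of the game: True if the judge identifies the computer."""
    return random.random() < judge_accuracy

TRIALS = 200
correct = sum(run_trial(judge_accuracy=0.53) for _ in range(TRIALS))
p = p_value_vs_chance(correct, TRIALS)

# The computer "wins" if the judges' hit rate is statistically
# indistinguishable from coin-flipping (here, at a conventional 5% level).
verdict = "computer wins" if p > 0.05 else "computer detected"
print(f"{correct}/{TRIALS} correct, p-value vs. chance = {p:.3f} -> {verdict}")
```

The asymmetry shows up in the decision rule: the humans win by pushing the hit rate significantly above 50%, while the computer wins by keeping it statistically indistinguishable from a coin flip.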
Turing claims that, when taken to its extreme, this imitation game will require the computer to display the full range of cognitive capabilities we consider to make up the human thinking process. Note that this includes purposefully pretending to be less capable at some tasks to avoid being detected: e.g., deliberately failing to solve a 9x9 sudoku in a few seconds, a task trivial for any modern computer but impossible for any human.
If we agree with Turing, we must concede the following statement:
Any machine that can consistently win the imitation game is doing something indistinguishable from what humans call "thinking."
An important note is that this says nothing about whether that machine is sentient or has any motivations or subjective experience. Turing is not claiming that such a machine would "feel" something. He claims that whatever this process we call "thinking" is about, it can't be anything beyond what this machine is doing.
What does "thinking" mean?
Assuming Turing's claim is true implies subscribing to a specific definition of "thinking." To discover that definition, let's try to deny the consequent.
Suppose we accept that a computer can consistently beat the imitation game but still claim it is not thinking. But how do we know? By the definition of the imitation game, whatever the computer does is indistinguishable from what a human would do in the same situation.
The only way to argue that this machine is not thinking is to claim that thinking is not just what can be observed: that a perfect imitation of thinking is not equivalent to thinking. There must be something fundamental to the definition of "thinking" that cannot be observed from the outside.
Turing is thus arguing for a functionalist definition of "thinking": what we mean when we say "someone is thinking" can be captured entirely in the function of thinking. What thinking is, is nothing more than what thinking does, according to Turing, at least.
There are many places where a functionalist definition seems accurate. For example, what is the definition of "to move"? Regardless of how something moves (whether using wheels, legs, combustion, or air; self-propelled or moved by external action; with purpose or randomly; etc.), if it moves, it moves. There is nothing more to "moving" than its function: to change the location of something. In other words, you cannot perfectly imitate motion while not actually moving.
Turing argues that the same happens with "thinking." There is nothing more to thinking than its function.
However, even if you agree that thinking is purely a functional concept, it is still unclear whether the imitation game completely captures that functionality. Turing claims that even if we can't define thinking as a concrete set of enumerable qualities, we can undoubtedly discover what not thinking is. Whenever we accurately distinguish the computer from the human, we detect an instance of not thinking. If we can't find any such instance, we must agree that what has happened is an instance of thinking.
In summary, to agree with Turing means we agree with the following two claims:
Thinking is a functional concept. This means there is nothing to the definition of "thinking" beyond its function, whatever that function is. Equivalently, whatever system (an information-processing machine or a biological brain) performs the function that characterizes thinking must be considered to be thinking.
The only way to consistently beat the imitation game is by performing this function. This means that any clever trick you might come up with to build a computer program that imitates thinking without fully thinking can, in principle, be detected by a sufficiently competent judge (or judges).
Thus, to falsify Turing's claim, we have to attack one of those two claims. Let's look at some of the main criticisms.
Criticism
Turing considers nine different counterarguments in his paper, from the theological to the philosophical to the practical, focusing on what he saw, in his time, as the main opposing viewpoints. Instead of reviewing them in detail, I want to focus on two broad attack strategies, one for each claim. My purpose is not to convince you that either side is correct but to give you as much ammunition as possible to think it through yourself.
Attacking the first claim (that thinking is a functional concept) requires us to pose an alternative definition of thinking. We could claim that machines cannot think because the essence of thinking somehow eludes machines, or because it is special to biological entities, animals, or even humans. Regardless of how indistinguishable from human thinking the computer's behavior seems, it will never amount to actual thinking because it lacks the essence of thinking.
This essentialist definition translates to: "Machines cannot think because thinking is not something that machines can do." This is plain circular reasoning, but you'd be surprised at how many seemingly good rebuttals are just this in disguise.
The most famous argument against the functionalist definition of thinking is John Searle's Chinese room thought experiment. Searle posits a setup in which an information-processing system (a man following a book of rules for responding in Chinese to Chinese messages he does not understand) exhibits behavior indistinguishable from "understanding." Still, he claims that no part of that system (neither the man nor the book) understands Chinese, and crucially, neither does the system as a whole. Searle goes much deeper, and I want to give his argument the respect it deserves, so I'll leave this discussion for a future post.
In any case, attacking the functionalist definition of thinking is a very difficult position to adopt, because if there's something essential to thinking that can't be distinguished from the outside (in the sense that no experiment we could run could differentiate a sufficiently good "imitation of thinking" from "actual thinking"), then we can't understand what thinking is with the tools of science. All science can do is design experiments and observe behavior from the outside.
Attacking the second claim is somewhat easier. Even if we accept the functional definition of thinking, who's to say the imitation game captures all the complexities necessary to ensure there's no way to win it but by thinking? Here we can point out that humans are gullible, as the many recent stories of people claiming some AI program is sentient show us.
However, this is an attack on concrete implementations of the experiment, not on its essence. Most people are gullible in certain circumstances, but even though several people have claimed over the years that some computer program is conscious or sentient, humanity as a whole remains convinced we aren't there yet. So far, every single time, we have eventually unmasked the computer.
Another attack focuses on the anthropomorphic bias of the imitation game. Surely there are ways a thinking entity (an extraterrestrial being, for example) could be detected in the imitation game, not because of a lack of cognitive abilities but because its background knowledge and modes of reasoning would be so different from ours that any judge would immediately recognize the human. And while this is a strong argument, it doesn't undermine Turing's original claim.
Turing claims that anything that can pass the imitation game can think, but he doesn't claim that anything that can think should be able to pass the imitation game. In formal terms, winning the imitation game is a sufficient but not necessary condition to claim someone (or something) is capable of thinking, at least at the cognitive level of humans. So, while the anthropomorphic bias argument does highlight a relevant limitation of Turing's imitation game, it doesn't technically falsify his claims.
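To make the logical shape of this explicit, here is the claim in quantifier form; the predicate names Wins and Thinks are my own shorthand, not Turing's notation:

```latex
% Turing's claim: winning the imitation game is sufficient for thinking.
\forall m:\ \mathrm{Wins}(m) \Rightarrow \mathrm{Thinks}(m)

% The converse is NOT claimed: a thinker (e.g., an alien mind) may lose.
\forall m:\ \mathrm{Thinks}(m) \Rightarrow \mathrm{Wins}(m) \quad \text{(not asserted)}
```

An alien mind failing the game therefore leaves the claim untouched; only exhibiting a machine that wins without thinking would falsify it.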
Cracking the Turing Test
We started this post by emphasizing that Turing didn't expect his imitation game to be implemented as an objective test of "intelligence" for computers. Yet, it has been interpreted as such by myriad computer experts and non-experts alike over the years, and many attempts have been made to operationalize it. It doesn't help that Turing himself speculated that, by the year 2000, we would have computers able to fool around 30% of judges after five minutes of questioning in such a setup. Many took this as an objective milestone for claiming we had reached strong AI.
Any concrete implementation of the imitation game runs into the practical issues of human gullibility and bias, which make it almost impossible to select judges who are guaranteed not to fall for cheap tricks. These issues alone explain all the occasions before 2022 on which someone claimed to have beaten the Turing Test.
However, starting in 2023, we have had, for the first time, technology that is eerily close to what many people would consider a worthy contender for the imitation game: large language models (LLMs). Some of the wildest claims about modern LLMs like GPT-5 seem to imply that the most powerful of these models are capable of human-level reasoning, at least in some domains.
But can GPT-5 pass the Turing Test? Again, this is hard to evaluate objectively because there are so many implementation details to get right. But I, and most other experts, don't believe we are there yet. For all their incredible skills, LLMs fail in predictable ways, allowing any sufficiently careful and well-informed judge to detect them. So no, my bet is current artificial intelligence can't yet beat the imitation game, at least in the spirit originally proposed by Turing.
In any case, there's no GPT-5 out there consistently tricking humans into believing it is one of us.
Or is it? And how would we know?
I'm deeply thankful to and for their helpful feedback.
I think there is a line of criticism that can be added. Turing focused on transmissible knowledge. Qualia, the inner feelings attached to sensations (like pain), to perceptions (like the blue of the sky), or to more abstract states (like the feeling of being right), are not really transmissible. The reader understands what they mean only because he has experienced those qualia first-hand. I think this has to do with our links to the outside world. Meaning: I can have a model of the world in my mind, but I also interact directly with the world. This last part is what machines lack.
Whether it can be formalized (I doubt it) or measured (much easier to imagine testing with robots), and how much it is linked to what we call intelligence, are open questions...
The weakness of the Turing test is its behavioristic approach. I presume that was done to provide a basis for empirical assessment.
A non-behavioristic Turing test would look like this:
"A machine is intelligent if you put it out on a lonely island - without any means - and it comes up with a civilization after a few generations."
Humans have done it.
"The proof is in the pudding."
P. S. Intentionally left a few terms unclear, like "a few generations".