I think there is a line of criticism that can be added. Turing focused on transmissible knowledge. Take qualia, for example: the inner feeling attached to sensations (like pain), to perceptions (like the blue of the sky), or to something more abstract (like the feeling that we are right). Those things are not really transmissible. The reader understands what they mean only because he has experienced those qualia first-hand. I think it has to do with our links to the outside world. Meaning: I can have a model of the world in my mind, but I also interact directly with the world. This last part is what machines lack.
Can it be formalized (I doubt it) or measured (much easier to imagine testing with robots)? How much is it linked to what we call intelligence? These are open questions...
Insightful as usual :) A possible (not quite counter-)argument, or maybe a defense against that criticism, is the same as with the anthropomorphic bias. These are relevant shortcomings of the imitation game, but again, it doesn't claim to be a necessary test of cognition, just a sufficient one. I intend to dive into qualia and subjective experiences (from a computational perspective) in future posts, so I hope to continue this conversation.
The weakness of the Turing test is its behavioristic approach. I presume that was done to provide a basis for empirical assessment.
A non-behavioristic Turing test would look like this:
"A machine is intelligent if you put it out on a lonely island - without any means - and it comes up with a civilization after a few generations."
Humans have done it.
"The proof is in the pudding."
P. S. Intentionally left a few terms unclear, like "a few generations".
Exactly, and what I try to argue in the post is that, in Turing's original idea, there's no intention of implementation. It's just a thought experiment, and like all thought experiments, when you try to implement it you get bogged down by pragmatic constraints.
To the extent that I understand this point, I respectfully disagree with it. The behaviouristic approach is not a weakness but is necessary to determine something that is functionally equivalent to thinking. The test you propose is not one of intelligence but of survival, and that requires resources. A machine may have all the information necessary to repair itself, but if it does not have access to the energy, technology, materials and tools to convert those resources to the purpose of survival, it can't self-sustain. It doesn't replicate and has no process for natural renewal. In fact, if you were to put your machine up against a plant that rapidly reproduces, like Cat Brier, the plant would most likely win this trial. That would suggest the plant would pass your version of the Turing test, not because it has intelligence, but because it made it past the survival elimination round, i.e. by using genetically encoded information to harvest energy.
Yes, I think you're right. "Intelligence" is a difficult concept to pin down. That's why my argument goes in the direction of discussing what "thinking" means, not "intelligence". Is survivability a form of intelligence? Maybe, if we define intelligence as the capacity to make decisions that further your objectives. Are genes intelligent, then? Depends on what "make decisions" implies. Does it imply intentionality? It gets complicated really fast.
I think we are agreed, but at the risk of labouring the point... Genes encode survival information and there is no intentionality to it, as you recognised. In reference to your correspondent, I was trying to say that the behavioural component of the Turing test can't be circumvented without it becoming a test of something else. I was using 'intelligence' imprecisely as a synonym for thinking, which was not what I meant; it was an attempt to compare a machine that can perform logical processes (and may or may not pass the Turing test) with a plant that cannot. Genes don't have intelligence any more than an algorithm does (which I think is uncontroversial), but they do have encoded functionality. Arguably, over 400 million years, the best chance of producing a thinking entity would be something that evolved from the plant.
Niels Bohr, the quantum pioneer, used to snap at his students: “You’re not thinking, you’re merely computing.”
He had a point. Computing is part of human thinking, sure, but it’s the grind part. Computers eat grind for breakfast.
You write about the Turing Test. Here's a question to try. Hand a 9x9 Sudoku to a person and a computer. The computer finishes first every time. The human judge will spot it 100%. If that's not enough, ask: "How did it feel to solve that?" The judge will spot the human unless the machine's been programmed to fake feelings. In that case you're just judging the quality of its acting.
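(For concreteness, and purely as an illustration of what I mean by "grind": below is a minimal Python sketch, not anything from the post, of the brute-force backtracking a computer happily churns through in milliseconds. The function names and layout are just made up for the example.)
=====
# Illustrative only: a brute-force backtracking Sudoku solver.
# `grid` is a 9x9 list of lists, with 0 marking an empty cell.

def valid(grid, row, col, digit):
    """True if placing `digit` at (row, col) breaks no Sudoku rule."""
    if digit in grid[row]:                                  # row clash
        return False
    if any(grid[r][col] == digit for r in range(9)):        # column clash
        return False
    br, bc = 3 * (row // 3), 3 * (col // 3)                 # top-left of the 3x3 box
    return all(grid[r][c] != digit
               for r in range(br, br + 3)
               for c in range(bc, bc + 3))

def solve(grid):
    """Fill `grid` in place by trying digits and backtracking on dead ends."""
    for row in range(9):
        for col in range(9):
            if grid[row][col] == 0:
                for digit in range(1, 10):
                    if valid(grid, row, col, digit):
                        grid[row][col] = digit
                        if solve(grid):
                            return True
                        grid[row][col] = 0                  # undo, try the next digit
                return False                                # nothing fits: backtrack
    return True                                             # no empty cells left: solved
=====
Pure mechanical trial and error; no feeling anywhere in it.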
Computers don't feel. Nor do they have bodies or experience pain and death. They don't know what it means to be free. Which is why the question isn't "Can machines think?" but "What's different about the way they 'think' versus the way we do?"
Pondering that question reveals our fundamental nature that distinguishes us from the computers we invented. And maybe it helps us figure out how to coexist with machines that are brilliant at grinding through the stuff we’re terrible at, but clueless about what it means to be alive.
I kicked this around more in Byte-Sized Tech Tips. Fair warning: I’m not an engineer, not a philosopher, and barely a journalist. Just a guy trying to puzzle through the weird place we’ve landed in. Here's one of those posts: https://open.substack.com/pub/geraldpallor/p/conversations-with-a-machine-part-924
As I keep finding, AI makes a great literary foil on which to better understand who we are as humans.
How about this:
- There is no thought without a subject (what is even a thought without a subject?)
- Machines never deal with subjects and can never deal with subjects
Three supporting observations:
1. Neural networks don't deal with subjects, which explains their behavior (yes, we know NNs are still inadequate, but they're good practical examples to look at: https://davidhsing.substack.com/p/why-neural-networks-is-a-bad-technology )
2. A thought experiment illustrating the nature of an algorithm (this is my variation of the CRA, the Chinese Room Argument... it avoids certain problems by not using a third person as a rhetorical device; a rough code sketch of the setup follows below):
=====
You memorize a whole bunch of shapes. Then, you memorize the order the shapes are supposed to go in so that if you see a bunch of shapes in a certain order, you would “answer” by picking a bunch of shapes in another prescribed order. Now, did you just learn any meaning behind any language?
=====
https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46
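(A minimal code rendering of that setup, with made-up shapes and rules, purely for illustration: the whole "exchange" is nothing but matching stored patterns against incoming ones and emitting the stored response.)
=====
# Illustrative only: the "shapes" thought experiment as a literal lookup table.
# The symbols and the prescribed orders below are invented for the example.

RULES = {
    ("triangle", "circle"): ("square", "square", "triangle"),
    ("circle", "circle", "square"): ("triangle",),
    ("square",): ("circle", "triangle"),
}

def answer(shapes):
    """Return the prescribed output sequence for a memorized input sequence.

    Nothing in here refers to anything outside the table itself: it is
    pattern matching over internal states, and nothing else.
    """
    return RULES.get(tuple(shapes), ())

print(answer(["triangle", "circle"]))  # ('square', 'square', 'triangle')
=====
(Swap in Chinese characters or English words for the shapes and nothing changes.)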
3. (Related to #2) Machines work by matching internal loads (pattern matching). All that a machine "deals with" is its own internal states; there's no such thing as an "external world" to a machine. Machines are epistemically landlocked. The power to actually "refer to" anything external is that of intentionality, "the power of minds and mental states to be about, to represent, or to stand for, things, properties and states of affairs" (https://plato.stanford.edu/entries/intentionality/). What do a machine's sensors do? Generate signals, which get compared to other signals... pattern matching (see #2 above).
- Therefore, machines could never possess thoughts, and thus could never think. QED
Thanks for this thoughtful comment. To be clear, I'm not claiming machines can think; I'm merely analyzing what Turing's test implies in terms of a computationalist definition of thinking, which is an interesting philosophical perspective to contrast with, say, the naturalist perspective I believe you're putting forward here. That being said, just for the purpose of furthering the discussion, I would say your claim that "machines never deal with subjects and can never deal with subjects" is motivated reasoning. All you are providing here are definitions, not proofs. You still have to explain what makes the human brain qualitatively different from "a machine": not current machines, but any machine we can devise in the future, presumably even ones that would be embodied and would use learning procedures indistinguishable from what happens in the brain. If you want to claim there is something special about the biological brain that is, even in principle, impossible to design into a machine, fine, but that's an a priori claim; don't pretend there is empirical evidence to prove it. Again, I'm not claiming otherwise, I'm just trying to continue the discussion :)
We have no exhaustive modeling, and can't have exhaustive neural modeling, due to the underdetermination I pointed out in one of the above links (towardsdatascience). However, we always have complete knowledge of how an artifact works and must work, as I also pointed out above (transference of loads between manufactured parts, the very definition of a machine, Merriam-Webster sense 1d). That load transference precludes any kind of actual referent-dealing, as demonstrated in the thought experiment above. Is there meaning in the pattern matching of loads? No. Is that how machines work and must work? Yes. Machines don't and can't deal with referents, so no meaning could be involved. I don't see how that's not a logical proof.
Again, that is circular reasoning. You are defining "machine" as something and using that definition to prove the very thing you are defining. Whatever some dictionary says about the meaning some people decided to give to some word has absolutely zero impact on what the actual things can do, even if your definition intends to capture some essence of that thing. You cannot will your definition into becoming an actual property of the thing your definition attempts to index. That's just bad epistemology.
No, that's what a machine is. You tell me what a machine is and how a machine works, then, if it's not transferring loads between manufactured parts... It would certainly be news to me if one doesn't; please enlighten me on what a machine is. Remember: any artifact has to be designed and built. Basically every functionalist non-argument out there ignores that fact. I'm not the one who is making up new meanings for terms here.
Wonderful! What an excellent summary of issues around the Turing Test. It was fun being reminded of his original conception of the Imitation Game -- the Turing Test has become so divorced from his original idea. Which, I guess is not necessarily a bad thing -- ideas change -- but I do wonder if we just keep moving the "thinking" goal posts here? Thanks for clearly laying out the logical claims, it made it easy to follow along. I'm sure I'll be mulling over these ideas for days -- what the heck is thinking?
Thank you for reading it and for this feedback :) Yes, I think we've been moving goalposts all along the history of AI, but I guess that's part of the nature of artificial intelligence itself. It's an ill-defined term however you look at it: "intelligence" is crazy hard to define, but even "artificial" is no easier. So yeah, there is a lot to discuss around these topics, I'm more than happy to have this conversation with you at some point, we seem to have complementary points of view and I would love to put that in writing someday :)
As you ask, I'm inclined to consider seriously whether science can help us here.
Appreciate your comment. The question is indeed whether thinking (intelligence) can/needs to be described in behavioural terms. Why? Is it really purely intentional in the sense of a 'logical process' (if that is the property that distinguishes it from 'unintentional' mere survival or propagation)?
I don't have a concrete answer to these questions, but want to point out that, as long as the question is not unambiguously phrased, the answer can be anything, given the lack of clear definition of what terms like 'intelligence' or 'consciousness' really mean. In that case it really seems helpful to revert to the broadest possible response: put it on a lonely island and let it determine itself. That's what human intelligence actually achieves (not in terms of behaviour but of performance). I see no problem with assessing 'thinking'/'intelligence' in a non-behaviouristic way but it's an interesting point.
Fantastic post! Enjoyed reading it a lot.
Thanks man 😊
This may be a silly question, but I'll ask it anyway. Is there a significant difference between a test where the judge has to pick out the human from two participants and one where the judge is presented with only one participant and has to determine whether it's human?
The judge could set it up adversarially. "hey, the other guy says you're a computer, what do you say about that?"
The judge could simulate this with just a single person of course, so in principle I think they can be made equivalent in the sense that if you would pass one you would pass the other.
Great essay. I explored something similar regarding Critical Thinking (linked below). In it I broke thinking into two structures:
1. Knowledge gathering
2. Logical formulation and reformulation of concepts.
Your essay also highlights just how fluffy the term 'Thinking' really is. The bigger question is what all the other biological processes are that influence our thoughts, like emotions, which a computer doesn't have.
https://www.polymathicbeing.com/p/do-you-really-think-critically
Yes, exactly. As @Spear of Lugh also hinted down there, Turing leaves a lot out. It's not meant to be a full account of all the possible ways in which thinking could happen. He's just trying to convince you there is at least one way in which computers could conceivably be considered to be thinking.
The bigger issue is when we take 'thinking' and then read into it 'sentience', 'consciousness', 'intent', etc.
Yep. That's the crucial thing to separate. I believe consciousness (or sentience) and intelligence are mostly independent, in principle. They just happen to be heavily correlated in biological entities because both emerge from evolutionary pressures. But I think we can construct intelligence in non-evolutionary ways and consciousness won't necessarily emerge. This is my current view.
Congratulations on finishing this piece! I know you wanted to get it right, and I think it came together really well. I just shared with my readers.
Awesome man, deeply thankful for your support!
I love that we can have much longer "conversations" (I mean, that's more or less what a piece is here: a longform chat of sorts, wherein we can think and comment about what the other person says, integrate points, and so on). That is why I'm here and not somewhere else, first and foremost!
Going to be releasing my piece this month. I'm getting so much inspiration, but now I also want to enrich the idea with what's come up in your discussions. A wonderful read as usual 😊
Thank you for your kind words 💖