24 Comments
Jul 10, 2023 · Liked by Alejandro Piad Morffis

I think there is a line of criticism that can be added. Turing focused on transmissible knowledge. Qualia, typically, are not: these inner feelings attached to sensations (like pain), to perceptions (like the blue of the sky), or to something more abstract (like the feeling that we are right) are not really transmissible. The reader understands what they mean because they have experienced those qualia first-hand. I think it has to do with our links to the outside world. Meaning: I can have a model of the world in my mind, but I also interact directly with the world. This last part is lacking for machines.

Can it be formalized (I doubt it) or measured (much easier to imagine testing with robots)? How closely is it linked to what we call intelligence? These are open questions...

author

Insightful as usual :) A possible defense (not quite a counter-argument) against that criticism is the same as with the anthropomorphic bias. These are relevant shortcomings of the imitation game, but again, it doesn't claim to be a necessary proof of cognition, just a sufficient one. I intend to dive into qualia and subjective experiences (from a computational perspective) in future posts, so I hope to continue this conversation.

Jul 10, 2023 · Liked by Alejandro Piad Morffis

The weakness of the Turing test is its behavioristic approach. I presume that was done to provide a basis for empirical assessment.

A non-behavioristic Turing test would look like this:

"A machine is intelligent if you put it out on a lonely island - without any means - and it comes up with a civilization after a few generations."

Humans have done it.

"The proof is in the pudding."

P. S. Intentionally left a few terms unclear, like "a few generations".

author

Exactly, and what I try to argue in the post is that, in Turing's original idea, there's no intention of implementation. It's but a thought experiment, and like all thought experiments, when you try to implement it you get bogged down by pragmatic constraints.


To the extent that I understand this point, I respectfully disagree with it. The behaviouristic approach is not a weakness but is necessary to determine something that is functionally equivalent to thinking. The test you propose is not one of intelligence but of survival, and that requires resources. A machine may have all the information necessary to repair itself, but if it does not have access to the energy, technology, materials, and tools to convert those resources to the survival purpose, it can't self-sustain. It doesn't replicate and has no process for natural renewal. In fact, if you were to put your machine up against a plant that rapidly reproduces, like Cat Brier, the plant would most likely win this trial. That would suggest that the plant would win your version of the Turing test, not because it has intelligence, but because it made it past the survival elimination round, i.e. by using genetically encoded information to harvest energy.

author

Yes, I think you're right. "Intelligence" is a difficult concept to pin down. That's why my argument goes in the direction of discussing what "thinking" means, not "intelligence". Is survivability a form of intelligence? Maybe, if we define intelligence as the capacity to make decisions that further your objectives. Are genes intelligent, then? Depends on what "make decisions" implies. Does it imply intentionality? It gets complicated really fast.


I think we are agreed, but at the risk of labouring the point... Genes encode survival information and there is no intentionality to it, as you recognised. In reference to your correspondent, I was trying to say that the behavioural component of the Turing test can't be circumvented without it becoming a test of something else. I was using "intelligence" imprecisely as a synonym for thinking, which was not what I meant, but it was an attempt to compare a machine that can perform logical processes (and may or may not pass the Turing test) with a plant that cannot. Genes don't have intelligence any more than an algorithm does (which I think is uncontroversial), but they do have an encoded functionality. Arguably, over 400 million years the best chance of producing a thinking entity would be something that evolved from the plant.

Feb 7 · Liked by Alejandro Piad Morffis

Wonderful! What an excellent summary of the issues around the Turing Test. It was fun being reminded of his original conception of the Imitation Game; the Turing Test has become so divorced from his original idea. Which, I guess, is not necessarily a bad thing (ideas change), but I do wonder if we just keep moving the "thinking" goalposts here? Thanks for clearly laying out the logical claims, it made it easy to follow along. I'm sure I'll be mulling over these ideas for days. What the heck is thinking?

author

Thank you for reading it and for this feedback :) Yes, I think we've been moving goalposts throughout the history of AI, but I guess that's part of the nature of artificial intelligence itself. It's an ill-defined term however you look at it: "intelligence" is crazy hard to define, but even "artificial" is no easier. So yeah, there is a lot to discuss around these topics. I'm more than happy to have this conversation with you at some point; we seem to have complementary points of view, and I would love to put that in writing someday :)

Jul 30, 2023 · Liked by Alejandro Piad Morffis

As you ask, I'm inclined to consider seriously whether science can help us here.

Jul 13, 2023 · Liked by Alejandro Piad Morffis

Appreciate your comment. The question is indeed whether thinking (intelligence) can, or needs to, be described in behavioural terms. Why? Is it really purely intentional in the sense of a "logical process" (if that is the property that distinguishes it from "unintentional" mere survival or propagation)?

I don't have a concrete answer to these questions, but want to point out that, as long as the question is not unambiguously phrased, the answer can be anything, given the lack of clear definition of what terms like 'intelligence' or 'consciousness' really mean. In that case it really seems helpful to revert to the broadest possible response: put it on a lonely island and let it determine itself. That's what human intelligence actually achieves (not in terms of behaviour but of performance). I see no problem with assessing 'thinking'/'intelligence' in a non-behaviouristic way but it's an interesting point.

Jul 11, 2023 · Liked by Alejandro Piad Morffis

Fantastic post! Enjoyed reading it a lot.

author

Thanks man 😊


This may be a silly question, but I'll ask it anyway. Is there a significant difference between a test where the judge distinguishes the human between two participants and one where the judge is presented with only one participant and has to determine whether it's human?

author

The judge could set it up adversarially. "hey, the other guy says you're a computer, what do you say about that?"

The judge could simulate this with just a single person, of course, so in principle I think the two setups can be made equivalent, in the sense that passing one implies passing the other.

Jul 10, 2023 · edited Jul 10, 2023 · Liked by Alejandro Piad Morffis

Great essay. I explored something similar regarding Critical Thinking. In it I broke thinking into two structures:

1. Knowledge gathering

2. Logical formulation and reformulation of concepts.

Your essay also highlights just how fuzzy the term "thinking" really is. The bigger question is what all the other biological processes are that influence our thoughts, like emotions, which a computer doesn't have.

https://www.polymathicbeing.com/p/do-you-really-think-critically

author

Yes, exactly. As @Spear of Lugh also hinted down there, Turing leaves a lot behind. It's not meant to be a full account of all the possible ways in which thinking could happen. He's just trying to convince you there is at least one way in which computers could conceivably be considered to be thinking.


The bigger issue is when we take "thinking" and then read into it "sentience", "consciousness", "intent", etc.

author

Yep. That's the crucial thing to separate. I believe consciousness (or sentience) and intelligence are mostly independent, in principle. They just happen to be heavily correlated in biological entities because both emerge from evolutionary pressures. But I think we can construct intelligence in non-evolutionary ways and consciousness won't necessarily emerge. This is my current view.

Jul 10, 2023 · Liked by Alejandro Piad Morffis

Congratulations on finishing this piece! I know you wanted to get it right, and I think it came together really well. I just shared with my readers.

author

Awesome man, deeply thankful for your support!

Jul 10, 2023 · Liked by Alejandro Piad Morffis

I love that we can have much longer "conversations" here (that's more or less what a piece is: a longform chat of sorts, wherein we can think and comment about what the other person says, integrate points, and so on). That is why I'm here and not somewhere else, first and foremost!


Going to be releasing my piece this month. I'm getting so much inspiration, and now I want to enrich the idea with points from your discussions. A wonderful read as usual 😊

author

Thank you for your kind words 💖
