I’m always fascinated by these discussions. As someone with no formal CS background, I’m keen to fill in the gaps I have in many theoretical aspects of CS.
Here’s a thought that I’m sure is discussed in some part of CS that someone will direct me to. I’m using my own terminology here since I don’t know the formal terms. LLMs seem to have what I’d define as “very broad but shallow” knowledge. They know facts about almost everything but can’t “reason” beyond a certain number of logical steps. We humans, individually, have a much narrower knowledge base, but it’s much deeper. We can keep following a train of thought for as long as needed, as deep as needed. It may be intellectually hard to do so, and not everyone has the same capacity.
How much of this is a fundamental limitation of either LLMs specifically, or computation in general? I suspect there’s no definitive answer to this, but I’ll ask anyway.
I can say something to that; we kinda touched on this idea in the talk: for general-purpose computation you need to be able to loop indefinitely, because for some problems you just don't know beforehand how much computation you'll require.
Here's a simple example: is there a perfect number greater than N? You start at N and try each number in turn, returning true once you find a perfect number. But you can never return false.
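To make that concrete, here's a minimal Python sketch of that search (the function names are my own, purely illustrative). Note the unbounded `while True` loop: the procedure halts with a witness if one exists, but there is no point at which it can stop and answer "no".

```python
def is_perfect(n: int) -> bool:
    # A perfect number equals the sum of its proper divisors, e.g. 6 = 1 + 2 + 3.
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

def perfect_number_above(n: int) -> int:
    # Semi-decision procedure: returns a witness if a perfect number
    # greater than n exists, but loops forever if none does.
    candidate = n + 1
    while True:  # unbounded search: we can't know in advance when to stop
        if is_perfect(candidate):
            return candidate
        candidate += 1

# perfect_number_above(6) returns 28. Whether this call returns for every n
# is open: it's not known whether there are infinitely many perfect numbers.
```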
So any computational model that doesn't include partially defined functions is by definition less powerful than Turing machines. For example, no LLM can answer this question for sufficiently large N, since it only ever performs a bounded amount of computation before producing an answer.
Now, LLMs enhanced with the ability to generate and run code can theoretically surpass this limitation, but we don't know if they would be capable of generating the full set of computable functions.
That makes sense, thanks. I think where I'm finding it hard to imagine, in my little head, is when the problem is no longer an easily definable mathematical problem (I'm using "easy" relatively) but rather a more "human" question. I'll need to process my thoughts a bit better before I can express them clearly enough, I think…
This is typically what I call the mystery of incarnation: there are things that you have to do in the real world, not in the abstract.
If you are a plumber, you cannot think your way out of a leaking pipe. You can read all the books you like about plumbing; the only question is: can you fix the leak or not? I bet LLMs cannot resolve this mystery even at a basic level. It can be experimentally tested: give an LLM control of a robot and ask the robot to bring you food or to produce iron, but do it in an uncontrolled environment. Drop it in the middle of the jungle and watch what it does. I bet LLMs could not even reach the level of ants that grow mushrooms, or reach the Iron Age in terms of the artefacts they can produce, even with the head start of all *human-transmissible knowledge*.
I'd like to think that we humans still have a significant advantage (and maybe will always have??) when it comes to making links between different parts of our knowledge base—those serendipitous discoveries that we make…
It looks like consciousness is what allows us to escape from instinct. You can watch yourself thinking and build up from there. There is no AI with these introspective capabilities: basically, self-reference quickly runs into paradoxes in formal models.
Very cool! I am learning to leverage collaboration here on Substack as well; there are so many great minds right here for us to tap into. I have little to contribute to the "meaning of life" speculation, but I'm riveted by the idea of what an appropriate Turing Test would look like for an LLM.
One idea would be to see how it resolves real-world problems: you could give the LLM control of a robot and observe how it solves various tasks (bringing food, crafting objects from scratch, etc.). This is testable, and I bet we would be surprised: even though the LLM has the head start of all human knowledge, it still could not achieve much in the real world.
It's wild how there's such a long way to go in so many areas, and yet AI has left humans in the dust in others. That's why it seems so alien to us, I think.
What a wonderful post! It is very insightful and stimulating. Huge congrats to both the Interviewer and the Interviewee