Discussion about this post

Stephen Gruppetta:

I’m always fascinated by these discussions. As someone with no formal CS background, I’m always keen to fill in the gaps I have in many theoretical aspects of CS.

Here’s a thought that I’m sure is discussed in some part of CS that someone will direct me to. I’m using my own terminology here since I don’t know the formal terms. LLMs seem to have what I’d describe as “very broad but shallow” knowledge: they know facts about almost everything but can’t “reason” beyond a certain number of logical steps. We humans, individually, have a much narrower knowledge base, but it’s much deeper. We can keep following a train of thought for as long as needed, as deep as needed. It may be intellectually hard to do so, and not everyone has the same capacity.

How much of this is a fundamental limitation of LLMs specifically, or of computation in general? I suspect there’s no definitive answer, but I’ll ask anyway.

Andrew Smith:

Very cool! I’m learning to leverage collaboration here on Substack as well; there are so many great minds right here for us to tap into. I have little to contribute to the “meaning of life” speculation, but I’m riveted by the idea of what an appropriate Turing Test would look like for an LLM.

