Discussion about this post

Michael Woudenberg

Fantastic description of LLMs. I've always said they were intended to be linguistically accurate, not factually accurate, but I hadn't thought of the hallucinations as a feature, not a bug.

Daniel Nest

This article is at least 78% factual!

Thanks for this exploration. You're not the first one to point out that hallucinations aren't "solvable" within the current LLM architecture.

I find it fascinating that AI is kind of caught in an awkward middle ground, subpar for either of its potential applications:

If you're using AI for practical, data-grounded purposes, you have to contend with hallucinations and unreliability.

If you want to use it for stuff where facts don't matter, like creative writing, you run into the issue of recycled prose and themes.

I feel like one of the most useful applications in my own life is using LLMs to brainstorm - they're often "creative" enough to nudge me in an interesting direction while being able to output lots of ideas very quickly.
