This work is so rich it’s hard to know where to start! You’ve just blown up my mental model of LLMs and forced me to rethink everything, Alejandro. Thanks!! As your three-year-old daughter taught us all, we can always build new castles after a wave washes one away :) Nice note. So is a few thousand tokens the limit for a sustained chat?
Thanks Terry, you're always too kind ;) Technically you can sustain a continuous chat for close to 100,000 tokens on most frontier models. But in long chats the nuance gets lost the more you talk: models are very good at needle-in-a-haystack retrieval (pinpointing one specific claim somewhere in a large past context) but very bad at forming a general overview of that whole context, which is almost always what you actually want in a long conversation.
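(For anyone curious roughly where their chat sits against that budget, here is a minimal sketch of counting tokens across a conversation. The tiktoken library and its "cl100k_base" encoding are my assumptions for illustration; your model's actual tokenizer and context limit may differ.)

```python
# Minimal sketch: estimate how much of a rough ~100k-token context window
# a conversation has consumed. tiktoken + cl100k_base are assumptions here,
# not necessarily what your model uses under the hood.
import tiktoken

CONTEXT_LIMIT = 100_000  # ballpark frontier-model figure from the reply above

enc = tiktoken.get_encoding("cl100k_base")

def chat_tokens(messages: list[str]) -> int:
    """Count tokens across every turn of the conversation."""
    return sum(len(enc.encode(m)) for m in messages)

messages = [
    "So is a few thousand tokens the limit for a sustained chat?",
    "Technically you can sustain a continuous chat for close to 100,000 tokens...",
]
used = chat_tokens(messages)
print(f"{used} tokens used; ~{CONTEXT_LIMIT - used} left before nuance degrades")
```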
I have been building a teaching app using an AI assistant. I don’t know how to code myself. How accessible is your research to someone like me who is trying to build by relying on AI coding assistants?