Insiders Weekly #7 - The road to AGI
Wrapping up our latest conversations on AGI and why scaling LLMs is not sufficient. In this post, I'll tell you what I believe is the right recipe.
Hey folks!
Welcome to another issue of Insiders Weekly, the subscriber-only newsletter where I share my views on recent events in the world of artificial intelligence and computer science.
I'm still working on the same articles as last week, so this issue will focus on wrapping up last week's discussion. We left off at “current language models can't reason, since they can't even solve all computable problems.” We hinted at a possible solution in the form of code generation and execution, something OpenAI is already pushing to production.
However, just generating code at inference time is likely insufficient for general-purpose AI. Instead, we want models that generate code during training, get feedback, and learn from it. And even then, we probably still won't reach AGI. This post is about the crucial ingredients I think we are still missing.
Before moving on, let me tell you about another thing I’ve been working on. I recently co-founded an AI startup. We’re still putting things together, so I’ll tell you more about it in the coming weeks, but for now, I can share the following post. In it, we explain how we built a simple machine-learning model to predict the medals for all events in the current World Athletics Championships. If you’re into sports, machine learning, or both, I think you’ll enjoy it.
And now we’re ready to move on. In this post, I will argue why code generation and execution are the final frontier of AGI, and I will highlight the key components that I think a system endowed with general-purpose problem-solving abilities should possess.