Very interesting stuff! It seems like there will never really be any "there" there.
I'm sure you've thought a lot about emergence, but it's only recently really, truly arrived on my radar. It's like all I can think about now. I love the tantalizing idea that reality itself could be emergent, and that certainly includes everything to the north of that baseline. Things emerge everywhere... could logical reasoning potentially emerge here through some means we're not aware of?
You can sort of see that I'm just really wrapping my mind around all this stuff, but I tried to jot my thoughts down here: https://goatfury.substack.com/p/emergence
Nice! I haven't learned much about LLMs, so these posts are very educational.
Thanks 🙏
Excellent analysis as usual
Thanks :)
So you're saying it was a bad idea to outsource my entire decision-making process in all areas of my life to ChatGPT several months ago? That would explain a lot!
Nah it just depends on how complex your decision making already was.
Phew, luckily, I never made a complex decision in my life. I'm safe!
Living the life of a state machine. That is the way.
Great article. Thanks for that.
Question: What do we call, or how should we think about, the point at which the LLM decides which tool or approach to use?
For example, the model fails to count the R’s in “strawberry” when it simply tries to do it directly, but from a single prompt it can easily write the code and produce a React app that counts and highlights the instances.
So the failure is in the LLM not “realizing” it should use code, or not making the decision to do so.
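The contrast in the question above is easy to make concrete. A minimal sketch (plain Python rather than a full React app, just for brevity): counting letters token by token is exactly where models stumble, but the trivial code they could write instead is exact.

```python
def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

# The canonical example from the comment: "strawberry" contains three R's.
print(count_letter("strawberry", "r"))  # 3
```

The gap the comment points at is that the model has to decide, before answering, that this is a "write code" problem rather than a "predict the next token" problem.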