This is a solid "glossary" of go-to approaches.
Your "minimum valuable prompt" is something I called a "minimum viable prompt" back in the day (https://www.whytryai.com/p/minimum-viable-prompt) - sounds like they're based on the same principles!
Absolutely! I picked it up from you ;)
Ah! In that case, I'll revel in my unrecognized "MVP Pioneer" status ;) - I'm sure my Nobel Prize is coming any day now!
What's due is due.
Personally, I think prompt engineering is going to become more important over time. The techniques will vary but the need will remain. After all, getting people to do what you want can be terribly difficult. Do you really believe AIs will be easier?
I do believe AIs will be easier, because we are designing them for instruction following and they're not human (at least for now) in the sense that they have no intrinsic motivations that can contradict the instructions you give them. But I also agree that as they become more versatile and smart, we'll need principled ways to converse with them.
Nice, was thinking about writing this, too :)
Would love to hear your version of it ;)
When I first started using GPT-3.5 in December 2022, I was at the start of a steep learning curve. I asked the bot to explain itself to me routinely for several weeks. I asked it to compare and contrast prompts, to revise them—if a bot could be driven crazy, I would have been the one to send it over the edge. After a time, I would plan a strategy for a chat with specific outcomes, carry out the plan, and at the conclusion ask the bot to critically evaluate my prompts and write an essay for me, highlighting the most and least effective prompts from the bot's perspective. Over time I've felt less need for this, and I see from your analysis here that I unknowingly picked up some of these principles. What advice do you have for using the bot as a prompt partner?
That's a very good question, Terry, for which I don't have a definitive answer, but I'll say a few things:
It is at least somewhat surprising that LLMs can criticize their own prompts, because where would they have picked up this skill? It seems like some wacky metacognitive ability ...
However, I think for the most part it's just a case of extrapolating from their general formal writing skills. A lot of prompt engineering is just common-sense advice on writing clear, informative, concise yet detailed instructions.
Now, I can imagine a company like OpenAI could have spent some time teaching their models to improve prompts, because they are very conscious of user experience and this is definitely a worthwhile skill for making chatbots more accessible.
In any case, I'd be wary of trusting the chatbot's advice too much, because there are quirks in prompt engineering (like avoiding negative examples) which they probably won't have incorporated into the writing skills they learned from other sources. There are papers showing that the optimal prompt for some tasks often involves crazy roleplaying, like "you're a detective, I'm a serial killer case", to solve certain puzzles and things like that. I don't imagine ChatGPT would come up with that, but maybe it will ;)
Intriguing question! Now I want to know more.