This article is based on Chapter 4 of my upcoming book Mostly Harmless AI.
Having journeyed through the foundations of artificial intelligence (its history, its mechanics, and its limitations), we arrive at the most pressing question: How do we actually use the powerful new tools it has produced?
This article serves as a bridge from the theoretical to the practical. It is not a list of traditional prompt engineering hacks; the internet is already full of tips and tricks for coaxing a specific output from a language model for a single task. Instead, the advice that follows builds on our foundational understanding to offer something more durable: a general mindset for working with these tools. This approach is about engaging with language models in a way that lets you get the best out of them without subcontracting your own critical thinking. It is a methodology for augmenting your intellect, not replacing it.
The principles you learn here are foundational, offering a universal toolkit for interacting with large language models in daily life, whether you are planning a vacation, trying to understand a complex news article, or drafting a simple email. In the following chapters (in the book), we explore how to adapt and intensify these practices for specialized, high-stakes professional environments. But first, every user must learn how to engage with these powerful yet fallible tools safely, critically, and effectively.
A Methodology for Effective Interactions with Language Models
To move beyond simple queries and unlock the true potential of language models, we need a more structured approach. This methodology is divided into three parts: establishing the right Mindset, employing effective Tactics during the conversation, and building a System to make your successes repeatable.
The Mindset
The most significant shift is in your mental model. Instead of treating the model as a search engine, you should approach it as a conversational partner. This means recognizing that the interaction is iterative and that your most important role is to guide the conversation.
A key part of this mindset is adopting a Socratic (or inquiring) approach, where you use the model not just to get answers, but to help you ask better questions. This is invaluable for sensitive and important tasks.
For example, instead of starting with “Write an email asking for a raise,” a partner-based approach would be to ask the model to guide you: “I need to write an email to my manager to ask for a raise. What are the key pieces of information and evidence I should gather first to make the strongest possible case?” The model will then prompt you for your accomplishments and market data, helping you build your argument before a single word is written.
Similarly, when organizing a child’s birthday party, you could ask, “I’m planning a science-themed party for my 7-year-old. What are the key logistical details I need to consider to make sure it runs smoothly?” In both cases, you are using the model to help you define the problem, which is a far more powerful use of its capabilities.
The Tactics
With the right mindset, you can employ specific tactics to steer the conversation toward a high-quality outcome. The most fundamental tactic is to be explicit and strategic with your queries. To ground the model’s response in reliable information, tell it where to look.
A generic query for medical advice is risky; a much safer prompt would be: “Search for information from the Mayo Clinic and the World Health Organization on the common symptoms of iron deficiency.” This specificity is also crucial when comparing complex options. If you are shopping for an EV, for example, you can be as specific as: “Compare the Tesla Model 3, the Hyundai Ioniq 5, and the Ford Mustang Mach-E for a family of four. Focus on real-world range, charging speed on a standard home charger, and available cargo space.”
To get an even more robust answer, you can move beyond a single query and assemble a “committee of experts.” A single language model will give you its most statistically likely answer, which might not be the most creative or well-rounded one. To overcome this, you can generate multiple, independent perspectives.
For the EV comparison, you could open three separate conversation windows. In the first, you’d ask the model to act as a pragmatic engineer, and perhaps it will argue for the Hyundai. In the second, you’d ask it to be a tech enthusiast, and maybe it makes the case for the Tesla. In the third, you’d have it act as a family-focused reviewer, and it might argue for the Ford.
By copying these three independent analyses into a final chat window, you can then ask the model to act as a senior editor, synthesizing the competing viewpoints into a final, balanced recommendation that weighs factors like cost, range, and reliability.
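If you are comfortable with a little scripting, the same tactic can be automated instead of juggled across browser tabs. Below is a minimal sketch assuming the OpenAI Python SDK (`pip install openai`) and an API key in your environment; the model name, personas, and prompts are illustrative stand-ins, not fixed parts of the method.

```python
# A sketch of the "committee of experts" tactic: three independent
# persona queries, then a final synthesis pass. Model name and
# personas are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "Compare the Tesla Model 3, the Hyundai Ioniq 5, and the Ford "
    "Mustang Mach-E for a family of four. Focus on real-world range, "
    "charging speed on a standard home charger, and cargo space."
)

def ask(persona: str, prompt: str) -> str:
    """One fresh, independent 'conversation window' per call."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: use whichever model you have access to
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Argue strictly from that viewpoint."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

personas = ["a pragmatic engineer", "a tech enthusiast", "a family-focused reviewer"]
opinions = [ask(p, QUESTION) for p in personas]

# Final step: a "senior editor" synthesizes the competing viewpoints.
verdict = ask(
    "a senior editor",
    "Synthesize these three independent analyses into one balanced "
    "recommendation, weighing cost, range, and reliability:\n\n"
    + "\n\n---\n\n".join(opinions),
)
print(verdict)
```

Because each `ask` call starts with a fresh message list, the three opinions really are independent; only the editor pass sees all of them at once.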
Finally, after the model provides a response (either a single answer or one synthesized from your committee), you can employ self-criticism as a final refinement tactic.
Once you have a draft of your email asking for a raise, you can prompt it: “Read the email you just drafted. Now, act as my manager who is busy and skeptical. What parts of this email are unconvincing? Is the tone too demanding or not confident enough?” This critical step often surfaces weaknesses that you might have missed, allowing you to create a much stronger final product.
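The same follow-up can be scripted as a second turn in one conversation. The sketch below again assumes the OpenAI Python SDK; the essential point is that the critique message is appended to the existing history, so the model is criticizing its own actual draft.

```python
# A sketch of the self-criticism tactic as a second turn in the same
# conversation. Prompts are illustrative examples.
from openai import OpenAI

client = OpenAI()
history = [{
    "role": "user",
    "content": "Draft a short email to my manager asking for a raise. "
               "I led the Q3 launch and brought in two major accounts.",
}]

draft = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant", "content": draft.choices[0].message.content})

# The critique turn sees the draft because it is part of the history.
history.append({
    "role": "user",
    "content": "Read the email you just drafted. Now, act as my manager, "
               "who is busy and skeptical. What parts are unconvincing? "
               "Is the tone too demanding or not confident enough?",
})
critique = client.chat.completions.create(model="gpt-4o", messages=history)
print(critique.choices[0].message.content)
```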
The System
The final part of the methodology is to turn your successful interactions into a repeatable system. A common mistake is to treat prompts as disposable. A more powerful approach is to build a library of reusable prompts, thinking of them as personal “natural language programs.” The multi-step process you used to plan the birthday party can be saved as a “Kids’ Party Planner” template.
The Socratic prompt that helped you prepare for your salary negotiation can be generalized into a “Career Conversation Prep” tool. The ultimate expression of this principle is the use of features like OpenAI’s “Custom GPTs,” which allow you to encapsulate a complex task into a dedicated tool that you or your team can use with a simple request.
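If you want to keep such a library in something more durable than a notes file, a few lines of code will do. The sketch below is one possible shape for it; the template text and field names are my own illustration of the idea, not templates from the book.

```python
# A sketch of a personal prompt library: saved "natural language
# programs" with placeholders filled in at call time.
TEMPLATES = {
    "kids_party_planner": (
        "I'm planning a {theme}-themed party for my {age}-year-old. "
        "Ask me about the key logistical details one at a time, then "
        "propose a plan that fits a budget of {budget}."
    ),
    "career_conversation_prep": (
        "I need to prepare for a {conversation_type} with my manager. "
        "What information and evidence should I gather first to make "
        "the strongest possible case? Ask me guiding questions."
    ),
}

def render(name: str, **fields: str) -> str:
    """Fill a saved template; paste the result into any chat model."""
    return TEMPLATES[name].format(**fields)

print(render("kids_party_planner", theme="science", age="7", budget="$200"))
print(render("career_conversation_prep", conversation_type="salary negotiation"))
```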
A Practical Example
To see how these principles combine into a powerful workflow, let’s walk through a comprehensive, real-world task: planning a 10-day family vacation to Italy.
Rather than beginning with a vague request like “plan a trip,” the process starts by applying the Socratic approach. You would first ask the model to frame the problem for you: “I want to plan a 10-day family vacation to Italy. What key information do you need from me to create the best possible itinerary?”
This immediately shifts the dynamic, positioning the model as a guided partner. In response, it would act as a consultant, asking for crucial details like the number of travelers, the children’s ages, your budget, family interests, and preferred travel pace.
Once you’ve provided this context, the next step is to ensure alignment. You would instruct the model to synthesize and confirm the constraints: “Great, thank you. Based on my answers, please summarize all of my constraints for this trip in a structured list.”
With a clear, confirmed set of requirements, you can then confidently ask for a first draft. The iterative heart of the process begins now. Upon receiving the initial itinerary, you would employ the self-criticism tactic: “This is a good start. Now, act as a skeptical travel agent. Criticize this itinerary and tell me what’s missing or what could go wrong.”
The model might point out that visiting three major cities in ten days is too ambitious for a family with young children. Based on this valuable feedback, you can guide the revision, continuing this loop of drafting and critiquing until the plan is refined to your satisfaction. Only then would you ask for the final, detailed output.
The final, powerful step is to generalize this success. You would ask the model to convert the entire conversation into a reusable “Family Vacation Planner” template, complete with placeholders for key details. This turns a one-time effort into a valuable, programmable asset for future trips, demonstrating the true power of thinking of prompts as reusable programs.
Common Pitfalls for the Everyday User
The good practices above are designed to improve the quality of a language modelâs output. This section focuses on the mental traps and risks you must be aware of to use these tools safely.
The “ELIZA Effect” and Misplaced Trust
Because chatbots are designed to be conversational and helpful, it’s easy to start treating them as if they have genuine understanding, intentions, or even consciousness. This is a modern version of the “ELIZA effect” we discussed in the history chapter. The danger is that this leads to misplaced trust, where we stop critically questioning the model’s output because it feels so confident and knowledgeable. This is the psychological trap that makes us vulnerable to hallucinations; we are less likely to fact-check a “partner” than a machine.
Cognitive Offloading and the “Lazy Brain” Problem
The ease of asking a language model to summarize an article, draft an email, or brainstorm ideas can lead to a subtle but significant danger: cognitive offloading. By outsourcing the fundamental work of thinking, synthesizing, and structuring our thoughts, we risk letting our own critical thinking and creative muscles atrophy. The goal is to use these tools to think better, not to think less. Over-reliance can make us less capable problem-solvers in the long run.
The Privacy Risk of Casual Conversation
In a casual conversation with a chatbot, it’s easy to forget that you are interacting with a complex system run by a corporation. Users often paste sensitive personal information (medical details, financial data, private emails, proprietary work content) into public language models without considering where that data goes, how it’s used for future training, or who might have access to it. What you tell the model does not stay between you and the model.
You Are the Final Authority
The techniques above teach you how to get better raw material from the language model. This final principle is about what you, the human, must do with that material. It is the most critical step in using these tools responsibly.
First, never trust, always verify. The language model is an unreliable narrator. Treat its output as a well-written first draft, not a finished fact. For any critical piece of informationâa date, a statistic, a medical suggestion, a legal pointâyou must verify it using an independent, authoritative source. The model can help you find potential sources, but you are the fact-checker.
Second, synthesize, don’t just copy-paste. The model’s output is information; your goal is knowledge. The most important work happens after the model has responded. Your job is to synthesize its suggestions with your own experience, judgment, and goals. The model can generate a list of tourist sites for your Italy trip, but only you can synthesize that into a vacation plan that feels right for your family.
Finally, own the outcome. The language model is a tool, and you are the user. Any decision made, any email sent, or any action taken based on the modelâs output is your responsibility. This principle of accountability is non-negotiable. The model is an assistant that can help you think, but it is not a replacement for your personal judgment.
Conclusion
The journey from a novice user to a skilled one is not about memorizing clever prompts; it’s about a fundamental shift in mindset. Instead of treating generative AI as a vending machine for answers (an approach fraught with risks of shallowness, bias, and error), we’ve seen the power of engaging it as a conversational partner.
The practices outlined in this article (the Socratic method, strategic querying, and, most importantly, critical verification) form a framework for responsible engagement. This framework places you, the user, firmly in the driver’s seat.
The quality of the model’s output is not a feature of the model alone; it is a direct reflection of the quality of your guidance and the rigor of your review. You are not just a prompter; you are a director, a critic, and a synthesizer. This is what makes these powerful tools “mostly harmless”: not their inherent nature, but our commitment to using them with critical awareness and human authority.
By mastering these foundational skills, you are not just learning to use a new tool. You are developing a new form of literacy for the 21st century. As we move into the specialized applications for knowledge workers, developers, and creatives in the following chapters (of the book), this ability to think with AI, not just ask of it, will be your most valuable asset.
Thanks again for reading. If you want to dive deeper into Artificial Intelligence and learn to get the best out of it, from a techno-pragmatist, human-centered, responsible perspective, please check out my book Mostly Harmless AI.
I have some thoughts today on:
- what are the human challenges of engagement
- what if the human is anxious or deferential to authority
- how should the human think of the AI, perhaps as a lawyer speaks to a paralegal?
- question about the context window and how much lookback it has
Thank you for this useful model about how to engage with the LLM to gain understanding and knowledge. Perhaps you could speak to some of the human challenges of engaging in critical thinking.
- some people are afraid to make a decision
- some people want to take the first answer so they can be done with it
- some people have only budgeted the time and capacity for a 30-second conversation, rather than a 30-minute reflection
- some people come to the prompt with anxiety: anxiety that they don't know what to say, that their question is incomplete, or that the topic is more complex and less definitive than they hoped
- for some people, critical thinking is a hard thing. Perhaps part of that is that their circumstances expect deferral to authority, when here they are the authority, not the LLM.
- it might be hard for some people to prompt the AI with several very explicit and specific sentences. When talking to another equal person, you wouldn't speak like that.
What are the mental hurdles a person would have in talking to an AI in a critical mode?
One thing I have a question about is the context window. How much does it remember about what you have previously talked about?
If I were to do what you said, to have several sessions, generate several points of view, and paste them into a single session - is the AI capable of processing an unlimited amount of text that you dump in there?
I like your method.
I assume most people have become as addicted to this stuff as I have, and so have eventually fallen into one of the traps you mention here. Software development has become the easy target of 'get rid of everyone and let the LLM do it,' and eventually I ran into the reality that we aren't there yet; we are just building shittier software with cheaper labor. Learning to use the cheap LLM labor well is an important skill, and your framework provides a good starting point.
...we arrive at the most pressing question: How do we actually use the powerful new tools it has produced?
An equally important question is: how do these tools use us?