Five proven prompt engineering techniques (and a few more-advanced tactics)
How to get exactly what you want when working with AI
👋 Hey, I’m Lenny, and welcome to a 🔒 subscriber-only edition 🔒 of my weekly newsletter. Each week I tackle reader questions about building product, driving growth, and accelerating your career. For more: Hire your next product leader | Favorite Maven courses | Lennybot | Podcast | Swag
❤️🔥 A quick note on the Lenny and Friends Summit
The inaugural Lenny and Friends Summit was a smashing success. I shared some reflections on LinkedIn and X.
If you couldn’t make it, videos of the talks from the Summit will be going up on my YouTube channel over the next couple of weeks. Subscribe to catch them as they come out.
My live podcast recording with Shreyas Doshi will be coming out this Thursday as an episode in your regular podcast feed.
I’m constantly hearing about people doing mind-blowing things with AI, like building entire products, offloading big parts of their job, and saving hundreds of hours doing research. But most of the time when I try using ChatGPT/Claude/Gemini, I get very meh results. Maybe you’re having the same luck.
The difference, I’m learning, is in crafting your prompts. The nuance and skill needed to get good results became clear to me when Mike Taylor published his guest post “How close is AI to replacing product managers?” and included prompts that did unexpectedly well in a blind test vs. human performance. I wanted to learn more—and so did many readers who emailed me after that post came out. I asked Mike to write a deeper dive just on prompt engineering.
Mike is a full-time professional prompt engineer. He wrote a book for O’Reilly on prompt engineering and created a course on AI taken by 100,000 people, and over the past few years he’s built up a collection of techniques that have proven useful again and again. Below, Mike shares the five prompting techniques he’s found to have the most impact when prompting LLMs, plus three advanced bonus techniques if you want to go further down the rabbit hole.
For more from Mike, check out his book, Prompt Engineering for Generative AI, and his AI engineering studio, Brightpool. Follow him on LinkedIn and X.
Much like how becoming a better communicator leads to better results from the people you work with, writing better prompts improves the responses you can get from AI. We already know that people can’t read our minds. But neither can AI, so you have to tell it what you want, as specifically as possible.
Say you’re asking ChatGPT to write an announcement for a new feature. You want to make sure the phrasing is attention-grabbing and factual but that the message is also authentic to your product and customer base. Naively asking ChatGPT to do the task without giving it any real direction will likely result in a generic response that leans too heavily on emojis and over-enthusiastic corporate speak.
With no stylistic direction, you’ll get one of these fake, corporate-sounding responses, because that’s the average of what’s out there. The same thing might happen if you delegated this task to a member of your team without being clear about what you actually want. One quick trick: ChatGPT is capable of emulating any famous style or format—you just need to specify that in your prompt so it knows what you’re looking for. Something as simple as appending the words “in the style of [insert famous person]” can make a huge difference to the results you get.
Specifying a style is just one tactic, though, and there are hundreds of prompt engineering techniques, many of them proven effective in scientific studies. The good news is that you don’t have to read through all those papers on arXiv. Every week, I spend a full day researching and experimenting with the latest techniques, and in this post, I’ll walk you through the five easy-to-use prompt engineering tactics that I actually use day-to-day, plus three more that are a bit more advanced and tailored to certain circumstances. What’s more, I’ll give you plug-and-play templates you can start using today to improve your own prompts.
My five favorite prompt engineering tactics
These techniques work across any large language model, so whether you use ChatGPT, Claude, Gemini, or Perplexity, you can get better results. I’ve included examples of how these techniques work, what problems they solve for, and when to use them. Most are backed by scientific evidence, so I’ve also linked to those papers for further reading.
With today’s latest models, there’s a lot less need for prompting tricks than there was back in 2020 with GPT-3. However, no matter how smart AI gets, it’ll always need guidance from you, and the more guidance you give it, the better results you’ll get. The types of tactics I’ve focused on in this list are ones that I anticipate will continue to be useful far into the future.
Tactic 1: Role-playing
Role-playing is the technique we already demonstrated, where you instruct the AI to assume the persona of an expert, celebrity, or character. This approach leverages the AI’s broad knowledge base to mimic the style, expertise, and perspective of the chosen role. By doing so, you can obtain responses that are more tailored to the specific domain or viewpoint you’re interested in. For example, asking the AI to respond as a renowned scientist might yield more technical and research-oriented answers, while role-playing as a creative writer could result in more imaginative and narrative-driven responses.
Prompt template: “You are an expert in [field] known for [key adjective]. Help me [task].”
Example:
Situation: You’re preparing for a crucial meeting with the engineering team to discuss a new feature’s technical feasibility.
Problem: You’re not confident in your ability to articulate the technical requirements clearly and persuasively.
Prompt: “You are an expert software architect known for bridging the gap between product vision and technical implementation. Help me prepare talking points for a meeting with our engineering team about the technical feasibility of our new AI-powered recommendation engine.”
Try it with ChatGPT: https://chatgpt.com/share/67070632-89d8-800d-b153-83c8a743b819
Source: https://arxiv.org/abs/2305.16367
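If you’re calling a model through an API rather than the ChatGPT interface, the role-play typically goes in the system message and the task in the user message. Here’s a minimal sketch, assuming the OpenAI Python SDK with an OPENAI_API_KEY in your environment (the model name is just a placeholder; any capable chat model works):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; swap in whichever model you use
    messages=[
        # The role-play persona goes in the system message.
        {
            "role": "system",
            "content": "You are an expert software architect known for bridging "
                       "the gap between product vision and technical implementation.",
        },
        # The actual task goes in the user message.
        {
            "role": "user",
            "content": "Help me prepare talking points for a meeting with our "
                       "engineering team about the technical feasibility of our "
                       "new AI-powered recommendation engine.",
        },
    ],
)
print(response.choices[0].message.content)
```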
Tactic 2: Style unbundling
Style unbundling involves breaking down the key elements of a particular expert’s style or skill set into discrete components. Instead of simply asking the AI to imitate someone, you prompt it to analyze and list the specific characteristics that make up that person’s unique approach. Then you can use those characteristics to prompt the AI to create new content. This technique allows for a more nuanced understanding and application of the desired style. It’s particularly useful when you want to incorporate certain aspects of an expert’s method without fully adopting their persona, giving you more control over which elements to emphasize in the AI’s output.
Prompt templates:
“Describe the key elements of [expert]’s style/skill in bullet points.”
“Do [task] in the following style: [style].” Note: The key-element bullet points we get in response to the first prompt go into the second prompt as the style guide.
Example:
Situation: You admire how a competitor’s product manager communicates product updates, but you don’t want to copy their style directly.
Problem: You’re struggling to pinpoint what makes their communication effective.
Prompts:
“Describe the key elements of Apple’s product announcement style in bullet points. Focus on how they communicate new features and benefits to users.” Note: The output of this prompt becomes the bullet points in the second prompt below.
“Write a product announcement for our new project management software feature in the following style:
- Simplicity: Clear messaging without technical jargon
- Storytelling: Narratives highlighting user benefits
- Visuals: High-quality demos and graphics
- Live demos: Showcasing features in real time
- Customer focus: Emphasizing personal benefits
- Key features: Highlighting important advancements
- User testimonials: Reinforcing value through experiences
- Comparative context: Showing improvements over past models
- Emotional appeal: Connecting technology to lifestyle
- Call to action: Encouraging audience engagement”
Try it with ChatGPT: https://chatgpt.com/share/67070931-6780-800d-a8d0-c6a198de3479
Source: https://bakztfuture.substack.com/p/dall-e-2-unbundling
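Because style unbundling is a two-step process, it maps naturally onto two chained API calls: the first unbundles the style, and its output is pasted into the second as the style guide. A rough sketch, again assuming the OpenAI Python SDK (the model choice and the small `ask` helper are my own placeholders):

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: unbundle the expert's style into discrete elements.
style = ask(
    "Describe the key elements of Apple's product announcement style in bullet "
    "points. Focus on how they communicate new features and benefits to users."
)

# Step 2: reuse those elements as the style guide for the new task.
announcement = ask(
    "Write a product announcement for our new project management software "
    "feature in the following style:\n" + style
)
print(announcement)
```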
Tactic 3: Emotion prompting
Emotion prompting is a technique that involves adding emotional context or stakes to your request. By framing the task as personally important or impactful, you can potentially elicit more careful and thoughtful responses from the AI. This method taps into the AI’s programming to be helpful and considerate, which can lead to more thorough or empathetic outputs. However, it’s important to use this technique judiciously, as it can sometimes have the opposite effect and lead to worse results.
Prompt template: “Help me [task]. Please make sure [attribute]. This task is very important for my career.”
Example:
Situation: You need to write a compelling product roadmap presentation for the executive team.
Problem: You’re concerned that your presentation might not convey the urgency and importance of your proposed initiatives.
Prompt: “Help me draft a product roadmap presentation that will resonate with our executive team. Please make sure it conveys a sense of urgency and highlights the strategic importance of each initiative. This task is very important for my career.”
Try it with ChatGPT: https://chatgpt.com/share/6707074a-c83c-800d-9654-0aca023fc523
Source: https://arxiv.org/abs/2307.11760
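Because the emotional stakes are just a sentence appended to the end of the prompt, it’s easy to run the same request with and without them and judge which output you prefer. A small sketch, assuming the OpenAI Python SDK (model name is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

base_prompt = (
    "Help me draft a product roadmap presentation that will resonate with our "
    "executive team. Please make sure it conveys a sense of urgency and "
    "highlights the strategic importance of each initiative."
)

# Run the prompt twice: once plain, once with the emotional stakes appended.
for stakes in ["", " This task is very important for my career."]:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model you prefer
        messages=[{"role": "user", "content": base_prompt + stakes}],
    )
    print(response.choices[0].message.content)
    print("-" * 40)
```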
Tactic 4: Few-shot learning
Few-shot learning, also known as in-context learning, is a technique where you provide the AI with a few examples of the task you want it to perform before asking it to complete a similar task. This method helps to guide the AI’s understanding of the specific format, style, or approach you’re looking for. By demonstrating the desired output through examples, you can often achieve more accurate and relevant results, especially for tasks that might be ambiguous or require a particular structure.
Prompt template: “Here are some examples of [task]. Generate a [task] for [new context].”
Example:
Situation: You need to write user stories for a new feature, but you’re new to the team and unsure of their preferred format.
Problem: You want to ensure that your user stories align with the team’s existing style and structure.
Prompt: “Here are some examples of user stories from our backlog:
- As a user, I want to reset my password so that I can regain access to my account if I forget it.
- As an admin, I want to view user activity logs so that I can monitor for suspicious behavior.
Generate a user story for adding a new ‘dark mode’ feature to our mobile app.”
Try it with ChatGPT: https://chatgpt.com/share/67070778-8ce8-800d-a45a-0405480a9600
Source: https://arxiv.org/abs/2005.14165
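When you’re working through an API, the examples can either be pasted into a single prompt (as in the template above) or passed as prior user/assistant message pairs, so the model sees them as completed instances of the task. Here’s a sketch of the second pattern with the OpenAI Python SDK; the short user-side instructions are my own invented inputs for each example story:

```python
from openai import OpenAI

client = OpenAI()

# Each prior user/assistant pair is one "shot" demonstrating the desired format.
messages = [
    {"role": "user", "content": "Write a user story for resetting a password."},
    {"role": "assistant", "content": "As a user, I want to reset my password so "
     "that I can regain access to my account if I forget it."},
    {"role": "user", "content": "Write a user story for viewing activity logs."},
    {"role": "assistant", "content": "As an admin, I want to view user activity "
     "logs so that I can monitor for suspicious behavior."},
    # The real request comes last, in the same format as the examples.
    {"role": "user", "content": "Write a user story for adding a new 'dark mode' "
     "feature to our mobile app."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```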
Tactic 5: Synthetic bootstrap
Synthetic bootstrap is a practical technique where you use the AI to generate multiple examples based on given inputs. These AI-generated examples can then be used as a form of in-context learning for subsequent prompts or as test cases you can use as inputs for your existing prompt template. This method is particularly useful when you don’t have a lot of real-world examples readily available or when you need a large number of diverse input examples quickly. It allows you to bootstrap the learning process, potentially improving the AI’s performance on the target task even without the help of a domain expert.
Prompt templates:
“Generate ten examples of [examples] for [context]. Here are the inputs: [inputs].”
“Generate [task] using [examples].”
Example:
Situation: You’re creating personas for a new target market, but you lack real user data.
Problem: You need diverse, realistic personas to guide product development, but you don’t have the resources for extensive user research.
Prompts:
“Generate ten examples of user personas for our new fitness tracking app. Here are the inputs:
- Name and age
- Occupation
- Fitness goal
- Current fitness routine
- Technology comfort level
- Key pain points in their fitness journey”
“Generate potential customer feedback on our idea to track calories burned during work meetings, using our user personas.”
Try it with ChatGPT: https://chatgpt.com/share/67070802-0564-800d-b96b-020fa30d8382
Source: https://arxiv.org/abs/2310.03714
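Like style unbundling, synthetic bootstrap chains two calls together: the first generates the synthetic examples, and the second uses them as inputs. A rough sketch with the OpenAI Python SDK (model and helper are placeholders):

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: bootstrap synthetic examples (here, user personas).
personas = ask(
    "Generate ten examples of user personas for our new fitness tracking app. "
    "Here are the inputs:\n"
    "- Name and age\n- Occupation\n- Fitness goal\n- Current fitness routine\n"
    "- Technology comfort level\n- Key pain points in their fitness journey"
)

# Step 2: feed the synthetic examples into the next prompt as inputs.
feedback = ask(
    "Generate potential customer feedback on our idea to track calories burned "
    "during work meetings, using these user personas:\n" + personas
)
print(feedback)
```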
BONUS: Three more advanced tactics
If you’ve made it this far and still want to push your prompting skills further, the next level up is learning to split a task into multiple steps. Rather than trying to do it all in one prompt, most professionals in the AI space build a system that corrects for the errors AI models commonly make. These tactics can take more time or be harder to implement (particularly if you can’t code), but they can make all the difference when AI is failing at a task. By structuring your interactions with AI more deliberately, you can play to its strengths, mitigate its weaknesses, and get more robust and reliable outcomes.
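To make the multi-step idea concrete before we get into the tactics themselves, here’s one common pattern: draft, critique, then revise, with each step as its own call so later steps can catch errors from earlier ones. This is an illustrative sketch of the general idea (OpenAI Python SDK; prompts and model are placeholders), not a specific system from this post:

```python
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

task = "Write a one-paragraph announcement for our new dark mode feature."

# Step 1: draft. Step 2: critique the draft. Step 3: revise using the critique.
draft = ask(task)
critique = ask("List any problems with the tone, clarity, or claims in this "
               "announcement:\n" + draft)
final = ask("Rewrite the announcement to address these problems.\n\n"
            "Announcement:\n" + draft + "\n\nProblems:\n" + critique)
print(final)
```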