🐙 Promptpack: AI-enhanced prompting
How large language models are the best prompt engineering tools
Today, I’ll take you through how to use LLMs as partners in prompting to help you get the most out of AI tools such as ChatGPT and Claude.
But before we begin, let me remind you of all of the AI guides my colleague and I put together to help you leverage generative AI in your work.
To build an AI-enhanced second brain, start with: 🐙 Promptpack: How to build a second brain with AI
For a guide on how to enter the expansive thinking space with AI, this one is perfect: 🐙 Promptpack: Generative AI for exponentialists
To learn how to start using Code Interpreter, this guide is for you: 🐙 Promptpack: Getting started with Code Interpreter
For a more advanced example of how to use Code Interpreter, definitely read: 🐙 Promptpack: Using Code Interpreter to crack your marketing funnel
Prompting generative AI systems is tricky. Slight differences in prompt phrasing, or the model in use, yield very different results. As the AI community learns about the science of prompting, we are discovering how powerful generative AI really can be.
In particular, evidence is building that using LLMs to write and iterate on prompts produces better results. This is not because generative AI necessarily has an intrinsic understanding of how its own models work, but because it can handle natural language, and do so quickly. LLMs can speed up the often lengthy, iterative process of prompting, and can design prompts that perform better than human-written ones.
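The iteration loop described above is simple to sketch in code. Below is a minimal, hypothetical Python version: `refine_prompt` repeatedly asks a model to critique and rewrite a draft prompt. The `call_llm` parameter is a placeholder for whatever chat-completion client you use (ChatGPT, Claude, etc.); here it is stubbed with a fake function so the example runs offline. All names are illustrative, not part of any real API.

```python
from typing import Callable

def refine_prompt(task: str, draft: str,
                  call_llm: Callable[[str], str], rounds: int = 3) -> str:
    """Ask the model to critique and rewrite a prompt several times.

    Each round feeds the task description and the current prompt back to
    the model and keeps whatever improved prompt it returns.
    """
    prompt = draft
    for _ in range(rounds):
        reply = call_llm(
            f"Task: {task}\n"
            f"Current prompt: {prompt}\n"
            "Critique this prompt, then rewrite it to get better results. "
            "Return only the improved prompt."
        )
        prompt = reply.strip()
    return prompt

# Offline stub standing in for a real model, just to show the loop's shape.
def fake_llm(message: str) -> str:
    return "Summarize the document in five bullet points, citing section numbers."

improved = refine_prompt("summarize a report", "Summarize this.", fake_llm, rounds=1)
print(improved)
```

In practice you would replace `fake_llm` with a real API call and stop iterating once the prompt's outputs stop improving, rather than running a fixed number of rounds.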
In this Promptpack, I will show you three ways to start using AI for better prompting, either to save time or to increase the quality of your outputs.
Here is what you’ll learn:
How an LLM can write and iterate on its own prompts faster and better than a human,
How a “metaprompt” can produce drastically better summaries of text documents,
How to get the most out of an image generator such as Midjourney with LLM-enhanced prompting.