Six mental models for working with AI
Plus a stack of 50+ AI tools we use at Exponential View
The question of whether AI is "good enough" for serious knowledge work has been answered. The models crossed that threshold this year. What's slowing organizations down now isn't capability, but the need to redesign work around what these systems can do.
We've spent the past 18 months figuring out how. We made plenty of mistakes, but today I want to share what survived: six mental models that can genuinely change the quality of work you get from generative AI. Together with the seven lessons we shared earlier in the year, this is the operating manual we wish we had all along.
At the end, you'll also get access to our internal stack of 50+ AI tools. We've documented everything we're actively using, testing, or intending to test, to help you decide which tools might work for you.
Enjoy!
1. The 50x reframe
Most people start working with AI by asking something along the lines of: how do I speed up what I'm already doing?
That question is comfortable and wrong. I find that it anchors me to existing constraints.
A more useful question is:
What would I do if I had 50 people working on this?
Then work backwards.
The 50x reframe forces you to imagine the ideal outcome unconstrained by time or labor. Only then do you ask which parts of that hypothetical organization can be simulated with software. I now encourage our team members to think about who they would hire, what work those people would do, and how they'd know whether they were successful.
If you've never had the experience of hiring fifty people for a project (fair enough!), use this prompt to get started and identify what you might need:
A prompt you could use:
I currently [describe your task/process]. Walk me through what this would look like if I had a team of 50 people dedicated to doing this comprehensively and systematically. What would each role focus on? What would the ideal output look like? Then help me identify which parts of that hypothetical team's work could be automated or assisted by AI tools.

For example, we use this approach for podcast guest prospecting and research. We used to rely on our network and serendipity to identify 20-30 strong candidates for each season: a mix of the right expertise, timing, and editorial fit that consistently delivered good conversations, but left too much to chance. 50x thinking instead asks: what if we could systematically evaluate the top 1,000 potential guests? What if we could track the people we're interested in so they surface when they're most relevant? We built a workflow that researches each candidate, classifies expertise, identifies timely angles, and suggests the most relevant names for any given week's news cycle.
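We haven't published that workflow, but as a rough sketch of the shape such a pipeline can take, here is a toy Python version of the final ranking step: a candidate pool, a relevance score against the week's topics, and a shortlist. The field names and the scoring heuristic are illustrative assumptions, not our production system.

```python
# Toy sketch of the shortlisting step in a 50x guest-prospecting
# pipeline. Fields and scoring are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    expertise: set[str]   # e.g. {"ai policy", "compute"}
    score: float = 0.0

def relevance(candidate: Candidate, topics: set[str]) -> float:
    """Crude score: fraction of this week's topics the candidate covers."""
    return len(candidate.expertise & topics) / len(topics) if topics else 0.0

def shortlist(pool: list[Candidate], topics: set[str], k: int = 5) -> list[Candidate]:
    """Rank the full pool (the 'top 1,000') and surface the k most relevant names."""
    for c in pool:
        c.score = relevance(c, topics)
    return sorted(pool, key=lambda c: c.score, reverse=True)[:k]

if __name__ == "__main__":
    pool = [
        Candidate("A. Researcher", {"ai policy", "compute"}),
        Candidate("B. Operator", {"semiconductors", "supply chains"}),
        Candidate("C. Economist", {"labor markets", "ai policy"}),
    ]
    for c in shortlist(pool, topics={"ai policy", "compute"}, k=2):
        print(f"{c.name}: {c.score:.2f}")
```

In practice the research and classification steps are where the AI does the heavy lifting; the point of the sketch is that once candidates are structured data, surfacing the right names each week is trivial.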
2. Adversarial synthesis
Even experienced operators have blind spots. We internalize standards of "good" based on limited exposure. No one has seen great outputs across every domain, but the models, collectively, have come closer to it than anyone else.
To make the most of this superpower, I give Gemini, Claude and ChatGPT the same task and make them argue: each model critiques the others' outputs. You'll quickly surface gaps in your framing, assumptions you didn't realise you were making, and a higher quality bar than you expected.
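If you want to run this programmatically rather than by pasting between three browser tabs, here is a minimal Python sketch, assuming you have API keys for all three providers in your environment. The model IDs are placeholders; swap in whatever versions you have access to.

```python
# Minimal sketch of adversarial synthesis: send one task to three
# models, then have each model critique the others' answers.
# Assumes OPENAI_API_KEY, ANTHROPIC_API_KEY and GOOGLE_API_KEY are set.
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

def ask_gpt(prompt: str) -> str:
    r = OpenAI().chat.completions.create(
        model="gpt-4o",  # placeholder model ID
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

def ask_claude(prompt: str) -> str:
    r = anthropic.Anthropic().messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model ID
        max_tokens=2000,
        messages=[{"role": "user", "content": prompt}],
    )
    return r.content[0].text

def ask_gemini(prompt: str) -> str:
    return genai.GenerativeModel("gemini-1.5-pro").generate_content(prompt).text

MODELS = {"gpt": ask_gpt, "claude": ask_claude, "gemini": ask_gemini}

def adversarial_synthesis(task: str) -> dict[str, str]:
    # Round 1: each model answers the same task independently.
    answers = {name: ask(task) for name, ask in MODELS.items()}
    # Round 2: each model critiques the other two answers.
    critiques = {}
    for name, ask in MODELS.items():
        others = "\n\n".join(f"[{n}]\n{a}" for n, a in answers.items() if n != name)
        critiques[name] = ask(
            f"Task: {task}\n\nHere are two answers from other models:\n\n{others}\n\n"
            "Critique them: what is wrong, missing, or underspecified?"
        )
    return critiques
```

Reading the three critiques side by side is usually enough; the disagreements tell you where your framing is weakest.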
When models disagree, it's usually a sign that the task is underspecified, or that there are real trade-offs you haven't surfaced yet. Which brings us to the next point.
3. Productize the conversation
If you're having the same conversation with AI repeatedly, turn it into a tool. Every repeated prompt is a signal. Your workflow is basically telling you that this (t)ask is valuable enough to formalize.
I've found that when I productize a conversation by turning it into a dedicated app or agentic workflow, the core process improves and the tool evolves over time, so the benefits of the original conversation end up compounding in a completely new way.
A prompt you could use:
## Context
I have a recurring [FREQUENCY] task: [BRIEF DESCRIPTION].
Currently I do it manually by [CURRENT PROCESS - 1-2 sentences].
Hereâs an example of this task in action:
<example>
Input I provided: [PASTE ACTUAL INPUT]
Output I needed: [PASTE ACTUAL OUTPUT OR DESCRIBE]
</example>
## What I Need
Turn this into a reusable system with:
1. **Input specification**: What information must I provide each time?
2. **Processing instructions**: What should the AI do, step by step?
3. **Output structure**: Consistent format for results
4. **Quality criteria**: How to know if the output is good
## Constraints
- Time/effort budget: [e.g., "should take <5 min to run"]
- Depth: [e.g., "verify top 10 claims, not exhaustive"]
- Tools available: [e.g., "has web search" or "no external lookups"]
- Error handling: [e.g., "flag uncertain items vs. skip them"]
## Desired Format
Deliver this as a: [CHOOSE ONE]
- [ ] System prompt I can paste into Claude/ChatGPT
- [ ] Zapier/Make automation spec
- [ ] Python script (Replit-ready)
- [ ] Lindy/agent configuration
- [ ] All of the above with tradeoffs explained
## Success Looks Like
A good output will: [2-3 bullet points describing what "done well" means]

I kept asking LLMs for editorial feedback multiple times a week, for weeks. After some fifteen hours of repeat prompting, I built a virtual editor panel in Replit and Lindy.
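At its simplest, "productizing" can be a twenty-line script: the recurring prompt becomes a fixed system prompt with a consistent output structure, and the draft becomes the only input. Here is a minimal sketch of a single-editor version, using the Anthropic SDK as an example; the system prompt, review criteria, and model ID are illustrative assumptions, not our actual editor panel.

```python
# Minimal sketch of a productized conversation: a recurring
# editorial-feedback prompt turned into a reusable script.
import sys

import anthropic

SYSTEM_PROMPT = """You are an editorial reviewer. For the draft provided, return:
1. Three strengths worth keeping.
2. Three weaknesses, each with a concrete fix.
3. Any claims that need verification, flagged rather than silently skipped.
Keep the whole review under 400 words."""

def review(draft: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model ID
        max_tokens=1500,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": draft}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    # Usage: python review.py draft.md
    with open(sys.argv[1]) as f:
        print(review(f.read()))
```

Once the conversation lives in a script, every improvement to the system prompt benefits every future run, which is exactly where the compounding comes from.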

