šŸ”® Reflecting on 2025, the year AI became real

šŸŽ…šŸ¼ We recorded this to publish after Christmas, but demand for year‑end reflections prompted an early release - so if you hear me say Christmas has passed in the video or podcast, that’s why!

As we approach the end of the year, I want to reflect on what made 2025 special – what we learned about AI, what surprised me, and what it all means for the road ahead. This was the year artificial intelligence stopped being a curiosity and became infrastructure. The year models got good enough to do real work. And the year we began to understand just how profoundly this technology will reshape everything.

You can listen/watch my reflections on 2025 (widely available) or read all my notes below if you’re a paying member.

I cover:

  1. The models matured

  2. The work shifted

  3. Orgs are slow

  4. Atoms still matter

  5. Money’s real

  6. K is the letter of the year

  7. And my seasonal movie recommendation for you šŸŽ


1. The models matured

My favourite development of 2025 has to be Nano Banana, Google’s image generation service. Beyond its fantastic name (which, as a Brit, I’ll admit Americans say much better than I do 🤭), it represents something remarkable. Pair it with Gemini 3 Pro, present a complex idea, and ask it to turn that into an explanatory diagram. The visual results can be phenomenal with some clever prompting. It’s also really fun for video generation.

But the deeper story is that in 2025, many models got good enough to do long stretches of work. Claude 4.5 from Anthropic is fantastic for working with long documents and for coding. GPT-5 excels at deep research and certain classes of problem-solving – I’ve been using version 5.2 for financial modeling. Google’s Gemini 3 Pro is markedly better than previous generations. And I’ve been turning to Manus more often.

All of these models are now capable of doing what I would describe as a few hours’ worth of high-quality work. This has meant that I, as someone who hasn’t been allowed near software code for more than a decade, have been able to build my own useful applications. What was derisively called ā€œvibe codingā€ a year ago has transformed into something genuinely productive for me. I will write more on this in my EOY lessons for genAI use in the next few days.

2. The work shifted

One of the biggest changes I’ve noticed is cognitive. There’s been a shift from the effort of actually doing the work to the effort of judging the work – specifying problems well, handing them over to a model, examining the output and deciding how to move forward. You need to maintain a high degree of mental acuity as you evaluate: are the assumptions reasonable? Are there obvious errors?

Those of us who have managed people and led teams will recognise that it’s a bit like managing and leading, except your team member is a machine that can work in parallel across many domains simultaneously. And therein lies an additional challenge: the cognitive load of selecting which model to use for which task has now landed squarely on me, the user. It’s clear for coding – that’s Claude. But for the nebulous set of other problems, the choice between GPT-5.2 and Gemini 3 is never obvious beforehand, though it usually becomes clear in retrospect.

In a way, we all start to behave like developers. Developers think constantly about their tooling and workflows. They consider what in their weekly cadence can be automated. If you’re using AI effectively, you’ll spend less time putting bricks in the wall, more time figuring out how best to organise the bricks and specify what you need. If you have a static way of working with your AI system, you’re missing out. The models are becoming more capable. More importantly, the application layer around the models – the tools they can access, the use of memory, the use of projects – is making them dramatically more useful. So you simply have to keep experimenting and trying out new things.

My experience is that about three-quarters of what I’m doing today, I wasn’t doing three or four months ago. Models have become that much more capable. If you treat AI like a simple operating system upgrade, rather than something whose capabilities keep growing and which requires you to regularly change how you work, you will miss out on the benefits.

3. Orgs are slow

One of the toughest lessons of 2025 for many will be how slow and painful organisational rewiring is compared to the progress we’re making with the AI systems themselves. Let me share Exponential View’s story.
