🔮 X-raying OpenAI’s unit economics
Notes from our live discussion
AI companies are being valued in the hundreds of billions of dollars, and big tech has committed $650 billion in capital expenditure for 2026. Yet one question remains unanswered: does it make economic sense?
We recently partnered with Epoch AI to analyze GPT-5’s unit economics, and figure out whether frontier models can be profitable (full breakdown here).
To dig deeper into what our results tell us about the wider industry, we hosted a live conversation last week between myself, Hannah Petrovic and Jaime Sevilla, moderated by Matt Robinson.
We cover:
The research findings
Possible paths to profitability
The OpenAI vs Anthropic playbook
Winning the enterprise
Why this research made some bulls more pessimistic
What the market gets wrong
Watch here:
Listen here:
Or read our notes:
What did you actually find?
Matt: For someone just getting into the research, what’s the big takeaway — and how did you even think about building a framework to analyze a business like this?
Jaime: To our understanding, no one had really taken on this task of piecing together all the public information about the finances of OpenAI — or any large AI company — and trying to paint a picture of what their margins actually look like. So we did this hermeneutic exercise of hunting for every data point we could find and trying to make sense of it.
The two most important takeaways: first, it seems likely that OpenAI, during the past year and especially while operating GPT-5, was making more money than the cost of the compute — the primary expense of operating their product. But they appear to have made a very thin margin, or even lost money, after accounting for all other operating expenses: staff, sales and marketing, administrative costs, and the revenue-sharing agreement with Microsoft.
Second — and this is the part I found quite shocking — if you look at how much they spent on R&D in the four months before they released GPT-5, that quantity was likely larger than what they made in gross profits during the entire tenure of GPT-5 and GPT-5.2.
Hannah: A lot of our methodology was based on numbers we could find historically, then projecting what would happen through the rest of 2025. For example, we had data showing 2024 sales and marketing was $1 billion, and H1 2025 was $2 billion. So we built the picture using constraints like these, breaking each category down into its separate components so we could assess whether each was a realistic approximation.
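To make that concrete, here is a minimal sketch of the kind of run-rate projection Hannah describes, using only the two figures quoted above. Both projection rules are our illustrative assumptions, not Epoch AI's actual method.

```python
# Back-of-envelope projection of full-year 2025 sales & marketing,
# from the two data points quoted above. Both projection rules are
# illustrative assumptions, not Epoch AI's actual method.

sm_2024_full_year = 1.0   # $B, reported full-year 2024 S&M spend
sm_2025_h1 = 2.0          # $B, reported H1 2025 S&M spend

# Rule 1: flat run rate -- assume H2 2025 simply repeats H1.
flat = sm_2025_h1 * 2     # = 4.0

# Rule 2: assume H2 keeps growing at the implied half-on-half rate.
# 2024's average half-year was 0.5, so H1 2025 was 4x that level.
growth = sm_2025_h1 / (sm_2024_full_year / 2)   # = 4.0
compounding = sm_2025_h1 + sm_2025_h1 * growth  # = 10.0

print(f"Flat run rate: ${flat:.1f}B; compounding: ${compounding:.1f}B")
```

The spread between the two rules is the point: with so few public data points, the choice of projection rule moves the estimate by billions, which is why each category had to be sanity-checked against its components.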
Models as depreciating assets
Azeem: This is a complicated exercise, and one of the things that comes out of it is the question of that short model life. The family we looked at was only really the preeminent family for a few months. Enterprises don’t change the API they’re using the day a new one comes out — there’s always a lag. But consumers do, because that’s what you get access to on ChatGPT.
You may remember that when GPT-4 was set aside, many users had come to rely on it as an emotional support tool, and they were very upset with how methodical and mechanical GPT-5 felt. The uncertainty is: to what extent do you actually learn and prepare for your next model based on the short life of the existing model?
There are two elements. One is more nebulous — by having a really good model, even if it lasts for a short period, you maintain your forward momentum in the market. The second is harder to unpick: what do you learn about running better and better models from actually having run a better model, even if it only lasts four months? That learning might be down in the weeds of R&D — training data choices, reinforcement learning — or in operations and just running a model at that scale. I suspect it’s hard for even OpenAI to know the contribution of that second part.
Matt: It reminds me of GPUs. I’ve been talking to finance folks about what the value of H100 chips will be in a few years, and everyone’s shrugging their shoulders. It’s a parallel to these models: what is GPT-4 worth now, when three years ago it was the frontier?
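To make the depreciation framing concrete, here is a toy amortization over a short frontier window. Every number below is hypothetical; none comes from the research.

```python
# Toy illustration of the "model as depreciating asset" framing.
# All figures are hypothetical, chosen only to show the mechanics.

rnd_cost = 6.0                # $B spent on R&D before the release
frontier_months = 4           # months the model stays preeminent
gross_profit_per_month = 1.0  # $B/month earned while at the frontier

# Straight-line amortization over the frontier window: the model
# must earn back its R&D before the next release obsoletes it.
required_monthly = rnd_cost / frontier_months          # = 1.5
earned = gross_profit_per_month * frontier_months      # = 4.0

print(f"Needs ${required_monthly:.1f}B/month to break even; "
      f"earned ${earned:.1f}B against ${rnd_cost:.1f}B of R&D")
```

In this toy setup the model earns $4B against $6B of R&D, which is the shape of the mismatch Jaime describes: a short frontier life compresses the window in which training spend can be recouped.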
Is there actually a path to profitability?
Azeem: I think the two challenges around this model are: first, is the OpenAI approach the only way to do this? We’ve seen Anthropic do something completely different. And second, is there a path to positive unit economics? Are they producing something for X dollars that they can sell for 1.3X? Or are they producing something for X that they sell for half X, which was the story of a lot of the dot-com era?
We got partway to answering that second question: yes, it’s expensive, but yes, there is some kind of gross profit margin. The level we estimate — Hannah can speak more accurately to this — is lower than that of a traditional software business. So we’re learning that perhaps foundation labs don’t look like software businesses. They look like something different.
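Azeem’s X-versus-1.3X framing reduces to a one-line gross margin calculation. The price multiples below come from the discussion; the software comparison figure is a general industry rule of thumb, not a result from our analysis.

```python
# Gross margin under Azeem's two scenarios. Cost X is normalized to 1;
# the price multiples come from the discussion above.

def gross_margin(price: float, cost: float = 1.0) -> float:
    """Gross margin as a fraction of revenue."""
    return (price - cost) / price

print(f"Sell at 1.3x cost: {gross_margin(1.3):.0%} margin")  # ~23%
print(f"Sell at 0.5x cost: {gross_margin(0.5):.0%} margin")  # -100%
# For comparison, traditional software gross margins typically
# run in the 70-80%+ range.
```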
Jaime: Think about the game OpenAI is playing. It’s not about becoming profitable right away. What they’re trying to do is convince investors that they have a business and research product worth scaling as much as possible, driven by the conviction that through scale, they’ll unlock new capabilities which in turn will unlock new markets.