Exponential View

🔮 Exponential View #573: Are the AI labs building for an intelligence explosion?

Plus: Mythos Preview, jobs, fusion economics & personhood++

Azeem Azhar
May 10, 2026

Hi all,

Back home after three weeks on the road and easing back in.

In this week’s issue:

First, AI self-improvement. Jack Clark thinks there is a real chance a frontier model trains its successor by 2028. If that is true, what should we already be seeing in how the labs hire, spend and build?

Then, jobs and AI. In some of the occupations most exposed to AI, postings are rising.

Finally, what will more capable AI agents mean for token budgets? Exponential View members get access to my interactive model.

Let’s jump in!


The signs of self-improvement

Anthropic’s Jack Clark has argued that there is a 60% chance a frontier model will train its successor by 2028. It is a striking claim, perhaps revolutionary, perhaps frightening: the prospect of a recursive intelligence explosion.

There are plenty of reasons to read Jack’s essay and conclude that the picture might not be so clean.

One objection is that frontier training now looks less like a pure research challenge and more like an industrial scaling problem. The bottlenecks are not only about optimizing CUDA kernels. They are about negotiating land leases in Wyoming, securing power infrastructure, obtaining chips, and hiring the electricians to wire it all together. Over a three-year horizon, those physical constraints may matter more than algorithmic advances.

So what can we infer from the revealed preferences of frontier labs? If automated R&D were truly likely by 2028, what would we expect their behavior to look like now?

First, hiring would change. Labs would still want elite researchers, but the profile would shift towards people who can make research agents useful: fewer pure researchers, more research multipliers who can build an automated research factory.

Second, labs would overinvest in compute before the automation arrives. Because of those physical constraints, they would want more GPUs, more memory, more power, more data centers, more inference capacity, and better internal tooling.

If a lab believes the R&D loop is about to accelerate, then waiting becomes expensive. You would expect it to tolerate ugly near-term cash burn in order to secure the pre-commitments it needs.
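The trade-off can be sketched as a toy expected-value calculation. This is purely illustrative: the payoff structure, the function name and every number below are my assumptions, not anything a lab has disclosed; only the 60% belief comes from Jack Clark's claim above.

```python
# Toy model (all numbers hypothetical): is it rational to pre-commit
# to compute now, accepting near-term cash burn, versus waiting?

def value_of_committing_now(p_accel, burn, payoff, delay_cost):
    """Expected-value edge of pre-committing compute versus waiting.

    p_accel    -- lab's belief that the R&D loop accelerates (0..1)
    burn       -- extra near-term cash burn from committing early
    payoff     -- value captured if acceleration arrives and compute is ready
    delay_cost -- payoff forfeited if acceleration arrives but compute isn't ready
    """
    commit = p_accel * payoff - burn            # pay the burn, capture full payoff
    wait = p_accel * (payoff - delay_cost)      # no burn, but arrive late
    return commit - wait                        # positive => committing now wins

# With a 60% belief, even heavy burn can be rational if being late
# forfeits a large share of the payoff.
edge = value_of_committing_now(p_accel=0.6, burn=2.0, payoff=10.0, delay_cost=6.0)
print(edge)  # 0.6 * 6.0 - 2.0 = 1.6 > 0: tolerate the burn
```

The point of the sketch is only that the decision hinges on `p_accel` times `delay_cost` against `burn`: the stronger the lab's belief in acceleration, the more ugly near-term spending it should tolerate.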

© 2026 EPIIPLUS1 Ltd