🔮 Exponential View #556: When execution gets cheap. Capital gains, labor pains. AI buys the grid. CRASH clock, taming complexity & new zones of influence
An insider's view of AI and exponential technologies
Hi all,
Welcome to the Sunday edition.
Inside:
What building two dozen apps over the holidays taught me about the shrinking distance between "Chief Question Officer" and no officer at all.
Labor pains, capital gains: US GDP is up, employment not so much. What is going on?
The data center, a microgrid: AI labs outran the grid, then hit the turbine factories. Now they're buying the infrastructure companies themselves.
Plus: Utah's bet on AI prescriptions, taming complexity, robots performing surgeries, and new spheres of influence…
In the latest member briefing, I share my expectations for 2026. It's the year AI stops feeling like a tool and starts feeling like a workforce. Plus Q&A.
The year ahead
2026 is the year when AI will feel less like a set of tools and more like a workforce, where the advantage moves to those who can orchestrate it into reliable outcomes.
Execution is cheap
Over the break, I spun up multiple agents in parallel: one building a fact-checker, another an EV slide-deck maker and a third a solar forecast model. All ran simultaneously in the background while I did other work.1 I described the problems, LLMs created detailed product specs and the agents built the apps. In my first meeting back, I demoed two dozen working tools to the team. This follows my rule of 5x: if I do something more than five times, I build an app for it. A year ago, each of those apps would have cost a developer weeks to build.
My friend Erik Brynjolfsson calls the human in this arrangement the "Chief Question Officer." We ask, machines do, we evaluate. Erik's framing is elegant, but I don't think it's true to the moment; we've moved even further. I used to check every output against a strict spec; now I mostly trust the agent to catch and fix its own mistakes, and it usually does. Before Opus 4.5, I had to rescue the model from dead ends. Now it asks good clarifying questions, corrects itself and rarely stalls.
This velocity changes behavior. For instance, I used to frame briefs carefully; now I leave them a bit looser because the agent fills the gaps. I remain the Chief, yet the role feels like a pilot toggling the autopilot ever higher. If progress continues, will I always occupy the cockpit? Would stepping aside, ceding the questions as well as the answers, actually increase what gets built?
Erik warns of the "Turing Trap," the temptation for firms to use AI to mimic and replace humans. He frames this as a societal choice between augmentation and replacement. I agree we shouldn't drift into replacement. But my holiday build sprint made it clear that convenience pulls hard. The pressure isn't just from companies; it's from us, users making choices. When each small handoff to the agent feels free, can we really resist going all the way to full automation?
Here's this weekend's essay on the work after work, the value of human judgement and authenticity:
🔮 The work after work
Nine years ago, I told an audience that the future of human work is artisanal cheese. I got a laugh. GPT-4 didn't exist yet, the trend line was invisible to most, and "knowledge work" still felt impregnable.
See also:
This is one of the hot debates among economists: how should we distribute AI's gains? Philip Trammell and Dwarkesh Patel argue that a global, steeply progressive tax on capital is the only credible way to prevent inequality from compounding in an AI-automated world.
Capital gains, labor pains
The US economy has decoupled growth from hiring (see EV#545). While the Atlanta Fed projects a massive 5.4% GDP surge for Q4 2025, the labor market has effectively stalled, adding only 584,000 jobs all year, the worst non-recession performance since 2003. The divergence is driven by an acceleration in productivity and back-to-back quarterly declines in unit labor costs. Is AI the cause?


