Exponential View #564: Intelligence as a target; the future of knowledge; AI, productivity & economy; CO2 armor, ultra-violent ancestors & Brand Age++
“I had subscribed via an employer-funded training budget. When that ran out, I realized how much I learned and gained, and am now subscribing with my own money.” – Elliot W., a paying member
Hi,
Welcome to the Sunday edition, in which we make sense of the week behind us. Today’s edition is open to everyone, so share it widely.
Thanks for reading!
The Strait of Compute
Three drone strikes hit critical infrastructure in Bahrain and the UAE this week. The targets weren’t the usual ones of warfare: shipping lanes, military bases, nuclear facilities or power plants. This time, the targets were 21st-century intelligence factories, three AWS data centers.
The structure of vulnerability is changing (see my first book Exponential). The AI production line is highly concentrated, converging into a few narrow lanes with a handful of fabs, a few giant cloud platforms, and a dominant chipmaker. Quantitatively, by our calculations, the Herfindahl-Hirschman Index for AI chips is 0.59, where 1.0 indicates a pure monopoly and anything above 0.25 is already considered a “highly concentrated market.”
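The index itself is simple: sum the squared market shares of every firm in the market. A minimal sketch, using hypothetical AI-chip market shares (illustrative only, not the figures behind our 0.59 estimate):

```python
# Herfindahl-Hirschman Index: the sum of squared market shares.
# On the 0-to-1 scale, a pure monopoly scores 1.0; an evenly split
# market of n firms scores 1/n.
def hhi(shares):
    total = sum(shares)  # normalize, in case shares don't sum to 1
    return sum((s / total) ** 2 for s in shares)

# Hypothetical revenue shares of the AI-chip market (assumptions for
# illustration, not the newsletter's actual data):
shares = {"Nvidia": 0.75, "AMD": 0.10, "Google TPU": 0.08, "Others": 0.07}
print(round(hhi(shares.values()), 2))  # prints 0.58
```

The squaring is what makes the index so sensitive to a single dominant player: one firm at 75% contributes 0.5625 on its own, more than an entire market of four equal firms would score in total.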
Washington is tightening its grip on that stack for that reason. The US is debating whether to impose tiered oversight of large Nvidia clusters: licenses for smaller deployments, government-to-government assurances for clusters of up to about 100,000 chips, and potential on-site inspections as installations approach roughly 200,000. Initially, this tool was aimed mostly at constraining China; now it could become a general instrument of geopolitical leverage over many countries and firms.
We are already seeing the effects of prior chip controls in China’s AI ecosystem. Lin Junyang, the former technical lead of Alibaba’s Qwen model, said that China was “relatively strapped” for compute and that serving users “likely consumes the majority of our infrastructure”. Several members of the Qwen research team, including Lin, have since left the company. When capacity is tight, ecosystems must choose between serving users today and training the next frontier model.
The uncommon commons
Nobel Prize winner Daron Acemoglu and colleagues argue in a new paper that AI could shrink the knowledge commons by eliminating the public trail of human problem-solving. For instance, a software engineer may use AI to diagnose a bug and ship the fix. Instead of working through it in public on platforms like Stack Overflow, they solve the issue in private and leave nothing behind for others to learn from. This could thin out the commons, the paper argues.
But I’d argue that AI opens new ways to share and create knowledge. As Acemoglu knows, when people can share and pool what they learn more effectively, welfare rises unambiguously – and AI can do exactly that.
Most knowledge is not common. It gets trapped behind expertise, institutions and technical language. Take this section from Acemoglu’s most famous economics paper:
This is difficult to understand. I’m curious…
But if I ask AI to explain it through an analogy:
We also see AI increasingly participating in the creation of knowledge itself. Donald Knuth, considered the father of the analysis of algorithms, let Claude loose on an open combinatorics problem. It autonomously wrote code, tested hypotheses and iterated through failure until it found a construction that worked for all odd cases. Knuth then did the (for now) distinctly human part: he proved why it worked. And he published it on the internet for you, me and AI to read.
The best is yet to come
US productivity grew by 2.8% in 2025 (Q4-to-Q4). That is roughly twice the pace that characterized the previous decade. It’s not definitive proof that AI is the cause, since macro data is noisy, but micro-level evidence is lining up to point in the same direction: productivity rises 14% in customer service, 26% for developers and around 25% for consultants on tasks AI can perform (Alex Imas has a comprehensive list of all the studies). Erik Brynjolfsson writes: “the old line ‘we see AI everywhere but in the productivity statistics’ may need to be retired.”
Looking back to 2020, the current cycle is already the second-best since 1973, although it still falls short of the dot-com era.
These productivity numbers are still chatbot numbers. They mostly predate agents. OpenAI doubled its revenue in the last seven months; Anthropic did so in less than three. That surge was the beginning of the agent era. The productivity studies don’t capture it yet. Nor do the 2025 macro statistics. What we have measured so far is merely the chatbot phase.
Goldman Sachs says AI added essentially zero to US GDP in 2025, and 80% of firms report no productivity gains. But they’re measuring the wrong unit: institutions (companies running pilots, hiring Chief AI Officers, building governance frameworks). Meanwhile, a small number of individuals like myself have already deployed capabilities that no Fortune 500 has matched, because we have zero compliance overhead, no averaging across 100,000 employees, and a compounding context relationship with the agent that makes it better every week.
I shared my entire OpenClaw setup in last week’s live show. Watch it on YouTube or listen on Spotify or Apple Podcasts.
See also:
Anthropic’s labor-market report shows a large gap between what AI could do in theory and what it is actually doing in practice. Expect that gap to close quickly as agents spread through the workplace. (Also see this fun mock-up of the chart, to get a sense of how much work has changed over the past 200 years.)
One of the core questions around agents is whether they will create AI service firms or AI employees embedded inside companies. Julien Bek, a partner at Sequoia, argues for the former.
Other morsels
Garry Tan, CEO of Y Combinator, recreated his 2008 startup, writing over 70,000 lines of code in 90 hours with a little help from his AI friends.
GPT-5.4 was released this week, and it is an excellent model (see it working on a legacy insurance portal like a champ). The performance gains arguably deserve a bigger version jump. Models that can maintain state across long, multi-step tasks are the kind of breakthrough that would justify the moniker of GPT-6.
Paul Graham writes about “Brand Age”: when technology commoditizes performance, industries compete on brand rather than engineering.
SemiAnalysis argues that rising electricity bills in PJM, the grid covering 13 eastern US states, are mostly due to its capacity-market design rather than AI. Texas, with real-time pricing, hasn’t seen the same effect.
Tests of more than a dozen AI-detection tools show they can catch obvious fakes, but remain too unreliable to definitively tell whether images, videos or audio are AI-generated.
The ancient Southern Levant was ultra-violent, with 25% of skeletons showing cranial trauma (h/t Alice Evans).
A new agent-based AI system for rare diseases outperformed existing diagnostic tools.
You can now doomscroll Wikipedia.
Thank you for reading!





