🔮 Exponential View #548: From boom to payoff; OpenAI's platform bet; urban loops, floating solar & AI erotica++
A weekly briefing on AI and exponential technologies
Good morning from London!
In today’s briefing:
75% of big US firms are already making money from AI.
OpenAI rewires itself for a trillion-dollar infrastructure push.
Plus: AI’s self-awareness, a floating solar farm, semi-autonomous cities & Beijing 2035.
Let’s go!
⚡️ Today’s edition is brought to you by Lindy.
Your next hire won’t be human – it will be AI. Lindy lets businesses create AI agents with just a prompt. These AI employees handle sales, support, and ops 24/7, so you can focus on growth.
AI ROI
Seventy-five percent of large US firms already report a return on investment from AI, according to the Wharton School’s latest update from its three-year enterprise study. Roughly two-thirds of firms report spending more than $5 million annually on generative AI budgets. More than one in ten are spending $20 million.
Extrapolated across some 20,000 comparable American firms, this suggests a conservative floor of $66 billion in annual genAI spending. This is slightly above our estimates for revenues in the global generative AI ecosystem, excluding China. Therefore, I suspect that the Wharton respondents included internal budget allocations in their estimates, which we typically do not. The study used a solid cross-sectional survey, though it remained dependent on self-reported responses.1 Either way, 88% expect to increase spending.
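The arithmetic behind that floor is easy to reproduce. A minimal sketch, assuming the survey’s two-thirds share holds across all ~20,000 comparable firms and counting each such firm only at the $5 million minimum:

```python
# Back-of-envelope check of the ~$66B conservative floor.
# Assumptions (mine, not necessarily the study's exact method):
# ~20,000 comparable large US firms, two-thirds spending at least
# $5M/year on genAI, each counted at exactly the $5M minimum.
FIRMS = 20_000
SHARE_OVER_5M = 2 / 3
MIN_ANNUAL_SPEND = 5_000_000  # dollars

floor = FIRMS * SHARE_OVER_5M * MIN_ANNUAL_SPEND
print(f"conservative floor: ${floor / 1e9:.1f}B")  # ≈ $66.7B
```

Firms spending $20 million or more would only push the true figure higher, which is why $66 billion reads as a floor rather than an estimate.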
It’s remarkable that most firms are seeing positive returns within three years of ChatGPT’s debut. Usage is deepening rapidly – 82% of respondents engage with generative AI weekly, 46% daily – but the pattern is uneven. Middle managers lag their executives, and hiring expectations vary widely across levels. Whether this early surge can sustain itself remains to be seen.
This study triangulates well with other indicators. JPMorgan reports that 150,000 employees use its LLM tools daily. Results from the hyperscalers show that AI demand continues to drive growth in their cloud businesses. (Microsoft Azure, for example, grew 40% year-over-year to nearly $85 billion in revenue, with perhaps one-fifth attributable to AI services.)
See also:
Goldman Sachs’ bankers reckon that 37% of their clients are at production scale, with uptake expected to hit 50% this year. (This is lower than the Wharton study, but still an impressive proportion considering the immaturity of this market.)
A new preprint finds that even frontier-level agents can complete only 2.5% of real remote work projects at human-acceptable quality. AI is an augmenting force for now.
OpenAI’s strategy, clarified
OpenAI is now a for-profit public benefit corporation, a structure that makes it far easier to raise capital. Lucky for OpenAI – it needs that investment. The firm has already committed to about 30 GW of compute (about $1.4 trillion in infrastructure) and ultimately wants to add 1 GW per week. That is national-infrastructure scale; in fact, the commitments are nearly double the value of the US Interstate Highway System.
With all this compute, OpenAI needs to change tactics. Ambitiously, they want to build a platform. In Sam Altman’s words:
Traditionally, that’s looked like an AI super-assistant inside ChatGPT. But we’re now evolving into a platform that others can build on top of. The pieces that need to fit into the broader world will be built by others.
Where exactly this platform is located is a little ambiguous. OpenAI’s strongest vertical is ChatGPT, with more than 800 million weekly users (Gemini is second with 650 million monthly users). Now, they want people to build on their platform, likely via APIs. OpenAI is currently second to Anthropic on API market share, but they’re right to move into this zone. The closer a company sits to the bottom of the stack, the more control it has, especially as compute and energy become the real bottlenecks. Few players have the clout to commit $1.4 trillion to infrastructure spending, and that bill will only rise.
AI thinks, therefore AI am
Anthropic researchers ran an unusual experiment to test whether their Claude models could detect what was happening inside their own “minds.” In about 20% of cases, the models noticed when scientists added specific patterns of neural activity, like slipping a new thought into their “head” (“Don’t think of an elephant!”). When researchers inserted a pattern linked to ALL CAPS, Claude reported a sense of loudness or shouting before its text changed.
This suggests that models can, in some sense, notice what is happening inside themselves, and that ability appears to scale with capability. In principle, you could ask a model why it made a decision and receive a meaningful, if partial, account of its decision process. Today, a model’s values are typically encoded in a “constitution” that instructs it how to balance helpfulness, honesty and harm avoidance. Yet when you stress-test these specifications, you often find that their principles collide. A model told to “assume good intentions” while also being instructed to “avoid harm” cannot always satisfy both. When values clash, models do not reflect on the tension; they simply pick a side. If introspection continues to scale with capability, future systems might be able to recognize these tensions and reason about them, rather than silently picking a side.
The yin-yang of efficiency and control
In 1933, John Maynard Keynes warned that free trade no longer guaranteed peace. Britain’s embrace of globalization, he said, reflected power rather than virtue and had turned life into “a parody of an accountant’s nightmare.” That nightmare replayed in the late 20th century. As a recent essay recounts, the great wager that open markets and global integration would spread democracy instead exported jobs and imported fragility. The logic of comparative advantage worked too well: it rewarded efficiency and scale but concentrated production, hollowing out industrial bases that once anchored resilience.

The response has been to bring the state back in as a player, not a referee. Industrial policy – what I call catalytic government – is now a tool of strategy. The question is no longer whether governments should intervene, but how far. Each subsidy or tariff buys security at the cost of dynamism. Unlike Keynes, today’s pragmatists do not seek to abandon globalization, only to discipline it: to treat openness as an instrument of national power rather than an article of faith.
Elsewhere
A new Capabilities Index combines benchmarks to track model performance over time and help avoid benchmark obsolescence.
The obituary for pretraining was written too soon. This is a good rundown of why compute-intensive scaling may return.
Thirty-eight million farmers in India got a month’s head start using AI weather forecasting for an unusually chaotic monsoon season.
A 1.8 MW vertical floating PV plant is now online in Germany.
Is AI erotica a private decision or a social one? Leah Libresco Sargeant argues that it will shape cultural norms around sex and intimacy and affect even non-users. See also: Character.ai has blocked under-18s from chatting with its chatbots.
The Economist argues that AI gives consumers expert-level leverage by expanding access to information and making markets more transparent.
Beijing is prioritizing tech self-reliance and manufacturing dominance toward “socialist modernization” by 2035.
Semi-autonomous city-building is moving from the fringes into realization, but these projects’ success will depend less on the ability to build and more on the ability to govern. Good survey here.
Economists have developed a cheap and reliable way to recreate how people thought about the economy across decades, using large language models to simulate historical survey responses.
Today’s edition is supported by Lindy.
If ChatGPT could actually do the work, not just talk about it, you’d have Lindy.
Just describe what you need in plain English. Lindy builds the agent and gets it done.
→ “Create a booking platform for my business”
→ “Handle inbound leads and follow-ups”
→ “Send weekly performance recaps to my team”
Save hours. Automate tasks. Scale your business.
It is more robust than the weak “MIT” Nanda study earlier this year. I invite the authors of that study to challenge my assertion.
Is ALL CAPS a reference to MF Doom being in the Claude commercial?
https://www.youtube.com/watch?v=FDNkDBNR7AM
https://www.youtube.com/watch?v=gSJeHDlhYls