🔮 Sunday edition #529: Mainframe-to-Mac; routine cognition commoditized; Meta’s AI bet; the LLM OS frontier++
An insider's guide to AI and exponential technologies
Hi, it’s Azeem.
Could large language models form the basis of a new operating system? Meanwhile, Mark Zuckerberg isn’t guessing; he is in beast mode, willing to spend whatever it takes to secure Meta’s future. The battle to dominate this new computing paradigm isn’t about incremental improvements—it’s about survival. Here’s what you need to know.
Today’s edition is brought to you by Attio.
Attio is the CRM for the AI era. Sync your emails and Attio instantly builds your CRM with all contacts, companies, and interactions enriched with actionable data.
The new OS
We’re entering a new era of computing where LLMs could become the operating systems, argues Andrej Karpathy. His latest keynote frames this clearly: software has moved from hand-coded logic (1.0), through learned neural weights (2.0), and now into prompt-driven LLMs (3.0). His provocation is simple: treat the model as an operating system, a 1960s mainframe with superpowers and cognitive blind spots.
The open question is how you best interact with this operating system. For a mainframe in the 1960s, it was a punch card. Then came the command line. Neither particularly fun nor intuitive. For LLMs, we have already seen interesting, thought-provoking experiments – NotebookLM, for instance, led by Steven Johnson, is in his words “a conduit between my own creativity and the knowledge buried in my sources – stress-testing ideas, spotting patterns, nudging my memory.”1 Another example, noted by Karpathy himself, uses Gemini 2.5 Flash to write the code for a UI and its contents, based solely on what appears on the previous screen. These are still experimental products, and many more will follow. But expect a UX reset as abrupt as DOS giving way to Windows.
Mainframe-to-Mac
If today’s large-language-model clusters resemble the 1960s mainframe—powerful but forbidding—the next strategic prize is the equivalent of the original Macintosh: a human-sized device that makes an AI operating system feel intimate. The economics now make that leap plausible. A cluster of eight M-series Macs – about the cost of a family car – can now run frontier models that would have cost about $300,000 on Nvidia H100s just a year ago. Chinese upstart MiniMax even boasts that its new M1 model bested DeepSeek while using only one-third the compute, a data point that hints at personal AI slipping into everyday reach.
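The cost collapse is worth making concrete. A minimal back-of-envelope sketch, using the article’s $300,000 H100 figure and assuming roughly $6,000 per high-end M-series Mac (my assumption, not a quoted price):

```python
# Back-of-envelope cost comparison for running frontier models locally.
mac_price_usd = 6_000              # assumed price of one high-end M-series Mac
cluster_cost = 8 * mac_price_usd   # the eight-Mac cluster from the article
h100_cost = 300_000                # cited cost of the equivalent H100 setup a year ago

print(f"Mac cluster: ~${cluster_cost:,}")                    # ~$48,000
print(f"Cost ratio: {h100_cost / cluster_cost:.1f}x cheaper")  # ~6.3x in one year
```

A better-than-6x cost drop in a single year is the kind of curve that turned mainframes into Macs.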
Apple smells this inflection – as I discussed in my Live last week. Its “Liquid” interface—dismissed by analysts as animated eye‑candy—looks more like a prototype for ambient computing: an assistant that listens all day, whispers answers, and leaves the screen dark.
Early experiments, however, have struggled to find their footing. Humane’s “AI pin” and Rabbit R1 evoke memories of ambitious yet failed early computing form factors, like Xerox’s groundbreaking but commercially unsuccessful Alto. Perhaps Sam Altman’s and Jony Ive’s upcoming product will chart a different course. What we do know is that these personal AI devices are trending toward local execution.
Just as Apple understood in the 1980s that a GUI demanded the form factor of the Mac, the form factor must again evolve to match this new operating system.
Is Zuck paranoid or desperate?
Former Intel CEO Andy Grove popularised the phrase “only the paranoid survive”. It is the strategic posture you need during a “strategic inflection point”, when the fundamentals of a market change so much that adaptation or obsolescence are the only paths.
So, what to make of Zuckerberg’s $14bn deal to acquire less than half of Scale AI, placing its 28-year-old founder, Alexandr Wang, in charge of his firm's ambitious new “super-intelligence” lab? Is it Grovian paranoia? Or a desperate last gasp, the final lurch of a singleton to pair up before time runs out?
It’s a bit of both—a third strategic boldness mixed with two-thirds desperation. Zuck is bold: his move towards the metaverse four years ago was just that. It was just wrong.
AI is the real deal, and Facebook (as it was then) had made strides in pursuing the technology. But I’ve heard for more than a year that Meta’s Llama team was unhappy and underperforming, and just a couple of months ago, Joelle Pineau, its boss, left.
Cue Meta falling behind the other major firms in offerings and mindshare. And Zuck pursuing Perplexity, Ilya Sutskever’s Safe Superintelligence and Mira Murati’s Thinking Machines in an attempt to catch up. Combined with astonishing pay packets for AI researchers, Meta looks desperate.
AI is a “strategic inflection point”, per Grove. It doesn’t matter whether the optics are desperate or brave; what matters is being able to play the new game or face the lingering irrelevance that technology bestows upon former titans that don’t grok the shift. For Zuckerberg, there is almost no price he won’t pay to play in the next innings.
The price of labor
The cost of routine cognitive tasks is collapsing toward zero. That single shift dissolves scarcity-based business models and severs the old link between hiring talent and corporate growth.
OpenAI’s ex-research chief Bob McGrew argues the scarcity era for knowledge work is ending: as routine cognition prices out at raw compute cost, “agents” will gut the hire-more-brains-to-grow playbook and push value toward proprietary data, deep context and durable networks. Amazon CEO Andy Jassy is already bracing—generative agents, he says, will shrink the corporate back office. The near-term story isn’t robo-layoffs so much as a violent repricing of skills: expense reports and log scraping vanish first, while synthesis, judgment and relationship-building surge in premium. For firms, moats now depend less on owning intelligence than on integrating it uniquely and securely; advantage will flow to those who harden cyber-posture, accelerate agent pilots and turn abundant cognition into defensible leverage.
See also:
For readers wondering how to stay ahead, 80,000 Hours offers a detailed checklist.
Elsewhere
The UK government will train 7.5 million workers in AI by 2030 – half of Britain’s knowledge workforce.
Honda’s 6.3-meter reusable rocket leapt nearly 300m before landing this week. The Japanese government aims to build an ¥8 trillion ($55 billion) space sector – relative pennies compared with SpaceX, which already carries a $350 billion valuation.
Iran is self-sabotaging its internet infrastructure to slow Israeli attacks. The nationwide shutdown amid heightened tensions with Israel shows that kinetic conflict now triggers pre-emptive digital blackouts.
Korean researchers say a single high-end AI chip could draw 15,000 watts by 2035 – up from about 800 W today. At that scale, electricity and cooling – not chip fabrication – become the main growth bottlenecks. Missing from their analysis, though, are novel approaches to energy-efficient computing, such as reversible architectures.
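For scale, the jump from 800 W to 15,000 W implies power draw compounding at roughly a third per year. A quick sketch using the article’s two figures (the smooth exponential path and the ten-year horizon are my simplifying assumptions):

```python
# Implied annual growth in per-chip power draw, assuming a smooth
# exponential path from ~800 W today to 15,000 W by 2035.
power_today_w = 800      # approximate current high-end AI chip draw (from the article)
power_2035_w = 15_000    # projected draw (the Korean researchers' estimate)
years = 10               # assumption: roughly 2025 to 2035

growth = (power_2035_w / power_today_w) ** (1 / years) - 1
print(f"Implied compound growth: {growth:.0%} per year")  # ~34% per year
```

No chip roadmap has ever sustained that kind of power curve without the surrounding infrastructure becoming the constraint, which is exactly the researchers’ point.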
Waymo shows that motion-forecasting accuracy in its autonomous vehicles follows a power law with training compute – scaling data and parameters lets the cars cope with trickier road chaos.
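A power law means error falls as error ≈ a · compute^(−b), which is a straight line in log-log space. The toy fit below illustrates the idea with synthetic numbers — these are not Waymo’s data, and the exponent is invented for illustration:

```python
import numpy as np

# Toy illustration of a scaling law: error ~ a * compute**(-b).
# All numbers here are synthetic -- NOT Waymo's actual results.
compute = np.array([1e18, 1e19, 1e20, 1e21])  # hypothetical training FLOPs
true_b = 0.15                                  # invented exponent
error = 2.0 * compute ** (-true_b)             # generate clean power-law data

# A power law is linear in log-log space, so a least-squares line recovers b.
slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
print(f"Recovered exponent b = {-slope:.2f}")  # 0.15
```

With an exponent that small, halving the error takes roughly 100x more compute — which is why scaling curves like Waymo’s translate directly into capital-expenditure plans.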
Solar-plus-storage can now undercut grid prices for heavy industry – provided the plants can be run intermittently. Terraform plans 1 MW micro-plants that run intermittently at $200 per kW, claiming a fivefold green dividend for steel, ammonia and related sectors.
China’s next five‑year plan (2026–30) makes building its own chip‑making machines a top goal. That includes the extreme-ultraviolet (EUV) lithography tools that print the finest chip patterns—equipment produced solely by Dutch giant ASML—alongside the etch and inspection machines that carve the circuits and check they meet specs. By pouring government money into this gear, Beijing aims to cut its reliance on foreign suppliers and weaken US export‑control pressure within a product cycle. It will be hard; EUV tools are probably the most advanced machines you can buy today.
Today’s edition is brought to you by Attio:
Attio is the AI-native CRM built for the next era of companies.
Sync your emails and watch Attio build your CRM instantly – with every company, contact, and interaction you’ve ever had, enriched and organized. Join industry leaders like Lovable, Replicate, Flatfile and more.
1 See here for my conversation with Steven Johnson, where we discuss this topic.