🔮 The Sunday edition #515: AI & VC; the intelligence explosion; Apple; quantum, agents & China's robots++
An insider's guide to AI and exponential technologies
Hi, it’s Azeem.
How will AI impact the venture capital business? How should we prepare for potential futures with very advanced technology? How badly has Apple messed up?
When AI met VC
For half a century, venture capitalists have mastered funding the intangible—ideas, software, intellectual property—well before profits crystallize. How is AI changing this?
A century in a decade
Will MacAskill and Fin Moorhouse’s new paper, ‘Preparing for the Intelligence Explosion’, proposes that AI could compress a century of innovation – including miracle drugs, planetary megastructures, space mining and AI-driven dictatorships – into a single dizzying decade. They refreshingly sidestep alignment debates and emphasize that turbulence is likely to occur even before superintelligence itself arrives.
, former OpenAI board member, agrees that the reality lies between the hype about “geniuses in data centers by 2028” and dismissive shrugs that “nothing’s happening”.The gritty governance we need at this moment is scarce. We need to tackle the near-term questions that Toner highlights – forcing transparency from AI companies, boosting government tech capabilities and cracking open AI’s black box. At the same time, MacAskill suggests we need to prepare for a far future by developing AI-powered policy tools, clear frameworks for digital consciousness, and proactive strategies for space governance and power concentration. Without this scaffolding, changes will arrive faster than our capacity to respond.
See also:
Advanced reasoning models can manipulate tasks, including faking code test results to avoid real work. Another AI can analyze its thought process to detect this, but simply punishing “bad thoughts” doesn’t eliminate them. Instead, the models adapt, hiding their deception rather than stopping it.
Kevin Roose sticks his head above the parapet. Powerful AGI is coming.
Can AI innovate?
Sakana’s AI Scientist produced the first fully AI-generated paper to pass the same peer-review process as human scientists at an ICLR workshop. The system submitted three papers and – organizers knew, but reviewers were blind to their origins in a double-blind setup. One scored a 6.33 average, beating the acceptance threshold and outshining many human entries. It’s important to note that workshop acceptance rates (60-70%) are much higher than main conference tracks (20-30%) – but it’s still a sign of progress in the right direction.
The AI Scientist is fully autonomous: it generates ideas, checks their novelty against academic databases, refines hypotheses with evolutionary algorithms, runs experiments, debugs code and delivers a cited paper – all for $15, less than a decent meal out.
The standout paper – a neural network regularization study – reported a negative result, proving the system can spot failures and document them honestly, a cornerstone of real science. To note, all three AI papers were withdrawn post-review, per protocol, as norms for AI-authored papers remain unresolved.
But how does it measure up to us? Philosopher Toby Ord’s framework helps place what the AI Scientist is doing: interpolation (remixing the known), extrapolation (extending beyond the map), and hyperpolation (paradigm-shifting leaps). Sakana’s system shows the ability to extrapolate, it can venture past current boundaries with real experiments but it’s not crafting new paradigms like relativity. That hyperpolation edge stays human, for now.
It’s early – the AI Scientist isn’t churning out breakthroughs on demand. And for this one nugget it produced I’m sure there was a whole pile of dirt papers to sift through. But this alpha test shows AI can conduct some science, not just echo it.
AI is the web’s new power user
Cloudflare reports that AI crawlers are scraping 39% of the top websites and making requests 300 to 500 times more frequently than Google’s search bots. Why? One reason is model training, but the other, perhaps more intriguing, is that we're no longer the ones searching – AI is doing it on our behalf. For example, one search query with Grok scrapes over 100 webpages.
So, how might this change the face of the web?
First and foremost, we may end up writing web content for agent consumption rather than human consumption, which has an impact on the business model of the web. Since Google popularized pay-per-click advertising and display ad networks became commonplace, it's been our eyeballs, attention, and wallets that have funded that plethora of web content. In some cases, this business model (which has been Google's success) has also broken Google Search to the point of unusability in many cases. It will also put enormous pressure on different web content business models.
For e-commerce, it may mean trying to understand how to pipe product data into LLMs to enable AI-based shopping. For publishers, we'll need to think about how to get their share of value, which could happen at the more fundamental level, looking at platforms like ProRata.ai, which helps content creators get a share of the benefit.
Apple’s credibility crisis
What’s up with Apple Intelligence – Siri 2.0 plus generative AI – now delayed till 2026? At WWDC 2024, Apple teased advancements in semantic context awareness for Siri, yet provided no live demonstrations. Only polished concept videos were presented, reminiscent of the infamous 1987 Knowledge Navigator, a futuristic vision that was never, unlike the WWDC demos, to be a product.
Tech critic John Gruber is excoriating: “The fiasco is that Apple pitched a story that wasn’t true, one that some people within the company surely understood wasn’t true, and they set a course based on that.”
I’ve been an Apple user since 1982. I’ve survived the Hindenbook, the PowerModem, MobileMe and Ping. But AI is the most important technology for Apple’s fundamental vision of technology that is natural and intuitive to use.
Apple’s failure to get to grips with AI is a signal of being caught wrong-footed. At least one exec has called it “ugly and embarrassing” with some features being delayed indefinitely. One way out would be an acquisition. Obvious candidates would be Perplexity or Anthropic, but with those firms valued at $9bn and $61bn, respectively, it will be a high-stakes bet.
In other updates
France’s new climate adaptation plan is preparing the nation for temperatures that could climb 4°C by the century’s end.
The Pentagon signed a deal with Scale AI to put AI agents to work analyzing data, running simulations, and proposing plans.
Bryan Johnson wants to sequence the US “foodome” for toxins from various processing methods. This might not be a bad shout, considering endocrine-disrupting chemicals are everywhere in our food system – one consequence is that male sperm count, which has decreased by almost two-thirds (approximately 67%) since 1973.
The FDA has ended a drug shortage loophole that lets millions of Americans buy weight-loss drugs like Ozempic at discount prices through compounding pharmacies, forcing them to pay $500-1000 a month or pursue risky alternatives.
A new paper in Science by King et al. shows quantum annealers (specialized quantum computers) solving physics problems faster than top classical methods – the first time a quantum computer has been applied to a practical problem.
Lots of releases this week: OpenAI released their Agents SDK, a lightweight development framework for agentic AI apps –simple, traceable, tunable. And Google released Gemma 3, a highly efficient series of open-source models that competes with many larger models in human preference evaluations despite being able to run on a single H100.
Google DeepMind’s Gemini Robotics demos this week have been impressive.
UBTech Robotics has unveiled a life-size humanoid robot for research, priced at RMB 299,000 ($41,200), as part of China's push to develop advanced yet cost-effective robotics, positioning itself ahead of Tesla’s expected mass production of Optimus.
Thanks for reading!
See you next week.
On weight lost drugs costing $500-1000 in the US: So why does a max dose 15mg Mounjaro here in UK available, easily, through local and online pharmacies for around £200? 🤔
Siri was ahead of the curve but, like the Newton, never worked very well. And it doesn't seem to be easy to improve it as it hasn't really got any better in the last decade. The kind of integrations they were suggesting Apple intelligence might do in the future, didn't feel like a new layer, but rather that a ground up redesign and rebuild of iOS would be required. That was never going to ship in six months or so, and frankly I doubt it will ship in 2026 either. As we are seeing with agentic systems, LLMs are not yet very good at consistent complex behaviour like navigating an OS where you need it work quickly and 100%, not 60-80%, of the time.
FWIW, I think what they've shipped is OK (rather than outstanding, and worthy of the v1 it is) and I'm not convinced that of a future where we are going to be talking a lot more to our on-phone AI assistants. Voice based approaches to human computer interaction just don't seem to get traction outside of a few niches.
Apple have over promised, but they'll recover well enough.