🔮 The Sunday edition #517: Ghibli, AI & semantic apocalypse; AI team; China’s order; nuclear, hacker enzymes, intergalactic travel++
An insider's guide to AI and technology
Hello, it’s Azeem.
What a week it’s been. OpenAI’s servers are “melting” under the weight of their new image model, while Google’s Gemini 2.5 Pro raises the bar with a 40-point Elo improvement in the Chatbot Arena. But you’re here to get some distance from the headlines and make sense of what’s going on – so let’s get into it!
Today’s edition is brought to you by Sana — do real work with AI.
Beyond the semantic apocalypse
Erik Hoel argues that generative AI is creating a “semantic apocalypse” in which art and language are drained of their meaning. Social media has been flooded with images recrafted in the style of Studio Ghibli, obligingly produced by OpenAI’s new image model; the firm’s servers are “melting”. Every meme has been redone Ghibli-style, family photos have been rendered Ghibli-style, and anonymous accounts have face-doxxed themselves Ghibli-style. And it’s undeniable that Ghiblification is fun.
The flood of Ghibli-style images, delightful at first, gives way to what Erik calls “semantic satiation”: a feeling of emptiness. Ed Newton-Rex, who champions creators’ rights, called it “the biggest art heist in history.”
Here’s how I see it: in some cases, this kind of approach can actually increase awareness of an artist—their style, their work, their world. A Ghibli-style portrait of me doesn’t replace the experience of watching a Studio Ghibli film or engaging with other human-made art. If anything, it invites curiosity.
We saw a similar pattern in the early internet era, when fan-fiction exploded online. Rights owners initially sought to exterminate it, pursuing cease-and-desist letters with gusto, rather than recognising it as a sign of deepening loyalty and of an emerging remix culture that could extend a franchise’s life. It’s hard to imagine someone seeing a Simpsonized version of a person and thinking, “That’s enough – I don’t need to watch The Simpsons anymore.” More likely, it sparks nostalgia or interest, prompting them to seek out the real thing.
But that connection isn’t automatic. When a visual style becomes widely replicated without context—like with Ghibli-fication divorced from the actual narratives and themes of Ghibli films—it risks becoming hollow. This highlights where the real value of art lies. If style alone becomes a cheap, easily generated commodity through AI, then human creativity will have to differentiate itself in more meaningful ways—it will need to move “up the stack.”
That shift points toward a renewed emphasis on what AI struggles to replicate: conceptual richness, original storytelling, personal perspective shaped by lived experience and transparent creative processes that reveal intent and authorship. In a world awash with convincing fakes, authenticity—real, verifiable and human—becomes more valuable than ever. And that could reshape how we create and consume art, with greater appreciation for forms where the artist’s process is not just present but essential to the work itself.
The irony, of course, is that Studio Ghibli is exactly that: a deeply human process, painstaking in its labor and swimming against the tide. Says Hayao Miyazaki, the creator, “We have chosen the opposite position to the trend.”
Cybernetic teammates
Recent research from Fabrizio Dell’Acqua et al. has shown that AI doesn’t just act like a tool; it functions like a teammate. AI systems – whether they produce text, generate images or analyze data – can function as near-instant collaborators. A handful of recent studies show that individuals and teams working alongside AI produce deliverables faster and often with less frustration. Dell’Acqua found that individuals working with AI performed as effectively as two-person teams – and did so 12–17% faster. Another new study, by Harang Ju and Sinan Aral, showed that human-AI teams communicated 45% more than human-only teams, yet humans focused 23% more on content creation and 20% less on direct editing, since the AI shouldered much of the drafting and refinement work.
And yet, to trust these “cybernetic teammates”, we need systems that feel not just powerful but also reliable. In that regard, Google’s new Gemini 2.5 Pro offers a snapshot of how far large language models have come – and not just because of its 40-point Elo improvement on Chatbot Arena.
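What does a 40-point jump actually mean? Chatbot Arena ratings are Elo-style scores fitted from pairwise human preferences, so a rating gap translates directly into an expected head-to-head win rate via the standard Elo expectation formula (a simplification of the Arena's actual Bradley–Terry fitting, but close enough for intuition):

```python
def expected_win_rate(rating_gap: float) -> float:
    """Expected head-to-head win rate for a model rated `rating_gap`
    Elo points above its opponent (standard Elo expectation formula)."""
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

# A 40-point lead corresponds to roughly a 55.7% expected win rate:
print(f"{expected_win_rate(40):.1%}")   # ≈ 55.7%
```

In other words, a 40-point lead is a modest but real preference shift: head-to-head, raters would be expected to prefer the higher-rated model about 56% of the time.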
In my opinion, Gemini 2.5 Pro is the model most people should be using. Forget the benchmarks, the vibes are just excellent. It’s fast, transparent and its results have a depth that feels “wiser” than other models.
China is already ahead
Many Westerners still view China through an outdated lens, focusing on its historical “catch-up” phase – an era defined by reverse-engineering foreign technologies, relying on low-cost manufacturing and climbing up the value chain. But that era is decisively over. China has transitioned from imitator to innovator in several critical domains. In a striking recent example, BYD, which just surpassed $100 billion in revenues, is reportedly delaying its planned $1 billion Mexican factory over concerns that its advanced technology could leak to US rivals! DeepSeek is challenging Western labs with competitive, cost-efficient large language models – just this week, it nonchalantly dropped the best open-source model yet. The old narrative of the West guarding its IP from a tech-hungry China has flipped on its head. As Louis-Vincent Gave, CEO of Gavekal Research, quipped:
There are two kinds of people in the world: those who have visited China and see the future, and those who have not – and who call the former Communist Party shills.
This isn’t to romanticize China’s approach – serious issues remain around environmental externalities, civil liberties and the role of the state. But to ignore the reality of China’s innovation leap is to misread the direction of global technological power. As one historian said to me in our Friday discussion:

Anybody who bets against Chinese engineering, anybody who bets against Chinese STEM is likely to lose their bet over a 10-year time horizon. The problem is that you’re making a bet on the CCP’s ability to manage Chinese economic policy and national security.
I’ll be visiting China in a month and will report back on what I experience firsthand.
Fluid intelligence
Anthropic’s new interpretability research on Claude, its AI model, offers a fascinating look into the inner mechanics of large language models. Their team peered inside Claude’s processes as it composed rhymes and solved math problems.
The investigations revealed moments of structured, deliberate-seeming behavior: a planned rhyme here, a flash of mental arithmetic there. Yet alongside these signs of internal coherence were just as many examples of motivated confabulation — the model assembling plausible explanations without a grounded rationale.
These findings brought me back to Murray Shanahan’s 2024 paper, Simulacra as Conscious Exotica. Shanahan argues that LLMs role-play human-like cognition while operating on fundamentally alien computational substrates. Because our concepts of consciousness depend on shared embodiment in a common world, what we see in models like Claude might be just that: a compelling performance rather than a true mind. Still, Shanahan leaves open the possibility that more grounded, sustained interactions in richer environments could push us to revise those very concepts – a horizon Anthropic’s work nudges us closer to.
With all this in mind, the new ARC benchmark will be an intriguing one to follow in the coming weeks and months. Rather than punishing a lack of obscure knowledge, the new AGI benchmark rewards fluid intelligence, context-sensitive rule application and efficiency – capabilities that separate mere next-word mimicry from the multi-step integration we see in biological minds.
Elsewhere
AI adding value: Earth AI’s algorithms are discovering overlooked deposits of copper, cobalt, and gold in Australia. MIT chemists use AI to predict DNA’s 3D structure in minutes, speeding up genetic research.
H&M plans to create AI-generated “digital twins” of real models with an ownership model in which humans own the rights to their twins.
Nuclear startup Terrestrial Energy goes public via SPAC and nets $280 million in a merger.
A review of recent breakthroughs in biosynthetic and bioinformational technologies hints at a future where we may overcome the physical and energy limitations of traditional computing by integrating biology with semiconductor technology.
Apple reportedly placed a $1 billion order for Nvidia’s AI servers, partnering with Dell and Super Micro Computer to build its first generative AI infrastructure – a strategic shift in Apple’s AI approach.
Scientists have engineered proteases – enzymes that cut proteins at specific sites – to selectively degrade proteins that cause diseases including Parkinson’s.
Engineers have figured out how to recycle cement without compromising its strength.
What would it take for an interstellar ship to maintain a population on a 250-year trip to a (hopefully) habitable exoplanet? A wonderfully geeky exploration.
Thanks for reading!
Today’s edition is sponsored by Sana.
Enterprise AI doesn’t have to be hard. Sana’s agent platform provides one unified interface for building AI agents grounded in your company's data. Extensible APIs. Enterprise-grade security. Loved by end users.
Capabilities include:
Automates manual tasks
Attends and summarizes meetings
Completes tasks across tools
Searches across every app
Conducts deep research