🔮 Attentive AI; China’s LLMs; nuclear fusion, batteries, menopause++ #422
Your insider’s guide to AI and exponential technologies
Hi, I’m Azeem Azhar. As a global expert on exponential technologies, I advise governments, some of the world’s largest firms, and investors on how to make sense of our exponential future. Every Sunday, I share my view on developments that I think you should know about in this newsletter.
In today’s edition:
Less forgetful generative AI;
Analysing China’s LLM ecosystem;
Microsoft bets on nuclear fusion.
🧬 Thanks to our sponsor SynBioBeta, the Global Synthetic Biology Conference.
Sunday chart: From goldfish to elephant
Context windows for large language models (LLMs) are growing rapidly. A context window is the length of text that an LLM can process and respond to. In a sense, it is the “memory” of the system for a given analysis or conversation.
Anthropic has released a new version of Claude, their LLM, ballooning its context window from 9,000 tokens to an immense 100,000 tokens, or about 75,000 words. Larger context windows allow the systems to have much longer conversations, or to analyse much bigger, more complex documents. Claude’s new 100,000-token capacity means it can easily process the entirety of The Great Gatsby, and answer queries about it in under 30 seconds.
This opens up a range of incredibly useful applications. I have found the 8,000-token limit on GPT-4 a constraint. It prevents me from analysing documents that are longer than about 7,000 words (and most book chapters are longer than that). A whole slew of useful material like academic papers, legislation, business reports or smallish databases is within Claude’s reach. A long context window also opens up the possibility of greater personalisation of an LLM. A 100,000-token limit could store an individual’s preferences at significant granularity and still leave space for helpful conversation relevant to them. Finally, longer windows may reduce the frequency of certain hallucinations.
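To make the token arithmetic above concrete, here is a minimal sketch of how you might check whether a document fits a given context window. The ~0.75 words-per-token ratio is a rough rule of thumb for English prose, not an exact figure; actual counts depend on each model’s tokenizer, and the word counts and reserve sizes below are illustrative assumptions.

```python
# Rough sketch: estimate whether a document fits in a model's context window.
# Assumption: ~0.75 words per token for English text (a common heuristic;
# real token counts come from the model's own tokenizer).

WORDS_PER_TOKEN = 0.75


def estimated_tokens(text: str) -> int:
    """Approximate token count from whitespace-separated words."""
    return round(len(text.split()) / WORDS_PER_TOKEN)


def fits_context(text: str, context_window: int, reserve_for_reply: int = 1000) -> bool:
    """Check whether the prompt leaves room in the window for the model's response."""
    return estimated_tokens(text) + reserve_for_reply <= context_window


# The Great Gatsby runs to roughly 47,000 words, about 63,000 tokens by this
# estimate: comfortably inside a 100,000-token window, far beyond an 8,000-token one.
gatsby_stub = "word " * 47_000  # stand-in for the full text

print(fits_context(gatsby_stub, context_window=100_000))  # True
print(fits_context(gatsby_stub, context_window=8_000))    # False
```

The same back-of-the-envelope check explains the 7,000-word ceiling mentioned above: at this ratio, 8,000 tokens is only about 6,000 words once you leave room for the reply.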
Is 100,000 tokens the limit for context windows? It seems unlikely, judging by computing innovations of the past. Between 1981 and 2001, the average RAM of a computer increased by roughly five orders of magnitude. In the 20 years from 1997, Wi-Fi chip speeds jumped more than 1,000-fold. Similar patterns would suggest immense context windows in the near future. This could mean processing or creating coherent outputs across very long research projects. Imagine.
Key (AI) reads
Sinotopia: China is ahead of the United States in enacting rules for artificial intelligence, according to Axios. If China is first with clear rules on AI, it might project those standards and regulations globally, shaping lucrative markets, just as US standards shaped the Internet for the benefit of American firms. Coupled with traditional subsidies, Beijing is using regulation as a form of industrial policy.
Further reading: Ding and Xiao recently published a deep analysis of the Chinese LLM landscape.
Brains squared. Generative AI could power a productivity boom, suggest EV reader Erik Brynjolfsson and colleagues. Erik sees two key paths: an increase in efficiency and an acceleration of innovation.
Generative AI has broad applications that will impact a wide range of workers, occupations, and activities. Unlike most advances in automation in the past, it is a machine of the mind affecting cognitive work.
Erik and I had dinner a few weeks ago and I joked that I would be more unrestrained when talking about the potential productivity benefits of AI. The body of evidence supporting the assertion that it will increase the productivity of desk-bound workers is growing. We’ve experienced this first-hand at EV where GPT-4 is supercharging our analysis.
The bard has sung. Google’s “100 things we announced at I/O 2023” outlines the result of the firm’s rapid pivot towards AI. The star of the show is how generative AI is being integrated across Google apps, and is becoming increasingly multimodal, spanning images and sounds. As these features become available, some of the most used websites globally are becoming interfaces for generative AI. (Remember, ChatGPT is only six months old).
See also, our state of the generative AI market in 10 charts.
🧬 Today’s edition is supported by SynBioBeta
The cost of reading, writing and editing DNA is falling exponentially. Synthetic biology is being used to create new materials, foods, medicines, consumer goods and more. Join 1800+ industry leaders at SynBioBeta, the Global Synthetic Biology Conference, May 23-25 at the Oakland Marriott. Featuring genome pioneer Craig Venter, LanzaTech’s Jennifer Holmgren, xenotransplantation expert Martine Rothblatt and Fantastic Fungi host Paul Stamets.
Exponential View readers receive a 20% discount using code EXV20 on checkout.
Market data
Daily installs of the Bing mobile app have quadrupled since the launch of Bing AI chat.
Stack Overflow’s traffic saw a 14% decline in March, possibly due to developers using AI tooling to answer their questions.
Since the beginning of the pandemic, the dollar amount of online grocery shopping has increased fourfold, rising from $1.9 billion in 2019 to $8.0 billion in 2023.
Median pre-money valuations for growth-stage startups in the US dropped to $90 million, representing a 74.6% decrease from the 2021 all-time high of $355 million.
Short morsels to appear smart at dinner parties
☄️ Microsoft’s bet on nuclear fusion with Helion Energy.
🔋 Sodium-ion could be the next winning battery chemistry, challenging lithium-ion systems and reducing costs for industry.
🚗 Wendy’s announced an experimental AI drive-thru.
🐣 The first British children conceived through 3-person IVF, a technique designed to prevent serious mitochondrial disease.
🧠 Menopause changes the brain.
🏃 The multiplier effect: when one top employee leaves, other high-performers follow.
⏳ Down memory lane: a 2007 article about the first iPhone, just before its release.
End note
Join fellow Exponential View readers on 31 May for our first AI Praxis session for the community, where members will have a chance to share their generative AI experiments and learn from each other’s practices. The session is open only to members with an annual subscription. See more and RSVP here if you’re a member.
What you’re up to — community updates
Anil Seth on why “conscious AI is a bad idea”.
Ussal Sahbaz and Starla Griffin published a post at the Bretton Woods Committee on emerging markets and crypto with a focus on regulatory discussions.
Iason Gabriel and his team at DeepMind published a paper “Using the Veil of Ignorance to Align Artificial Intelligence with Principles of Justice” in PNAS.
Robert Chandler and his team won the London AGI Hackathon with a tool that enables anyone to build AI agents with a simple visual programming interface.
Toby Mather’s 12 predictions on how AI will impact K-12 education.
Share your updates with EV readers by telling us what you’re up to here.