Hi everyone,
Every year, I provide an outlook for the year ahead to help you set expectations in ways that are helpful for the Exponential Age. I wrote a book about this era in 2021, and this newsletter is a weekly practice of analysing and making sense of what is going on. Before you dig into today’s Part 2 of my 2024 outlook, I recommend you read my introduction to the Horizon Scan here and Part 1 of the themes to watch here.
I’ve had to split Part 2 into two instalments because of its length. Today’s analysis will look at:
Smol is beautiful: the allure of smaller technologies
The state of insecurity: geopolitics meets technology vulnerabilities
Decentralisation gathers speed: and it isn’t coming from the blockchain
In Part 3, I’ll talk about:
Democracy’s plebiscite and the epistemic quake: 2024 is a big year for elections
The society of AI: how we will move away from singular models
Helpful humanoids: what to look out for in humanoid robots
Moving at human speed.
As always, these analyses will be available for subscribers in full.
Let’s get started!
▪ Smol is beautiful
While GPT-4 and Gemini Ultra demonstrate what can be done with extremely large AI models, we’re also witnessing a common pattern in the evolution of technologies: optimisations that lead to miniaturisation. Mistral, the French open-source provider, has already shown “as good as GPT-3.5” performance from its models, including the 7bn-parameter Mistral 7B and the mixture-of-experts Mixtral. Microsoft showed off Phi-2, another small model that “matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.” The Phi-2 team relied on extremely high-quality, “textbook-quality” training data. State-space models, which use a different architecture from transformers, are also rather smaller than the leviathans of GPT-4 and Gemini.
Smaller models will be in demand: they are cheaper to run and can operate in a wider array of environments than mega-models. For many organisations, smaller models will simply be easier to use. This will create many opportunities, and many questions: how will such models be tested, managed, maintained and governed?
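To give a sense of how low the barrier to running these systems has become, here is a minimal sketch using Hugging Face’s transformers library to run a small open-weight model on local hardware. The checkpoint name, precision and generation settings are illustrative assumptions, not a recommendation; any comparably small model would do.

```python
# A rough, illustrative sketch: running a small open-weight model locally with
# Hugging Face's transformers library. The checkpoint ID and settings below are
# assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example small model (~7bn parameters)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # place layers on available GPU/CPU (needs `accelerate`)
)

prompt = "In two sentences, why might an organisation prefer a small language model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point is less the specific code than the footprint: a model of this size can typically run on a single well-specified workstation rather than a dedicated inference cluster, which is exactly why demand for them is growing.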
⚠️ The state of insecurity: Geopolitics meets technology vulnerabilities
The events in the Red Sea speak to the growing range of asymmetries. Some of the US Navy’s most sophisticated assets, its Aegis fleet air-defence systems, have been put into action against weapons launched by Houthi rebels. It’s evidence of the speed with which even less sophisticated actors can force expensive responses from nation-states. Each US missile intercept costs a couple of million dollars; the interceptor protects a far more expensive ship, but shoots down a much cheaper munition. American destroyers carry a magazine of at most 96 missiles and must return to a major port to reload. They were designed for the now-defunct Soviet threat, and this asymmetry is unsustainable.
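A rough back-of-envelope calculation makes the point. The interceptor cost below comes from the figure above; the munition cost is an assumed, order-of-magnitude placeholder rather than a verified number, so treat the ratio as illustrative.

```python
# Back-of-envelope cost asymmetry. Figures are illustrative: the interceptor
# cost reflects the "couple of million dollars" cited above; the munition cost
# is an assumed order-of-magnitude placeholder, not a verified figure.
INTERCEPT_COST = 2_000_000   # USD per interceptor fired (approximate)
MUNITION_COST = 50_000       # USD per attacking drone or missile (assumed)
MAGAZINE_SIZE = 96           # missile cells on a US destroyer (from the text)

print(f"Cost ratio, defender to attacker: {INTERCEPT_COST / MUNITION_COST:.0f}:1")
print(f"Value of a full magazine: ${INTERCEPT_COST * MAGAZINE_SIZE / 1e6:.0f}m")
# -> roughly 40:1, and about $192m of interceptors before the ship must reload in port.
```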
The advantage is, at least for now, in the hands of the attackers. One result is that more shippers are pausing their Red Sea traffic. These lessons will force nations to develop more cost-effective and suitable weapons, drawing on the experience of the Ukrainian battlefield, where cheap drones can render even tanks vulnerable within minutes.
The geopolitics is getting uglier. We have been moving towards a state of AI nationalism for many years. The Economist has finally characterised America’s increasingly strict approach to the tech underpinning AI as “beggar thy neighbour”.
This creates the background for misbehaviour: increasing conflict, direct and by proxy. Technology that could confer an advantage will be appealing. For the Houthis, that arena is shipping; for other adversaries, it could be the cyber and AI domains.
Western nations will need to deepen investments across these attack surfaces: physical and cyber. But it isn’t simply a case of money; it is also about innovating the tools for this brave and ugly new world.