Hi, it’s Azeem.
A year ago, I offered a horizon scan for 2024 (part 1, 2 and 3). At the time, we were grappling with the pace of AI adoption, surging renewables, and growing friction about strategic tech. I also saw hints that advanced drugs, robotics and open-source tools would shape new opportunities.
As I do each year, I reviewed this outlook, and the results are encouraging. Many of the key trends I highlighted held true, with some changes exceeding my expectations in pace or scale. A few predictions unfolded more gradually or differently than expected, but the overall trajectory was accurate. While the outlook wasn’t strictly a list of predictions, I’ll grade them as such: 🧊 for areas that evolved in unexpected ways and 🔥 for spot-on hits.
Here’s my scorecard:
My 2025 Horizon Scan – community briefing in January
I’m hosting a community briefing about my 2025 Horizon Scan in January, open to all paying members of Exponential View. It’s a chance to explore key opportunities and risks in the year ahead and to engage in thoughtful discussion with me and other members of our community.
8 January, 4pm UK / 11am NY REGISTER HERE
or 9 January, 11am UK / 3pm Dubai REGISTER HERE
🧊 AI and big firms
Last year, the expectation was that large companies would embrace AI at scale, swiftly reshaping their workflows and maybe even rattling their hiring plans. I wrote:
Firms will be under pressure to deliver robust applications on a technology that is, in many cases, not robust — and facing various types of legal challenges. At the same time, more straightforward applications will be available to companies. If these projects succeed in improving productivity, there may be an immediate impact on the rate of new hires or even signs of widespread job cuts. 2023 was too soon to see those impacts. This coming year they might be visible.
AI integration moved forward, but quietly. The number of AI projects within large firms exploded, with global AI enterprise spending growing 500% to $13.8bn. AI startups are reaping the rewards, reaching $1m in revenue four months faster and scaling to $30m five times quicker than their non-AI SaaS counterparts. Consulting firms like Accenture are also seeing surprisingly high growth on the back of companies trying to figure out these tools. However, as IBM CEO Arvind Krishna told me in early December, we’re yet to see these use cases scale.
This is why the feared (or hoped-for) tidal wave of job cuts or radical restructuring hasn’t arrived yet. Firms are leaning in, and some are hitting a stride, but only a small group of "reinvention-ready" companies (2%) is already using generative AI at scale and seeing substantial returns. As I wrote in October, the companies best poised to adopt AI are those with lots of developers, whose revenue comes from new products and services, and with highly adaptable teams.
Dig deeper:
My conversation with Nathan Benaich about AI in 2025: the post-hype AI era.
🔥 Smol is beautiful
We said smaller and more specialised models would ride alongside the big beasts like GPT-4, and that’s exactly what happened. A year ago, I wrote:
Smaller models will be in demand: they are cheaper to run and can operate on a wider array of environments than mega-models. For many organisations, smaller models will be easier to use.
Open-source and edge-friendly models rolled onto the scene, giving more players a seat at the AI table. Thanks to these compact models, it became simpler, cheaper and faster for smaller firms and individuals to test ideas and build tools. We also saw smaller models nearly reach GPT-4 performance, calling into question whether bigger is always better. Open-weight models like Meta's Llama 3.3 70B and Alibaba's Qwen 2.5 72B have at least matched, if not surpassed, GPT-4 with its roughly 1.8tr parameters. Google's new Gemini 2.0 Flash, whose exact size is undisclosed, sits in the third of Google's four model-size tiers, yet it is one of the best. Microsoft’s Phi-4, at 14B parameters, and DeepSeek, which activates only 21 billion parameters per token during inference, are also designed to be cheaper and more efficient to run while remaining competitive against the largest models on benchmark tests.
🧊 Copyright is still stuck in limbo
We expected some drama over AI’s business model and who owns the training data. I wrote:
The New York Times lawsuit against OpenAI for copyright infringement will be a critical test case to help us understand one aspect of this business model. It doesn’t seem unreasonable for there to be some kind of settlement between OpenAI and the New York Times and others who put significant effort into creating unique insights. If courts believe the watermark of the music industry is reasonable, there will be a fracas. Spotify pays a punishing 70% of revenue in license fees. It seems more likely that any agreement will be far lower than that.
The battle lines hardened, but no one could declare victory. The New York Times, the first news outlet to sue an AI company in December 2023, is still battling it out in court. Other cases are ongoing or have been dismissed. Meanwhile, major AI companies like Perplexity announced partnerships with media outlets. Still, the business model of AI and how those who create the training data are (or aren’t) compensated is far from clear; no grand bargain emerged. At the same time, entrepreneurs are stepping up with solutions. The UK government recently proposed a new blueprint for how AI and copyright should interact, which includes progressive elements like rights reservation and transparency but ultimately favours big industry over the open cultural commons. A friend of EV, founder and investor Bill Gross, founded ProRata to offer an algorithmic solution for crediting and compensating creators. Exponential View is one of the project's earliest partners.
🔥 Election meltdown averted
A year ago, many fretted that AI-generated deepfakes and cunning chatbots would topple elections and mislead millions. I didn’t forecast AI having such a direct, disruptive effect. I wrote:
The swing towards autocracy is not inoculated by advanced technology. However, the 2024 election manipulation risks are not — and should not be represented as — mostly about AI.
There was no widespread, successful AI propaganda misleading voters across the world. In part, this is thanks to voters, platforms and watchdogs staying vigilant. For example, Meta established election operations centres around the world to monitor content issues and took down north of 20 covert operations this year. The feared foreign-interference campaigns that did occur were rather low-tech: in September, US prosecutors said American right-wing influencers had been covertly paid by Russia to produce politically divisive content. Even without an AI-generated apocalypse, this year’s elections disrupted traditional power structures, with incumbents losing ground around the world.
🔥 Clean energy saw faster steps but no giant leap
We bet on renewables pressing their advantage, especially solar and batteries. A year ago I wrote…
We are at the very peak of fossil fuel use globally. Coal, which powered the Industrial Revolution from the mid-17th century, will be in decline from now on. As we defossilize, technology-driven energy systems (like solar) will guarantee a declining price ceiling for energy, freeing us of the vagaries of commodity-driven energy provision. […] Much of this has been helped by the consistent decline in battery pack prices. Cheaper alternatives, like sodium-ion batteries, will contribute to further downward price pressure.
That call was solid. The sodium-ion prediction was accurate: we now have the first sodium-ion cars, with BYD leading shipments. What I underestimated was how quickly LFP (lithium iron phosphate) batteries would drop in price. This year, prices fell 20%, driven by learning effects, scale and aggressive price cuts pushed by BYD. The average cost of a passenger EV battery fell to $115 per kWh from $139 last year, a faster decline than expected.
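The "learning effects" behind these declines follow a well-known pattern, Wright's law: each doubling of cumulative production cuts unit cost by a roughly constant fraction. A minimal sketch of the idea, using a hypothetical 18% learning rate (an assumption for illustration, not a figure from this piece):

```python
# Illustrative sketch of Wright's law (the experience curve), the
# "learning effects" mechanism behind falling battery prices.
# The 18% learning rate is a hypothetical assumption for illustration.

def experience_curve_price(initial_price: float,
                           learning_rate: float,
                           doublings: int) -> float:
    """Price after cumulative production doubles `doublings` times,
    with cost falling by `learning_rate` on each doubling."""
    return initial_price * (1 - learning_rate) ** doublings

# Starting from a $139/kWh average pack price:
for d in range(4):
    price = experience_curve_price(139.0, 0.18, d)
    print(f"after {d} doubling(s): ${price:.0f}/kWh")
```

Under these assumptions, a single doubling of cumulative output takes the pack price from $139 to roughly $114 per kWh, which is why rapid scaling by manufacturers like BYD translates so directly into price drops.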
On coal, the UK has made progress, phasing it out entirely. Ratcliffe-on-Soar, the last coal plant in the UK, has shut. Yet in the US, Microsoft has requested that certain plants remain operational to power AI infrastructure.
This points to one big realisation, for everyone: AI's power demands far exceed those of traditional data centres. Delays in building new facilities initially looked like typical construction issues but revealed the sheer scale of the challenge. That scale became clearer over the course of the year as it proved key to AI progress; Anthropic once saw it as a short-term factor but now expects it to remain crucial for years. The foundation-model market’s sixfold growth this year confirms the shift. Our team’s internal modelling projects that AI shouldn’t endanger the now-inevitable decarbonisation of energy systems, but it is proving a speed bump.
🔥 Geopolitics and cheap tech warfare
On the world stage, low-cost drones, tiny munitions and cheap surveillance gear made life harder for big militaries. We forecasted that asymmetric warfare would intensify and it did. I wrote:
The events in the Red Sea speak to the growing range of asymmetries. […] The advantage is, at least for now, in the hands of the attackers. One result is more shippers are pausing their Red Sea traffic. The lessons will force nations to develop more cost-effective and suitable weapons. Learning from the lessons of the Ukrainian battlefield where cheap drones render even tanks vulnerable within minutes. The geopolitics is getting uglier.
The exponential asymmetry that favours attackers has only grown. The threat of drones in the Red Sea has become so serious that the German defence ministry is redirecting its commercial vessels around Africa, a long detour. Israel’s detonation of 5,000 Hezbollah pagers revealed the weaponisation of the electronics supply chain. What is new this year is that conflicts are being fought along new parameters: rather than a battle of the most advanced, most expensive weapons, attackers use cheap, mature technologies that end up being expensive to defend against.
Beyond warfare, geopolitical tensions around key technologies like chips, data centres and critical materials are intensifying. The US has continued to impose tariffs on friends and foes alike, while reigniting its manufacturing past through the IRA. China leads in solar, batteries and EVs. And other countries around the world, not least in Europe, are planning for energy and technological independence.
🔥 No shining moment for Web3 + AI, yet
There was a hunch that blockchains would help make AI more trustworthy. I acknowledged the possibility, commenting that…
It may. It may. The crypto market recovered in 2023 as the rotten eggs were flushed out. But decentralisation is not dependent on tokenisation or some of the marvellous maths in crypto. And the trend towards decentralisation is secular, not bound in any way to the existing token economy.
In reality, few compelling use cases emerged at scale. Zerebro showcased an agentic system acting as an Ethereum validator and contributing to the network, which was fascinating to see. Crypto did succeed elsewhere, with Bitcoin breaking the $100k barrier. One driver was Trump’s election and a tight new relationship with major names in the crypto industry. Perhaps this new environment will enable some interesting applications to emerge and scale. It may, it may.
🧊 Scientific AI and material breakthroughs
We thought clever AI systems might smash through the barriers of materials science or drug discovery at a fast pace. I wrote:
Both AlphaFold and GNoME enable vastly accelerated discovery of potential candidates, reducing the cost to synthesise, test, and ultimately use them. Even though Alphafold has been available longer than GNoME, proteins are more fiddly than crystals and operate in more complex, biological environments. And for many applications of proteins in biotech and healthcare, regulatory scrutiny is necessarily deep. This might make the case that GNoME could have a more immediate impact.
I maintain that scientific AI can help us compress time by accelerating various parts of the R&D process. The team behind AlphaFold even won the Nobel Prize in Chemistry for their work. We haven’t yet seen public announcements of something AI-designed being commercialised, but there are lots of interesting experiments. For example, a study found that AI tools enabled a large US firm to increase the number of new materials discovered by 44%, new patent applications filed by 39% and prototypes built by 17%. At Stanford, scientists created a virtual lab that used AI agents to produce 92 new nanobody designs. We learned that software progress can race ahead, but it still needs to meet the laws of physics and cautious regulators in the real world. We’re only at the very beginning of the journey.
Dig deeper:
Humanity needs AI to ensure we have enough knowledge to manage our societies.
Our view on the future of an ecosystem of AI models that would benefit scientific discovery.
🔥 GLP-1 drugs: big buzz, quiet cultural shift
A year ago, I wrote:
GLP-1s, like Ozempic and its more powerful cousins, sit as a layer between the capabilities of a modern technological market economy and the limitations of the human reward system. […] As we turn the corner towards a society of AI, the speed of interactions between those technologies will increase. And that increase will produce benefits, from abundant energy to new medicines, to wider access to education. But it will also objectively and subjectively harness us to a pace we will struggle to manage.
A lot happened this year to reinforce my belief in GLP-1s. Several more studies showed how widespread the positive impact of these drugs could be. Obesity in the US peaked, a turn we attributed to the widespread use of drugs like Ozempic. Beyond weight management, research has found that they could help with neurodegenerative diseases like Alzheimer's, opioid and alcohol use, and even offer relief from anxiety and intrusive thoughts. Biden deemed them useful enough to propose expanding Medicaid to cover them. The broader cultural shift won't happen overnight; undoubtedly, we’ll continue to learn about the effects of GLP-1s and how they might help us manage challenges spanning illness to addiction.
🔥 Robotics and humanoids: more spark than expected
Humanoid robots impressed even more than I expected. A year ago, I wrote…
They may not be at their ChatGPT moment yet, but humanoid robots are making meaningful progress. In theory they should be really difficult — Moravec’s paradox and all that. But in practice, the operating conditions for humanoid robots may be easier than we expect. Unlike self-driving cars, they amble slowly, and they can be geofenced to specific environments.
I was on the right path, for example when I said that countries with ageing populations would need robots first: South Korea has become the first country where robots constitute 10% of the industrial workforce. Where I’ve changed my mind is about where humanoid robots will be useful first. Instead of general-purpose robots in factories, we’ll see specialised robots in areas like solar-panel installation. Cosmic Robots, a startup I invested in, builds robots that can install panels 10x faster than traditional methods, helping to address labour shortages and the huge demand for renewable infrastructure. And as I said last year, I still believe humanoid robots can be very useful in some situations, but we need to give serious consideration to how present we want them to be in our everyday lives.
The main takeaway, for me, is that change is messy. Technology evolves fast, and humans take a while to figure out what to do with it. That process often ends up being even more complex than tech innovation in the first place.
Thank you for sharing this ride with us. We’ll put these lessons to use when we look ahead to 2025. If you have your own reflections or spot trends we missed, drop them in the comments. Let’s keep mapping the landscape together!