🔮 OpenAI in crisis; adaptation gap; DataZilla; elections; fast internet ++ #449
An insider’s take on AI and exponential technologies
Hi, I’m Azeem Azhar. As a global expert on exponential technologies, I advise governments, some of the world’s largest firms, and investors on how to make sense of our exponential future. Every Sunday, I share my view on developments that I think you should know about in this newsletter.
The latest posts
If you’re not a subscriber, here’s what you missed recently:
✨ If I had to pick one AI tool... this would be it.
Watch now (66 mins) | There are so many new artificial intelligence products out there. Which ones are really worth your time? If I had to pick one, it wouldn’t be ChatGPT or Claude. It would be Perplexity.ai. Since 1 October I’ve logged more than 268 queries on Perplexity from my laptop alone (I use it on my phone, too). It’s displacing a large number of my Google searches.
🚀 Thanks to our sponsor Cisco, the worldwide technology leader that securely connects everything.
Sunday chart: What happened to plan B?

We’re not on track to meet the 2030 climate goals. The State of Climate Action 2023 report highlights this, finding that of the indicators it tracks, only electric vehicles are on course to hit their target. EV adoption increased from 1.6% in 2018 to 10% in 2022, surpassing the crucial 5% tipping point for mass adoption. Credit to the report for understanding S-curve dynamics: initially slow growth in technology adoption can give way to rapid increase once certain thresholds are crossed. However, the report finds that other S-curve indicators, such as zero-carbon electricity generation and green hydrogen production, are not on track.
It is important to acknowledge that S-curves are quite sensitive to their inputs; they can accelerate much more rapidly if the private or public sector changes its behaviour. This means that despite the pessimistic outlook, changes can still take place that have a large effect.
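To make that point concrete, here is a minimal sketch of logistic (S-curve) adoption. The parameters are illustrative assumptions, not figures from the report; the point is how much the timing of mass adoption shifts when the growth rate changes:

```python
import numpy as np

def logistic_adoption(t, a0=0.05, k=0.3, t_start=2022):
    """S-curve adoption: share a0 at t_start, growing at rate k."""
    log_odds0 = np.log(a0 / (1 - a0))
    return 1 / (1 + np.exp(-(log_odds0 + k * (t - t_start))))

years = np.arange(2022, 2051)
# Illustrative rates only: starting from the ~5% tipping point,
# a modest change in growth rate moves mass adoption by years.
for k in (0.20, 0.30, 0.45):
    share = logistic_adoption(years, k=k)
    year_half = years[share >= 0.5][0]  # first year above 50% adoption
    print(f"growth rate {k:.2f}: 50% adoption around {year_half}")
```

Under these assumed numbers, nudging the annual growth rate from 0.20 to 0.45 pulls the 50%-adoption year forward from 2037 to 2029, which is why behaviour change by the public and private sectors matters so much.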
At Exponential View, we think that over the next 7-10 years things will move faster than expected: that is the shape of the exponential ramp. Climate analysis is often more pessimistic than it should be. Yet it is true that we aren’t significantly reducing emissions. Plan A is falling behind.
Plan B means looking seriously at higher-risk mitigation strategies, like solar radiation management, and being more honest about adaptation. Startups focused on adapting to climate change impacts received a mere 7.5% of global climate tech funding during 2019-2020. This underfunding is stark, especially considering the UN Environment Programme’s identification of a global annual funding gap of $194-366 billion for adaptation projects. The world is already feeling the effects of climate change, and adaptation is key to managing these impacts. Yet our current efforts are far from treating it as a high priority.
🧬 Today’s edition is supported by our partner Cisco.

According to the inaugural Cisco AI Readiness Index, 97% of surveyed organisations say the urgency to deploy AI-powered tech in their company has skyrocketed in the past six months. But here’s the reality: intentions are outpacing actual abilities, with 86% of companies globally not fully prepared to leverage AI to its fullest potential. The index, based on a double-blind survey of 8,161 private-sector professionals involved in AI integration or deployment, examines factors such as strategy, infrastructure, data, talent, governance, and culture. It’s your ultimate guide to accelerating AI adoption, unlocking business value, and enhancing experiences for both employees and customers.
Key reads
The OpenAI crisis. I’ve written about some of the things you should be thinking about in the context of the crisis at OpenAI. I’ll briefly tackle what comes next in my End Note this week. The rest of the Key Reads were written before Altman’s ouster.
LLiMits. Before he was fired, Sam Altman spoke at Cambridge University about the limits of large-language models for achieving artificial general intelligence. He said, “We still need another breakthrough,” echoing Yann LeCun’s sentiments. This perspective aligns with the widespread agreement I’ve encountered in discussions with developers of foundational language models at organisations like OpenAI, Anthropic and others. The consensus suggests a few more years of progress within the current paradigm before a new scientific breakthrough is required. Of course, it’s never clear how long such a breakthrough will take to come, then to be accepted, and then to be turned into working code.
Electricity took five decades from Faraday’s first experiments in electromagnetism to the first public electricity distribution; antibiotics took around 15 years from Fleming’s mould to widespread use; nuclear power took just nine years from Szilard first imagining a nuclear chain reaction as he crossed the road in Russell Square to Chicago Pile-1.¹ Yet MRI machines for medical imaging emerged three decades after nuclear magnetic resonance was first discovered. Arguably, the theory of evolution has still to achieve majority acceptance 160 years on from On the Origin of Species.
So, how long might this breakthrough take? Such punctuated steps often rely on interdisciplinary research, collaboration between the public and private sectors, strong incentives and luck. So, the answer? It depends.
See also:
Arthur Mensch, co-founder of Mistral AI, on why the EU AI Act should focus on product safety rather than regulating foundational models.
Large-language models can strategically deceive their users when put under pressure.
GPT gets a Ph.D. A Microsoft study highlights the potential of GPT-4 for complex scientific tasks in fields such as biology and materials design. It shows predictive abilities valuable for drug discovery, though its accuracy still needs improvement in more maths-heavy fields. Despite current limitations, GPT-4 is becoming an advanced tool for scientific research. Meanwhile, Argonne National Lab is ambitiously developing AuroraGPT to integrate the world’s scientific knowledge into a chatbot, with the goal of making research faster and more efficient. If it works, AuroraGPT could be a key tool for scientists.
See also:
How LLMs are transforming education and their textbook (in-)accuracy.
BioRxiv is trialling an AI that writes summaries of preprints, adapted to your knowledge level.
DeepMind has just made weather forecasting more accurate.
Microvote. As The Economist points out, 2024 will be the biggest electoral year in history. That makes it tempting for authoritarian regimes to weaponise technology to interfere in elections worldwide. Private firms, like Microsoft, are stepping in to protect the democratic process through initiatives such as digital authentication of campaign content and the Election Communications Hub. While these initiatives are useful for fighting disinformation, they raise questions about public institutions’ increasing reliance on Big Tech. Reliance is ultimately power.
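For the technically curious, the core of content authentication is simply a digital signature over the media, the primitive underneath provenance schemes such as C2PA, which Microsoft’s effort builds on. A minimal sketch in Python (the keys and payload are illustrative, not Microsoft’s actual pipeline):

```python
# Minimal sketch of authenticating campaign content with a digital
# signature -- the primitive beneath provenance schemes like C2PA.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

campaign_video = b"...bytes of a campaign ad..."  # illustrative payload

signing_key = Ed25519PrivateKey.generate()  # held privately by the campaign
verify_key = signing_key.public_key()       # published for platforms/voters

signature = signing_key.sign(campaign_video)

# Anyone holding the public key can check the content is unaltered.
try:
    verify_key.verify(signature, campaign_video)
    print("authentic: content matches the campaign's signature")
except InvalidSignature:
    print("tampered or unsigned content")
```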
Datazilla. The introduction of ChipNeMo, a custom large-language model from NVIDIA, marks a significant advance in the use of genAI to improve business processes, in this case semiconductor design. The model demonstrates potential applications in generating code, analysing designs and acting as a chatbot for technical queries. It shows how specialised industries can benefit from AI custom-trained on their internal data. Data is no longer just a resource to be managed but a strategic asset offering increasing returns to scale. As LLMs become more adept at interpreting and using internal data, the companies that own this data can further solidify their competitive edge, potentially leading to a significant concentration of power in the industry. The more data a company has, the more powerful and accurate its LLM becomes, creating a virtuous cycle that entrenches its market position. We’ve seen this before with big tech companies: the more users they had, the stronger they got. This trend is likely to spread to other industries if data stays exclusive to those who collect it.
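ChipNeMo’s actual pipeline involves domain-adaptive pretraining and retrieval over NVIDIA’s internal corpus; as a rough illustration of the core idea, here is a minimal sketch of continued pretraining on internal text using Hugging Face transformers. The base model and data file are placeholders:

```python
# Sketch: continued pretraining of a causal LM on proprietary text,
# in the spirit of (not reproducing) NVIDIA's ChipNeMo approach.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base = "gpt2"  # placeholder for a much larger base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Internal documents, one JSON record per line: {"text": "..."}
docs = load_dataset("json", data_files="internal_docs.jsonl")["train"]
tokenized = docs.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the resulting model encodes the firm's private data
```

The moat, in other words, is `internal_docs.jsonl`: anyone can download the base model, but only the incumbent holds the data.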
Market data
The world’s fastest internet connection has been demonstrated in China: a whopping 1.2 terabits per second, or roughly 150 HD films per second (see the arithmetic after this list).
Roughly 40% of the rise in US wage inequality between 1980 and 2019 has been reversed in the past three years.
Google pays Apple 36% of the search revenue it earns through Safari to remain the default search engine on Apple devices.
40% of the buildings in Manhattan could not be built today due to zoning laws.
Microsoft plans to spend >$50 billion annually on data centres in 2024 and beyond. For comparison, the Manhattan Project cost roughly $34 billion (in today’s dollars).
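On the first item, the films-per-second conversion implies roughly 1 GB per HD film, at the small end of typical encodes:

```latex
\frac{1.2\ \text{Tbit/s}}{8\ \text{bit per byte}} = 150\ \text{GB/s}
\;\Rightarrow\;
\frac{150\ \text{GB/s}}{\approx 1\ \text{GB per HD film}} \approx 150\ \text{films per second}
```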
Short morsels to appear smart at dinner parties
🧬A new form of gene therapy sharply lowers bad cholesterol.
🌈 A secret to happiness.
🌐 What the world’s largest LED screen says about the human condition.
🤖 Researchers 3D-printed a robot hand with working tendons.
🦠 Was an ancient bacterium awakened by an industrial accident? Gulp.
📼 Why cassette tapes flourished, and still endure.
End note
Here’s my quick summary on what might happen next after the OpenAI schism.
The board may have made a decision that reflects its responsibilities, but it was executed poorly and communicated even worse. In other words, it could be a good decision, badly handled. Alternatively, the board’s stewardship over previous months had been poor, forcing it into this sharp action.
I strongly suspect Sam and Greg will start one or several new ventures in the general AI arena. Until Friday it was hard to see any founder having sufficient credibility to raise enough to enter the foundation model space. Both Sam and Greg have this credibility. And funds exist, whether on corporate balance sheets, with sovereign funds or at VC firms. The shape of any such firm would be interesting given Sam’s strengthening conviction that existing models of economic organisation, including capitalism, need to change in a world of ubiquitous, capable AI.
Many of the “go faster” people will leave OpenAI, quite possibly to join any new venture Sam and Greg create. This team will likely iterate more rapidly than OpenAI ever did. And OpenAI was fast.
What will be left in OpenAI will be a “go slower, more methodical” group that will reflect in some way the more scientific culture of DeepMind. (Those cultural differences led to OpenAI securing a market lead over Google a year ago.)
OpenAI will need to think really hard about how it pays for itself. Sam had an unparalleled ability to raise funds; it’s unclear whether the OpenAI board and interim CEO can match it.
Microsoft is in a tough short-term position. The huge risk it took by partnering with OpenAI without board representation has exploded in its face. I expect the OpenAI team will work hard to keep Microsoft on side. Remaining at the mercy of a board that has handled this poorly is probably not an option. Microsoft could formalise its relationship with OpenAI (perhaps by splitting OpenAI’s activities) and secure better governance rights or board seats, or it could start to end its dependence on OpenAI by doubling down on its own capabilities.
Cheers,
Azeem
What you’re up to — community updates
Bharath Ramsundar announces the launch of the Scientific Foundation Models initiative to enable generative AI for materials design and to work towards autonomous scientific discovery agents.
Sir Geoff Mulgan writes about “The billionaire problem” for Prospect Magazine.
Marco Pittalis published a paper on the impact and benefits of advanced remote monitoring solutions for the operation of rural mini-grids in Sub-Saharan Africa.
Peter Scott speaks about existential risk with Jaan Tallinn, one of the founding developers of Skype.
Share your updates with EV readers by telling us what you’re up to here.
¹ It was December 1938 when Otto Hahn first succeeded in fissioning uranium, and a few weeks later when Lise Meitner and Otto Frisch came up with the explanation.