Discover more from Exponential View by Azeem Azhar
🔮 The AI executive order; Krispy Kreme’s sugary crash; disruptive transition; Amazon’s boo-boo ++ #447
Hi, I’m Azeem Azhar. As a global expert on exponential technologies, I advise governments, some of the world’s largest firms, and investors on how to make sense of our exponential future. Every Sunday, I share my view on developments that I think you should know about in this newsletter.
The latest from Exponential View
Sunday chart: How open is the government to open-source?
The AI Safety Summit in the UK and the Executive Order on Safe, Secure and Trustworthy Artificial Intelligence in the US signal an intensification of government intervention in artificial intelligence. Both events demonstrate a growing commitment to address concerns about AI risks that have entered the public consciousness.
The US took broad and sweeping action to regulate AI, covering many areas of civil society. The scope is vast — for a brief overview, check out the fact sheet here. It’s unusual for a state to regulate a technology so extensively before maturity. Perhaps the lessons learned from social media influenced this proactive approach.
The summit, on the other hand, aimed to build global consensus on AI risk and open up models for government testing - both of which it achieved (see here for Ian Hogarth’s overview).
While safety is critical, some argue that government regulation of AI could also serve as a “wolf in sheep’s clothing” — a means to consolidate control over AI gains in the hands of a few. As Yann LeCun recently called out, leaders of major AI companies including Altman, Hassabis, and Amodei may be motivated by regulatory capture more than broad safety concerns. Andrew Ng has made similar arguments that there is a financial incentive to spread fear. It’s imperative to ensure any government oversight balances both safety and open-source as AI capabilities advance.
Luckily, there were sufficient voices for open-source at the UK summit. The UK’s Deputy PM, Oliver Dowden, said open-source had “huge potential” - and its ideals were represented by Yann LeCun and Arthur Mensch.
The order was also somewhat positive for open-source - there was no mention of licensing requirements or liability, two things that would cripple open-source’s ability to compete. Antitrust, on the other hand, was a notable focus, though without specifics. This will be a key area — as we covered in the newsletter last month, big tech companies are increasingly forming interlinkages with AI companies, highlighted by Google’s ($2 billion) and Amazon’s ($4 billion) recent investments in Anthropic.
The criteria for model oversight are based on technical thresholds, specifically 1e26 floating point operations (FLOP), which is roughly 5x the training compute used for GPT-4. To put this in perspective, you’d need nearly 6 quadrillion Excel worksheets, each with roughly 17 billion cells, to record that many calculations.1 There should be a strong justification for why this particular threshold was set. Government intervention requires a high bar of evidence, so understanding the scientific basis is critical - as it is with the physics underlying uranium regulation. What’s more, this cut-off level is prone to rapid obsolescence due to computing’s fast pace.
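The worksheet comparison is easy to sanity-check. A minimal back-of-envelope sketch, assuming (as in the footnote) that each Excel cell records a single FLOP and that a worksheet is the maximum Excel grid of 1,048,576 rows by 16,384 columns:

```python
# Back-of-envelope check of the executive order's 1e26 FLOP threshold,
# assuming one FLOP per Excel cell and a maximum-size worksheet.
THRESHOLD_FLOP = 1e26                    # oversight threshold in the order
CELLS_PER_SHEET = 1_048_576 * 16_384     # ~17.2 billion cells per worksheet

sheets_needed = THRESHOLD_FLOP / CELLS_PER_SHEET
print(f"{sheets_needed:.2e} worksheets")  # ~5.82e15, i.e. roughly 6 quadrillion
```

The same arithmetic shows why the threshold is fragile: training compute for frontier models has been growing by orders of magnitude every few years, so a fixed FLOP cut-off can be overtaken quickly.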
Open-source AI still faces challenges because the order is capability-centric. Projects and startups may find it difficult to navigate the evolving compliance landscape, lacking larger, established companies’ resources. Andrew Ng believes that the right way to regulate is at the application layer, where regulations would be applied based on what AI is used for. For example, AI in biotech would have more stringent requirements than AI for copywriters.
Regulation is essential for managing the complex ethical and safety challenges of AI, yet it’s equally critical to promote a regulatory environment that spurs innovation and upholds the democratic nature of AI development. The executive order’s ambition is commendable and generally well-directed, but it still falls short in ways that may benefit incumbents disproportionately.
📆 💡 Every quarter we welcome a handful of organisations to showcase their work to our readers as newsletter sponsors. We’ve just opened our sponsorships calendar for booking in Q1’24 — now is the right time to get in touch if you want to share your work with 100,000s of decision-makers who read Exponential View every week.
ICE’s existential risk. The automotive sector is at a crossroads. Professor Ray Wills forecasts that ICE vehicle sales will be completely eliminated by 2027, an existential warning for companies clinging to ICE vehicle production, akin to Kodak’s collapse. This is even faster than Tony Seba’s famous 2030 prediction, and more aggressive than my current view — which I’ll update. Contrary to trends, BMW, Toyota, and Nissan are still clinging to plans for a majority production of ICE cars in 2030. This hesitation could be deadly. Meanwhile, the EV market is flourishing, evidenced by a 39% jump in YoY sales in Q2 2023. This optimism is further bolstered by a recent study finding that of the 15,000 cars analysed only 1.5% had their batteries replaced, challenging the anticipated obsolescence narrative.
Solar module prices have dropped dramatically this year, now at $0.14/watt. This is largely driven by China’s oversupply situation.
Novo boom, Krispy Kreme gloom. Novo Nordisk is now Europe’s most valuable company. Their weight-loss marvel, Ozempic, is flying off the shelves faster than hotcakes—or, should I say, faster than the high-demand drug itself can be produced. There’s a creeping concern that Ozempic’s demand is syphoning off clientele from an unexpected source: the American fast food industry. The drug’s ability to curb appetites is sounding analysts’ alarms over Krispy Kreme, whose dopaminergic sugar highway may soon hit a dead end. Yet, Ozempic’s $10,000 annual sticker shock might leave some pockets feeling peckish. Patience may be needed, with the patent only expected to expire in 2033.
AlphaFold levels up. DeepMind’s revolutionary protein folding AI AlphaFold continues to advance. The latest upgrade extends its prediction abilities from protein structures to other biological molecules, such as nucleic acids and ligands. AI is generating excitement for its potential to accelerate early drug discovery before clinical trials. Insilico Medicine, an AI-focused biotech firm, reported advancing an AI-designed drug candidate from discovery to phase I of clinical trials in just 30 months, notably below the traditional average of six years for this stage of development. However, independent verification is needed, along with clinical trial results to determine if AI-discovered drugs exhibit superior efficacy and safety. While limitations remain, AlphaFold’s dramatic advances demonstrate that AI’s potential to transform drug discovery continues to grow. By shortening the R&D cycle, AI is demonstrating yet another compression of the decision loop.
A year on from the $44 billion acquisition of Twitter, Musk now values the company at $19 billion, marking a 55% depreciation.
GPT-3.5 turbo only has 20 billion parameters, 8.75x smaller than GPT-3 (175 billion).
The Chinese Yuan has surpassed the Euro to become the second-most used currency in global trade financing.
Globally, men commit over 90% of homicides and represent 70% of the victims, a pattern mirrored in chimpanzees where 92% of killers and 73% of victims are male.
Critical battery components, lithium carbonate and spodumene concentrate, have experienced sharp drops in prices in China, declining 69.48% and 64.46% year-to-date, respectively.
As of Q3 2023, the US has surpassed the total 2022 installations of large-scale battery energy storage systems (BESS).
Short morsels to appear smart at dinner parties
🌐 An ambitious plan for open-access publishing.
🏆 Content creators are winning the battle for the news.
🧠 First digital atlas of human foetal brain development published.
🔍 AIs can guess where Reddit users live and how much they earn.
❓ Are we having a moral panic over misinformation?
🎶 I get by with a little help from AI. The final Beatles record.
Credit where credit is due. I really think the AI Safety Summit achieved a great deal of its goals — and perhaps more. There was widespread public discussion about sociotechnical risks, not just catastrophic ones. There was the appointment of Yoshua Bengio to lead a new research project. Representatives of China and the US stood on stage together and talked about common perspectives. And there is a declaration of intent with international support. In two days, what more can we ask? I was surprised to the upside.
There are still points of disagreement, of course; that is to be expected. But the hard questions about this technology - and it is so many-headed and complicated - are being asked in a constructive way, with policymakers at the table.
Well done to Matt Clifford and the many people who worked to make this happen.
What you’re up to — community updates
Shamil Chandaria gave a lecture at Oxford University this week on whether AI can become conscious.
Laure Claire Reillier and Benoit Reillier are running the Platform Leaders event on 9 Nov at the Science Museum in London.
Vess Clewley on using journey management to keep companies on track.
Share your updates with EV readers by telling us what you’re up to here.
Assuming each cell represents a single FLOP.