💡The complexity of AI governance
Navigating a narrow corridor ahead of the AI Safety Summit hosted by the UK
Two recent essays made me think further about the challenges and opportunities for forging global agreements on regulating artificial intelligence.
In the first, EV member Mustafa Suleyman, co-founder of DeepMind and of Inflection AI (maker of Pi), argues that the U.S. could use its influence over GPU suppliers like NVIDIA to enforce minimum standards. In the second, Nathan Benaich and Alex Chalmers suggest excluding China from the upcoming AI Safety Summit due to its bad-faith behaviour. These diverging perspectives encapsulate the intricate issues facing AI governance and compel us to examine the role of nation-states, industries, and global institutions in forging sustainable solutions.
I know that many readers are trying to make sense of this. So, I took a moment to record a commentary on this complex question for paying members of Exponential View. I will break down…
The evolving landscape of technology governance,
The complex case of excluding China,
How to approach building a resilient process for AI governance.
As we navigate this narrow corridor, the pressing need is not just for standards or rules, but for a dynamic framework that accommodates the multifaceted nature of AI and its implications for global society.
⏳ If you’re short on time, check out the digest of my commentary below.
The evolving landscape of technology governance
Historically, technology standards have been market-driven, largely determined by American firms and backed by the U.S. government. The Internet’s own adolescent development during the 1980s and 1990s reflected a particular set of values associated with the West Coast of the U.S., and it framed how much of the Internet still operates today. But this isn’t a question of whether those standards were good or not…
This is a question about what processes lead to certain global agreements, and whether those agreements should happen at the nation-state level or involve multiple stakeholders. The Internet has been governed through a multi-stakeholder process for the last thirty years or so, and it has done quite well. There are examples in other industries, too. In financial services, Basel II and its successor Basel III1 brought together international and national regulators, as well as bankers themselves, in a process to figure out what stability ought to look like, what tools are available to maintain it, and which triggers should flag potential risks.
Systemic risks and dual-use dilemma
AI is not a singular entity but an interconnected system, making it vulnerable to cascading failures that could have global ramifications. Additionally, the dual-use nature of AI—its application for both beneficial and harmful purposes—necessitates established principles to guide its development and use. The urgency for collective action is magnified against a backdrop of geopolitical fragmentation, epitomised by the emergence of new power blocs like BRICS2 challenging a dollar-dominated global economy.