I've been contemplating how to make technologies beneficially available, a question I discuss in my book, The Exponential Age (or Exponential outside of the US). The recent debates around AI safety and societal benefit have sharpened that question: Italy's privacy regulator shut down ChatGPT, exposing the exponential gap between technological advancement and the ability of institutions to respond. The past couple of weeks have truly stirred the pot, from extreme doomer claims to more measured expressions of concern.
I'm sending these essays more frequently than usual because so many debates around AI and technology are live now. Later this week, I'll send another note on why we are seeing an acceleration in AI development and what that means.
Azeem
We have long recognized the need for limits on many technologies. The pharmaceutical industry, for example, must put its products through rigorous testing and explain their side effects and benefits before bringing them to market. We've moved past the reckless days of selling radium-laced cough syrup. The IT industry, for historical reasons, managed to avoid any real regulatory oversight.1
For all its many benefits, AI brings with it three complications:
Most proximate are matters of equity, bias, and fairness as AI is embedded in the systems used by firms and government. To quote Tom Clancy, these are a “clear and present danger.”
There are then issues specific to AI in the context of the Exponential Age: it is a general-purpose technology that will redefine how the economy works. New firms will arise, and others will die. Many types of employment will cease or face wage pressure, and new classes of work will emerge. Some firms may accumulate excessive power. All of this could happen at a faster clip than it did with the Internet or electricity.
We then need to grapple with the value alignment problem. Will AI systems align with human values? Such values are often abstract, conflicting and changing — and we often don’t know what they are. How do we ensure the complicated systems built around AI serve us well? This conundrum extends to the existential implications that have dominated recent discourse. This isn’t the first time we’ve had to deal with aligning artificial entities to human values. Companies are such entities2 and we probably have a patchy record of “aligning” them to what matters to us.3
Ultimately, society’s rules should be decided by legitimate political authorities. But while that sleepy-headed leviathan wakes up, we must get the ball rolling.
Quick not dirty
I propose three practical steps that can be initiated without requiring complex coordination between governments. The purpose is to ensure we invest in the capabilities that will best support the beneficial development of this crucial technology.
Let’s take them one at a time.
Increase scientific research on trust and alignment: Develop a better understanding of how to build trustworthy AI systems, tackle the value alignment problem, and ensure AI safety. This research could also generate new ways of classifying and measuring risks and benefits. Consider how the hazard ratio has become a cornerstone of clinical trials. This ratio was only introduced to the literature in 19724, supplanting older metrics.
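As a brief aside for readers unfamiliar with that metric, the sketch below shows how the hazard ratio falls out of Cox's 1972 proportional-hazards model. The notation is standard textbook shorthand supplied purely for illustration; nothing in it is drawn from the essay itself.

```latex
% A minimal sketch of the Cox proportional-hazards model (Cox, 1972).
% h(t | x) is the hazard at time t for a subject with covariates x;
% h_0(t) is the baseline hazard and \beta the fitted coefficients.
h(t \mid x) = h_0(t)\,\exp\!\left(\beta^{\top} x\right)

% For a single binary covariate (say, treatment vs. control), the hazard
% ratio is the constant factor by which treatment scales the hazard,
% independent of t; this single interpretable number is what made the
% metric a cornerstone of reporting in clinical trials.
\mathrm{HR} = \frac{h(t \mid x = 1)}{h(t \mid x = 0)} = e^{\beta}
```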
Establish an observatory to monitor AI developments: Create an independent expert body, composed of scientists and engineers from various disciplines, that keeps a beady eye on AI research and development, from early research to GitHub repos. It would identify trends and milestones, as well as potential risks or poor standards. This observatory would serve as a trustworthy source for public discussion and help detect complex risks that could emerge at the national or societal level. It would help inform what resilience and preparedness should look like in an AI-first world. This group could also track and evaluate the impact of open-source AI projects.
Form citizens' juries or deliberative mechanisms to discern values and preferences: Instead of relying solely on existing democratic processes, engage citizens at various levels to better understand human values in relation to AI's potential impact on our lives. This approach has been successful in Ireland's abortion debate, Brussels' climate change initiatives, and Paris' ongoing deliberative processes. Just today, President Macron declared he wanted to see more citizens' juries in France.
These projects should be independent of the large AI research labs (Google, OpenAI, Microsoft, Meta and others) but funded by them. The work would be run by an independent entity with credible expertise and legitimacy, funded through an irrevocable trust5. Initially, this trust would be fuelled by a couple of billion dollars from those big firms, sufficient to ensure the execution of the research, observational and deliberative work streams. Many exceptional research institutions, among them the Ada Lovelace Institute6, the Oxford Internet Institute, the Berkman Klein Center and the Distributed AI Research Institute, could take on the work. More could be created.
This isn't a substitute for government action; it's a precursor that provides empirical and logical bases for decisions, identifies emergent risks, and involves citizens in determining regulatory frameworks.
Historically, early regulatory interventions have proven effective. For example, establishing the Food and Drug Administration (FDA) in the United States helped regulate food and drug safety, eliminating dangerous products from the market. Despite being heavily regulated, the global pharmaceutical industry is a $1.5trn business and has grown by 6.6% annually since 2001.
Another instance is the implementation of automobile safety regulations, which led to seatbelt requirements, crash tests, and other safety improvements, drastically reducing traffic fatalities. Seatbelts, mandated in new American cars in 1968, reduced road deaths but did not shrink the industry. It isn't clear that earlier intervention would have held the industry back; it would merely have saved thousands of lives.
Just a start
This approach is practical, affordable and achievable, requiring only that well-funded firms be persuaded to act. It would also create adequate incentives to widen the scope of AI science, whether that means more open-source AI, better research into resilience and robustness, or a deeper understanding of regional specificities.
However, industry consortia alone aren't enough. Self-regulation is inherently patchy, limited and self-serving. Firms may gesture towards Ulysses contracts, but they are hardly likely to make them happen.
Governments must also create rules and frameworks, informed by research and public input, to govern AI effectively7. By following these steps, we can establish the fact base to inform those rules.
By 2023, humanity has lived through the introduction of powerful technologies many times. We have a pre-history with the flint axe, the wheel and pottery. We have a history with many more recent inventions, and the benefit of hindsight that our forebears at the time of Gutenberg or Watt perhaps did not have. Turning this hindsight into foresight could improve the way in which these advances make their way into the world.
Azeem
P.S. The video is optional, but I make some other points that don’t make it into the essay.
P.P.S. If you know someone in one of the big tech firms or in government who should read this, please forward it to them. 🙏🏽
1. One important reason is that the industry came to maturity during a period of broad regulatory rollback.
2. Listen to my discussion with the historian David Runciman on this question.
3. Consider climate change and profit-seeking oil firms, for example.
4. D. R. Cox, "Regression Models and Life-Tables", Journal of the Royal Statistical Society: Series B (Methodological), Jan 1972.
5. The Facebook Oversight Board has similarly limited funding.
6. I was a board member at the Ada Lovelace Institute until 2021.
7. There are also existing frameworks, such as GDPR and privacy laws, that can be applied to many of these issues.