🐙 Three quick steps towards beneficial AI
How to make an amazing technology amazinger
I’ve been contemplating how to make technologies beneficially available, as I discuss in my book, The Exponential Age (or Exponential outside of the US). Recent debates around AI safety and societal benefit were sharpened when Italy’s privacy regulator shut down ChatGPT, exposing the exponential gap between technological advancement and the ability of institutions to respond. The past couple of weeks have truly stirred the pot, from extreme doomer claims to more measured attestations of concern.
I’m sending these essays more frequently than usual because so many debates
around AI and technology are live now. Later this week, I’ll send another note on
why we are seeing an acceleration in AI development and what that means. Azeem
We have long recognized the need for limits on many technologies. Pharmaceutical companies, for example, must subject their products to rigorous testing and explain side effects and benefits before bringing them to market. We’ve moved past the reckless days of selling radium-laced cough syrup. The IT industry managed, for historical reasons, to avoid any real regulatory oversight.1
For all its many benefits, AI brings with it three complications:
Most proximate are matters of equity, bias, and fairness as AI is embedded in the systems used by firms and government. To quote Tom Clancy, these are a “clear and present danger.”
There are then issues specific to AI in the context of the Exponential Age: it is a general-purpose technology that will redefine how the economy works. New firms will arise, and others will die. Many types of employment will cease or face wage pressure, and new classes of work will emerge. Some firms may agglomerate excessive power. All of this could happen at a faster clip than our experience with the Internet or electricity.
We then need to grapple with the value alignment problem. Will AI systems align with human values? Such values are often abstract, conflicting, and changing — and we often don’t know what they are. How do we ensure the complicated systems built around AI serve us well? This conundrum extends to the existential implications that have dominated recent discourse. This isn’t the first time we’ve had to deal with aligning artificial entities to human values. Companies are such entities2 and we have, at best, a patchy record of “aligning” them to what matters to us.3
Ultimately, society’s rules should be decided by legitimate political authorities. But while that sleepy-headed leviathan wakes up, we must get the ball rolling.
Quick not dirty
I propose three practical steps that can be initiated without requiring complex coordination between governments. The purpose is to ensure we invest in capabilities that support the beneficial development of this crucial technology.
Let’s take them one at a time.