Great analysis Azeem and extremely actionable governance solutions. I've enjoyed your newsletters for many years now, but the video format is excellent--easy to consume and perfect length.
The new video format is fantastic. Thanks for thinking to start this.
Appreciate it. What is it about the format that works?
The ability to consume the content in a different way, more flexibly — e.g., when holding the baby, cooking, or doing other odd tasks. Reading needs sustained quiet, a rare resource.
Seeing your smile and sign-off are nice too :)
I agree, Azeem - we need to find a way for civil society to engage, and I often wonder where the voice of the non-human inhabitants of Earth is in this. We don't just need an appreciation of ethics, but environmental ethics too. I'm enjoying the videos; sometimes it's just nice to put a face and a voice to the writing, and if it were possible to have them subtitled - even better.
The three practical steps are a fine foundation for thinking about the near term and the value of near term action. For my money, they are qualitatively more useful at this stage than complex plans for improving AI or calls for a pause.
I think the BLOOM model from BigScience and Hugging Face is worthy of attention in addition to the better-known players because of their concern for clean energy and the steps toward equity represented by their multi-country, multi-language orientation.
I think a fourth complication deserves attention: the exponential increase in the danger of war as AI is integrated into defense systems. The danger comes from both accidental behavior and the tendency of unbalanced markets, raw material access, and technology to lead to war.
Azeem, thanks for this post.
Azeem, it is a pity that the 3 initiatives you outline are not already happening. Your analysis and perspectives on this are leading edge...and although the genie is out of the bottle, there is work to do to, as you point out, close a number of gaps. Can this community somehow catalyze activity?
Azeem, I disagree with your proposal for three reasons.
First, I find the analogy with pharma incorrect. With novel drugs, we are testing specific drugs, not the industry overall. In the case of AI, we can’t isolate precise use cases and run trials for them, which would be the true analogy. Instead, we are looking at the field as a whole. A better analogy would be to imagine what would happen if we issued a blanket ban on all synthesised drugs until we were sure that synthesising drugs is a good idea. We’d never get there.
Second, if I imagine using your approach at the beginning of any other technological revolution, I’m pretty sure none would have happened. For example, if we had waited to start using the WWW (or mobile phones, or radio, or electricity) until we were satisfied that we wouldn’t do more harm than good, we’d never have deployed them. There are always new and credible risks, and sometimes they are severe (e.g. fossil fuels and climate change).
Third, involving citizens directly may work better with easy-to-understand issues like abortion (or even climate change). Asking an average citizen (forgive my arrogance) to have an informed opinion on the impact of an AGI, when even the leading figures in the field disagree, doesn’t strike me as a reliable process. (On a side note, I think your approach ignores the fact that it’s hard enough to run these processes in one country and culture, e.g. residents of Paris, but it gets exponentially more complex if we need to include the entire world, which we must, as AI would impact everyone.)
Finally, I think it could be helpful to differentiate between two types of risk: existential and not. I would argue that non-existential risks (e.g. jobs being automated away) should be approached using existing frameworks and tools, but existential risks need a different approach, like we already have for nuclear weapons. Making sure the paper clip problem never becomes real is important, and it doesn’t require aligning on values. If there’s anything nearly everyone would agree on, it’s that it’s good to be alive.
ps video format is great: I watched it as I was making my morning coffee instead of sitting at my desk. Thanks for making the video, it is thought-provoking!
I don't agree with that.
The non-existential risks do not easily fit existing frameworks for regulation or even economic orthodoxy. In the same way that we needed new rules around market manipulation with the arrival of the telegraph, we'll need new rules for this new era. Those rules might (in some cases) be monitored and enforced by existing institutions, but in many cases will require new institutions.
My proposal is about kick-starting the science, the evidence base and the understanding of values which would help the formation of these new institutions.
On deliberative democracy, you've dismissed the quality of the process so far. It has been used extensively for hard problems - including, for example, policy questions around automated facial recognition (something the Ada Lovelace Institute did some years ago). See for example: https://www.adalovelaceinstitute.org/report/trust-data-governance-pandemics/
There is a particular methodology to citizens' juries which makes them appropriate for aspects of even hard technical questions.
They're all good ideas, Azeem, but none of them are regulatory solutions. Ultimately, a regulatory solution needs a regulator, no? The UK Government's current approach to AI regulation (https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper) is to distribute responsibilities across existing regulators while coordinating and aligning them. Time will tell whether this will work.
The research agenda, observatory and citizens' juries are all great, and there's already work in this space to build on in the UK, e.g. https://www.cndp.us/citizens-juries-artificial-intelligence/ and https://www.ukri.org/opportunity/responsible-and-trustworthy-artificial-intelligence/
Agree. They are pre-regulatory solutions. In other words, the activities can help inform the kinds of regulators required and what their roles and responsibilities would be.