
🐙 Let’s talk about extinction

Extraordinary claims require extraordinary evidence

An earlier version of this essay wrongly suggested Yann LeCun had signed the CAIS paper, which, of course, he hadn’t at the time of writing, 3 June 2023. My mistake. Yann’s commentary on these issues is well worth reading via his Twitter feed.1

We need to talk about AI risk, specifically the existential risk posed by the development of AI: the risk, if you haven’t managed to escape the discussion, that it wipes out all humans. Earlier this week, the new Center for AI Safety issued a petition signed by many, though by no means all, AI researchers, including Yoshua Bengio, Geoff Hinton and Demis Hassabis.

It said that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

I wanted to share my reflections on AI risk and, in particular, the current debate on existential risk. I’ve thought about these wider questions for a few years. For more than a decade, I’ve had to contend with issues of fairness, bias and transparency in automated decision systems and machine learning. I spent time as a trustee of the Ada Lovelace Institute, an independent ethics research group looking at AI and data. My podcast has had many guests on this topic, including Fei-Fei Li, Joanna Bryson, Rumman Chowdhury, Meredith Whittaker, Kate Crawford, Demis Hassabis, Gary Marcus and others.

Bad predictors

It’s a topic I’ve given real thought to, and I want to explain what I think is going on. Being an expert in the core research of neural networks does not make someone a great forecaster, especially on questions of society, the economy or geopolitics.

The best example I can give is that of Geoff Hinton.
