Will AI kill us all?
As readers of Exponential View will know, I doubt it. In fact, I profoundly doubt it, and I believe the AI doom narrative is largely unhelpful.
See, for example, my essay dissecting p(doom) to understand the assertions behind existentialist arguments.
However, I'm also keenly interested in checking my assumptions, challenging my thinking, and helping you make up your own mind. To that end, here is my latest conversation with a guest who thinks we are (almost) doomed.
Connor Leahy is the co-founder and CEO of Conjecture, an AI startup working on controlling AI systems and aligning them to human values. He’s also one of the most prominent voices warning of AI existential threats.
In this conversation, Connor and I go into…
How Connor perceives AI risks to humans, from the mundane to the existential.
Whether it is too late to co-evolve safeguards to manage AI risks, as we have with the Internet and cybersecurity.
Which technological methods could lead to developing safe AGI.
Whether compute regulation, strict liability laws, a global kill switch, and public participation are the right approaches to AI safety.
Timestamps:
00:00 Introduction
01:05 The “Pause AI” letter
07:00 Co-evolution of safeguards
10:05 The speed of change
22:01 Turning the safety agenda into action
30:30 Compute as a means for control
36:05 Practical approaches to AI safety
50:38 The promise of AI
57:58 Building safe and aligned AI
01:05:46 Hopes for the year to come
Where to find Connor:
LinkedIn: https://www.linkedin.com/in/connor-j-leahy/
X: https://twitter.com/npcollapse
Conjecture: https://www.conjecture.dev/about
Where to find Azeem:
Website: https://www.azeemazhar.com
LinkedIn: https://www.linkedin.com/in/azhar/