Hi all,
I’m in the Bay Area this week, meeting with AI founders and observing the AI Summit in Paris from a distance.

This global gathering has been described as a “pantomime of progress” and a “trainwreck”. It marks a vibe shift, bringing to an end the era of safety-oriented thinking that has pervaded public-sector debate on AI. That thinking can be traced back to Nick Bostrom’s 2014 book Superintelligence. Bostrom electrified an AI safety movement that focused on the extreme risks of the technology. The movement reached its apogee with the UK’s AI Safety Summit at Bletchley Park in 2023.
As Tim Hwang points out, implicit in that approach was “the idea that stability and safe development could be bootstrapped on the paradigms of global governance that drove the 20th century. […] The flaw was that the wheels were already coming off this machinery even as the safety community moved to influence it. Realism rules.”
With JD Vance’s speech, the shift is manifest.
The administration believes that AI will have countless revolutionary applications in economic innovation, job creation, national security, health care, free expression, and beyond. And to restrict its development now would not only unfairly benefit incumbents in the space, it would mean paralyzing one of the most promising technologies we have seen in generations.
Whether or not you agree with Vance in general, I would encourage you to listen to the first three minutes of his speech.
Two things I’d identify. First, in a decade of travelling around Europe, the US and the Middle East, AI is the first technology I’ve encountered where people wanted to spend more time on the downsides than on its potential. It was a bit like thinking about the trip of a lifetime and focussing entirely on the sore feet and jet lag at the end of the journey.
The second is that realism is back. It’s hard to reconcile notions of global governance with national summits where national interests (like France’s €109 billion commitment to AI infrastructure) are also promoted. Vance wasn’t the only one touting a nationalist realism.
Joanna Bryson, who participated in the Summit, assessed the scene in her typically measured way. She writes:
The AI Action Summit is in my mind a stunning success, for everyone except the UK. Even the corporations (who seemed a little back-footed) know they will come out better from a better regulated world than the one they are experiencing in the US right now. […] Absolutely everyone seemed to agree that transparency and audits are appropriate. Everyone said Vance could have been much more destructive to the process than he was, so took his participation as overall positive.
Miles Brundage, an AI policy researcher who left OpenAI last year, analysed JD Vance’s speech. He writes:
It was heartening to hear Vice President Vance talk about the need to protect both AI technologies and semiconductors from theft and misuse. By distinguishing between AI technologies and semiconductors, he presumably meant to refer to protecting model weights and/or algorithmic secrets and code. […] I hope that Vance’s remarks are a sign that there will indeed be a bipartisan consensus on this issue, since I’ve heard a lot of concern about it from folks on the left, as well — no one wants American AI to be trivially stolen by China and Russia (other than China and Russia).
And on Vance’s remarks on regulatory capture: