🔮 The collapse of 'long' AI timelines
The prospect of reaching human-level AI in the 2030s should be jarring
Nothing embodies the acceleration of AI progress more viscerally than the prospect of human-level AI arriving not in some distant sci-fi future, but within the span of a few years, or at most a couple of decades.
This is why I’m thrilled to publish this guest essay by Helen Toner, whose insights on the governance of advanced AI are deeply informed, unusually clear-headed and urgently needed. Helen is just beginning to share her thinking more publicly, launching her own Substack, Rising Tide, where this post was originally published.
But Helen’s no newcomer to these conversations. She is widely recognized as an AI policy expert and researcher with over a decade of experience in the field. She served on OpenAI’s board of directors from 2021 until the leadership crisis in November 2023, where she played a significant role in governance decisions. Helen is currently the director of strategy at the Center for Security and Emerging Technology at Georgetown’s Walsh School of Foreign Service.
In this essay, Helen outlines how dramatically the Overton window around “AGI timelines” has shifted. Whether you think AGI is five years away or fifty, Helen argues that we no longer have the luxury of treating these questions as fringe. Her perspective bridges technical understanding with policy expertise, making her voice particularly valuable in these critical discussions.
— Azeem
“Long” timelines to advanced AI have gotten crazy short
By Helen Toner, originally published on Rising Tide
It used to be a bold claim, requiring strong evidence, to argue that we might see anything like human-level AI any time in the first half of the 21st century. This 2016 post, for instance, spends 8,500 words justifying the claim that there is a greater than 10% chance of advanced AI being developed by 2036.
(Arguments about timelines typically refer to “timelines to AGI,” but throughout this post I’ll mostly refer to “advanced AI” or “human-level AI” rather than “AGI.” In my view, “AGI” as a term of art tends to confuse more than it clarifies, since different experts use it in such different ways.1 So the fact that “human-level AI” sounds vaguer than “AGI” is a feature, not a bug—it naturally invites reactions of “human-level at what?” and “how are we measuring that?” and “is this even a meaningful bar?” and so on, which I think are totally appropriate questions as long as they’re not used to deny the overall trend towards smarter and more capable systems.)
Back in the dark days before ChatGPT, proponents of “short timelines” argued there was a real chance that extremely advanced AI systems would be developed within our lifetimes—perhaps as soon as within 10 or 20 years. If so, the argument continued, then we should obviously start preparing—investing in AI safety research, building international consensus around what kinds of AI systems are too dangerous to build or deploy, beefing up the security of companies developing the most advanced systems so adversaries couldn’t steal them, and so on. These preparations could take years or decades, the argument went, so we should get to work right away.
Opponents with “long timelines” would counter that, in fact, there was no evidence that AI was going to get very advanced any time soon (say, any time in the next 30 years).2 We should thus ignore any concerns associated with advanced AI and focus instead on the here-and-now problems associated with much less sophisticated systems, such as bias, surveillance, and poor labor conditions. Depending on the disposition of the speaker, problems from AGI might be banished forever as “science fiction” or simply relegated to the later bucket.
Whoever you think was right, for the purposes of this post I want to point out that this debate made sense. “This enormously consequential technology might be built within a couple of decades, we’d better prepare,” vs. “No it won’t, so that would be a waste of time” is a perfectly sensible set of opposing positions.
Today, in this era of scaling laws, reasoning models, and agents, the debates look different.
What counts as “short” timelines are now blazingly fast—somewhere between one and five years until human-level systems. For those in the “short timelines” camp, “AGI by 2027” had already become a popular talking point before one particular online manifesto made a splash with that forecast last year. Now, the heads of the three leading AI companies are on the record with similar views: that 2026-2027 is when we’re “most likely” to build “AI that is smarter than almost all humans at almost all things” (Anthropic’s CEO); that AGI will probably be built by January 2029 (OpenAI’s CEO); and that we’re “probably three to five years away” from AGI (Google DeepMind’s CEO). The vibes in much of the AI safety community are correspondingly hectic, with people frantically trying to figure out what kinds of technical or policy mitigations could possibly help on that timescale.
It’s obvious that we as a society are not ready to handle human-level AI and all its implications that soon. Fortunately, most people think we have more time.
But… how much more time?
Here are some recent quotes from AI experts who are known to have more conservative views on how fast AI is progressing:
Yann LeCun:
Reaching human-level AI will take several years if not a decade. (source)
AI systems will match and surpass human intellectual capabilities… probably over the next decade or two. (video, transcript)
Gary Marcus:
[AGI will come] perhaps 10 or 20 years from now (source)
Arvind Narayanan:
I initially had this quote from Arvind:
I think AGI is many many years away, possibly decades away (source)
I interpreted this to mean that he thinks 5 years is too short, but 20 years is on the long side. When I ran this interpretation by Arvind, he added some interesting context: he chose his phrasing in that interview in light of what he sees as a watering down of the definition of AGI, so his real timeline is longer. But to clarify what that meant, he said:
I think actual transformative effects (e.g. most cognitive tasks being done by AI) is decades away (80% likely that it is more than 20 years away). (source: private correspondence)
…in other words, a 20% chance that AI will be doing most cognitive tasks by 2045.
These “long” timelines sure look a lot like what we used to call “short”!
In other words: Yes, it’s still the case that some AI experts think we’ll build human-level AI soon, and others think we have more time. But recent advances in AI have pulled the meanings of “soon” and “more time” much closer to the present—so close to the present that even the skeptical view implies we’re in for a wild decade or two.
To be clear, this doesn’t mean:
We’ll definitely have human-level AI in 20 years.
We definitely won’t have human-level AI in the next 5 years.
Human-level AI will definitely be built with techniques that are popular today (generative pre-trained transformers, reasoning models, agent scaffolding, etc.).
Every conversation about AI should be about problems connected with extremely advanced AI systems, with no time or attention for problems already being caused by AI systems in use today.
But it does mean:
Dismissing discussion of AGI, human-level AI, transformative AI, superintelligence, etc. as “science fiction” should be seen as a sign of total unseriousness. Time travel is science fiction. Martians are science fiction. “Even many skeptical experts think we may well build it in the next decade or two” is not science fiction.
If you want to argue that human-level AI is extremely unlikely in the next 20 years, you certainly can, but you should treat that as a minority position where the burden of proof is on you.
We need to leap into action on many of the same things that could help if it does turn out that we only have a few years. These will likely take years and years to bear fruit in any case, so if we have a decade or two then we need to make use of that time. They include:
Advancing the science of AI measurement, so that rather than the current scrappy approach, we have actually-good methods to determine what a given model can and cannot do, how different models compare with each other (and with humans), and how rapidly the frontier is progressing.
Advancing the science of interpretability, so we can understand how the hell AI systems work, when they are likely to fail, and when we should(n’t) trust them.
Advancing the science of aligning/steering/controlling AI systems, especially systems that are advanced enough to realize when they’re being tested.
Fostering an ecosystem of 3rd-party organizations that can verify high-stakes claims AI developers make about their systems, so that governments and the public aren’t dependent on developers choosing to be honest and forthcoming.
Working towards some form of international consensus about what kind of AI systems would be unacceptably risky to build, and what kind of evidence would help us determine whether we are on track to build such systems. (Maybe someone should establish some kind of international summit series on this..?)
Building government capacity on AI, so that there’s real expertise within Congress and the executive branch to handle AI policy issues as they arise.
It’s totally reasonable to be skeptical of the “AGI by 2027!” crowd. But even if you side with experts who think they’re wrong, that still leaves you with the conclusion that radically transformative—and potentially very dangerous—technology could well be developed before kids born today finish high school. That’s wild.
This post was originally published on Helen’s Substack.
Let us know what resonates for you. If you have questions for Helen and wish to continue the conversation, comment below.
I routinely hear people sliding back and forth between extremely different definitions, including “AI that can do anything a human can do,” “AI that can perform a majority of economically valuable tasks,” “AI that can match humans at most cognitive tasks,” “AI that can beat humans at all cognitive tasks,” etc. I hope to dig into the potentially vast gulfs between these definitions in a future post.
Personally, my favorite description of AGI is from Joe Carlsmith: “You know, the big AI thing; real AI; the special sauce; the thing everyone else is talking about.”
See, e.g., this 2016 report from the Obama White House: “The current consensus of the private-sector expert community, with which the NSTC Committee on Technology concurs, is that General AI will not be achieved for at least decades.”
Thanks for sharing this guest essay. I knew Helen's name but wasn't aware of her substack.
I read this before I read https://substack.com/@platforms (AI and the strategic value of hype). The two make an interesting pairing, as you can go back and forth between them.
I like the bracketing concept: near-term AGI in 5-8 years, long-term in 3-5 decades. On first take this seems reasonable, until we extract the end game: a sentient presence occupying a space in the universe, with some relevant features:
Sentience means the ability to feel, to be aware of sensations like pain, pleasure, and emotion. Most animals are sentient to some degree, but humans take it further.
Consciousness involves being aware of yourself and the world, having thoughts, memories, intentions, and a continuous sense of "I."
Self-awareness is the reflective capacity to think about your own thoughts, identity, existence, and purpose—essentially, being aware that you are aware.
AGI is NOT going to deliver any computer or robot 'being like a human' or 'living like a human' or doing the highly purposeful, structured, and oddly random activities humans 'do'. There is no 'end game' for you or me, except survival until death.
The 'do' part is the problem. Folks are attempting to prognosticate the development path and the date upon which a collection of lines of 1s and 0s in a box will replicate or copy the unorganized randomness of a human existence.
I want to say the truth we all instinctively 'know': it'll never happen! The fear of expressing that truth yields to an opening: 'it's possible'! Sure, expressing the possibility as a concept means the number is 0.00001, but it is 'possible'.
But at the core of the belief is noise which sounds like: "Wow, what is it that I have that a machine does not have, and cannot, will not, ever possess..."
Until this single element is either a) added to the equations and calcs OR b) conceptualized and quantified as a 'constant', there will never be a machine which can survive independently OR as a collective in this 'environment' (aka Earth).
Human Agency.
With no artifacts, no presence, no holding of a self-generated purpose...
Never happen.
Keeping this AGI conversation alive is how the structure of our societies will embrace this new technology, as with previous tectonic shifts (TCP/IP, the iPhone, the wheel, etc.). It's not linear and not planned. Adoption happens for the purpose of advancing humans, including all the things humans do, yes, including the crime and problems which exist in the world with or without the new "innovation".
But that is not the only part of society impacted by AGI, which is where the focus needs to be (IMO).
When the dust settles, a year or two out, the concepts of reasoning, knowledge, artifacts, agents, etc. will have more substantive definitions (which evolve with use cases and real-world impacts). Even when the elements of a human, like thought, choice, outcomes, motivations, 'plan B', success, and failure, are understood like math equations, there will still be no possibility of a man-made creation which replicates the life or 'existence' of a human in this environment.
So the real question isn't when robots are going to take over the planet; that's what makes for good Hollywood scripts. The question is how we will integrate AI/LLMs (and BCH) into our current systems to move humanity forward, by unlocking the creative elements and visions of humans who now have access to 'knowledge', ALL KNOWLEDGE, freely.
We're not only a long way off... slices of code will never get "here". (It's early, my 2nd cup is ready...)
BRB.