In a reflective post published earlier this week, Sam Altman wrote:
We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents "join the workforce" and materially change the output of companies. […] We are beginning to turn our aim […] to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.
As some of you know, I don't like the phrase AGI for the reasons I outlined in this weekend's post:
I deliberately avoid using the term "artificial general intelligence" (AGI), not because it's irrelevant, but because it's often unhelpful and distracting, even when used by leading researchers. […] AGI, or powerful AI, is not an end state like landing on the moon. Nor is it likely to be a self-regulating process, like the spread of a virus, which naturally slows down as the population of non-immune, infectable hosts declines.
I want to double down on what I wrote in that essay. If you want to avoid repetition, you can stop reading now (although you'll miss out on a couple of interesting graphs).
Whatever we call it, the reality is that increasingly powerful AI systems that can do many of the things we do today are coming our way. Perhaps, like this filmmaker, you are even using them.
Surveys and predictions of when we might "achieve AGI" are getting increasingly compressed. The release of GPT-3 four years ago, in particular, shifted estimates from 40 years to a decade or less. An OpenAI researcher says that o3, previewed just before Christmas, is AGI.
To sceptics