I have met dozens, maybe hundreds, of executives and investors at various events this year, and the prevailing mood is confusion. It is not the fog of war; it's more like the insanity of bedlam.
The pulsing source of this bedlam is the slippery idea of artificial intelligence (or should it be artificial general intelligence?).
Different experts mean different things when they talk about “AI”, and they hold widely different views on how good it is and where it’s going. Elon Musk thinks we’ll have AI smarter than humans by 2025. Yann LeCun of Meta says today’s systems are stupider than a house cat. Dario Amodei predicts that by next year “AI models could be able to replicate and survive in the wild”. Gary Marcus reckons it’s nothing more than a magic trick. Demis Hassabis says Google will spend $100 billion developing AI. And every week a new model arrives that supposedly rivals GPT-4 but never quite manages it, while each week also brings a fresh example of these models’ weaknesses.
The media often looks like a whirlwind of opinions and forecasts in which no one seems to agree. Public debates are crucial to progress and healthy democracies, but they often make the situation unnecessarily noisy.
The fact is that, for the vast majority of us, it doesn’t really matter who is actually right in this debate.
So let’s cut through the noise… Where does AI development go from here?
Three futures
There are three possible futures for AI development.
First, we might be on the cusp of an intelligence explosion. Current AI systems like GPT-4 are improving so rapidly that it won’t be long before one system is capable of helping to build even better systems, leading to exponential growth in capabilities. In this scenario, if you’re impressed with GPT-4, GPT-6 will be beyond your comprehension.
Second, we might need a different approach altogether. We may have reached a plateau in AI performance because of exponentially growing data needs, escalating computational costs and fundamental limits in current architectures. In this scenario, further progress depends on researchers finding new methods rather than simply scaling today’s models.
The third scenario is stagnation. Perhaps we’ve peaked with technologies like GPT-4, and no significant breakthroughs are on the horizon. What would it mean for us to be stuck on such a plateau for years, and how much would it matter?
You be Statler, I’ll be Waldorf
If we were super-cantankerous to the point of rivalling the Muppets’ famous hecklers, we’d probably argue that scenario three is where we’ll end up. AI development will stagnate.
But even if it were to stagnate, products would still get better.
The LLMs of today are only one part of the products they power. Not only are they getting more efficient and more capable of running on small devices, they are also getting faster. And the other components of the systems around them are improving too. It isn’t just retrieval-augmented generation; it’s also better prompting, more curated data, connections to knowledge graphs and simple agentic workflows. These are good examples of the complementary innovations that spring up around general-purpose technologies.
All this could lead to quality improvements in the products that firms can access, meaning that a plateau in model capabilities doesn’t necessarily equate to a plateau in overall utility.
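To make that concrete, here is a minimal sketch of one such complementary pattern, retrieval-augmented generation, wrapped around a model that never improves. Everything in it is a hypothetical placeholder: the document store, the naive keyword-overlap retriever and the call_model() stub stand in for whatever a firm already uses, not for any real vendor’s API. The point is simply that the pieces around the model can keep getting better even if the model itself does not.

```python
# A minimal sketch of retrieval-augmented generation around a "frozen" model.
# Illustrative only: DOCUMENTS, retrieve() and call_model() are hypothetical
# stand-ins, not any particular product's API.

DOCUMENTS = [
    "Q3 revenue grew 12% year on year, driven by the enterprise segment.",
    "The support backlog fell after the new triage workflow was introduced.",
    "Headcount in the research team doubled between January and June.",
]

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user's question with retrieved context before generation."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}"
    )

def call_model(prompt: str) -> str:
    """Placeholder for whichever fixed LLM the product already relies on."""
    return f"[model response to a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    question = "How did revenue change in Q3?"
    answer = call_model(build_prompt(question, retrieve(question, DOCUMENTS)))
    print(answer)
```

Swap the naive keyword scoring for better retrieval, feed in more carefully curated documents, or chain several such calls into a simple agentic workflow, and the product improves with every upgrade, without touching the underlying model.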
However, let’s consider a more pessimistic scenario — Statler and Waldorf on a really bad day. Let’s say LLM development is frozen, and we can’t get anything more advanced than GPT-4. What implications would this have for firms?