I have met dozens, maybe hundreds, of executives and investors at various events this year, and there is confusion. It is not the fog of war; it's more like the insanity of bedlam.
The pulsing source of this bedlam is the slippery idea of artificial intelligence (or should it be artificial general intelligence?).
Different experts mean different things when they talk about "AI", but they also hold widely different views on how good it is and where it's going. Elon Musk thinks we'll have AI smarter than humans by 2025. Yann LeCun of Meta says today's systems are stupider than a house cat. Dario Amodei predicts that by next year "AI models could be able to replicate and survive in the wild". Gary Marcus reckons it's nothing more than a magic trick. Demis Hassabis says Google will spend $100 billion developing AI. And every week a new AI model appears that supposedly rivals GPT-4 but never quite manages it; and each week brings yet another example of these models' weaknesses.
The media often looks like a whirlwind of opinions and forecasts, where no one seems to agree. Public debates are crucial to progress and healthy democracies, but they often make a situation unnecessarily noisy.
The fact is that for the large majority of us, who is actually right in this debate doesn't really matter.
So let's cut through the noise… Where does AI development go from here?
Three futures
There are three possible futures for AI development.
First, we might be on the cusp of an intelligence explosion. Current AI systems like GPT-4 are improving so rapidly that it won't be long before one system is capable of helping to build even better systems, leading to exponential growth in capabilities. In this scenario, if you're impressed with GPT-4, GPT-6 will be beyond your comprehension.
Second, we might need a different approach altogether. We may have reached a plateau in AI performance due to exponentially growing data needs, escalating computational costs and fundamental limits in current architectures. Despite these challenges, scientists are exploring new methods to enhance AI capabilities.
The third scenario is stagnation. Perhaps we've peaked with technologies like GPT-4, and no significant breakthroughs are on the horizon. What does it mean for us to be stuck in this multi-year plateau? How significant is this stagnation?
You be Statler, I'll be Waldorf
If we were super-cantankerous to the point of rivalling the Muppets' famous hecklers, we'd probably argue that scenario three is where we'll end up. AI development will stagnate.
But even if it were to stagnate, products would still get better.
The LLMs of today are only one part of the products they power. Not only are they getting more efficient and more capable of running on small devices, they are also getting faster. The other components of the systems they are part of are improving too. It isn't just retrieval-augmented generation; it's better prompting, more curated data, connections to knowledge graphs and simple agentic workflows. These are good examples of the complementary innovations that spring up around general-purpose technologies.
All this could lead to quality improvements in the products that firms can access, meaning that a plateau in model capabilities doesn't necessarily equate to a plateau in overall utility.
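To make the point concrete, here is a minimal sketch of how those complementary pieces can wrap a frozen model. The `call_llm` and `retrieve` functions are hypothetical stand-ins (any fixed LLM API and any retrieval backend would do); nothing in the wrapper depends on the model itself improving.

```python
# A minimal sketch of a retrieval-augmented, lightly agentic workflow wrapped
# around a fixed model. `call_llm` is a hypothetical placeholder for a call to
# any frozen LLM; everything around it can keep improving even if the model
# does not.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a fixed, GPT-4-class model API."""
    return f"[model answer based on: {prompt[:60]}...]"

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword retrieval; in practice a vector store or knowledge graph lookup."""
    scored = sorted(documents,
                    key=lambda d: -sum(w in d.lower() for w in query.lower().split()))
    return scored[:top_k]

def answer(query: str, documents: list[str]) -> str:
    # Complementary innovation 1: retrieval grounds the prompt in curated data.
    context = "\n".join(retrieve(query, documents))
    # Complementary innovation 2: better prompting around the same frozen model.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    draft = call_llm(prompt)
    # Complementary innovation 3: a simple agentic step, where the model
    # reviews its own draft before the answer is returned.
    return call_llm(f"Check this answer for unsupported claims:\n{draft}")

if __name__ == "__main__":
    docs = ["GPT-4 was released in 2023.",
            "Knowledge graphs store entities and relations."]
    print(answer("When was GPT-4 released?", docs))
```

Each layer here can be upgraded independently of the model, which is exactly why a frozen model need not mean a frozen product.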
However, let's consider a more pessimistic scenario: Statler and Waldorf on a really bad day. Let's say LLM development is frozen, and we can't get anything more advanced than GPT-4. What implications would this have for firms?