Could AI help you build a competitor to beat TikTok in just a few minutes? Maybe in the next couple of years, according to Eric Schmidt, Google's former CEO. My buddy Erik Brynjolfsson invited Schmidt for an intimate discussion at Erik's Stanford Econ class a few days ago.
I watched the full discussion, which was rich and full of challenging ideas. But here is what stood out.1
Unprecedented uncertainty
Schmidt confessed to revising his AI outlook every six months, a testament to the field's volatility. He shared a striking example: "Six months ago, I was convinced that the gap [between frontier AI models and the rest] was getting smaller, so I invested lots of money in the little companies. Now I'm not so sure."
Now, please don't focus on the fact that Schmidt thinks the future is in ever-larger models (he does). Rather, consider the nature of his knowledge. He is an insider's insider, about as well-informed as anyone in this field can be, and unlike some critics, he is also putting his money where his mouth is, backing AI companies such as Mistral, Kyutai and Asari.
Schmidt understands scale and gets neural nets. After all, he was at Google's helm when it acquired DeepMind, developed the transformer architecture and built tensor processing units (TPUs), the first chips dedicated to accelerating deep learning. And Google has been about scale since its inception.
Despite this, just six months ago, this tech titan thought smaller models might stand a chance of pushing the frontier. He doesn't believe that anymore.
The point is that he was either right then or he is right now. It took just six months for a U-turn. That is the degree of uncertainty.
Unprecedented speed
Schmidt describes the potent combination of large context windows, AI agents that can learn and improve themselves, and text-to-action capabilities. These "will have an impact on the world at a scale that no one understands, yet much bigger than the horrific impact we've had by social media."
Large context windows, the working memory of large language models, are solidly on their way. My day-to-day model, Claude, can already hold 200k tokens (approximately 150k words) in its working memory. Millions or more are coming.
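The 200k-tokens-to-150k-words conversion above rests on a common rule of thumb: in English text, one token is roughly 0.75 words. A minimal sketch of that arithmetic, with the ratio treated as an approximation rather than an exact constant:

```python
# Rough rule of thumb: one token is about 0.75 English words
# (roughly four characters). The ratio is an approximation and
# varies by language and tokenizer.

def tokens_to_words(tokens: int, words_per_token: float = 0.75) -> int:
    """Estimate the word count a given token budget can hold."""
    return round(tokens * words_per_token)

print(tokens_to_words(200_000))  # -> 150000
```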
Agent-based systems, which autonomously execute multi-step tasks and adapt to environmental feedback, are poised to unlock significant economic value next. Building on this foundation, text-to-action capabilities will emerge, translating natural language into specific, working programs. Imagine your own personal programmer at your fingertips, ready to create whatever you envision. A dating app that matches people based on their Netflix viewing history and Spotify playlists?2 A prototype within minutes.
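The agent loop described above, plan an action, execute it, observe feedback, adapt, can be sketched in a few lines. This is a toy stand-in (a guessing game with numeric feedback), not a real agent framework; in practice the "plan" step would be an LLM call and the "feedback" would come from tools or an environment:

```python
# Toy illustration of the agent cycle: plan, act, observe, adapt.
# The planner here is simple arithmetic standing in for an LLM;
# all names are illustrative, not a real API.

def run_agent(goal: int, max_steps: int = 20) -> list:
    """Work toward a numeric goal, adapting to feedback each step."""
    lo, hi = 0, 100
    history = []
    while lo <= hi and len(history) < max_steps:
        guess = (lo + hi) // 2      # plan: choose the next action
        history.append(guess)       # act: record the attempt
        if guess == goal:           # observe: environment feedback
            break
        elif guess < goal:
            lo = guess + 1          # adapt: narrow the search upward
        else:
            hi = guess - 1          # adapt: narrow the search downward
    return history

print(run_agent(42))  # -> [50, 24, 37, 43, 40, 41, 42]
```

The point of the structure, not the arithmetic: each step's result feeds the next decision, which is what separates an agent from a one-shot prompt.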
These forces will converge.
While he doesnât say it explicitly, it seems that Eric expects the next two years to be faster and more turbulent than the previous two.
Unprecedented disruption
What does this mean for entrepreneurs? Today, using LLMs already helps achieve a hell of a lot very quickly, even with limited context windows, brittle agents and limited actions. Schmidt's most provocative example ties these trends together:
Say to your LLM, "Make me a copy of TikTok. Steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it's not viral, do something different along the same lines."
Today, I can accurately and rapidly get an LLM to analyse and deconstruct five academic papers (or my book) in real time. LLMs currently in testing have much larger context windows, connect to the Internet and can write code; they will therefore be far more capable. Perhaps capable enough to rapidly iterate through new digital and, ultimately, physical products, as Schmidt provokes.
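Schmidt's build-release-measure-repeat provocation has a simple control structure. A toy rendering, where everything is a placeholder: real systems would generate apps with an LLM and measure live engagement metrics over that hour:

```python
# Toy rendering of "release it, and if it's not viral, do something
# different along the same lines." All functions are hypothetical
# placeholders, kept deterministic for illustration.

def build_variant(idea: str) -> str:
    """Stand-in for LLM code generation."""
    return f"app({idea})"

def measure_virality(app: str) -> float:
    """Stand-in for an hour of live metrics; toy rule:
    longer (more elaborate) ideas score higher."""
    return min(1.0, len(app) / 30)

def iterate_until_viral(ideas, threshold=0.9):
    """Try variants until one crosses the virality threshold."""
    for idea in ideas:  # "do something different along the same lines"
        score = measure_virality(build_variant(idea))
        if score >= threshold:
            return idea, score
    return None, 0.0

ideas = ["clone", "clone with playlists", "clone with playlists and matching"]
best, score = iterate_until_viral(ideas)
print(best)  # -> clone with playlists and matching
```

The loop is trivial; what Schmidt is really claiming is that the `build_variant` step, the expensive, human part today, collapses to minutes.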
The implications are simply gargantuan, and I'll return to them in due course.
Unprecedented scale
This uncertainty extends to the scale of investment required. Schmidt noted that leading companies are discussing needs of "$10 billion, 20 billion, 50 billion, 100 billion." He even mentioned that OpenAI's Sam Altman believes it might take "about $300 billion, maybe more." These figures aren't just about money: they translate to enormous energy requirements that could reshape geopolitical alliances. As Schmidt put it, "We as a country do not have enough power to do this."
Flying blind
Schmidt's insights reveal a landscape changing so quickly that even he, a veteran of the industry, is struggling to keep pace. And a lot of what Schmidt said wasn't new. We've been talking about extending context windows, the speed of change and the challenge of scale for years.
But here is why this discussion mattered a lot, and why I've taken up a few minutes of your Saturday with it. Schmidt was speaking in a small, intimate class at Stanford. While not entirely unguarded, he was operating with fewer filters there, so you're more likely getting his genuine thoughts about AI. And his beliefs about the future do not depend on any scientific breakthrough or klaxon signalling the arrival of AGI3. So this is worth taking more at face value than with a grain of salt.
In that sense, some of what is coming soon should already be baked into your expectations and, by extension, your plans for the future. Is it?
Cheers,
a
1. The video has been removed from YouTube.
2. Thanks to Anthropic's Claude for this brilliant idea.
3. Whatever AGI means. Readers know I find the term unhelpful.