In today’s commentary I delve into the question of open-source versus closed-source models for AI, and how this will shape the future of the internet.
⭐️ As usual, I recommend watching the video!
⭐️ There is more in it than the essay alone.
It all started with leaked Google documents shared by an anonymous insider who claims that open-source large language models pose a significant challenge to Google, as well as to OpenAI. The insider argues that neither Google nor OpenAI has an economic barrier, or “moat,” that allows them to generate sustained long-term value.
As the leaked memo puts it:

> But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.
>
> I’m talking, of course, about open source. Plainly put, they are lapping us.
I’ve been thinking about open source since Stable Diffusion came out last summer. But the two months since Meta’s LLaMA model was leaked have seen a rich ecosystem of developers swarm around open-source LLMs. And things are moving quickly.
But before I share my thoughts, I’ll first summarise the key observations the Google insider makes.
The first observation is that the gap between the current state of the art (GPT-4) and open-source models is closing quickly. For example, a $100 open-source model with 13 billion parameters now competes with a $10 million Google high-end model with 540 billion parameters.
Secondly, the open-source community has solved many scaling problems through a range of optimizations. MosaicML is a great example: it has demonstrated that it can train a Stable Diffusion model (an image-generation model, not an LLM) six times more cheaply than the original.
The third observation is that the dynamics of the open-source ecosystem create a faster rate of iteration. With many developers contributing, the rate of learning is much higher. Learning is a key factor in the Exponential Age; it drives down cost and improves price performance. Potentially, open-source models could learn and iterate far faster than closed-source ones.
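The link between learning and falling costs can be made concrete with a Wright’s-law-style learning curve: unit cost falls by a fixed fraction with each doubling of cumulative output, so whichever community accumulates doublings fastest sees costs fall fastest. A toy sketch (the figures and the 20% learning rate are illustrative assumptions, not numbers from the memo):

```python
import math

def unit_cost(initial_cost, cumulative_units, learning_rate=0.2):
    """Wright's-law sketch: cost falls by `learning_rate` per doubling
    of cumulative output (iterations, models trained, etc.)."""
    doublings = math.log2(cumulative_units)
    return initial_cost * (1 - learning_rate) ** doublings

# After 1,024 cumulative iterations (10 doublings) at a 20% learning
# rate, unit cost falls to roughly 10.7% of its starting value.
print(unit_cost(100.0, 1024))
```

The point of the sketch is that the rate of iteration, not the starting budget, dominates: a community iterating ten times faster reaches each doubling ten times sooner.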
In the last few days, I’ve spoken to Emad Mostaque from StabilityAI, Yann LeCun from Meta, and several other people building these models. In addition, I’ve been thinking about open source and public goods for more than two decades. Here’s my take…