On AI, Zuck is bang on
The man behind the world’s biggest walled garden makes a strong case for openness.
I’ve often disagreed with Mark Zuckerberg’s decisions, especially Facebook’s data practices, its lackadaisical approach to handling misinformation and its role as one of the first social networks to institute a walled garden. But in recent months, I have been warming to him, especially as Meta has put investments in artificial intelligence ahead of the ‘metaverse’.
In The Economist this week, Zuckerberg, together with Daniel Ek, founder of Spotify and long-time EV reader, makes a solid and impassioned case for the importance of open-sourcing AI software1, particularly in Europe.
They argue:
[open-source] ensures power isn’t concentrated among a few large players and, as with the internet before it, creates a level playing field.
They further argue that
with more open-source developers than America has, Europe is particularly well placed to make the most of this open-source AI wave.
Meta has, of course, been one of the leading lights in open-source AI models.2
In this together
The best AI models will likely continue to be megascale models developed by a handful of frontier labs. I’ve argued that scaling is likely to continue for a few more years, with multi-billion-dollar models in the pipeline. However, as Zuck and Daniel suggest, the outputs of such models could be made more widely available through permissive licensing.
This would go some way to limiting the risk of AI’s power concentration. Many of the problems of ‘big tech’, whether Google’s monopolistic practices around search or Amazon’s dodgy behaviour towards suppliers, come back to power concentration. Open-source can help by increasing competition and reducing lock-in.
Moreover, AI foundation models are more akin to infrastructure or essential facilities, the roads and pipes of modern life, than to fully enclosed applications. Open-source provides wider access to these capabilities and, as a result, enables widespread innovation and creativity.
With enough AIs, all risks are shallow
One argument against open-sourcing AI has been the risks: bad actors using these tools, for example, in wide-scale personalised phishing attacks, and the more speculative risks3 of existential doom and runaway AI. My answer to both claims is that open-source will help, not harm.