On AI, Zuck is bang on
The man behind the world's biggest walled garden makes a strong case for openness.
I've often disagreed with Mark Zuckerberg's decisions, especially regarding Facebook's data practices, its lackadaisical approach to handling misinformation, and its role as one of the first social networks to institute a walled garden. But in recent months, I have been warming to him, especially as Meta has put investments in artificial intelligence ahead of the "metaverse".
In The Economist this week, Zuckerberg, together with Daniel Ek, founder of Spotify and long-time EV reader, makes a solid and impassioned case for the importance of open-sourcing AI software1, particularly in Europe.
They argue:
[open-source] ensures power isn't concentrated among a few large players and, as with the internet before it, creates a level playing field.
They further argue that
with more open-source developers than America has, Europe is particularly well placed to make the most of this open-source AI wave.
Meta has, of course, been one of the leading lights in open-source AI models.2
In this together
The best AI models will likely continue to be megascale models developed by a handful of frontier labs. I've argued that scaling is likely to continue for a few more years, with multi-billion dollar models in the pipeline. However, as Zuck and Daniel suggest, the outputs of these efforts could be made more widely available through permissive licensing.
This would go some way to limiting the risk of AI's power concentration. Many of the problems of "big tech," whether it is Google's monopolistic practices around search or Amazon's dodgy behaviour towards suppliers, come back to power concentration. Open-source can help by increasing competition and reducing lock-in.
Moreover, AI foundation models are more like infrastructure or essential facilities, the roads and pipes of modern life, than fully enclosed applications. Open-source provides wider access to these capabilities and, as a result, enables widespread innovation and creativity.
With enough AIs, all risks are shallow
One argument against open-sourcing AI has been the risks: bad actors using these tools, for example in wide-scale personalised phishing attacks, and the speculative risks3 of existential doom and runaway AI. My answer to these claims is that in both cases, open-source will help, not harm.