🔥 OpenAI’s identity crisis and the battle for AI’s future
And why this might not be a bad thing
I normally send my commentary to Premium members of Exponential View only. However, given the rapid developments, and the ripple effects the events of this weekend may have for the ongoing debate on AI safety, I will keep this piece open to the public for the time being. Share it with anyone who wants to make sense of what the OpenAI crisis means.

Back in 2020, I had a long conversation with Sam Altman about the role of OpenAI’s board and how he thought governance of the organisation might change. Looking back, Sam’s answers were prescient.
Azeem Azhar: I’m curious about what you think a Global Compact [for AI] might look like in the next few years, where we feel we can get these technologies working for humanity, but in a way that is well-governed.
Sam Altman: It’s a super important question. We can study history and see how that goes. When there is no public oversight, when you have these companies that are as powerful as, or more powerful than, most nations and are governed by unelected, and to be perfectly honest, fairly unaccountable leaders, I think that should give all of us pause as these technologies get more and more powerful.
And I think we need to figure out some way not just that economics get shared and that we deal with the inequality problem that society is facing, but the governance about these very big decisions. What do we want our future to look like in the world with very powerful AI? That is not a decision that 150 people sitting in San Francisco make collectively. That’s not going to be good.
Azeem Azhar: I haven’t found good models right now that do this. I think that people talk about the idea of minilateralism, which is that if you can just bring enough people to talk often enough, perhaps you create the kernel of a framework that other people can buy into, and then people copy that.
Sam Altman: Yeah, I think that is helpful for sure. But then there is always a question of how much teeth does it really have?
Let’s say we really do create AGI. Right now, if I’m doing a bad job with that, our board of directors, which is also not publicly accountable, can fire me and say, “Let’s try somebody else”. But it feels to me like at that point, at some level of power, the world as a whole should be able to say, hey, maybe we need a new leader of this company, just like we are able to vote on the person who runs the country.
And so, this question of what democratic process for companies that have sufficient impact looks like. I am very far from an expert here, and much smarter people than me have been thinking about it, but it occurs to me that we’ve got these models, and we can look at the different ones around the world and talk about some countries where it works better and where it doesn’t work as well.
And now it has happened. Sam has been fired by the board. And Greg Brockman, the Chair and President, has also left the company. It’s an evolving story. The Information has good coverage, so too does Bloomberg. Ron Conway, the most legendary of Silicon Valley angel investors, calls it “a Board coup that we have not seen the likes of since 1985 when the then-Apple board pushed out Steve Jobs. It is shocking.”
It seems questions about the balance between AI safety and market momentum were factors in the decision. There may be other things in play. Altman has a lot of irons in the fire, including his cryptocurrency, WorldCoin; a fusion startup, Helion; and explorations with Jony Ive and Masa Son to create an AI device. Perhaps a new conflict of interest - even a direct competitor to OpenAI - was surfacing.
A microcosm
If we believe AI is a powerful general-purpose technology, then it is going to have massive impacts across society: printing press, electricity, steam engine, internet-style effects. Not just on our productivity but on our cultures, communities and quotidian lives.
If that is the case, then we will really benefit from a widespread debate. Such a debate will involve friction, disagreement, and contestation - and it is important that it happens as transparently and publicly as possible.
OpenAI’s crisis is like a terrarium for the wider debate about technology-in-society in general and AI in particular.
I don’t want to hazard too many guesses, although much discussion seems to revolve around the strategic direction of the company and the balance of safety versus market momentum. How OpenAI - or any AI firm - approaches that question is really important. And an optimal approach is non-obvious.
There are many AI researchers who think about safety. OpenAI has a large safety team led by Ilya Sutskever. But it is important to realise that safety in AI in 2023 is not like safety in cars in 2023. When we think about safety in cars - the quality of the airbags, the efficacy of the brakes - these are known quantities. We have data for them. The CEOs who shirk vehicle safety can evaluate the risks - the deaths and injuries - when they make those decisions.
But the AI safety debate in 2023 is not where the car safety debate is. There is insufficient data. There is no scientific agreement on what the risks look like. There isn’t a standard toolkit to rely on.
Instead, we have a series of interconnected processes: of scientific research; of social and institutional exploration (safety for whom, at what cost); and of product discovery (how does this actually manifest itself in the real world) which are feeling their way from a place where we have some answers to a frontier beyond our existing knowledge.
How should resources be allocated to each of these buckets? Where should we draw the line? Should we insist on having working, provable theories of safety that we then build against? Or does finding those theories, and more importantly, working models of safety, require putting products into the field, allowing for widespread experimentation and challenge and a constant updating of our theoretical understanding? And if the latter, what is the appropriate rate at which that should happen?
These questions don’t have obvious answers. Because, as I said, we don’t know.
We regularly use technologies whose mechanisms we don’t understand and whose safety parameters are established empirically, not theoretically. Anaesthetics are one great example. We have been using them since the 1840s, and even today we have only an incomplete understanding of how they act on receptors and neural pathways; we still establish their safe usage through trial and error. As someone who has had the odd anaesthetic, I’m quite pleased we didn’t wait for a theoretical understanding of their safety before we let doctors get their surgical gloves on them.
That isn’t to say that we should career our way into AI development without paying heed to safety.
It’s to say that addressing the question isn’t straightforward.
All for one and one for all
In my first conversation with Sam in 2020, he expressed his belief that AI needs a wider, more democratic debate. And in a sense, that is what we are having with AI. This is in sharp contrast to other technologies.
All general-purpose technologies change everything. And having a quality debate about them is critical. Such a debate has many facets: the breadth of participation (including public participation), the consideration of different impacts, the long-term perspective, the evidence base used, the way it shapes policy outcomes, how timely it is, and its adaptability and accountability.
With previous widely applicable technologies and services, the breadth of this debate was variable or non-existent. Consider Uber’s explosion into our cities. Go further back in history: there was virtually no public awareness or debate around electricity or the roll-out of the steam engine or plastics.
The exploration of the impacts of genetic engineering was much healthier than for those technologies (or for things like the Internet), but public engagement was simply not as deep as we are seeing in today’s AI discussions.
By contrast, the AI debate - a vast, fractal complex of discussion from governments to schools, in the media and at conferences, on Twitter and in research papers - is very rich indeed.
What we are seeing is the start of widespread public consultation with broad participation of a wide range of groups, with wider discussion of many different impacts, and an increasingly adaptable approach to how we deal with this.
Using this lens, the OpenAI fracas is just part of that process of contestation and norming. It’s not a disaster. It’s probably not even a hiccup. Perhaps it slows OpenAI down for a few months while they catch their breath and rethink priorities: research, engineering and commercial. Over the longue durée, it’s irrelevant: the know-how is in people’s heads. We know something as good - and as limited - as GPT-4 Turbo is possible. It isn’t a high watermark to which we’ll never be able to return. And given that this crisis will probably result in better governance over the long term, it may even be a good thing.
Good factions
I have some simple principles that I like to apply: that concentration of power is unhelpful, that a diversity of actors is helpful, that monocultures are not resilient... in short, that more players are often better than fewer.
So a schism at OpenAI, which may see people leave and take their knowledge with them, is a good thing.
Technology is compounded knowledge and it benefits from the intersection of different worldviews and experiences. If researchers leave OpenAI and go elsewhere to create startups or to join competitors, that is a good thing.
We’re probably better off with Dario, Daniela and Jack having left OpenAI in 2021 to found Anthropic than had they stayed there.
So if Sam, or Greg Brockman, OpenAI’s former Chair and President who resigned yesterday, goes off and starts a new AI business, it is likely a net positive from the perspective of scientific research, technical progress and, yes, safe and transparent deployment. And it wouldn’t be surprising if either of them is soon involved in one or more projects that overlap or compete with OpenAI.
If this ouster spurs interest in other ways of organising and building AI infrastructure, from open source to utility-style thinking to good old-fashioned market competition, that is also a net positive.
Governance matters
One of the less salubrious aspects of the AI debate has been the argument over the relative importance of governing the technology. Many elements of Silicon Valley techno-culture push back on the idea of governance (stick-it-to-the-man meets libertarian pioneerism), so it’s not just ironic but emblematic that OpenAI, the biggest thing to come out of the Valley in decades, has just had a huge governance failure.
Governance does matter. As OpenAI exponentiated from a small research shop in 2015 to the thing it is today, its governance has simply not kept pace. OpenAI today has a sophisticated structure in which a comparatively young board oversees a 501(c)(3) non-profit, which controls a holding company, which is the majority owner of the capped-profit company (in which Microsoft has invested).
This isn’t to say that the board of OpenAI didn’t have to act on Friday (perhaps it did, perhaps it didn’t). But it seems plausible that the board was falling short in its duty to ask tough questions and hold executives to account in the months leading up to the crisis. The board didn’t have a whole lot of governance experience for a firm so much in the spotlight.
This is uncharted territory for any organisation - and one key priority will be to strengthen OpenAI’s governance and to clarify what its mission really is.
The upheaval at OpenAI is not an isolated incident but rather a reflection of the broader challenges facing the industry as we grapple with the oversight of this transformative technology. I mean, Sam himself highlighted that very thing to me in 2020.
Now might be the time to take this episode to heart and find a transparent system of governance as advanced and sophisticated as the AI it seeks to oversee.