I appreciate this thoughtful, informed and timely statement on the changes at OpenAI.
I just hope the changes go far enough. I'm in favor of accelerating the development and deployment of AI—including robots, embedded intelligence and large language models. So why do I applaud a move that may slow down AI?
I asked Perplexity.ai to summarize the governing board's charter. Here is one key aspect: "OpenAI commits to using any influence over AGI's deployment to ensure it is used for the benefit of all and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power."
So far, Altman has proven to be charming, intelligent and diligent. He has used those gifts to convince the US to move forward with regulation...but regulation that will lead to regulatory capture. As things stand now, AI will lead to more and more concentration of power and increased distance between the 1% and the 99%, nationally and globally.
The governance we need for AI, for addressing climate change, and so on must be radically different from what we have today. I hope this shift is a tiny start.
Good Line, Shared Widely — “OpenAI’s crisis is like a terrarium for the wider debate about technology-in-society in general and AI in particular.”
I'm with blaine wishart: appreciative of this thoughtful article and the excerpts from the interview with Sam Altman. I'm also very interested in how this impacts Microsoft. It feels like they bet the farm on OpenAI with their enterprise offerings. Maybe the impact will be zero or minimal just because of their size, but I'm still curious.
Thanks for commenting about this. It’s still very difficult to figure out what happened. I guess it’s all about safety and business outcomes. But I was under the impression that Sam Altman was very sincere in his approach, along the lines of “we need to show the world how stuff works now, so they can get ready for the next phase,” while at the same time generating tons of revenue, which is required to train the new models. On that reading, the only reason he would be fired would be not taking AI safety seriously enough. This thing is still very weird.
All very weird. And who else can raise funds for OAI?
The board hasn’t covered itself in glory, and OpenAI needs to think about how to strengthen it.
Azeem: I don’t think you can discount serious legal issues. The speed and tone of the board’s decision reminds me of the Uber/Travis Kalanick case. The story linked below has been rocketing around OpenAI and could very well be the cause of his departure. As someone noted on Twitter (X), right now there is only one lesson we can and should absorb immediately… once again, the fact that the governance of one of the most visible AI companies in the world can change literally overnight should be a reminder that we can’t make our judgements about a company’s trustworthiness based simply on a vibe about their CEO: https://www.lesswrong.com/posts/QDczBduZorG4dxZiW/sam-altman-s-sister-annie-altman-claims-sam-has-severely
I find the event underwhelming: just an instance of poor corporate governance, which is not uncommon. As an example, HP appointed Leo Apotheker CEO, the stock tanked, and he was out after 10 months; apparently some board members had not even met or interviewed him before a series of extremely expensive missteps that cost tens of billions.
Normally a board would have plans in place for the negative consequences, and for remediation, as it approached removing someone; at a minimum, perhaps ample retention bonuses for key staff. There is lots of fodder to talk about here, but none of it is actually relevant to their product. OpenAI has a talent for attention, just not the LLM kind.
It’s also funny that somehow poor corporate fiduciary responsibility gets conflated with the idea that a chatbot can get loose and destroy the world. This is a very old theme in folklore: it’s type 325 (the sorcerer’s apprentice), or possibly S20 (cruel children), in the Aarne–Thompson–Uther Index, among the many persistent examples which shape our thinking. Cronos overthrew Uranus, then Zeus overthrew Cronos, ad infinitum. The Golem; Frankenstein.