We track top talent using data. OpenAI is not falling apart; they are building a formidable engine room. Please see our take: https://zekidata.com/zeki-data-reveals-major-shift-in-openai-hiring-strategy/
Great stuff. I did not know about your research. Super interesting.
Great summary! I needed it as this story was getting way too strange for my liking.
Thank you for bringing some sanity to all this social media craziness about OpenAI. Helpful.
Onward!
I suspect this will be this generation's Fairchild Semi.
I did see an hour-long interview with Fei-Fei Li and one of her ex-students. They are building world models.
It seems to me that the bloom is off the rose for OpenAI, though with a US Army general on the board I guess they know where they're headed.
Most AI firms are nowhere near profitable, given the compute costs, and Meta continues its cadence of free releases, keeping open source alive - which is where much of the innovation is coming from.
Why would a high-growth startup want to be profitable less than two years after releasing its first successful product?
Plenty of reasons - though to your point the traditional ones (investor attraction, dilution, downturn avoidance) don't apply, since OpenAI acts and feels more like big tech than an early-stage startup.
In what sense does it feel like big tech? I don't know your background or the basis on which you make that assertion. But I see a start-up culture, not a big tech one (Google can barely ship anything, and it has 100 times the headcount). The growth numbers are not comparable. Startups have, since the days of Uber and Airbnb, accelerated the development of public policy teams.
Those are fair metrics, though "shipping product" is a bit amorphous. Product shipping looks like a lot of different things, and some of it may be internal (infra, policy tools) that the public doesn't see. OpenAI is, for me, big tech rather than startup tech for three reasons: market dominance (of a market they created, but dominance nonetheless in usage numbers and interest), talent acquisition, and strategic partnerships (e.g. an early-stage company getting potential backing from Apple). That's not Atlassian or HubSpot; that's Netflix or Palantir.
It would be interesting to hear people’s thoughts on those who are putting out apparently well-researched pieces about the unsustainability of the OpenAI economic model (and genAI in general) - I’m thinking of Ed Zitron. From the POV of an amateur observer it’s very hard to know what is hype, hating, clickbait, etc.!
Yes, the word “apparently” is extremely important. I’ve not read anything that would give me sufficient confidence to say this is any more unsustainable than Facebook or Uber or Google were at a similar stage.
That isn’t to say things can’t go wrong — Tesla nearly went to the wire a number of times several years ago. But point me to something well-researched, and I’m happy to have a scan.
This is the article I was primarily thinking of: https://www.wheresyoured.at/to-serve-altman/
I too am a little sceptical that a total failure would happen - if only because Microsoft are so heavily involved, and would surely manage any such failure so that it didn't appear as one.
I've read it and I honestly can't even begin to make sense of the assertions. The word "massive" appears nine times. In particular "here’s the crux of the matter: generative AI is a product with no mass-market utility - at least on the scale of truly revolutionary movements like the original cloud computing and smartphone booms - and it’s one that costs an eye-watering amount to build and run."
Which is just not what I am seeing in talking to businesses and founders, and reflecting on my own experience. And I'm sure the 200m people using ChatGPT each week, or the 11m people paying for it, are getting no "utility". And what does mass market mean? As of 2024, global cloud spend is still less than half of IT spend. Does that mean cloud (now 18 years old) is or isn't mass market? I don't know.
I'm not going to counter this kind of speculation and weirdly extreme language with the frameworks, discussions and actual data that I have. Nor does this piece address any of the emerging, and increasingly strong, data showing these tools to be quite helpful.
Who knows, the company might fail. Startups often do. And successful ones often come perilously close to the wire. In 1997, Apple was 90 days from bankruptcy.
But, as I said in my first comment to you, I've yet to see a decent argument that would say this business is more unsustainable than many other tech companies at a similar stage.
Thank you for taking the time to read and respond! The author's tone in general is indeed quite hyperbolic and end-of-days. In other pieces he takes aim at how a lot of the marketing language around AI products (as they currently exist) is very nebulous and is basically being used to sell different versions of a chatbot. So the target is more the absurdity of corporate culture than the core potential of AI technologies, i.e. the focus of this page and the discussions with (presumably) very serious people you refer to. Anyway, I'll maintain my position of baffled amateur and try to absorb as much conflicting information as I can.
Does it matter if it is a chatbot, if companies (or individuals) use it to usefully conduct business?
It doesn't really matter what you and I think, but if a development team uses a chatbot to increase unit test coverage and code quality AND is happy with it, who are we to say "they aren't happy with it, and they don't know what they are doing"?
It seems like a lot of people think they can make those assertions on behalf of other people. That baffles me.
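To make the unit-test example concrete, here is a minimal sketch of what that workflow can look like - assuming the official openai Python client; the model name, prompts, and file names are hypothetical, and any real team would adapt them:

```python
# Hypothetical sketch: ask a chat model to draft pytest tests for a module,
# then save the draft for human review. Assumes the `openai` Python client
# and an OPENAI_API_KEY in the environment; names here are illustrative.
from openai import OpenAI

client = OpenAI()

source = open("pricing.py").read()  # hypothetical module under test

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would do
    messages=[
        {"role": "system",
         "content": "You write pytest unit tests. Cover edge cases. "
                    "Output only valid Python code."},
        {"role": "user",
         "content": f"Write pytest tests for this module:\n\n{source}"},
    ],
)

# The chatbot drafts; the team still reviews and runs the tests before
# merging, so the quality judgement stays with the humans using the tool.
with open("test_pricing_draft.py", "w") as f:
    f.write(response.choices[0].message.content)
```

The point being: the tool drafts, the developers decide, and whether that trade is worth the subscription is theirs to judge.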
No, of course - genuine utility is always welcome. The example you give would typically involve sophisticated users who are already tech-savvy and know what they want from the chatbot/AI. I think the example he was giving refers more to chatbots being jammed onto every kind of work task as a way of increasing the price of monthly SaaS subscriptions, without necessarily having much uptake (I believe the current usage figures for Copilot in Microsoft 365 are cited to support this). Personally I love using Claude (which I guess we can call a chatbot), but if I see a chatbot standing in for a customer service agent, I run a mile.
Thank you for this article - it is brilliant.
Of course. I feel that the amount of money needed to develop AI models must be ungodly, hence access to capital is critical. It also explains how prolific they have been compared to their peers: they have unparalleled access to GPUs and compute, and it shows. Regarding the departure of key personnel, I suspect they are cashing in during this next financing round.
I am a little uncomfortable that the company seemingly most likely to achieve AGI first is this chaotic. Altman's purging of safety-focussed board members and the loss of nearly all the founders is a worrying sign. His abandoning of the non-profit mission and his altruistic motives is also a concern. I hope I am not being unfair, but Altman certainly seems drawn to the One Ring. Let us hope Mira Murati was not his Samwise.
I don’t really believe AGI will be a thing. And I don’t think anyone can say anything sensible about runaway autonomous development. More likely, this is a technology which will co-evolve with other technologies, and we’ll learn how to manage the risks as they emerge. (This is sort of LeCun’s argument too.)
LeCun and Chollet do not believe AI is on an exponential curve. They think breakthroughs are needed beyond Strawberry-type CoT tricks. If they are right - and I take their arguments seriously - things may evolve slowly and haphazardly enough for the future you imagine to transpire. But if they are wrong, and simply scaling current techniques does lead quite quickly to agents at the level of top-1% human experts and beyond, there is real potential for accelerating disruption.
Honestly, I am surprised the author of "Exponential" disagrees with this, just after DeepMind released a paper about an exponential improvement loop - better chips leading to better AI, leading to better chips - and after the leap in performance of o1 and the implications for what a GPT-5-based o1 might be capable of.
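For intuition on why that loop matters, here is a toy sketch - my own illustration with made-up numbers, not anything from the DeepMind paper:

```python
# Toy model of the chips -> AI -> chips feedback loop. The feedback rate
# is an arbitrary assumption; the point is only that a gain which feeds
# back into itself compounds, rather than adding a fixed step each time.
chip_quality = 1.0
ai_capability = 1.0
feedback = 0.1  # assumed: each unit of AI capability buys 10% better chips

for generation in range(1, 11):
    chip_quality *= 1 + feedback * ai_capability  # AI helps design chips
    ai_capability = chip_quality                  # assumed: AI tracks chip quality
    print(f"gen {generation}: capability {ai_capability:.2f}")

# Each generation's improvement enlarges the next generation's step:
# that is the compounding dynamic described in the comment above.
```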