Great analysis. Non-experts - and most execs at enterprises - like to say that the impact of GenAI isn't that great today and that it's 'overhyped'. They find it hard to appreciate what the developments Eric talks about mean in practice (it still sounds like techno-jargon to most). It would be great to paint some more tangible examples than the TikTok one, which still seems distant to most...
I think it's very difficult to do that, because it is so domain-specific. I was with a pharmaceutical marketer yesterday and he showed me an app he had built. He had never developed anything in his life before - he taught himself Python last year, at age 50 - and the research app he showed me was remarkably powerful.
Before I had seen that specific use case, I couldn't have dreamt it up. It is far too domain-specific.
People have to acquire the skills and apply their domain knowledge to find the thing that matters to them.
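I have no idea what his app actually did, but to make the genre concrete: a domain expert plus a question plus a little Python gets surprisingly far these days. A minimal sketch of that kind of tool (PubMed and the search term are my invention here, not his app):

```python
# Purely illustrative: the sort of small research tool a domain expert
# might assemble. Queries PubMed's public E-utilities API for the IDs
# of papers matching a search term.
import requests

def pubmed_search(term, max_results=5):
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": term,
                "retmax": max_results, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

print(pubmed_search("GLP-1 agonist adherence"))  # a few PubMed IDs
```

Twenty lines like these, iterated with an LLM against real domain questions, seems to be how those apps get built.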
The TikTok example is perfectly clear to me. If you don't criticise the ethics of it or doubt whether it is possible, but rather say "let's assume this is possible - what could I do in my business?", I think you'll get enough clarity to move forward.
With the emergence of open-weight LMs that can run on device, plus natural language as the programming language, we are getting something close to Alan Kay's 1972 concept of the Dynabook: a portable device that could be a personal tool for non-programmers. Even with massive investment and dynamic typing, Smalltalk was too weak a language for most actual users. Those with domain knowledge, however narrow, and a willingness to experiment with the LM phenomenon will get increasingly useful results. One could even say that Kay's vision was a bit narrow.
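To make "run on device" concrete, here is a minimal sketch using Hugging Face transformers. The model name is only an example of a small open-weight model; swap in whatever your hardware supports:

```python
# Minimal sketch: an open-weight model as a local personal tool.
# "Qwen/Qwen2.5-0.5B-Instruct" is just an example; any small open-weight
# model works. The first run downloads weights; after that it's all local.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = "Write a Python function that deduplicates a list of customer IDs."
out = generator(prompt, max_new_tokens=200)
print(out[0]["generated_text"])
```

The "programming" here is the prompt string - which is exactly the Dynabook-for-non-programmers point.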
I agree that the term AGI is not helpful. Here is a division that may be. I'll call them AI#1 and AI#2.
- AI#1 is the current state: use-case- and domain-specific applications can be built with various rapidly evolving techniques. AI#1 requires open weights.
- AI#2, by contrast, would qualitatively improve our ability to address all the technical, social and political issues raised by the climate emergency. I see no evidence that anyone is on a path to AI#2. But perhaps someone is and I don't see it. In that case, I don't want anyone - not Google, Meta, OpenAI, a European company or a Chinese company - to have that much power while humans have such poor governance.
Cutting to the chase: raising the billions (more?) to use AI#1 to electrify the US, Germany, China, etc. with renewables would be a good project. And worth the expense Eric is talking about.
Some may think my requirements for AI#2 are too stringent, but only if AIs can resolve the climate emergency can we justify giving them more priority than the health of the planet and its inhabitants.
It is quite helpful. Although... I would also say that rapid application of AI#1 can deliver really remarkable outcomes - if applied well.
Electrification of the steel industry comes to mind.
It was brilliant, thank you.
Interesting insights and a great summary of the discussion.
The idea of a TikTok copy made me jump. As a software developer, I can't help thinking of the nightmare of fixing the bugs in the AI-generated codebase!
On a more serious note, it seems like Eric forgot that these LLM beasts rely on crowdsourced knowledge, i.e. they are inherently limited by the availability of open information.
The really valuable things in TikTok, or any successful product, are trade secrets and NDA-protected know-how. These will remain out of reach, no matter how large the context window or how much computing they throw at it.
Azeem: It could be interesting to see a review of these prophecies in one of your posts one year from now.
I don't think he forgets that at all - and nor do I. If you actually use even today's limited systems, what you can do is remarkable, and not far off the theoretical example he gave:
“Say to your LLM, ‘Make me a copy of TikTok. Steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it's not viral, do something different along the same lines.’”
I wonder if these kinds of claims end up at a similar point to self-driving: getting 80% of the way there is relatively easy, but each percentage point of improvement beyond that takes substantially more effort, and without those last few points the whole thing just doesn't work well enough to be useful.
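A toy way to see that, with made-up numbers: suppose the first 80% costs one unit of effort and each further percentage point costs 1.3x the one before it.

```python
# Toy model of the long tail (assumed numbers, purely illustrative):
# reaching 80% costs 1 unit; each further percentage point costs
# 1.3x the previous one.
effort, step = 1.0, 0.1
for pct in range(81, 100):
    step *= 1.3
    effort += step
    if pct in (90, 95, 99):
        print(f"{pct}%: cumulative effort ~ {effort:.1f} units")
```

Under those assumptions, getting from 80% to 99% costs roughly 60x what the first 80% did, and the final point alone costs more than the entire first 80%.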
Another point that makes me somewhat skeptical about what can and can't be done with AI is the saying from complexity science along the lines of: "you can't build a complex system from scratch. It never works and can't be made to work. You have to start with a simple working system and increase complexity incrementally."
That TikTok clone example sounds a lot like trying to build a complex system from scratch, or cutting too many corners. I'm not saying that AI wouldn't be useful in making those incremental steps, but to envision a complex system and just expect it to manifest out of thin air? I don't think so.
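(That saying is usually attributed to John Gall.) For what it's worth, here is a hypothetical sketch of what "start simple, add complexity incrementally" looks like for a feed - all names invented:

```python
# Gall's law in miniature: version 1 is the simplest thing that works;
# version 2 adds exactly one increment of complexity on top of it.

def feed_v1(videos):
    """Version 1: a working system. Newest first, nothing clever."""
    return sorted(videos, key=lambda v: v["posted_at"], reverse=True)

def feed_v2(videos, user_tags):
    """Version 2: one increment. Boost videos matching the user's tags."""
    def score(v):
        overlap = len(set(v["tags"]) & user_tags)
        return (overlap, v["posted_at"])
    return sorted(videos, key=score, reverse=True)

videos = [
    {"id": 1, "posted_at": 100, "tags": ["music"]},
    {"id": 2, "posted_at": 90,  "tags": ["cats", "music"]},
]
print([v["id"] for v in feed_v2(videos, {"cats"})])  # -> [2, 1]
```

AI is plausibly very good at generating each working version n+1 from a working version n; conjuring the whole complex system in one shot is the part I would also doubt.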
Thanks for the analysis Azeem. It looks like the video was removed because he was talking about Google's internal culture in not-so-positive terms?
The TikTok example makes me think of the "new Turing test" proposed by Mustafa Suleyman: "build me a $1M business on Amazon". What I find super interesting is that he is saying this precisely at the moment when a large contingent of pundits are pushing a very different narrative (I'm thinking of Gary Marcus, the Acemoglu paper, etc.), hinting that genAI is a dead end and costs too much money for what it can deliver. It's increasingly difficult to build your own opinion when so many contradictory messages are flying around - delivered by the very experts who built this technology over the last decades!
Easy! Trust my opinion. 😄
It's ever more about becoming comfortable being uncomfortable with the speed of change.