Quick comment on the Jonathan Godwin piece - thanks for sharing it. This morning I was just thinking about LLMs and arguing that they're really a (massively overfunded) solution without a problem: it's not like the world absolutely needed a cheap and quick way to generate even more pedantic essays, cookie-cutter fiction, or malevolent misinformation than we deal with today. Godwin writes "Nothing was 'solved' when GPT3 was released": bingo. But while he puts it mostly in terms of measurement ("there is no real evaluation metric or target for GPT3"), my concern is mostly with the teleology of it all: not to be a Luddite (I am not), but what is the purpose of all this? Has anybody thought really, truly, about the purpose of LLMs? I have nothing in principle against people who are working on technologies to produce better batteries, cheaper desalinized water, more crops, or even more babies: but... more text? more written words arranged in a sequence that appears to make sense? Ever since Gutenberg, when has there ever been a scarcity of text? Where or when have we not drowned in text? Who ever argued that the cost of writing decent enough text was a constraint to reaching humanity's goals? When search engines came out almost 30 years ago, you could at least make a reasonable argument that "organizing the world's information" was a worthwhile goal, because most information was, indeed, disorganized and hard to access for most people, most of the time. But what worthwhile goals do LLMs have? If we don't agree on the definition of the problem, VCs and corporate giants are merely putting billions of dollars into a pointless game, played by mostly white guys, responsible for much potential and actual harm, for no discernible goal.
I agree with some of your points, except...
1- The last one: I am just an old white male (seeking no funds from VCs...), doing zero harm with my own research.
2- Generative AI will reset everything, and work first, and YES, it will change my life, and yours probably too...
I focus on 3 exponential (and irresistible) "tidal waves" of our post-covid world, call it our "After Office" world. I would argue that Generative AI will be the largest (first wave), combined with hybrid and flexible work becoming scalable (second and third waves). As a consequence, IPR/copyright law may become obsolete.
I use ChatGPT every day as a powerful research assistant, but it is NOT reliable for "facts" or "opinion (taste)". Within one year, I bet you "$ against donuts" that ALL our tools (collaborative and creative) will integrate "Chat-AI", forcing us to rethink all our jobs. It will have a profound impact on every market, starting with the job market.
I think the combination of the 3 exponential waves will reset everything, because we can now rethink (reset) our work AROUND our life. It is a fundamental change of society, the most important in 5,000 years, when our cities were born. We now have the power to choose our cities ONLY for their quality of life (and not their jobs).
The "wicked AI question" is Azeem's: the trade-off between helpfulness and harmlessness. AI, as an "infinite bullshitter" distorting reality, may be more dangerous for our society than liars. Case in point: Trump.
You seem to be arguing that a worthwhile goal for ChatGPT is to... enable knowledge workers to live on the beach? Sure: together with a bunch of other technologies, it may be a contributing factor. My point was slightly different: hundreds of thousands of books (written by human writers) are published each year already: what is the purpose of being able to produce and publish millions, or billions, of books? At some point you run into the fact that the day only has 24 hours and there's no one to read them.
Way better than "work and live on the beach". I argue that the 3 waves (not just AI) are empowering all knowledge workers with a fundamental power: they can now rethink their work AROUND their (better) lives. It's a new paradigm.
As for your last point, I agree. The biggest threat of AI is enabling the rise of "infinite bullshitters", more dangerous than liars. I recommend Frankfurt's book "On Bullshit".
What will we become as a society when truth becomes irrelevant?
LLMs can be used for many other things. I've spoken to people in law firms using LLMs for first drafts - saving a lot of time.
https://en.wikipedia.org/wiki/On_Bullshit