Quick comment on the Jonathan Godwin piece - thanks for sharing it. Just this morning I was thinking about LLMs and arguing that they're really a (massively overfunded) solution in search of a problem: it's not as if the world desperately needed a cheap, quick way to generate even more pedantic essays, cookie-cutter fiction, or malevolent misinformation than we already deal with today. Godwin writes, "Nothing was “solved” when GPT3 was released": bingo. But while he frames it mostly in terms of measurement ("there is no real evaluation metric or target for GPT3"), my concern is mostly with the teleology of it all. Not to be a Luddite (I am not), but what is the purpose of all this? Has anybody thought really, truly, about the purpose of LLMs?

I have nothing in principle against people working on technologies to produce better batteries, cheaper desalinated water, more crops, or even more babies. But... more text? More written words arranged in a sequence that appears to make sense? Ever since Gutenberg, when has there ever been a scarcity of text? Where or when have we not drowned in text? Who ever argued that the cost of writing decent-enough text was a constraint on reaching humanity's goals?

When search engines came out almost 30 years ago, you could at least make a reasonable argument that "organizing the world's information" was a worthwhile goal, because most information was, indeed, disorganized and hard to access for most people, most of the time. But what worthwhile goal do LLMs serve? If we can't even agree on the definition of the problem, then VCs and corporate giants are merely pouring billions of dollars into a pointless game, played mostly by white guys, responsible for much potential and actual harm, toward no discernible goal.
