🔮 AI's expansion; DeepMind & chat; mind-reading; Taiwan & China; hyperauthorship++ #412
This is the year which could see a significant shock to the system.
This week’s edition lasers in on AI. There is so much going on, and this year could be a turning point: one that delivers a significant shock to the system. I’ll be covering these issues in the coming weeks and months.
I think it’s important that we have a wide, informed discussion. You can help. If you think our nuanced and interdisciplinary systems view is valuable, then please help other people find it. The best way to do this is to share with five or six friends by email (not social). Click the button below. The debate needs to be widely held, and you can help.
The comments are also open to subscribers. And for premium members engaged in the Exponential Do community, the Slack discussions in #topic-AI are rather vibrant.
Sunday chart: Brain booster
Source: Noy and Zhang, 2023 (not peer-reviewed)
Noy and Zhang found that using ChatGPT for writing tasks made participants perform better and much faster. Essentially, exposure to generative AI increases efficiency and job satisfaction. The most surprising effect, however, is that it is an equaliser: it benefited the worst performers more, thereby reducing the inequality between workers. It’s worth noting that this is a working paper and hasn’t yet been through rigorous peer review.
In an essay last week, I compared ChatGPT to a rather mediocre human graduate. Turns out, the chatbot might help mediocre graduates become excellent ones!
See also: Ethan Mollick on how to get the best out of Bing AI, while working around its quirks. Bing AI is more valuable as an analytic engine than a search engine.
Key AI reads
Price goes down, use goes up. The price of OpenAI’s ChatGPT API is low, reflecting a drop of 96.6% for similar capabilities in less than a year. The cheapest model (gpt-3.5-turbo) is not as good as the best available model (text-davinci-003), but it is 10% of the price per token.1 That is probably good enough for many applications. Using the ChatGPT API, it would cost about $4.30 to process (and learn from) the entire Harry Potter series. It would take an average human reader about 72 hours of non-stop reading.
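As a back-of-the-envelope check, the sketch below roughly reproduces that figure. The word count, tokens-per-word ratio and per-token price are my assumptions (gpt-3.5-turbo’s launch price), not numbers from the piece:

```python
# Rough cost of running the Harry Potter series through the ChatGPT API.
# All three constants are assumptions for illustration.
WORDS_IN_SERIES = 1_084_000   # ~1.08M words across the seven books
TOKENS_PER_WORD = 2           # "a typical word might be a couple of tokens"
USD_PER_1K_TOKENS = 0.002     # gpt-3.5-turbo price at launch, March 2023

tokens = WORDS_IN_SERIES * TOKENS_PER_WORD
cost = tokens / 1_000 * USD_PER_1K_TOKENS
print(f"{tokens:,} tokens -> ${cost:.2f}")  # 2,168,000 tokens -> $4.34
```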
Horse race. Given the feverish uptake of LLMs like GPT-3, why did DeepMind not get out of the gate first? Jonathan Godwin, a former DeepMind engineer, argues it comes down to company culture. OpenAI is an engineering organisation, whereas DeepMind is a research organisation. My view is that this is credible: the engineering company ships products, such as ChatGPT. Products are evolving answers to open-ended questions. A research company builds answers to more well-defined research questions; AlphaFold addresses the protein-folding problem. Product/engineering companies ship, learn, ship, learn, ship, learn. A far cry from the discipline of scientific research.
Mind reading. Japanese researchers Takagi and Nishimoto used Stable Diffusion to reconstruct images from brain activity with astonishing precision.
The paper is also not yet peer-reviewed, but it marks the culmination of a technology that has been in the works for years. In EV#138, back in November 2017, I wrote:
Purdue researchers have paired up high-fidelity fMRI scans with deep learning networks to decode videos people are watching, from brain activity alone. This is approaching real-time mind-reading.
Problematic parrots: Mind reading it might be, mind understanding it is not. Computational linguist Emily Bender warns that AI’s linguistic abilities trick us into thinking that these systems understand and create meaning.2
See also: this great introduction to neuro-symbolic AI. I discussed the subject extensively with contrarian neuroscientist and entrepreneur Gary Marcus on my podcast.
The EU grapples with managing the risks of generative AI. In my book, I introduced the idea of the exponential gap: slow, rigid organisations struggle to keep up with exponential technologies. The EU has, surprisingly, woken up to many of the challenges of generative AI. As the bloc considers whether generative AI requires specific regulatory intervention, big tech is pushing back. My view, which I’ll detail in a future members’ commentary, is that we need to act quickly to put in place standards or expectations for these powerful technologies and the applications built on them. A starting point will be widespread discussion, and the willingness of those building such tools to engage in it. That won’t be sufficient; more will be needed.
Market data
53% of Taiwan’s outbound foreign investment still goes to China.
The growth in global CO2 emissions in 2022 would have been nearly three times higher were it not for clean energy. See also, in the US, 99% of coal plants cost more to keep running than building new renewable capacity and connecting it to the grid. And… the top 1% of emitters produce over 1,000 times more carbon than the bottom 1%.
UK car production is at its lowest point in 66 years as the industry struggles to transition to electric vehicles.
The rise of the autocrats. The share of world trade between democracies has declined from 74% in 1998 to 47% in 2022.
Short morsels to appear smart at dinner parties
😍 Our planet may hold vast reserves of natural hydrogen.
📚 Women now allegedly publish more books than men.
🧪 The challenge of “hyperauthorship”, papers with 100+ authors.
👍 The FDA approved the first portable MRI machine. (Don’t hook it up to Stable Diffusion just yet!)
📱 Kenya’s e-commerce scene is being transformed by the agent-network model popularised by M-Pesa.
Exponential knowledge community
Join our invite-only Slack by filling out this application [open on a rolling basis].
Members in London are meeting up for breakfast on March 14. Members in NYC are meeting up on March 23.
What you’re up to:
Chanuki Seresinhe is speaking on The Rise of Generative AI: Balancing its Power with Responsible Leadership on March 16.
Jamie McDonaugh launches his startup Superchain after a year in stealth mode, offering to quickly collect and customise blockchain data for use by developers.
Rafael Kaufmann published a piece on strategy for the global regeneration movement.
Share your updates with EV readers by telling us what you’re up to here.
End note
Artificial intelligence tools are getting more powerful, and they are getting cheaper. Used well, they are genuinely useful, and declining prices mean we’ll see them used more and more across industry.
At the same time, the problematic aspects are becoming increasingly apparent. Whether it is using ChatGPT to slash the cost of writing effective phishing emails or the uncontrolled proliferation of these technologies (the weights of Facebook’s LLaMA model have been leaked on 4chan), problematic implementations will grow.
It’s possible that attempts to nudge these models into good behaviour (as Microsoft has been trying with Bing) will work. It’s also possible that they won’t, and that jailbreaking them (for good reasons or bad) will remain possible. What we can say with certainty is that we can’t be certain we’ll avoid that hazardous path.
There will be a trade-off between helpfulness and harmlessness. It is unlikely we’ll get a free lunch: a really powerful model that can’t be put to bad use.
It’s a wicked problem, but one we’ll need to address.
best
Azeem
1. A token is a unit of text that language models use to make sense of natural language. A typical word might be a couple of tokens.
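For readers who want to see tokenisation in practice, here is a minimal sketch using OpenAI’s open-source tiktoken library. The cl100k_base encoding is the one used by gpt-3.5-turbo; the sample sentence is my own:

```python
# Count words vs tokens with tiktoken (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by gpt-3.5-turbo
text = "Exponential technologies compound quietly, then all at once."
tokens = enc.encode(text)
print(f"{len(text.split())} words -> {len(tokens)} tokens")
```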
Quick comment on the Jonathan Godwin piece - thanks for sharing it. This morning I was just thinking about LLMs and arguing that they're really a (massively overfunded) solution without a problem: it's not like the world absolutely needed a cheap and quick way to generate even more pedantic essays, cookie-cutter fiction, or malevolent misinformation than we deal with today. Godwin writes "Nothing was “solved” when GPT3 was released": bingo. But while he puts it mostly in terms of measurement ("there is no real evaluation metric or target for GPT3"), my concern is mostly with the teleology of it all: not to be a luddite (I am not), but what is the purpose of all this?

Has anybody thought really, truly, about the purpose of LLMs? I have nothing in principle against people who are working on technologies to produce better batteries, cheaper desalinized water, more crops, or even more babies: but... more text? more written words arranged in a sequence that appears to make sense? Ever since Gutenberg, when has there ever been a scarcity of text? Where or when have we not drowned in text? Who ever argued that the cost of writing decent enough text was a constraint to reaching humanity's goals?

When search engines came out almost 30 years ago, you could at least make a reasonable argument that "organizing the world's information" was a worthwhile goal, because most information was, indeed, disorganized and hard to access for most people, most of the time. But what worthwhile goals do LLMs have? If we don't agree on the definition of the problem, VCs and corporate giants are merely putting billions of dollars into a pointless game, played by mostly white guys, responsible for much potential and actual harm, for no discernible goal.