🔮 GPTs for all, AzeemBot; conspiracy theorist AI; big tech vs. academia; reviving organs ++448
An insider perspective on AI and exponential technologies
Hi, I’m Azeem Azhar. As a global expert on exponential technologies, I advise governments, some of the world’s largest firms, and investors on how to make sense of our exponential future. Every Sunday, I share my view on developments that I think you should know about in this newsletter.
Sunday chart: GPTs, GPTs everywhere, but AGI nowhere

OpenAI’s developer event was emblematic of the pace of innovation in the Exponential Age. The biggest moment (of many) was the arrival of customisable agents, which the firm calls GPTs. This allows any user to build a bot for a specific purpose.
I’ve been using custom bots for Claude 2 and GPT-4 via Poe and Typing Mind to speed up my interactions. These bots include Russian Literature Bot, DJ Bot and AzeemBot¹. Custom GPTs will become much more powerful than my current set because of GPT-4’s function calling (accessing other apps) and retrieval-augmented generation (a way of anchoring text to external references). Early demos are impressive: see, for example, my colleague’s GPT bot, The Exponentialist, or Rowan Cheung’s social media post optimiser.

The firm also announced a much larger context window for GPT-4: at 128,000 tokens, roughly the length of a book. A longer context window means more useful, longer interactions. I have been playing with it this week and am pretty impressed by the much lengthier research questions you can investigate with it. However, its capacity to recall specific pieces of information begins to drop at around 73,000 tokens, according to one quick piece of research. This is a problem common to LLMs with large context windows.
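To make retrieval-augmented generation a little more concrete, here is a minimal sketch in Python of the pattern behind these custom bots. The embed() and generate() functions are hypothetical placeholders for whatever embedding model and LLM you actually call; the point is simply that the bot looks up the chunks of its custom knowledge nearest to the question and anchors the prompt to them.

```python
import numpy as np

# Hypothetical placeholders -- stand-ins for real embedding and LLM API calls.
def embed(text: str) -> np.ndarray:
    """Map text to a vector in latent space (toy embedding for illustration)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    """Pretend to call an LLM with the assembled prompt."""
    return "[LLM answer grounded in the retrieved context]"

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A custom GPT's "knowledge": industry-specific documents, embedded once up front.
documents = [
    "Notes on OpenAI's DevDay announcements",
    "Liner notes on house, techno and ambient music",
    "Chapter summaries of Russian novels",
]
index = [(doc, embed(doc)) for doc in documents]

def answer(question: str, k: int = 2) -> str:
    q = embed(question)
    # Retrieve the k chunks closest to the question in latent space...
    top = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)[:k]
    context = "\n".join(doc for doc, _ in top)
    # ...and anchor the generation to that retrieved context.
    return generate(f"Answer using only this context:\n{context}\n\nQ: {question}")

print(answer("What did OpenAI announce this week?"))
```

In a production bot, the placeholders would be real embedding and chat-completion calls, and the documents would be whatever files the bot’s creator uploads.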
A few reflections: OpenAI has put paid to the prospects of several startups as it moves to capitalise on its early advantage. GPT-4 is still the most performant model around.
Customised GPTs are the first step to building an “app store” on LLMs-as-a-platform, which Altman teased in this week’s keynote speech. Treating LLMs as the next computing platform was the message I took away when I spent a week in Silicon Valley earlier this year.
This makes sense: LLMs are like databases. They know what they know and struggle to imagine data they haven’t been trained on; researchers have so far found little evidence that LLMs can generalise beyond their training data. But unlike traditional databases, they are non-deterministic: you won’t get the same response each time you query them. They also store their data in a high-dimensional latent space, which lets us explore concepts within that space. I think of this as a powerful analogue to the “join” operator in traditional databases, because drawing connections across concepts, things that are sort-of like something else, resembles analogical thinking. These two qualities, non-determinism and embeddings in latent space, make their behaviour very different from traditional databases: extremely powerful in some ways, and unhelpful in others.
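As a toy illustration of that “join in latent space” idea (not how any particular LLM works internally), contrast an exact database lookup with a nearest-neighbour query over embeddings; the three-dimensional vectors below are made up purely for illustration.

```python
import numpy as np

# A traditional database: exact, deterministic key lookup.
table = {"Tolstoy": "author of War and Peace", "Detroit techno": "electronic music genre"}
print(table.get("Tolstoi"))  # -> None: a near-miss key simply fails

# A latent space: concepts are vectors, and similar things sit near each other.
# Made-up 3-d embeddings, purely for illustration.
embeddings = {
    "Tolstoy":        np.array([0.9, 0.1, 0.0]),
    "Dostoevsky":     np.array([0.8, 0.2, 0.1]),
    "Detroit techno": np.array([0.0, 0.2, 0.9]),
}

def nearest(query_vec: np.ndarray) -> str:
    """The fuzzy 'join': return the concept whose embedding is most similar."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(embeddings, key=lambda name: cos(embeddings[name], query_vec))

# A query vector that only roughly resembles Tolstoy still finds a neighbour --
# powerful for analogical connections, unhelpful when you need an exact answer.
print(nearest(np.array([0.85, 0.15, 0.05])))  # -> "Tolstoy"
```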
The new GPT app store could open up an extensive range of markets, as the creators of these customised bots can teach them relevant, industry-specific know-how.
I expect this announcement to spark an explosion of creativity and new GPTs, as users with no coding expertise get to train and customise these models in a user-friendly way. More creativity, more innovation: the AI train shows no sign of slowing down.
Key reads
Artificial intelligence, biological values. Google has announced a programme for post-doc researchers, one of the many ways in which big tech companies are attracting the best research and engineering talent into their ranks. The trend of researchers leaving academia for big tech was already noted in 2019 by Jurowetzki and colleagues. “Founding mother” of AI Fei-Fei Li shares her own bittersweet experience of joining Google after years as a Stanford professor: better tech, huge budgets, unparalleled datasets, and crucially, some of the top talent in the field. The most exciting research could no longer be done outside of a few industrial labs, further compounding their advantage. The problem, Li points out, is their disregard for the societal benefit of the technology, as these concerns compete with the urge to outpace competitors. The solution: AI as a human-centred practice, actively involving traditional universities and their ethical know-how, while being “unafraid to confront the real world”. (You can listen to my 2020 conversation with Fei-Fei here.)
Cautious restraint. A good discussion with FTC chair, Lina Khan, on her approach towards AI which includes things I’ve recommended in the past like structural separation, interoperability and some concerns about big tech acquisitions. (See also, excellent visualisation on the compute thresholds in the recent Biden Executive Order.)
The conditions of the knowledge working class. Of the many big shocks from LLMs, one that has stood out is that it is not so-called routine white-collar work (data entry, administration) that has been most vulnerable to disruption, but rather roles we might consider “creative”, like copywriting, consulting or marketing. A study from Hui et al. provides some data by looking at an online gig platform catering to freelance knowledge workers. The release of ChatGPT saw a 2% decline in advertised jobs and a 5.2% drop in monthly earnings, with “top employees disproportionately hurt by AI.” To dig deeper into the most recent research on generative AI and work, see our Chartpack:
📈 Chartpack: An update on genAI and work
In May, we released a Chartpack that delved into studies exploring the transformative potential of genAI in fields such as writing, coding, and call centre work. The results showed the significant promise of the early LLMs, increasing performance across these domains.
42. This week, X.AI rolled out Grok, Elon Musk’s latest crusade to nudge conversational AI towards unravelling the truths of the universe. It’s a commendable first effort. In just two months, his team has developed a model that outperforms the likes of GPT-3.5 and Llama 2 70B on both the MMLU and human evaluation benchmarks. Its standout claim is the promise of ‘real-time knowledge of the world via the 𝕏 platform’. Although many valuable voices have departed from 𝕏, the prospect of an LLM with access to up-to-the-minute news could render Grok a formidable ally. This assumes, of course, that one can filter the voices Grok draws upon. If not, we might witness the emergence of the first LLM whose “hallucinations” are facts – essentially, the first conspiracy theorist LLM…
Market data
Studies have shown that while happiness generally increases with income, this trend plateaus for some at around $60,000 to $90,000; however, recent findings suggest this plateau mainly affects the least happy 20% of people, revealing a more complex relationship between income and happiness.
A study estimates that fake online reviews lead to a welfare loss of 12 cents per dollar spent.
A UNESCO-commissioned survey in 16 countries reveals that over 85% of people are concerned about the impact of online disinformation, with 87% believing it has negatively affected their country’s politics.
Google Cloud TPU executed the world’s largest distributed training job for LLMs, utilising over 50,000 TPU v5e chips.
A recent YouGov survey of 1,063 registered voters in the US found that 64% prioritise lower prices as their top economic concern, while only 7% cited jobs as their primary focus.
Short morsels to appear smart at dinner parties
🇳🇦 Namibia is set to become the first African country with a decarbonised iron plant, using only green hydrogen.
🚎 Free bus rides for equality of movement - and work - for Indian women.
👂 Why we hear voices.
🐈 Genomic research aids the conservation of Scotland’s threatened ‘Highland tigers’ and their distinction from domestic cats.
👣 A spinal implant enabled a man with Parkinson’s to walk again.
📖 Sapiens’s legacy, ten years on: popular acclaim versus academic scepticism.
End note
Twitter/X has been a reliable way to spread the word about Exponential View to the world since I first started writing the newsletter. With Twitter dying, I have a quick request: share this edition with your colleagues and friends — best via email or using the Share button below.
Best,
Azeem
What you’re up to — community updates
Murray Shanahan and colleagues’ research on role-playing with LLMs is published in Nature.
Chanuki Seresinhe has launched the Beautiful Places AI Kickstarter campaign.
Josh Hardman launched a reader-supported model for his Insider’s guide to the business, policy, and science of the psychedelic renaissance at Psychedelic Alpha.
Freddie Fforde, founder of Patch, was interviewed for a BBC article about the new “work near home” trend.
Share your updates with EV readers by telling us what you’re up to here.
¹ Russian Literature Bot is designed to help me navigate Russian literature; DJ Bot focuses on its knowledge of house, techno, ambient and urban music; AzeemBot is supposed to think like me and help me with research.