🔮 Global chip chat; AI mythology; world government; quantum cats and TikTok ++ #423
Your insider’s guide to AI and exponential technologies
Hi, I’m Azeem Azhar. As a global expert on exponential technologies, I advise governments, some of the world’s largest firms, and investors on how to make sense of our exponential future. Every Sunday, I share my view on developments that I think you should know about in this newsletter.
In today’s edition:
The UK goes soft on semis,
Global government,
Collective intelligence.
Sunday chart: Mind the gap
Chips are important. They are the steam engines of the 21st century. The Chinese know this. The Europeans are starting to get it. And the US certainly does. But this week, I wondered whether the British government does. It finally unveiled a semiconductor strategy, amounting to £1bn ($1.2bn) of support until 2030.
It’s a paltry sum: on a GDP-PPP weighted basis, roughly six times less than what the US and EU have committed, and strung out over several years.
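For the curious, here is the kind of back-of-the-envelope arithmetic behind that comparison. The subsidy figures (US CHIPS Act ~$52bn, EU Chips Act ~€43bn, roughly $47bn) and the GDP-at-PPP estimates are my own rough assumptions for illustration, not figures from the strategy documents:

```python
# Back-of-the-envelope check: chip support as a share of GDP at PPP.
# All figures are rough assumptions, for illustration only.
support_bn_usd = {"UK": 1.2, "US": 52.0, "EU": 47.0}   # headline packages, $bn
gdp_ppp_tn_usd = {"UK": 3.7, "US": 25.0, "EU": 25.0}   # GDP at PPP, $tn

share = {c: support_bn_usd[c] / (gdp_ppp_tn_usd[c] * 1_000) for c in support_bn_usd}

for c, s in share.items():
    print(f"{c}: {s:.3%} of GDP at PPP")
print(f"The US commits roughly {share['US'] / share['UK']:.1f}x more than the UK, relative to GDP")
```

On these assumptions the UK's package is around 0.03% of GDP at PPP, against roughly 0.2% for the US and EU, which is where the "about six times less" comparison comes from.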
Of course, this isn’t just about the numbers. China’s significant efforts have only paid off in the lower-value end of the industry. The country has flubbed more advanced chips. Top-down planning and knowledge-intensive industries don’t seem to mix well, as Beijing has discovered. Washington’s export controls have, for now, squashed whatever life was left.
Downing Street’s plan isn’t just miserly. It fails to address the looming shortage of talent in engineering, chemistry and materials science — skills indispensable to the semiconductor sector. The US is struggling to find the talent it needs: the shortfall could reach 400,000 people by 2030. And with the British government determined to clamp down on legal migration (the kind that could attract that talent), closing the skills gap will be even harder here.
More sensible is the ambition to focus on areas where Britannia’s expertise rocks (like design and compound semiconductors), and to ally with other nations with complementary capabilities. That focus-and-partner approach is a necessity for all but the three big economic blocs.
But ultimately numbers do matter (and they signify seriousness). London’s plan seems a tad wan, like lukewarm tea or soggy cucumber sandwiches. It isn’t really sufficient to signal to businesses that this sector matters to the UK. As I argued last year, governments are increasingly steering their economies towards the Exponential Age through catalytic approaches that prepare for the next decades. For the UK, semiconductors may just have to stay on the back burner unless there is a stronger, more directed commitment, backed by open-armed access to talent and the financial incentives needed to unlock research and private investment.
Key reads
Governing the world. An excellent interview with the Brazilian philosopher Roberto Unger on how we tackle global problems. Unger offers a counterbalance to the oft-romanticised vision of global government:
The division of the world into sovereign nation-states is by far the lesser evil, compared to the union of mankind into a single state or into a collection of hegemonic states that would achieve an agreement among themselves and impose it on the rest of mankind in the name of what is allegedly necessary for all.
I’m somewhat sympathetic to this. As readers will know, I think the old operating system of the world has really run out of road. We need new types of institutional experiments, born of diversity, to figure out the new rules of the road.
Unger proposes that multilateral voluntary coalitions are better positioned to deal with global issues. That is, once again, not as outlandish as it seems. Voluntary arrangements have the benefit of moving quickly. And they can, in some way, use the power of technology and the market to effect global change. One example: solar and wind power are taking off around the world not because of a global compact agreed around an over-sized table. Rather, enough countries have given a sufficient demand stimulus to drive those technologies down the experience curve. Prices have come down. Self-interest - buying the best solution at the best price - is now delivering the outcome we want: decarbonisation. (See also: The Economist interviews Henry Kissinger on how to avoid World War 3.)
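For readers who haven’t met the experience curve before, here is a minimal sketch of the mechanism. The 20% cost decline per doubling of cumulative deployment is an illustrative assumption, broadly in line with figures often quoted for solar PV, not a number from the interview:

```python
import math

# Wright's law / the experience curve: unit cost falls by a fixed fraction
# every time cumulative production doubles. Figures below are illustrative.
def unit_cost(cumulative_units: float, first_unit_cost: float,
              learning_rate: float = 0.20) -> float:
    """Approximate unit cost once `cumulative_units` have been produced."""
    doublings = math.log2(cumulative_units)
    return first_unit_cost * (1 - learning_rate) ** doublings

# Demand stimulus -> more deployment -> more doublings -> lower prices.
for scale in (1, 10, 100, 1_000, 10_000):
    print(f"{scale:>6}x cumulative deployment: cost index {unit_cost(scale, 100):.1f}")
```

On these assumptions, a 10,000-fold increase in cumulative deployment cuts unit costs by roughly 95%, which is why a handful of countries stimulating demand can end up changing the economics for everyone.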
AI & jobs. BT, the UK’s largest telecom company, plans to reduce its workforce by up to 55,000 jobs by 2030, with 10,000 of these cuts resulting from the integration of AI into their customer service operations. As we showed in our latest Chartpack, initial studies suggest that generative AI will likely have the biggest effect on white-collar jobs. Both short-term and long-term predictions of technological unemployment are tricky and don’t have a very good track record.
But… this wave of technology may be different to previous ones: AI systems are capable of automating routine and non-routine tasks; the technology is general-purpose and can impact the whole economy; firms are much more adept, after years of digitisation and responding to Covid, at making technological investments; and there could be a real skills mismatch for those in white-collar jobs facing an automation squeeze. More certain, I suspect, will be downward wage pressure on knowledge workers. Employers, historically, have often used productivity enhancements to reduce salary bills. (See also: detailed reporting on how getting hired has become tougher and tougher for white-collar workers in the US, with employers placing ever more emphasis on cultural fit and technical skill.)
AI myths. Jaron Lanier wrote an excellent piece about how the best way of thinking about AI is not as a creature, but as a tool, an “innovative form of social collaboration” moderated by statistics. He explains that
[m]ythologizing the technology only makes it more likely that we’ll fail to operate it well—and this kind of thinking limits our imaginations, tying them to yesterday’s dreams. We can work better under the assumption that there is no such thing as A.I. The sooner we understand this, the sooner we’ll start managing our new technology intelligently.
Daniel Dennett’s essay in The Atlantic raises the issue that AI is becoming increasingly good at impersonating people, a danger to democracies, which rely on citizens having reliable information. He does, however, fall into the very trap Lanier warns against: anthropomorphising AI, suggesting it somehow has its own will and an inherent survival instinct.
Weekly commentary
Are large language models more like decathletes or sprinters? Or something else entirely? In this week’s commentary, I explore whether specialist or generalist LLMs will dominate. The answer is not as black-and-white as it may initially seem.
Data
There is a 98% chance that at least one of the next five years will be the hottest on record.
China’s youth unemployment rate hit an all-time high, surpassing 20% in April.
With $25bn of venture capital investment in the first quarter, San Francisco startups attracted more than twice as many VC dollars as the next nine US metros.
Overwhelmed by meetings? Only 35% of us believe we would be missed in most of our meetings.
Can the aviation industry curb its carbon emissions? History suggests it’s possible. Over the last five decades, the average passenger’s carbon emissions per kilometre have dropped by over 80%. This substantial decrease has been largely driven by each successive generation of aircraft being more fuel-efficient, typically burning 15-20% less fuel than its predecessor.
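As a quick illustration of how those per-generation gains compound, here is a toy calculation. The 15-20% range comes from the item above; the generation counts are purely illustrative assumptions:

```python
# How per-generation fuel-burn improvements compound over time.
# The 15-20% range is from the item above; generation counts are illustrative.
def remaining_fuel_burn(per_generation_gain: float, generations: int) -> float:
    return (1 - per_generation_gain) ** generations

for gain in (0.15, 0.20):
    for gens in (4, 6, 8):
        cut = 1 - remaining_fuel_burn(gain, gens)
        print(f"{gain:.0%} per generation x {gens} generations -> {cut:.0%} total reduction")
```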
Short morsels to appear smart at dinner parties
🏎️ Can we build an electric car without China?
🧞 AI enables TikTok creators to reimagine history from decolonial perspectives.
🙊 Language models cost a lot more in certain languages (see the tokeniser sketch after this list).
❄️ Unusual Antarctic sea-ice patterns are signalling potential climate change impacts.
⚱️ France wants to fight the iPhone’s planned obsolescence.
🧬 Studies you should know about: (i) understanding why Covid affects some much more than others; (ii) physicists putting the largest-ever object into superposition (a huge Schrödinger’s cat!); (iii) challenging the single-region theory of human origins in Africa.
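On the language-model pricing point above: commercial LLM APIs typically charge per token, and the same sentence can tokenise into far more tokens in some languages than in others. Here is a minimal sketch using OpenAI’s open-source tiktoken library; the sample sentences and their rough translations are my own:

```python
# Why LLM usage can cost more in some languages: APIs charge per token, and
# tokenisers trained mostly on English need more tokens for other languages.
# Requires `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokeniser used by GPT-3.5/4-era models

samples = {
    "English": "Language models cost a lot more in certain languages.",
    "Finnish": "Kielimallit maksavat paljon enemmän tietyillä kielillä.",
    "Hindi":   "भाषा मॉडल कुछ भाषाओं में बहुत अधिक महंगे होते हैं।",
}

for language, text in samples.items():
    print(f"{language:>8}: {len(enc.encode(text))} tokens for roughly the same sentence")
```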
End note
I know, I haven’t said much about the Senate hearings with Sam Altman, Christina Montgomery, or EV friend Gary Marcus. You can watch the testimony, linked in the community updates below.
I’m still digesting. But my short take is that it was a pretty good hearing, given it was a Senate committee. The Senators showed up largely well informed, and the respondents gave decent evidence, even if I don’t necessarily agree with every line of questioning (or every answer).
Cheers,
Azeem
PS. If you’re experimenting with LLMs yourself or want to see how the Exponential View community is using AI, join our members-only AI Praxis session on May 31. The session is made available only to members with an annual subscription.
What you’re up to — community updates
Lubna Gem Arielle celebrates Mental Health Awareness Week with a reflection on the practice of actively caring.
Gary Marcus gave evidence on artificial intelligence risks to the US Senate. (Evidence here.)
Congrats to Reshma Sohoni and Carlos Eduardo Espinal for closing Seedcamp’s sixth fund. Their fund returns are absolutely staggering. Well done!
Congrats are in order to Hampus Jakobson as well — Pale Blue Dot closed its second fund.
Robert Chandler and his team at Wordware deployed an early demo of an agent that can self-write plugins.
Share your updates with EV readers by telling us what you’re up to here.