🔮 AI school; the EU acts; Chinese science; quantum; holobionts ++ #427
Your insider guide to AI and exponential technologies
Hi, I’m Azeem Azhar. As a global expert on exponential technologies, I advise governments, some of the world’s largest firms, and investors on how to make sense of our exponential future. Every Sunday, I share my view on developments that I think you should know about in this newsletter.
In today’s edition:
Will the EU AI Act favour open-source AI?
Chinese researchers overtake the US.
The first synthetic human embryo.
📢 We’re looking for cool products and companies to share with our audience. If you’ve got a product or service you think our readers would love, complete this form and we’ll get in touch to discuss.
Sunday chart: The EU acts on AI
This week, the European Parliament proposed¹ the EU AI Act, the first comprehensive set of regulations for, well, AI. (An industry? A technology?) It takes a risk-based approach to regulation, categorising AI into low-risk, high-risk, and prohibited areas, with specific measures for each. The Act also mandates transparency by requiring AI systems like ChatGPT to disclose their AI-generated nature, differentiate deep-fake images from real ones, and implement safeguards against generating illegal content.
On the one hand, it’s good to see some rules (for example, on real-time facial recognition); on the other, it’s inescapable that there are problems. The rush to put rules around foundation models could be problematic. In particular, the Act seems to regulate models as the end product, rather than the uses of those models. Given there will be bajillions of models out there, I’m not convinced this is appropriate.
Researchers at Stanford University tested major foundation models against the rules and found that they largely do not comply. First, there’s a clear lack of disclosure about the copyright status of training data. Next, there is no clarity on energy use or risk mitigation, nor are there evaluation standards or an auditing system. The most compliant models were open-source ones.
The second problem is whether some of these rules are simply too early for an emerging technology in a complex, competitive environment. Emmanuel Macron feels the same. Early, heavy rules tend to favour incumbents, who can afford armies of lawyers and additional engineers to help with compliance.
Measure twice, cut once, they say. It feels like the cutting, especially around foundation models, may have been a bit rushed.
See also: Scientists and former White House advisors sign an open letter focusing on current harmful impacts of AI, rather than future existential risks. To go deeper, see my commentary from a couple of weeks ago on this subject.
Weekly commentary
Key reads
The United Kingdom of AI. PM Rishi Sunak wants the UK to harness AI for productivity, economic growth and more. As The Economist outlines, Britain already houses important AI companies such as Google DeepMind and has several excellent universities. A flurry of new American venture firms on the isle might further fertilise the entrepreneurial ecosystem. The state also generates a lot of valuable data, above all through the NHS. But the UK needs to ensure it has sufficient high-end compute at its disposal. The budget to build a national AI supercomputer is generous, but if the plan is to spend it on a compute cluster that will be outdated within 18 months, the money might be better used to incentivise hyperscalers to build GPU clouds in the UK. Another key priority is attracting talent, which will require an open-armed immigration policy.
My sense is that the UK finally has an opportunity to make Brexit work as an advantage. Although it has lost access to the EU’s single market, it can now forge different relationships with the rest of the world. Perhaps the combination of deep skills, a Goldilocks-sized economy and the ability to act nimbly could make a difference. Notably, Google DeepMind, OpenAI, and Anthropic have agreed to give the UK government priority access to their AI models for research and safety purposes. (Jess Whittlestone and Nikhil Mulani put forward some practical recommendations for how such a government-supported information-sharing regime could work.)
But government interventions need to have the precision of a scalpel, not the thud of a mallet.
Prompting for pedagogy. Ethan and Lilach Mollick explore how large language models (LLMs) might revolutionise education (h/t EV member Gianni Giacomelli). They present seven AI classroom roles, such as tutor, coach, and mentor, each with unique educational benefits and hurdles. The goal is to empower students to collaborate effectively with AI while staying alert to its limitations. The authors advocate active student participation, so that students learn to use AI as a support mechanism, not a substitute for learning. The great thing about this paper is that it offers a practical roadmap for educators wanting to incorporate LLMs into their pedagogy rather than ban them. What is also interesting is that giving “human-like roles” to these chatbots looks like one of the easiest and best-performing approaches to prompt engineering.
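To make this concrete, here is a minimal sketch of role-based prompting, assuming the OpenAI Python client; the model name and the tutor prompt wording are my illustrations, not the Mollicks’ exact prompts.

```python
# A minimal sketch of role-based ("persona") prompting.
# Assumes the official OpenAI Python client (openai>=1.0); the role text and
# model name are illustrative, not taken from the Mollicks' paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_ROLE = (
    "You are a patient tutor. Ask the student questions rather than giving "
    "answers directly, check their understanding at each step, and never "
    "complete the assignment for them."
)

def ask_tutor(student_message: str) -> str:
    """Send a student message to the model with the tutor persona prepended."""
    response = client.chat.completions.create(
        model="gpt-4",  # any capable chat model would do
        messages=[
            {"role": "system", "content": TUTOR_ROLE},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(ask_tutor("Why does dividing by zero break my average calculation?"))
```

The design point is simply that the persona lives in the system message, so every student turn is interpreted through the tutor role rather than the model’s default answer-giving behaviour.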
Other reading: OpenAI’s code interpreter is out to disrupt Oracle, and remake finance; Alexey Guzey on a two-sentence jailbreak that flummoxes both GPT-4 and Claude.
China searched, and China found. In 2022, China achieved a significant milestone: for the first time, it made more contributions to research articles published in high-quality natural science journals than the United States. The Nature Index’s “Share” metric assigns each article a total credit of 1, split among its authors according to affiliation; a country’s Share is the sum of the fractions credited to authors at its institutions. China recorded a Share of 19,373, compared with the US’s 17,610. But high-quality research requires collaboration and openness to the world, and the number of foreigners in China is in stark decline. For example, every last Indian journalist has reportedly been expelled from the country. If China wants to continue on its path to becoming a dominant science and innovation hub, this decline will be a problem.
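For intuition, here is a toy illustration of how a fractional-count metric like Share adds up, assuming equal credit per author on each article; the numbers below are invented, and the real figures are summed over tens of thousands of articles.

```python
# Toy illustration of the Nature Index "Share" (fractional count) metric:
# each article carries a total credit of 1, split equally among its authors,
# and a country's Share is the sum of the fractions held by its authors.
# The data here is invented purely for illustration.

articles = [
    {"CN": 3, "US": 1},  # 3 China-affiliated authors, 1 US-affiliated author
    {"CN": 1, "US": 4},
    {"CN": 2},           # an all-China author list
]

def share(articles: list[dict], country: str) -> float:
    """Sum each article's fractional credit for the given country."""
    total = 0.0
    for authors_by_country in articles:
        n_authors = sum(authors_by_country.values())
        total += authors_by_country.get(country, 0) / n_authors
    return total

print(f"China Share: {share(articles, 'CN'):.2f}")  # 3/4 + 1/5 + 2/2 = 1.95
print(f"US Share:    {share(articles, 'US'):.2f}")  # 1/4 + 4/5       = 1.05
```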
Market data
Artificial artificial artificial intelligence. Between 33% and 46% of crowdworkers on Amazon Mechanical Turk used LLMs when completing tasks.
Microsoft’s CFO says AI products will add $10 billion to its annual revenue (if this is true, its $10 billion investment in OpenAI is a bargain).
China’s non-fossil fuel energy sources now account for over 50% of the country’s total installed electricity generation capacity.
McKinsey estimates that generative AI and other current technologies could automate work activities that absorb 60-70% of employees’ time.
Apple became the first $3 trillion firm.
Short morsels to appear smart at dinner parties
🧫 Biology morsels: A synthetic human embryo has been created by reprogramming embryonic stem cells. Scientists may have identified what causes endometriosis, one of the most painful and underdiagnosed conditions among women. And… the next paradigm shift in biology may come from meta-organisms known as holobionts.
🥼 The data is in. Trump is anti-science.
☄️ IBM’s Eagle quantum computer beat a supercomputer at a complex calculation.
🕵️ Google search got worse when Reddit went dark.
🇸🇪 Beyoncé is being blamed for high inflation in Sweden after she started her world tour in Stockholm.
🚗 Toyota is stepping up its EV game by announcing a next-gen battery offering 600 miles of driving on a single charge.
👣 Strengthening your social bonds through a simple walk and talk.
End note
I had a fascinating time in Hong Kong speaking to Asia-Pac fund managers courtesy of UBS. I cover some reflections in my weekly commentary above.
Bloomberg also interviewed me on the subject. You can find it here.
Cheers,
Azeem
What you’re up to — community updates
Massimo Portincaso and Arnaud Legris have worked with the EIC to draft a blueprint for scaling up European deep tech startups.
Energy Impact Partners, in which Shayle Kann is a partner, invested in a 100-hour iron-air battery system in Georgia. Check out my conversation with Shayle about how VC can enable deep decarbonisation.
Congrats to Juan Avellan on his new role at Senthorus, a cyber security firm.
Share your updates with EV readers by telling us what you’re up to here.
¹ We mistakenly wrote “passed” in the original version of this newsletter.