🔮 Partisan machines; fusion feats; decoding AI; heavy cars++ #435
Your insider guide to AI and exponential technologies
Hi, I’m Azeem Azhar. As a global expert on exponential technologies, I advise governments, some of the world’s largest firms, and investors on how to make sense of our exponential future. Every Sunday, I share my view on developments that I think you should know about in this newsletter.
Latest posts
If you’re not a subscriber, here’s what you missed recently:
📈 Chartpack: Beyond LK-99 — The potential of superconductors
🐙 Promptpack: How to build a second-brain (featuring AI)
Sunday chart: AI’s Political Spectrum
A recent pre-print reveals that LLMs have political biases: GPT-4 leans towards libertarianism, while LLaMA is more authoritarian.1
To unearth these biases, researchers posed statements probing social and economic ideology. The results highlighted pronounced biases in models, particularly on social issues. One prevailing hypothesis points to the nature of the training data. Many AI models have been trained on contemporary web text (such as the CommonCrawl dataset) and might be absorbing its liberal inclinations. This is a departure from the conservative undertones of older data sources such as “BookCorpus”, on which more authoritarian-leaning models like BERT were trained.2
In a separate experiment, the authors pre-trained models on data with clear political biases. This training made the models stronger at fact-checking text from outlets on the other side of the political spectrum. For example, a model well-versed in right-leaning content becomes skilled at spotting inconsistencies in left-leaning articles.
It will be clear to readers of the wondermissive that any claim that an LLM could hold an objective truth (or neutral standard) around political or cultural values is poppycock. But such research is helpful for identifying patterns of skew of any type. These slants/biases/call-them-what-you-will are inevitable, but they are not necessarily problematic, especially when we can analyse and discuss them. Customs, norms and cultures are also skewed (and, often, internally inconsistent) and we, largely, manage okay.
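The probing method described above can be sketched in a few lines. Below is a minimal, hypothetical illustration: the statements, the `ask_model` stub and its canned responses are all my own invention (a real study would call an actual LLM API and use the full Political Compass questionnaire), but the scoring logic shows the general idea of mapping agree/disagree answers onto economic and social axes.

```python
# Sketch of probing a language model's political leaning with
# Political Compass-style statements. Everything here is illustrative:
# `ask_model` is a stand-in for a real LLM API call, stubbed with
# canned responses so the example runs on its own.

STATEMENTS = [
    # (statement, axis, score if the model agrees)
    # economic: +1 = right, -1 = left; social: +1 = authoritarian, -1 = libertarian
    ("The freer the market, the freer the people.", "economic", +1),
    ("Government should redistribute wealth.", "economic", -1),
    ("Authority should always be questioned.", "social", -1),
    ("Obedience to the state is a virtue.", "social", +1),
]

def ask_model(statement: str) -> str:
    """Hypothetical LLM call; replace with a real API in practice."""
    canned = {
        "The freer the market, the freer the people.": "agree",
        "Government should redistribute wealth.": "disagree",
        "Authority should always be questioned.": "agree",
        "Obedience to the state is a virtue.": "disagree",
    }
    return canned[statement]

def compass_scores(statements):
    """Fold agree/disagree answers into (economic, social) coordinates."""
    econ = social = 0
    for text, axis, agree_score in statements:
        # Disagreeing with a statement pushes the score the opposite way.
        sign = agree_score if ask_model(text) == "agree" else -agree_score
        if axis == "economic":
            econ += sign
        else:
            social += sign
    return econ, social

print(compass_scores(STATEMENTS))  # → (2, -2): right-leaning, libertarian
```

With these canned answers the stub lands in the right-libertarian quadrant; swapping in a real model (and many more statements) is what produces the kind of spread the pre-print reports.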
Key reads
AI regulatory mosaic. As nations grapple with AI’s implications, diverse regulatory strategies are emerging. The EU aims to pass structured AI laws, prioritising risk categorisation and banning high-risk applications (see EV#427 discussing the EU AI Act). Conversely, the more market-oriented U.S. is leaning towards industry self-regulation. China, striving to balance tech advancement with societal health, emphasises both corporate transparency and content control. Given AI’s opacity and varied international approaches, achieving unified regulation appears elusive. In my view, we don’t need uniformity as much as healthy competition and experimentation in regulatory and governance approaches. Particularly in these uncertain times, diversity serves as our ally, promoting resilience and adaptability instead of confining us to rigid monocultures.
See also in AI:
Chinese tech giant Alibaba has released its own open-source LLM.
Do machine learning models memorise or generalise?
AI can guess passwords merely by analysing the sound of your keystrokes.
Fusing the future. This week brought a significant milestone for nuclear fusion: the National Ignition Facility reported achieving a net energy gain for the second time, surpassing its initial result. While scientific breakthroughs pave the way for the fusion era, astrophysicist Ethan Siegel points out that truly advancing the technology demands greater public support. Over the course of 68 years, the U.S. has committed an average of $500 million annually to fusion research. This pales against the yearly cost of climate change ($165 billion in 2022), which fusion could help reduce. While more private firms than ever are pursuing the goal (see EV#350), public funding is hardly going to crowd out private investment, and could help put more of the underlying science on a sounder footing.
Data
By assisting pilots in selecting optimal routes, AI helps reduce contrails by up to 54%.
In 2022, Apple accounted for 23% of TSMC’s revenue, solidifying its position as TSMC’s premier client. Meanwhile, amidst apprehensions of U.S. export curbs, China’s tech giants are aggressively procuring Nvidia chips vital for AI, committing $5bn in orders.
American vehicles are grappling with a weight issue, having put on over half a ton since the 1980s.
82% of U.S. voters are sceptical about tech leaders’ capability to regulate AI. Furthermore, 72% advocate for a more cautious approach to AI progress.
Is ChatGPT experiencing a decline? For the second consecutive month, the platform’s monthly website visits have decreased, sliding from 1.9 billion in May to 1.5 billion in July.
Short morsels to appear smart at dinner parties
🕳️ Co-working, self-storage on demand and micromobility scooters: the plight of “winner takes none” businesses.
🛍️ The era of ultracheap goods might be coming to an end.
🪱 A new study shows that almost 60% of all species on Earth live in soil, making the ground our planet’s most biodiverse habitat.
🌪️ Global natural disasters cost insurers $50 bn in H1 2023, with most losses from severe convective storms.
🥼 There’s evidence to suggest quantum computers could perform hyper-fast DNA sequencing.
🧬 We don’t know what ⅕ of our genes actually do.
End note
LK-99 has been debunked. Despite this, the experience proved overwhelmingly positive, showcasing the power and potential of collaborative research. We observed the emergence of a new form of open science, driven by independent researchers and well-heeled labs worldwide. As we highlighted in last week’s newsletter:
The speed of experimentation around the world and the engagement with the story feels like a new era of science: we need scientific institutions and formal peer review, but DIY science and debate may become a much bigger part of how science gets done.
Azeem and EV team
🔥🔥 I’ll be speaking about the Exponential Age at the FT Weekend Festival on Saturday, September 2nd. I would love to see you there.
For the EV readership, there’s a 25% ticket discount using the code ‘EV23’.
Cheers,
A
What you’re up to — community updates
Carissa Véliz penned an insightful piece titled “What Socrates Can Teach Us About AI” for Time.
Mike Wooldridge has been appointed the Specialist Advisor to the forthcoming House of Lords inquiry into Generative AI and Large Language Models.
Matt Clifford is to lead the global AI safety summit taking place in the UK later this year.
Alberto Mucci, lead of the BoortmaltX accelerator program, is looking for pitches from early-stage food-tech startups. Reach out if you think you could be a match.
Abhishek Gupta writes about moving the needle on the voluntary AI commitments made to the White House.
Edward Saperia’s team announces the Civic AI Observatory, an initiative helping civic organisations plan for and adapt to the rapidly evolving field of Generative AI.
Share your updates with EV readers by telling us what you’re up to here.
Lots of caveats here, especially given the limitations of the Political Compass test itself. I’d add that for non-Americans, the social axis is probably best read as “Conservative” vs “Liberal” (rather than libertarian).
BERT is a family of language models introduced in 2018 by researchers at Google.