🔮 Exponential renewables; graceful interfaces; DeepMind’s star sign; do cephalopods dream of lab-grown meat?++ #429
Bye coal.
Hi, I’m Azeem Azhar. As a global expert on exponential technologies, I advise governments, some of the world’s largest firms, and investors on how to make sense of our exponential future. Every Sunday, I share my view on developments that I think you should know about in this newsletter.
In today’s edition:
Hard coal facts;
AI’s changing expectations;
Debating AI risks.
💯 Thanks to our sponsor Fin, a breakthrough AI bot by Intercom, ready to join your support team.
Sunday chart: Dark past, bright prospects
Global energy consumption increased by 1.1% in 2022, only slightly below the 1.2% average of the past 10 years. Fossil fuels still constitute 82% of supply, and coal consumption has reached its highest since 2014. In the comprehensive Statistical Review of World Energy 2023, these stats and others ooze gloom. But should they?
While coal consumption may not have decreased, it is no longer climbing so fast. It is close to reaching (if not already at) its peak.
The annual growth rate of coal consumption (smoothed over a decade) rose from 0.7% to 4.2% between 2000 and 2010. But as richer countries began to phase out coal in favour of renewables, growth rates fell over the past decade or so, plateauing in the past year as Chinese growth flattened. This shift has even caught out Australia, one of the largest exporters of coal, which has downgraded its coal export forecasts for three consecutive years due to falling demand.
Renewable energy is poised to be the engine of growth. It is on a 60-year exponential growth curve. Solar energy, for instance, is growing at the fastest rate ever seen for any energy technology. China’s progress exemplifies this: it is on track to double its wind and solar capacity by 2025, five years ahead of its 2030 target. Renewables are ready for lift-off, and we love that here at Exponential View.
Key reads
From syntax slips to AI quips. Wunderkind mathematician and Fields Medal winner Terence Tao has written an excellent blog post on how AI will change our expectations of information technology. Traditionally, IT systems demanded rigidly precise instructions; any deviation from a specific syntax would yield an error code. The emergence of large language models (LLMs), however, signals a transformative change. These AI systems can gracefully handle vague or imprecise inputs, navigating fuzziness to produce meaningful results - they won’t just flag your syntax error, they’ll fix it for you. As Terence says:
When integrated with tools such as formal proof verifiers, internet search, and symbolic math packages, I expect, say, 2026-level AI, when used properly, will be a trustworthy co-author in mathematical research, and in many other fields as well.
I think we’ll need to develop new critical protocols to work with these tools. As with wonky autonomous cars or GPS dependence, we must remind ourselves that these systems are not infallible. And today, at least, we prefer fallibility (and the liability and culpability that come with it) to reside with humans.
Is ChatGPT compatible with a Gemini? ChatGPT will likely get a new rival in Project Gemini, spearheaded by Google DeepMind. This project, disclosed by DeepMind’s boss Demis Hassabis, plans to marry the expressive, language-focused capabilities of models like GPT-4 with the strategic and problem-solving proficiencies of models such as its human-trouncing AlphaGo. AlphaGo is renowned for its ability to plan strategies several moves ahead by using a combination of a value network (evaluating board positions) and a policy network (suggesting next moves). A similar approach in LLMs could mean more coherent and long-term context-aware responses. The model could “plan” its responses in a way that maintains context and continuity throughout a longer conversation.
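To make the planning idea concrete, here is a toy sketch (emphatically not DeepMind’s method) of how a policy network and a value network can combine for lookahead. Everything here is invented for illustration: a trivial counting game stands in for Go, `policy` proposes candidate moves, `value` scores positions, and `plan` picks the move whose short lookahead reaches the best-valued position.

```python
TARGET = 20  # toy game: reach 20 by adding 1, 2 or 3 each turn

def moves(state):
    """Legal moves from a position."""
    return [m for m in (1, 2, 3) if state + m <= TARGET]

def policy(state):
    """Stand-in for a policy network: a prior over candidate moves."""
    ms = moves(state)
    return {m: 1 / len(ms) for m in ms} if ms else {}

def value(state):
    """Stand-in for a value network: how promising is this position?"""
    return state / TARGET

def plan(state, depth=3):
    """Pick the move that maximises the value after a short lookahead."""
    def lookahead(s, d):
        if d == 0 or not moves(s):
            return value(s)  # leaf: fall back on the value estimate
        # Expand only the moves the policy considers plausible.
        return max(lookahead(s + m, d - 1) for m in policy(s))
    return max(moves(state), key=lambda m: lookahead(state + m, depth - 1))

print(plan(10))  # the lookahead favours the largest step toward TARGET: 3
```

In AlphaGo the same division of labour appears inside Monte Carlo tree search: the policy network narrows the branching factor, and the value network cuts the lookahead short. Transplanted to an LLM, the analogue would be scoring candidate continuations before committing to one.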
This news comes as rumours spread that GPT-4’s trick has been revealed. Rather than being some trillion parameter behemoth, it is speculated to be a combination of smaller specialised models, an approach known as ‘mixture of experts’. This method involves training various ‘specialists’, each equipped to handle certain types of data. The model then dynamically selects the most suited ‘expert’ for any given data. This strategy resembles collective intelligence, where a group’s combined intellect and skills can surpass the abilities of individual members, leading to better decision-making and problem-solving.
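As a rough illustration of the routing idea (not GPT-4’s actual architecture, which remains unconfirmed), here is a minimal mixture-of-experts forward pass. All dimensions, weights, and the `moe_forward` function are invented for the sketch: a gating network scores each expert, only the top-k experts run, and their outputs are mixed by the gate’s weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
d_model, n_experts, d_ff = 8, 4, 16

# Gating network: scores each expert for a given input.
W_gate = rng.normal(size=(d_model, n_experts))

# Each "expert" is a small feed-forward network with its own weights.
experts = [
    (rng.normal(size=(d_model, d_ff)), rng.normal(size=(d_ff, d_model)))
    for _ in range(n_experts)
]

def moe_forward(x, k=2):
    """Route input x to the top-k experts and mix their outputs."""
    logits = x @ W_gate
    weights = np.exp(logits - logits.max())  # softmax over expert scores
    weights /= weights.sum()
    top = np.argsort(weights)[-k:]           # keep only the top-k experts
    mix = weights[top] / weights[top].sum()  # renormalise their weights
    out = np.zeros_like(x)
    for w, i in zip(mix, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0) @ w2)  # ReLU feed-forward expert
    return out

y = moe_forward(rng.normal(size=d_model))
print(y.shape)  # (8,)
```

The payoff is that only k of the n experts run per input, so total parameter count can grow far faster than per-token compute.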
See also:
The idea of mixing models has a long history and could be used to give LLMs access to knowledge graphs. As EV member Gianni Giacomelli says, the benefits are that “knowledge graphs can add a layer of data about relationships that complements the predictions from vast but shallow neural network structures.”
De-munk-ing AI’s existential risk. In a captivating debate, AI luminaries Yann LeCun and Melanie Mitchell squared off against Yoshua Bengio and Max Tegmark on the question of artificial intelligence posing an existential threat. At the start of the discussion, two-thirds of the audience were already convinced of AI’s potential menace. By the end of it, LeCun and Mitchell had managed to sway 4% of voters that AI does not pose an existential threat - a modest but notable gain.
While the margin of persuasion may seem small compared to other debates, it reflects the entrenched viewpoints in the AI debate. There seems to be a prevailing consensus, fueled by widespread media coverage, that AI is indeed a looming threat. This predisposition can make it hard to shift public opinion by much.
The best way to alter this entrenched perception is exposure through well-reasoned arguments. Whichever side of the fence you currently sit on - or if you’re perched right on it - this debate will give you food for thought and is certainly worth watching.
🚀 Today’s edition is supported by our partner, Fin.
Meet Fin. Intercom’s breakthrough AI bot for your support team. Fin can resolve half of your customer support tickets instantly, solving complex problems and providing safe, accurate answers. Simply pair Fin with your help centre to learn your written history and hold natural conversations with your customers.
Weekly commentary
To understand how we could think about copyright in the age of AI and ChatGPT, I’ve asked Jeff Jarvis, the Leonard Tow Professor of Journalism Innovation at the Craig Newmark Graduate School, whom I have known for more than 20 years, to share his thoughts on the topic. I love his proposal of creditright and believe it is something from which we can build.
🤔 Gutenberg’s lessons in the era of AI
Hi, it’s Azeem. Today, Wiener Zeitung, the oldest continuously published newspaper in the world, was printed for the last time. It signed off the Gutenberg era with this epic front page. The Gutenberg press was a disruptive technology that expanded the creation and application of information in human society. The world of information in which we now live, and the laws that govern it, was built on the foundations of the printing press—and three hundred years of commercial, social and economic power politics.
Market data
The IMF found that corporate profits account for 45% of Europe’s inflation since the start of 2022.
The global entertainment and media industry grew by only 5.4% in 2022, to $2.3tn, much less than its 10.6% growth rate in 2021.
DoorDash has introduced hourly wages for its drivers.
In 2022, Microsoft’s cloud server business amassed a modest $34 billion, which was less than half of the earnings generated by Amazon Web Services.
Short morsels to appear smart at dinner parties
📚 “Ma La Er snorted scornfully” is just one of many AI-generated nonsense books appearing on Amazon.
💊 Human trials have begun for an AI-designed drug.
🧫 Lab-grown meat has received US regulatory approval, but commercialisation won’t be easy.
👽 Scientists have found a new carbon compound in space, reinforcing the possibility that life could emerge in other places than here on Earth.
🐙 There’s new evidence that octopuses may dream while they sleep.
🦬 The myth of men as hunters and women as gatherers is wrong.
🌱 The fascinating anthropology of entrepreneurship around the world.
🦶 Why body-based measurements like arms and feet persist across time.
End note
At some point on Saturday, Elon Musk introduced new rate limits to debilitate Twitter. The underinvestment in the site reliability teams and infrastructure is finally starting to show. Twitter may, in fact, be DDOSing itself.
Annoyingly, several items I wanted to refer to in this week’s wondermissive were in my Twitter bookmarks. Musk won’t allow me to read them.
It’s a bold strategy. To stop people from using your product to encourage more use. All very Bến Tre.
Perhaps he can turn the ship around. The new CEO hire, an ad- and trad-media bigwig, does not convince me. I’m sceptical when media people take the top job at Internet platforms. Perhaps that is because I remember Terry Semel’s awful shanking of Yahoo.
But platforms are resilient. Musk is too. So perhaps he can swing it around.
Cheers,
A
📢 We’re looking for great products and companies to share with our audience. If you’ve got a product or service you think our readers would love, complete this form and we’ll get in touch to discuss.
What you’re up to — community updates
Claudia Chwalisz has written an essay for the RSA about what a new paradigm of citizen-led democracy means for political leadership.
Raj Jena gets a shout-out from Rishi Sunak for his research on reducing the time cancer patients wait for radiotherapy using AI. (You can read about Raj’s breakthrough work in my book.)
Inflection AI, a company led by Mustafa Suleyman, has raised $1.3 billion in funding, with Microsoft and Nvidia leading the round. They intend to build a 22,000-node H100 cluster with the proceeds. (List price for that is around $900m. Thank you very much, says Jensen.)
Rebecca Choong Wilkins launches a new video show on the Chinese economy.
Kevin Delaney published a Q&A with Jared Spataro of Microsoft about how new generative AI features will change the nature of work.
Share your updates with EV readers by telling us what you’re up to here.
Great post as usual. Interesting posting from MSFT research on Orca and LFMs. See https://arxiv.org/pdf/2306.02707.pdf. Basically, Orca is a 13-billion-parameter model that learns to imitate the reasoning process of LFMs. Orca learns from rich signals from GPT-4, including explanation traces, step-by-step thought processes, and other complex instructions, guided by teacher assistance from ChatGPT.