🔮 AI boosts earnings; the crisis of reading; CO2 protein; Russia’s tangerine payment; brain map, friends & AI podcasts++ #494
An insider's guide to AI and exponential technologies
Hi, it’s Azeem. Welcome to the Sunday edition of Exponential View. Every week, I help you ask the right questions and think more clearly about technology and the near future. For more, see Speaking | Book | Top posts | Follow me on Notes
Ideas of the week
The foundations of future AI
This week I argued that AI’s future will likely feature an ecosystem of domain-specific foundation models…
The most interesting question to me is how these models will interact with each other in the future. When I used GPT-4 and Chai-1 to predict a protein structure, GPT-4 acted as an orchestrator: it gave me an example sequence with which to test Chai-1’s capabilities. There is nothing stopping more complex agent workflows from developing in this way over time.
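As a minimal sketch of that orchestrator pattern, here is roughly how a general-purpose model handing a test input to a domain-specific model could be wired together. All class and method names below are hypothetical placeholders, not real APIs:

```python
# Illustrative orchestrator pattern: a general-purpose model proposes a test input,
# then a domain-specific model does the specialist work.
# Every name here is a hypothetical stand-in, not a real client library.

class GeneralModel:
    """Placeholder for a general-purpose LLM client (a GPT-4-class model)."""
    def suggest_test_sequence(self) -> str:
        # In practice this would be a prompt such as:
        # "Give me a short example protein sequence (one-letter amino acid codes)."
        return "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"

class FoldingModel:
    """Placeholder for a domain model, e.g. a Chai-1-style structure predictor."""
    def predict_structure(self, sequence: str) -> str:
        return f"<predicted 3D structure for a {len(sequence)}-residue sequence>"

def orchestrate(general: GeneralModel, folding: FoldingModel) -> str:
    """The general model orchestrates; the domain model executes the specialist task."""
    sequence = general.suggest_test_sequence()
    return folding.predict_structure(sequence)

print(orchestrate(GeneralModel(), FoldingModel()))
```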
Technoalexia
Students are increasingly finding it difficult to concentrate on lengthy texts, Rose Horowitch highlighted in The Atlantic this week. We may jump to blame new technologies (notably smartphones, which have eroded our ability to concentrate), but these concerns are not new: compliance with course reading has hovered around 20-30% going all the way back to the 1970s. This is not a phenomenon of the digital era.

The problem may lie more with culture and education than with technology. Education has long been torn between two aims: training workers for jobs versus cultivating informed, thoughtful citizens. The first aim is winning, and less value is placed on reading for reading’s sake. Yet reading is not just the acquisition of knowledge; it is about thinking through ideas and forming new thoughts. Technology can be one part of the solution. I actively discuss ideas from the books I am reading with Claude, much like one would in a book club. This deepens my engagement with texts, but it won’t change the culture around reading. Without that, long books may not endure.
The spectrum of AI’s capabilities
Bain made a bold claim this week: early client work suggests that integrating genAI into business operations could boost earnings by 20% within 18 to 36 months. If that increase applied to every company globally, it could raise world GDP by about 2%¹. Taken at face value, this highlights two important points. First, the payback period for AI investment is much shorter than many expect: closer to 24 months than to five years. Second, the ‘ChatGPT moment’ happened only 23 months ago; we are still in the early stages of genAI’s impact. The technology’s capabilities sit on a “jagged frontier”, meaning AI aces some tasks but struggles with others. Current implementations focus on low-hanging fruit, but they will ramp up in complexity over time. I believe the next phase will be that of “agentic systems”: AI that can chain together multiple skills, adapt to changing conditions and work towards long-term goals, much like a human expert managing a complex project. For example, an AI could plan and perform all the tasks needed to organise a company’s annual holiday party: setting a budget, polling employees for preferences, booking a venue, arranging catering and coordinating entertainment. Kevin Weil, OpenAI’s chief product officer, believes these systems will “hit the mainstream” in 2025, which I take to mean they will be commonplace in everyday life and business.
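A quick back-of-the-envelope check of that 2% figure, using only the footnote’s assumption that corporate profits are roughly 11% of GDP and Bain’s 20% earnings uplift:

```python
# Rough check of the implied GDP uplift from a 20% earnings boost,
# assuming corporate profits are ~11% of GDP (the footnote's assumption).
profit_share_of_gdp = 0.11   # US corporate profits / GDP, extended to the world
earnings_boost = 0.20        # Bain's suggested genAI uplift to earnings

gdp_uplift = profit_share_of_gdp * earnings_boost
print(f"Implied GDP uplift: {gdp_uplift:.1%}")  # -> about 2.2%, i.e. "about 2%"
```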
See also:
AI can give experts “superpowers”. Professor Robert Ghrist, an accomplished mathematician at the University of Pennsylvania with an h-index of 41, recently used AI models to prove a new theorem. The collaboration produced a 25-page paper conceived and completed in just two months; the AI acted as a catalyst, helping him explore ideas more rapidly and efficiently than he could have on his own.
California’s Democratic governor vetoed an AI safety bill that critics considered vague and costly, focused on sci-fi harms, and aimed at an area better suited to federal legislation. Yet the veto deepens the divide between a Europe that regulates AI and a Silicon Valley steaming ahead at all costs.
Waiting for alt meat
Meat alternatives have been promised for over a decade, but they have been too expensive. Now, two new forms of low-carbon protein may be nearing commercial viability: lab-grown meat and protein derived from carbon dioxide. Scientists have developed a method for producing lab-grown meat that could, at scale, cost as little as $6.20 per pound, comparable to the price of organic chicken. Separately, LanzaTech plans to scale up production of a whey-like protein made by microbes that consume carbon gases. The company has produced 25 metric tons of protein for animal feed so far, wants to produce 280 metric tons annually by 2026, and is targeting over 30,000 metric tons per year in 2028. If successful, that level of production could support the protein needs of 1.5 million people annually.
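A rough sanity check of that last claim. The arithmetic below uses only the figures above; the ~50 g/day reference intake for an adult is my assumption, not a number from the piece:

```python
# Does 30,000 metric tons of protein a year plausibly cover 1.5 million people?
annual_output_kg = 30_000 * 1_000   # 30,000 metric tons expressed in kilograms
people = 1_500_000

grams_per_person_per_day = annual_output_kg * 1_000 / people / 365
print(f"{grams_per_person_per_day:.0f} g of protein per person per day")
# -> roughly 55 g/day, close to a typical adult's recommended protein intake
```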
See also:
GLP-1 drugs appear to have triggered peak obesity in America. See our take on GLP-1s here.
Jenny Chase, a solar analyst at BloombergNEF, has published her annual solar opinions thread. While I don’t agree with all her opinions, it is a great update on the current state and challenges of solar.
Data
Conservative Twitter users were 4.4 times more likely to be suspended following the 2020 election.
High-speed traders now dominate Wall Street, with Citadel Securities alone handling 25% of US stock trades.
Tesla Energy reports 750,000 Powerwalls (a massive home battery) have been installed globally.
The UK has the highest industrial power prices in the developed world.
Meta has been fined $101 million by Ireland’s Data Protection Commission for storing passwords in plain text.
China has 246 exaFLOPS (FP32) of computing power, around 100 times that of the world’s fastest supercomputer. And Chinese companies supplied over 80% of the world’s critical lithium-ion battery components in 2023.
OpenAI’s real-time speech AI costs roughly $12 an hour, about 10 times the cost of an Indian call centre worker. Also, OpenAI closed $6.6 billion in funding, the largest VC round of all time.
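To make that comparison explicit, here is the trivial arithmetic it implies. The call-centre figure below is only what the “about 10 times” claim implies, not a number stated in the source:

```python
# What hourly call-centre cost is implied by "roughly $12/hour, about 10x the cost"?
ai_cost_per_hour = 12.0
cost_ratio = 10                      # "about 10 times the cost"

implied_worker_cost = ai_cost_per_hour / cost_ratio
print(f"Implied call-centre cost: ${implied_worker_cost:.2f}/hour")  # -> about $1.20/hour
```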
Short morsels to appear smart at dinner parties
😆 An extra hour of daily video gaming significantly improved mental health and life satisfaction, with benefits peaking at about three hours of play per day. Don’t tell your kids…
🍊 Russia agrees to accept payment in Pakistani tangerines in exchange for lentils and chickpeas amid “payment difficulties”.
🪂 Vice-President Harris’s motorcade was halted by a buggy robotaxi.
🤭 NotebookLM produced a nine-minute podcast conversation by finding meaning in a document containing only the words “poop” and “fart”.
🧠 Scientists create the most complete map to date of a brain (albeit a fruit fly’s).
👯 Who you know can say a lot about what you’re susceptible to believing.
End note
We were saddened this week to receive the news that a dear friend of Exponential View, Abhishek Gupta, passed away. Abhishek was the founder of the Montreal AI Ethics Institute, Director for Responsible AI at the Boston Consulting Group and a pioneering voice in the field of AI ethics. Abhishek was an active member of our community, remembered by everyone as a brilliant mind and a kind person, always there to offer support and share his knowledge. We will miss him.
¹ Assuming the ratio of US corporate profits to GDP of 11% holds for the world.
Also, on the benefits of video gaming for Japanese kids: super interesting. I would want it to be replicated in other parts of the world; I am not positive that the effects of and on socialisation check out one to one elsewhere.
Great coverage. On the $12/hour versus Indian call-centre FTE cost: that’s not like for like. (1) The hourly cost of a call-centre operator should include supervision, facilities, hiring and so on, which is typically very significant. (2) The cost of training people and keeping them available for redeployment is also significant. (3) Accent neutralisation already commands a premium among human operators. (4) Time zones requiring night shifts attract a premium too. To be clear, I am not saying we should use AI instead of those operators. I am saying there is a bigger cost picture and, even more importantly, that processes will be redesigned so human operators do more human-in-the-loop work, which will lead to a profound transformation of both level-1 (simple, volume-based) and level-2 customer support.