Postmodern biology; meowing AI; robotic reliance; insurance inflation ++ #445
An insider perspective on AI and exponential technologies
Hi, I'm Azeem Azhar. As a global expert on exponential technologies, I advise governments, some of the world's largest firms, and investors on how to make sense of our exponential future. Every Sunday, I share my view on developments that I think you should know about in this newsletter.
The latest from Exponential View
AI and epistemic integrity – a new public good for the Exponential Age
AI in Practice – online event for members on Oct 26. Join us!
A golden age of democracy – on the power of citizens, with Professor Hélène Landemore
Thanks to our sponsor, Masterworks, an investing platform that enables everyday people to invest in multimillion-dollar paintings by artists like Banksy and Basquiat.
Sunday chart: Same data, different perspectives

A study looking at reproducibility in biology found that 246 biologists analysing the same dataset got widely divergent results (h/t EV member Rafael Kaufmann). These variations in outcomes arose from diverse analytical decisions, influenced by participants' choice of sample size and their unique methodological backgrounds.
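For a feel of how this happens, here is a minimal Python sketch. It is entirely synthetic, not the study's data or pipeline: one shared dataset is analysed under a handful of defensible-but-different choices (adjust for a confounder or not, trim outliers, subsample), and the estimated effect shifts each time. The dataset, the "analysts" and their choices are all invented for illustration.

```python
# Toy illustration of "many analysts, one dataset": the same data, analysed
# under several reasonable choices, yields a spread of effect estimates.
import numpy as np

rng = np.random.default_rng(0)

# One shared synthetic dataset: outcome y depends on exposure x and a confounder z
n = 500
z = rng.normal(size=n)
x = 0.5 * z + rng.normal(size=n)
y = 0.3 * x + 0.8 * z + rng.normal(size=n)

def ols_slope(y_vec, x_mat):
    """Least-squares coefficient on the first predictor column (with an intercept)."""
    design = np.column_stack([np.ones(len(y_vec)), x_mat])
    beta, *_ = np.linalg.lstsq(design, y_vec, rcond=None)
    return beta[1]

# Four hypothetical "analysts", each making a defensible but different choice
keep = np.abs(y) < 2  # outlier-trimming rule used by one analyst
estimates = {
    "raw regression of y on x": ols_slope(y, x[:, None]),
    "adjusting for confounder z": ols_slope(y, np.column_stack([x, z])),
    "trimming |y| >= 2 first": ols_slope(y[keep], x[keep][:, None]),
    "using only the first 200 rows": ols_slope(y[:200], x[:200, None]),
}

for choice, est in estimates.items():
    print(f"{choice:>30}: estimated effect of x = {est:.2f}")
```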
The results of the reproducibility study underscore the inherent inconsistencies of human analysis and the weight of the choices embedded in the scientific method. Much has recently been made of the inconsistencies and biases of AI and LLMs; as we see, humans exhibit such variability too. The central challenge is whether human intuition can be married with AI precision to foster greater consensus, or, at the very least, to ensure data can be explored through methods and narratives beyond those chosen by a paper's author. Computational biologist Michael Eisen envisions a future where LLMs facilitate publishing findings in an interactive, "paper on demand" format:
I think it's only a matter of time before we stop using single narratives as the interface between people and the results of scientific studies.
By doing so, readers could try out and critique different methodologies, and hopefully arrive at a broader consensus. Even if this vision seems too futuristic, current tools offer partial solutions. For instance, Elicit acts as an AI research assistant that can help you quickly find and summarise research papers, reducing the time needed for meta-studies or similar surveys.
And this isn't just about reaching consensus in science. A significant issue arises in medical diagnostics, where diagnostic agreement between physicians leaves room for improvement. We're already witnessing improvements in AI's diagnostic accuracy; by augmenting doctors with AI, we might achieve higher diagnostic agreement. For instance, a study from the Mayo Clinic highlighted AI's potential in virtual primary care, revealing that providers opted for one of the five AI-recommended diagnoses in 84.2% of cases. As both research and clinical settings grapple with human variability, AI's potential to bring consistency, accuracy, and speed is evident.
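As a rough illustration of how a figure like that 84.2% can be computed, here is a small Python sketch. It is not the Mayo Clinic's methodology; it simply measures the share of cases in which a clinician's final diagnosis appears among an AI system's top five suggestions, and the example records are invented.

```python
# Hypothetical top-k concordance between clinician diagnoses and AI suggestions.
from typing import List

def top_k_concordance(clinician_dx: List[str], ai_suggestions: List[List[str]], k: int = 5) -> float:
    """Share of cases where the clinician's diagnosis appears in the AI's top-k list."""
    assert len(clinician_dx) == len(ai_suggestions)
    hits = sum(dx in suggestions[:k] for dx, suggestions in zip(clinician_dx, ai_suggestions))
    return hits / len(clinician_dx)

# Invented example records: three visits, each with the clinician's final
# diagnosis and a hypothetical AI system's five ranked suggestions.
clinician_dx = ["sinusitis", "migraine", "GERD"]
ai_suggestions = [
    ["sinusitis", "rhinitis", "upper respiratory infection", "allergy", "influenza"],
    ["tension headache", "cluster headache", "sinusitis", "migraine", "TMJ disorder"],
    ["peptic ulcer", "gastritis", "IBS", "cholecystitis", "pancreatitis"],  # a miss
]

print(f"Top-5 concordance: {top_k_concordance(clinician_dx, ai_suggestions):.1%}")  # 66.7%
```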
Today's edition is supported by Masterworks
Historic growth of 36% with lower risk? It's possible in this market
It's a market most people have never considered investing in, but its average prices have risen at an annualized rate of 36% over the last 21 years. Additionally, this market's Sharpe ratio, which measures return adjusted for volatility, came in at 1.5, beating the S&P's 0.49 over the same period.
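For readers unfamiliar with the metric, here is an illustrative back-of-the-envelope Sharpe ratio calculation in Python. The return series and risk-free rate are invented and have nothing to do with Masterworks' actual figures or methodology; the point is simply that the ratio divides average excess return by the volatility of returns.

```python
# Illustrative Sharpe ratio: average excess return divided by return volatility.
import numpy as np

# Invented yearly returns for some asset and an assumed risk-free rate.
annual_returns = np.array([0.42, 0.18, 0.55, 0.30, 0.25])
risk_free_rate = 0.02

excess_returns = annual_returns - risk_free_rate
sharpe = excess_returns.mean() / excess_returns.std(ddof=1)  # sample volatility

print(f"Mean excess return: {excess_returns.mean():.1%}")
print(f"Volatility:         {excess_returns.std(ddof=1):.1%}")
print(f"Sharpe ratio:       {sharpe:.2f}")
```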
So what is it? Paintings by the world-renowned artist Yoshitomo Nara – a market now accessible to anyone, thanks to Masterworks. This unique investment platform enables everyday people to invest in blue-chip art for a fraction of the cost.
Exponential View readers can skip the waitlist with this exclusive link.
See a disclosure1
Key reads
Can cats go rogue? Yann LeCun, Meta's chief AI scientist, recently noted in the FT that it is premature to be overly concerned about AI's existential risks. He warns against the pitfalls of hasty regulation, suggesting that it might inadvertently entrench the dominance of established tech giants, stifling innovation and sidelining newcomers. LeCun, and Meta, are staunch advocates of open-source AI development. Drawing parallels from the past, LeCun points to the transformative effect of open-source platforms like Linux and Apache on the internet ecosystem. Undoubtedly, there are emerging risks; for instance, a recent report by the RAND Corporation suggests that current LLMs could aid in the planning and execution of a biological attack. However, claims of existential risk might be somewhat exaggerated for now. As LeCun aptly puts it, current AI is still "dumber than cats."
See also:
Let's talk about extinction
Watch now (9 mins) | We need to talk about AI risk, specifically the existential risk posed by the development of AI, the risk that it wipes out all humans. You can't escape the discussion. Earlier this week...
Do we slack when machines have our backs? Recent research into human-machine collaboration reveals a curious trend: our attention may diminish when we work alongside robots. When participants were tasked with identifying defects on circuit boards, they detected fewer flaws with robot assistance, spotting an average of 3.3 defects, compared with 4.2 when working solo. This suggests a drop in mental engagement when humans and machines collaborate. The observation isn't new; just a few weeks ago, we discussed a study on AI's impact on BCG consultants' performance. While AI generally improved their efficiency, there was a noticeable decline (by as much as 19 percentage points) in tasks where AI wasn't up to par. It serves as a subtle reminder: while machines offer many advantages, they might inadvertently prompt us to let down our guard.
Underwriting uncertainty. The U.S. insurance industry faces mounting pressure as natural-disaster costs have soared from $4.6 billion in 2000 to $100 billion today, a surge largely attributed to climate change making extreme weather events worse. Consequently, global reinsurance rates are climbing, pushing insurance prices higher. In particularly vulnerable regions like California and Florida, insurers are pulling back, leaving many without coverage. Since 2015, insurance premiums have jumped by 21%, excluding many from vital protection and shrinking the risk-sharing pool. The grim outlook is underscored by a forecast from the UK's Institute and Faculty of Actuaries, which predicts a staggering 50% GDP loss between 2070 and 2090 on current warming trends. One hopes such dire predictions will only hasten the pace of positive change. For a more hopeful perspective, EV member Sam Butler-Sloss recently co-authored an article for RMI highlighting common missteps in energy-transition analyses; notably, many energy commentators tend to err on the side of pessimism.
Weekly commentary
In my commentary this week, I'll be tackling artificial general intelligence and what we can say about the question "when will AGI arrive?" It will be sent to paying members by Tuesday. I've been thinking hard about this for several months, and I'll be sharing my perspective. You will enjoy it.
Market data
Climate tech investments have fallen by more than 40% in the last 12 months.
In 2022, 58% of U.S. households owned stock, the highest percentage on record. This surpassed the previous high of 53% set during the dot-com boom.
China's auto transformation. In roughly two years, China went from a $30–40 billion finished-vehicle deficit to a $50 billion surplus.
A working paper covering U.S. data from 1983 to 2019 finds that inflation raised the real income of the middle wealth quintile by two-thirds, while reducing the real income of the bottom two wealth quintiles by nearly 50%.
Verified users pushed 74% of the most viral misinformation regarding the Israel-Hamas war on X.
Short morsels to appear smart at dinner parties
How the first company to use Google ads built its business.
Project Silica – storing data on glass plates for 10,000 years.
The earliest evidence of wood being used for structural purposes dates back at least 476,000 years.
A minimally invasive brain implant that patients could operate at home?
Meat vs. meat substitutes: which is more expensive?
A recursive loop: what happens when you get GPT-4V to describe an image and then ask DALL-E 3 to generate an image from that description?
Will AI photos mess with your memory of the past?
What a Serbian cave tells you about the weather 2,500 years ago.
What are the least popular pages of Wikipedia?
End note
I was in Dubai this week doing some work on questions of AI governance as well as the AI/jobs nexus. One question that kept cropping up was what China's perspective on all this would be. At the Belt and Road Forum, Beijing unveiled its AI Global Governance Initiative. The good news is that there is a reasonable amount of concordance with many of the ways the US, UK and EU are thinking about these issues. This suggests that a multilateral approach of some sort might not be impossible.
There is at least one critical distinction: a call to "oppose drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI". Any nation that signs up to this compact is explicitly challenging the US's export controls.
This is, though, at least a starting point, and the timing, just ahead of the UK-hosted AI Safety Summit, is helpful.
And on that event, one piece of good news was the inclusion of Yann LeCun amongst the attendees. Yann had been sceptical of the existential-risk dimension, as we note higher up in the newsletter (and see this witty tweet). I had become concerned that too much of the agenda was being shaped by adherents of, or those aligned with, the effective altruism belief system. LeCun has the credibility to be a counterweight.
Unlike climate change, where there is deep scientific consensus, AI risk commands far, far less consensus amongst scientists. So a diversity of input is vital.
Cheers,
A
P.S. I've put quite a fun poll up about AI on LinkedIn. Go and vote and share it!
What you're up to – community updates
Mike Zelkind opens a new 200,000 sq ft vertical farm in Kentucky.
Claudia Chwalisz wrote a piece with Zia Khan from the Rockefeller Foundation in Fast Company: "To redesign democracy, the U.S. should borrow an idea from Dublin."
Kevin Delaney's company, Charter, has released a playbook on how to use AI in ways that enhance worker dignity and inclusion.
John Lazar, along with co-founders Mike Mompi and David Cohen, launched the founder partner program for Enza Capital, their venture capital firm focused on African companies.
Ekaterina Prytkova and Simone Vannuccini published "Artificial Intelligence's new clothes? A system technology perspective" in the Journal of Information Technology.
Share your updates with EV readers by telling us what youāre up to here.
Past performance is not indicative of future returns; investing involves risk. See disclosures at masterworks.com/cd