🔮 The Sunday edition #509: the intention economy; why tech is turning right; sonic booms, life in space & declining trust ++
An insider’s guide to AI and exponential technologies
Hi, it’s Azeem.
Wow, what a week, with DeepSeek shaking the markets and politics. We’ll be sure to cover some of the open questions in more depth soon, but, in the meantime, do check out our two pieces on DeepSeek for a primer:
In today’s edition… from a landmark AI Safety report warning of risks from manipulation to malfunction (and spooky “black box” reasoning), to the rise of an “intention economy” that shapes what we want rather than merely competing for our attention. Meanwhile, the real fault line in tech politics may be acceleration vs deceleration, another optimistic jobs forecast could be blindsided by breakthrough AI models, and a global baby bust may pave the way for reactionary “natalist” measures.
And if you missed my discussion with Paul Krugman on AI, the economy and Trump, catch it here.
The first international AI Safety report
This week saw the publication of the first International AI Safety report, led by Yoshua Bengio. It sets out a scientific understanding of general-purpose AI, focusing on how to understand and manage its risks. The report identifies three main risk categories:
Malicious use (such as manipulation of public opinion)
Malfunctions (loss of control)
Systemic risk (labour market disruption)
While I think AI doomerism is overblown — my personal probability of doom is 0.000131% by 2050 — there are still plenty of tangible risks. And the pace of AI progress isn’t slowing down. These models can be used broadly, potentially soon autonomously, with capabilities advancing faster than they can be studied. All this while we still don’t really know why AI models work so well.
Nothing highlights these interpretability concerns more than DeepSeek R1-Zero (the non-supervised-fine-tuned version of R1)¹, which would spontaneously switch between English and Chinese while solving problems. The model was trained via a novel approach that rewarded correct answers regardless of how comprehensible its thinking process was. The worry is that this could lead AI systems to develop cryptic methods of reasoning – or their own non-human languages – if that proves more efficient.
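To make that training approach concrete, here is a minimal, hypothetical sketch of an outcome-only reward – purely an illustration of the idea, not DeepSeek’s actual code. The `<answer>` tag convention and the exact-match check are assumptions for the example; the key point is that the reward never inspects the reasoning itself, so nothing pushes the model to keep its thinking legible to humans.

```python
# Hypothetical sketch of an outcome-only reward (not DeepSeek's implementation).
# Only the final answer is scored; the chain of thought is never inspected,
# so reasoning in mixed languages or opaque shorthand is never penalised.
import re

def outcome_reward(completion: str, reference_answer: str) -> float:
    """Return 1.0 if the completion's final answer matches the reference, else 0.0."""
    # Assumed convention for this example: the model wraps its answer in <answer> tags.
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if match is None:
        return 0.0  # no parseable answer, no reward
    return 1.0 if match.group(1).strip() == reference_answer.strip() else 0.0

# A completion that reasons partly in Chinese still earns full reward,
# because only the text inside the answer tags is checked.
completion = "Let's see... 先算 12 × 12 = 144, then add 6. <answer>150</answer>"
print(outcome_reward(completion, "150"))  # -> 1.0
```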
We need to make sure we maintain the ability to somehow monitor these AI systems – and doing that will require more collaborative governance. The international nature of this report is a good start.
Beyond attention. The “intention economy”
Attention has become a commodified resource, manipulated by modern media and tech. As American commentator and author Chris Hayes points out in his book The Sirens’ Call, attention is now “the defining resource of our age,” extracted and traded much like labour was during the Industrial Revolution – leading to a sense of alienation even when we’re “free” to choose where our focus goes.
LLMs and predictive AI can go beyond this landscape of attention, to shape our intention – guiding what we want or plan to do, which some refer to as the “intention economy”. AI systems can infer and influence users’ motivations, glean signals of intent from seemingly benign interactions and personalise persuasive content at scale. Take, for example, Meta’s Cicero AI, trained to play the game Diplomacy: it reads each player’s chat and previous moves to guess what they plan to do next, then uses that guess to propose deals or alliances that steer people toward its own goals – showing how AI can use reasoning to deduce a user’s intention and subsequently shape their behaviour.
In other words, we may soon face an emerging market in which data about our future plans and goals is captured, bought and sold – redirecting or even rewriting what we want. This might explain some of the US furore over TikTok (and, more recently, DeepSeek). It raises tough questions about how much international, or even domestic, firms may be able to guide the intentionality of citizens.
Love to hear your thoughts about how to address this in the comments below.
It’s not left vs right – it’s accelerate vs decelerate
Tech commentator
points out that what many call the “tech right” isn’t purely a conservative movement. Instead, it’s driven by a pragmatic desire to clear away whatever hinders innovation. Drawing on a 2017 survey of more than 600 tech founders, she notes they tend to be more liberal than most Democrats on social issues and taxes, yet very conservative on regulation and labour – an outlook tracing back to pro-market values formed in adolescence.

Figures like Elon Musk and Marc Andreessen might back left-leaning policies if they speed up R&D or immigration – but just as easily align with right-wing efforts that cut red tape. In a recent New York Times interview, Andreessen argued that the Democrats’ pivot from the pro-tech enthusiasm of the Clinton-Gore years to tougher regulation and union support under Biden “broke” the old alliance, prompting many in Silicon Valley to see the party as an obstacle rather than a partner. Sun’s insight is that acceleration versus deceleration best explains Silicon Valley’s politics: it’s a broad “progress coalition” united by the drive to build more – and faster – regardless of who’s in power.
I’d add to that view. This isn’t just about building more, it is also about building differently. An entrepreneur’s mindset, particularly a product entrepreneur’s, is to look at the world and imagine it could be different. Otherwise you wouldn’t invent a new way to listen to music. Or manage supply chains better. Or build a self-driving car. So naturally, policies that make that already hard task harder are going to clash with the founder’s ambition.
WEF says 78 million net new jobs by 2030 – but is AI outrunning the data?
The World Economic Forum’s 2025 Future of Jobs Report projects 78 million net new jobs by 2030, with low substitution risk for many human skills. Is the report underestimating AI’s potential? It analysed over 2,800 skills from Indeed’s taxonomy: nearly 70% were judged to have “very low” or “low” capacity for replacement by current generative AI (GPT‑4o), and zero skills were rated “very high” for automation. However, one line caught my eye:
Skills requiring nuanced understanding, complex problem-solving, or sensory processing show limited risk of replacement by genAI, affirming that human oversight remains crucial even in areas where genAI can provide assistance.
I’m not sure for how long this will be true.
The report’s focus was on GPT‑4o, not on the new wave of advanced reasoning models. If an AI system had “PhD-level” reasoning – capable of deep conceptual understanding and creative problem-solving – then more sophisticated knowledge tasks could become automatable, challenging the relatively upbeat outlook.
See also:
A new study provides evidence of cases where AI is already substituting for human work, and where it is increasing demand for complementary skills.
Economists remain pessimistic about AI’s labour market impacts. In 2023, many anticipated a negative net effect of AI on labour.
The Financial Times recently reported on 20 legal AI tools that are already accelerating tasks such as contract reviews and legal research, cutting hours of manual effort but also raising questions about back-office roles.
The demographic puzzle
Birth rates keep declining worldwide and no amount of money seems to fix it. Two-thirds of the global population now live in countries where fertility rates are below the replacement level. Economists warn that a shrinking workforce erodes tax revenues just as pension and healthcare costs skyrocket. Multiple policy solutions have been tried – immigration, extended retirement, subsidised childcare, direct cash incentives – but success has been limited. As researcher
highlights, digital connectivity and exposure to egalitarian ideas are widening the gulf between women’s expectations and patriarchal family structures. In settings where men still insist on “the first slice of cake” (to borrow Evans’s phrase), more women opt out of marriage or motherhood altogether.

If policies to boost fertility fail or productivity gains don’t materialise, a push toward more assertive “natalist” measures could come. Reactionary movements might rise if this isn’t handled with care. The cultural milieu seems primed for it. For instance, a fifth of young men in the UK regard the misogynist influencer Andrew Tate favourably – a sign of rising anti-feminist sentiment that could open the door to regressive solutions, such as banning abortion or restricting contraception, in the name of increasing birth rates.
In other updates…
Europe’s first Gen Z revolution is happening in Serbia, where student protests are challenging a long-standing autocracy. As our own
explains in this slide deck, if they succeed, Serbia could become the first country where Gen Z alone gets to decide on the shape of a provisional government.

Edelman Trust report. Despite 13 countries having elections or government changes from 2024 to 2025, only two (Argentina and South Africa) showed a meaningful lift in their overall Trust Index.
Which programming languages consume the most energy? Compiled and semi-compiled languages (such as C++ and Java) fare best, while interpreted languages (Python, MATLAB, R) can use up to 54× more energy. But there’s an important trade-off: less energy-hungry languages are often more complex to write, whereas interpreted ones can be easier to develop in – potentially speeding up innovation (a rough illustration of the gap follows at the end of this section). Remember, energy is literally defined as the capacity to do work, so using it isn’t inherently bad. If it’s sourced from carbon-heavy grids, the environmental cost is higher; if it’s from cleaner sources, the impact is mitigated.
Asteroid Bennu: Building blocks for life? NASA’s OSIRIS-REx returned pristine Bennu samples containing all five nucleobases that form DNA and RNA on Earth. But “left-handed” and “right-handed” amino acids are nearly balanced in Bennu’s samples, whereas life on Earth predominantly uses the left-handed versions. This challenges the idea that asteroids such as Bennu seeded life on our planet. Life’s origins continue to be mysterious.
Elon’s big bet on Tesla’s “unsupervised self-driving” vehicles: Musk says it’s coming around Q2 2025, showcasing a 1.2-mile trip on private roads from the factory floor to the loading docks. Skepticism is fair; he’s promised this since 2018. But I think we are close, as I argued last year when I shared why I changed my mind on self-driving cars.
Boom breaks the sound barrier: A US startup aptly named Boom has flown the first independently developed jet (with no government backing) to break the sound barrier. They aim to offer supersonic passenger flights by 2029 – an audacious timeline, but potentially another leap forward in the aviation industry.
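On the programming-language energy point above: here is a rough, back-of-the-envelope illustration of why interpreted code tends to burn more cycles (and hence more energy) than compiled code – my own toy example, not the methodology of the study. The same arithmetic runs as a pure-Python loop and then via NumPy’s compiled kernels; the exact timings depend entirely on your machine.

```python
# Toy comparison (my own example, not the study's benchmark): the same sum of
# squares computed in an interpreted Python loop versus NumPy's compiled C kernels.
# More CPU time broadly translates into more energy consumed.
import time
import numpy as np

N = 10_000_000

start = time.perf_counter()
total = 0.0
for i in range(N):          # every iteration pays interpreter overhead
    total += i * i
print(f"pure Python loop:       {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
arr = np.arange(N, dtype=np.float64)
total = float(np.dot(arr, arr))   # the loop runs inside compiled code
print(f"NumPy compiled kernels: {time.perf_counter() - start:.2f}s")
```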
¹ Supervised fine-tuning is a technique used to adapt a pre-trained LLM to a specific downstream task using labeled data.
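For readers who want to see what that looks like in practice, here is a minimal sketch of supervised fine-tuning using the Hugging Face transformers Trainer – an illustration of the general recipe, not any lab’s actual setup. The base model ("gpt2"), the toy prompt/response pairs and the hyperparameters are all placeholders.

```python
# Minimal sketch of supervised fine-tuning (SFT): adapting a pre-trained LLM to a
# downstream task with labeled prompt/response pairs. Model name, data and
# hyperparameters are placeholders chosen for illustration.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Labeled examples: each prompt is paired with the response we want the model to imitate.
examples = [
    {"text": "Q: Summarise: 'The cat sat on the mat.'\nA: A cat sat on a mat."},
    {"text": "Q: Translate to French: 'Good morning.'\nA: Bonjour."},
]
dataset = Dataset.from_list(examples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128)
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-demo", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # standard next-token cross-entropy on the labeled pairs
```

The contrast with R1-Zero above is that R1-Zero skipped this supervised stage and was shaped directly by outcome rewards instead.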
On another note, about Tesla’s FSD: I am skeptical that they can make it truly safe when it comes to the edge cases – which you’d need to do, unless you accept and admit that accidents are allowed to happen, in which case, would you risk your life and use it? My skepticism comes from the shift to using only cameras. Waymo uses LIDAR as well, right?
WSJ made a great video about autopilot crashes recently: https://youtu.be/mPUGh0qAqWA?si=XGqmls9cRnv56_5M
The main point there is that if the car encounters something on the road that it has not been trained on, it has no way of telling whether that thing is solid or not, and it will collide without braking. A LIDAR would immediately tell it that there’s something there – it would be the computer’s way of “touching” and using that data to interpret what it is seeing.
If I remember right, Elon’s argument for a vision-only system was that people can navigate the world with only visual information just fine, so it should be possible. However, he’s ignoring the fact that ever since we were babies we have also relied on other senses, namely touch, to identify what has substance and what doesn’t – and that has “trained” our minds to assess whether a thing we see has substance. A purely camera-based system has no way of telling, unless someone has programmed that into it or it has been part of the training data and classified as such.
I have an M3 Highland and its camera-based system already gets confused by e.g. strong lights, reflections and shadows in our parking garage. It regularly thinks there’s a solid object in front of it where none exists. When you’re the one driving, these drawbacks don’t matter, but there is no room for error in fully autonomous driving, when you’re trusting your life to it.
The issues of governance/control and capabilities are inextricably linked. The more nuanced and capable these machines are, the more humans will trust them with complex reasoning, including governance reasoning. The reality is that our current human-based governance infrastructure is obsolete, designed before information technology became what it is now. I continue to believe that the future lies in building governance and processes where the native interplay of N humans and N machines is intentional – where humans, individually but even more importantly as networks, focus on the why and the what, while machines provide most of the how. These roles are not mutually exclusive: we badly need machines to help us with the why and the what.