🔮⚡ Mustafa Suleyman special; fair society, fair tech; inflated unicorns; rethinking commons and trust; manservants and misogyny ++ #137

We spend a lot of time at Exponential View talking about the societal impact of artificial intelligence. This week, I asked Mustafa Suleyman, the co-founder of DeepMind, to put together an Exponential View with the things he is thinking about. He’ll say more below.

Prior to co-founding DeepMind, the world’s foremost AI lab, Mustafa started a charity, worked for the Mayor of London, and helped set up Reos Partners, a conflict resolution firm where, amongst other things, he worked for the UN during the Copenhagen climate change negotiations. It’s highly unusual to meet a tech founder—let alone an AI founder—who cut their teeth in the world of charities and public policy, and it’s no coincidence that Mustafa is energetically trying to bring all those worlds together to create safe and ethical AI.

Enjoy his thoughts!

💌  Would your colleagues enjoy this issue? Forward it!

🌎  Please share on Twitter and LinkedIn.

🚀  Supported by Arlo Skye and WorkShape.


I’m Mustafa (@mustafasuleymn), and I’ve spent most of my life working on ways to tackle complex social problems at scale. While the world continues to improve in many respects, we remain overwhelmed by injustice and suffering on a level that can often feel impossible to address.

Even if the history of technology transforming society for the better is a familiar narrative, we can’t assume beneficial outcomes will emerge by default. In fact, as technology advances, ethical challenges will continue to grow. I do, however, believe that AI will play a crucial role in helping people unearth new strategies to solve complex social problems and make more efficient use of our natural resources.

This positive future won’t happen simply because AI systems magically discover how to do the right thing. It will happen because we, as a society, collectively figure out how to justly use, govern and control these systems. It also means finding ways to distribute their benefits widely and fairly. For me, an integral part of ethical AI development is provoking wider conversations about fairness, justice and the kind of world we want to live in, and feeding those insights back into these technologies.

This is going to be really hard. It means bridging gaps between technologists, companies, government, civil society and the wider public—gaps that can often seem painfully large—to make sure we shape these tools together and put social impact and ethics at the heart of everything we do.

I’m pleased to have the chance to edit an edition of Exponential View (thanks, Azeem!) and I wanted to use it to explore these themes a little further. If you have ideas about how to make this a reality, please get in touch!


As a society, we’re struggling with a daunting set of problems, from neglected infectious diseases affecting a billion people to raging inequality and frighteningly unsustainable consumption. Technology can help us understand and tackle these challenges, but there is less faith than ever in this once-feted industry.

This TechCrunch piece sums up the commentary about the tech industry over the last few months, arguing that Silicon Valley has gone from “hero to villain”.

#️⃣  The industry is finally on a hard but essential journey towards greater diversity and representation. As the flurry of #metoo turns into a blizzard, and anger rightly turns on #himtoo, it’s clear that this journey is far from over. From images of hard-drinking brogrammers calling the shots, to the exposure of the misogynist culture of some Valley VCs, to the underrepresentation of women and minorities (see their stories and the Kapor Center’s study on how it affects career choices), the evidence is stark. We must all consider what changes are necessary to create the safety and room required for diverse voices not just to be heard, but to play an equal role in shaping our societies at every level.

🦄  We also need to rethink the standard metrics our industry uses to measure progress: investment round valuations (which are severely inflated, according to Stanford researchers), “active users” (read Paul Lewis’s excellent summary of the attention/distraction debate) and revenues are the crudest of proxies for company success, and largely ignore externalities, longer-term consequences, diversity and wider social purpose, to name a few. This FT view proposed that businesses should “fix the failures of capitalism” by coming up with “concrete proposals on how to change executives’ incentives”; even the head of the CBI has blamed those failures on “a fixation on shareholder value at the expense of purpose”.


Tech isn’t value-neutral, and technologists need to factor in the downstream effects of their efforts, as this excellent AI Now report argues. Well-intended actions can still cause unintended harms: where might classroom technologies that measure students’ engagement levels by monitoring their brainwaves take us? And a narrow field of vision can lead the market to prioritise ‘first world’ problems (rentable ‘manservants’ and personalised soda drinks) over real-world problems.

💉  But the application of technology to real problems is rarely plain sailing. It’s a story of messy and uneven progress. Patients and clinicians could both benefit from better tech, but many doctors continue to use pagers, and most of us can’t change a hospital appointment without speaking to someone on the phone. Partly due to this lack of digital maturity, many health systems are under enormous strain.

AI is showing promising signs of helping clinicians identify disease at an earlier stage, but there is a real risk that perpetuating existing biases could widen health inequalities. For example, genomics start-ups (a field that has attracted $3B in VC investment this year) face a lack of diversity in genetic data: only 3.5% of genetic data comes from African or Hispanic populations, which can lead to ineffective or even dangerous treatments for some groups.

Despite the complexities, it’s exciting to see a growing set of examples of civil society and tech coming together to figure some of this out:

  • Joy Buolamwini and the Algorithmic Justice League are developing new ways to report bias at companies and in software, and offering services for testing technology with diverse users.
  • Researchers from Data & Society, the National Academy of Engineering, a wide set of universities and Microsoft have proposed ethical guidelines for organisations conducting research with “big data”.
  • ProPublica and Google News Lab have teamed up to document reported hate crimes across the U.S., using Google’s Natural Language API to visualize the information in interesting ways.

We’re also seeing radical ideas get a lot more air time, like George Monbiot’s proposal for a commons to safeguard societal interests:

A commons, unlike state spending, obliges people to work together, to sustain their resources and decide how the income should be used. It gives community life a clear focus. It depends on democracy in its truest form. It destroys inequality. It provides an incentive to protect the living world. It creates, in sum, a politics of belonging.

🚿  Finally, The Economist questions whether data should be treated as a utility, like water and electricity, and asks if users would be prepared to pay a fee for using the service instead of watching adverts.

There are no easy answers, and progress towards better, fairer standards for technology will always be messy and imperfect. But if we can break down silos and discuss these issues openly, that will be our best hope of ensuring these systems reflect our highest collective selves.


Reading Scott Rosenberg’s account of how ‘everybody fucking knew’ about Harvey Weinstein is sickening, and raises a larger question: when a culture (including its liberal elites) produces this much sexual harassment, it’s no longer an accident. So what needs to change?

How often do women take steps to avoid attention and how much work must they put toward feeling safe? Dr. Fiona Vera-Gray documents what this looks like, from developing a ‘death stare’ to crossing the road at night to avoid strangers.

🗞️  Has the new media elite created its own filter bubble? As local newspapers decline, can we be sure national or global coverage reflects truth on the ground? That’s one explanation for why Trump voters don’t trust journalists.

Columnist Bret Stephens argues we’ve lost the ability to have proper political debates because

the primary test of an argument isn’t the quality of the thinking but the cultural, racial, or sexual standing of the person making it.

Can we escape identity-driven politics to create room for thought?


I hope you enjoyed Mustafa’s guest issue of Exponential View. I think he does a wonderful job of outlining the complexities and nuances of some key debates, and providing us with sufficient context to reflect on them.

Please take a moment to thank Mustafa with this tweet!

On some practical notes, send us an email if your company wants to be our sponsor for November. 🚀


This week's issue is brought to you with support from our partners Arlo Skye and WorkShape.


The next generation of luxury luggage - TIME Magazine


Helping companies hire software engineers.

