Mustafa Suleyman special; fair society, fair tech; inflated unicorns; rethinking commons and trust; manservants and misogyny ++ #137
We spend a lot of time at Exponential View talking about the societal impact of artificial intelligence. This week, I asked Mustafa Suleyman, the co-founder of DeepMind, to put together an Exponential View with the things he is thinking about. He'll say more below.
Prior to co-founding DeepMind, the world's foremost AI lab, Mustafa started a charity, worked for the Mayor of London, and helped set up Reos Partners, a conflict resolution firm where, amongst other things, he worked for the UN during the Copenhagen climate change negotiations. It's highly unusual to meet a tech founder, let alone an AI founder, who cut their teeth in the world of charities and public policy, and it's no coincidence that Mustafa is energetically trying to bring all those worlds together to create safe and ethical AI.
Enjoy his thoughts!
Azeem
Would your colleagues enjoy this issue? Forward it!
Supported by Arlo Skye and WorkShape.
ABOUT MUSTAFA
I'm Mustafa (@mustafasuleymn), and I've spent most of my life working on ways to tackle complex social problems at scale. While the world continues to improve in many respects, we remain overwhelmed by injustice and suffering on a level that can often feel impossible to address.
Even if the history of technology transforming society for the better is a familiar narrative, we can't assume beneficial outcomes will emerge by default. In fact, as technology advances, ethical challenges will continue to grow. I do, however, believe that AI will play a crucial role in helping people unearth new strategies to solve complex social problems and make more efficient use of our natural resources.
This positive future won't happen simply because AI systems magically discover how to do the right thing. It will happen because we, as a society, collectively figure out how we can justly use, govern and control these systems. It also means finding ways to distribute their benefits widely and fairly. For me, an integral part of ethical AI development is provoking wider conversations about fairness, justice and the kind of world we want to live in, and feeding those insights back into these technologies.
This is going to be really hard. It means bridging gaps between technologists, companies, government, civil society and the wider public (gaps that can often seem painfully large) to make sure we shape these tools together and put social impact and ethics at the heart of everything we do.
I'm pleased to have the chance to edit an edition of Exponential View (thanks, Azeem!) and I wanted to use it to explore these themes a little further. If you have ideas about how to make this a reality, please get in touch!
DEPT OF INCENTIVES
As a society, we're struggling with a daunting set of problems, from neglected infectious diseases affecting a billion people to raging inequality and frighteningly unsustainable consumption. Technology can help us understand and tackle these challenges, but there is less faith than ever in this once-feted industry.
This TechCrunch piece sums up the commentary about the tech industry over the last few months, arguing that Silicon Valley has gone from "hero to villain".
#️⃣ The industry is finally on a hard but essential journey towards greater diversity and representation. As the flurry of #metoo turns into a blizzard, and anger rightly turns on #himtoo, it's clear that this journey is far from over. From images of hard-drinking brogrammers calling the shots, to the exposure of the misogynist culture of some Valley VCs and the underrepresentation of women and minorities (see their stories and the Kapor Center's study on how it affects career choices), we must all consider what changes are necessary to create the safety and room required for diverse voices not just to be heard, but to play an equal role in shaping our societies at every level.
We also need to rethink the standard metrics our industry uses to measure progress. Investment round valuations (which are severely inflated, according to Stanford researchers), "active users" (read Paul Lewis's excellent summary of the attention/distraction debate) and revenues are the crudest of proxies for company success, and largely ignore externalities, longer-term consequences, diversity and wider social purpose, to name a few. This FT view proposed that businesses should "fix the failures of capitalism", failures that even the head of the CBI has blamed on "a fixation on shareholder value at the expense of purpose", by coming up with "concrete proposals on how to change executives' incentives".
DEPT OF DOING BETTER
Tech isn't value-neutral, and technologists need to factor in the downstream effects of their efforts, as this excellent AI Now report argues. Well-intended actions can still cause unintended harms. Where might classroom technologies that measure students' engagement levels by monitoring their brainwaves take us? Such a narrow field of vision can lead the market to prioritise "first world" problems (rentable "manservants" and personalised soda drinks) over real-world problems.
But the application of technology to real problems is rarely plain sailing; it's a story of messy and uneven progress. Patients and clinicians could both benefit from better tech, yet many doctors continue to use pagers, and most of us can't change a hospital appointment without speaking to someone on the phone. Partly due to this lack of digital maturity, many health systems are under enormous strain.
AI is showing promising signs in helping clinicians identify disease at an earlier stage, but there is a real risk that perpetuating existing biases could widen existing health inequalities. For example, genomics start-ups (a field that has attracted $3B in VC investment this year) face a lack of diversity in genetic data: only 3.5% of genetic data comes from African or Hispanic populations, which can lead to ineffective or even dangerous treatments for some populations.
Despite the complexities, it's exciting to see a growing set of examples of civil society and tech coming together to figure some of this out:
Joy Buolamwini and the Algorithmic Justice League are developing new ways to report bias at companies and in software, and offering services for testing technology with diverse users.
Researchers from Data & Society, the National Academy of Engineering, a wide set of universities and Microsoft have proposed ethical guidelines for organisations conducting research with "big data".
ProPublica and Google News Lab have teamed up to document reported hate crimes across the U.S., using Googleās Natural Language API to visualize the information in interesting ways.
We're also seeing radical ideas get a lot more air time, like George Monbiot's proposal for a commons to safeguard societal interests:
A commons, unlike state spending, obliges people to work together, to sustain their resources and decide how the income should be used. It gives community life a clear focus. It depends on democracy in its truest form. It destroys inequality. It provides an incentive to protect the living world. It creates, in sum, a politics of belonging.
Finally, The Economist questions whether data should be treated as a utility, like water and electricity, and asks if users would be prepared to pay a fee for using a service instead of watching adverts.
There are no easy answers, and progress towards better, fairer standards for technology will always be messy and imperfect. But if we can break down silos and discuss these issues openly, that will be our best hope of ensuring these systems reflect our highest collective selves.
FOOD FOR THOUGHT
Reading Scott Rosenberg's account of how "everybody fucking knew" about Harvey Weinstein is sickening, and raises a larger question: when a culture (including liberal elites) produces this much sexual harassment, it's no longer an accident. So what needs to change?
How often do women take steps to avoid attention, and how much work must they put into feeling safe? Dr. Fiona Vera-Gray documents what this looks like, from developing a "death stare" to crossing the road at night to avoid strangers.
Has the new media elite created its own filter bubble? As local newspapers decline, can we be sure national or global coverage reflects the truth on the ground? That's one explanation for why Trump voters don't trust journalists.
Columnist Bret Stephens argues we've lost the ability to have proper political debates because
the primary test of an argument isn't the quality of the thinking but the cultural, racial, or sexual standing of the person making it.
Can we escape identity-driven politics to create room for thought?
END NOTE
I hope you enjoyed Mustafa's guest issue of Exponential View. I think he does a wonderful job of outlining the complexities and nuances of some key debates, and providing us with sufficient context to reflect on them.
Please take a moment to thank Mustafa with this tweet!
On a practical note, send us an email if your company would like to sponsor us for November.
Cheers
Azeem