Azeem Azhar’s Weekly Wondermissive: Future, Tech & Society
This issue has been supported by our partner: Ocean Protocol.
Ever wonder how you can give your dataset a unique ID without needing a central ID provider? Introducing Decentralized Identifiers (DIDs).
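The general idea behind a decentralized identifier can be sketched in a few lines: derive the ID from the dataset's own content, so that it is unique and verifiable without any central registry. This is a minimal illustration of content-addressing in the W3C DID format, not Ocean Protocol's actual implementation; the `example` method name is a placeholder.

```python
import hashlib

def dataset_did(data: bytes, method: str = "example") -> str:
    """Derive a W3C-style decentralized identifier (DID) from dataset
    content. Hashing makes the ID deterministic and verifiable, so no
    central ID provider is needed. Illustrative sketch only."""
    digest = hashlib.sha256(data).hexdigest()
    return f"did:{method}:{digest[:32]}"

# The same bytes always yield the same identifier;
# different bytes yield a different one.
print(dataset_did(b"my dataset contents"))
```

Because the identifier is a pure function of the content, anyone holding the dataset can recompute and verify it independently.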
You’re reading the free edition of the Exponential View
If you enjoy reading Exponential View, consider subscribing—monthly and annual plans are available. We have a special discount for students and academics (email firstname.lastname@example.org to find out more).
This weekend I am away on my annual camping trip with my elder daughter, so I have asked my friend, Dr Stephanie Hare, to handle this week’s Exponential View. Stephanie is an independent researcher and broadcaster who is spending a great deal of time understanding the issues relating to biometrics and facial recognition. About a decade ago, she introduced me to the idea of ‘corporate foreign policy’, the notion that the very large tech platforms had a unique structural position in geopolitics and the global economy, and so nations would need different and novel ways of engaging with them. We've seen that trend play out in recent years.
I'm enjoying the outdoors, sunny weather and a few water fights. I hope you'll enjoy Stephanie's Exponential View.
About Stephanie Hare
Stephanie Hare here!
I’m a researcher and broadcaster working across technology, politics and history. Previously, I worked at Accenture Research, Palantir, and Oxford Analytica. I was the Alistair Horne Visiting Fellow at St Antony’s College, Oxford, after earning a PhD and an MSc in Theory and History of International Relations, both from the London School of Economics. See you on Twitter!
Dept of the near future
📋 To care for humans, robots will need to learn to respond to constantly changing environments: ‘[o]ne day, it could mean robot carers that are as in tune with human needs and as capable of meeting them as another human.’
🇺🇸🇨🇳 Former US Treasury Secretary Henry Paulson argues that the balkanization of the tech industry kick-started by the US campaign against Chinese company Huawei will backfire. According to Paulson, the US’s problems here are twofold: firstly, if the US forces other nations to choose between it and China, some of them will choose China. Secondly, the US risks isolating itself if others look at the precedent set by the Huawei ban and decide simply not to engage with the US. We’re already seeing parts of this backlash play out in the EU’s debate over 5G. American companies are also seeing impacts on their operations internationally as Chinese partners seek out home-grown alternatives. (See also: Western intelligence agencies reportedly targeted the users of Yandex, ‘Russia’s Google’.)
🔎 Searching for a 21st-century liberalism.
⌛ Climate catastrophe: 413.82ppm | 3,979 days
Each week, we’re going to remind you of the CO2 levels in the atmosphere and the number of days until reaching the 450ppm threshold, the point at which significant portions of the planet’s ice will melt away.
The latest measurement (as of June 25): 413.82 ppm; 12 months ago: 409ppm; 50 years ago: 326.66ppm; 250 years ago, est: 250ppm. Share this reminder with your community by forwarding this email or tweeting this.
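The countdown above follows from simple arithmetic, which can be sketched as below. The growth rate of roughly 3.3 ppm per year is an assumption implied by the 3,979-day figure; it is a linear extrapolation, not a climate model.

```python
CURRENT_PPM = 413.82        # latest measurement (June 25)
THRESHOLD_PPM = 450.0       # level at which major ice loss is expected
GROWTH_PPM_PER_YEAR = 3.32  # assumed linear growth rate

def days_to_threshold(current: float, threshold: float, rate_per_year: float) -> float:
    """Days remaining until atmospheric CO2 reaches the threshold,
    assuming a constant linear growth rate."""
    return (threshold - current) / rate_per_year * 365.25

# Roughly 3,980 days under these assumptions, in line with the countdown above.
print(round(days_to_threshold(CURRENT_PPM, THRESHOLD_PPM, GROWTH_PPM_PER_YEAR)))
```

A faster growth rate (recent years have seen over 4 ppm/year) would shorten the countdown considerably.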
EV reader, Professor Eric Beinhocker, argues that morality, not statistics, will fuel the fight against climate change. He looks to the abolition movement for inspiration on how morality can drive real social and economic change: ‘The climate change movement must become a Carbon Abolition movement.’
💭 Climate change adaptation is not just a matter of keeping the sea out or building more resilient city infrastructure. It’s also about social adaptation—the laws, the institutions and the basic commitment to recognising the rights and needs of those worst affected by climate change.
⭐ In this paper in Joule, Auke Hoekstra demonstrates that a switch to battery electric vehicles from internal combustion engines reduces carbon emissions to 98g/km (compared to 244g/km with a diesel or petrol engine). If we had a fully renewable energy system, carbon emissions would decline to 10g/km. (These are fully-loaded estimates including the costs of creating batteries and charging them using estimates for the electrical mix used across their lifetime.)
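The relative improvements in Hoekstra's figures can be checked with quick arithmetic (the g/km values are taken from the paragraph above):

```python
ICE_G_PER_KM = 244        # diesel/petrol internal combustion engine
BEV_TODAY_G_PER_KM = 98   # battery EV on today's electricity mix
BEV_RENEWABLE_G_PER_KM = 10  # battery EV on a fully renewable grid

def reduction_pct(baseline: float, alternative: float) -> float:
    """Percentage cut in lifetime per-km emissions versus the baseline."""
    return 100 * (baseline - alternative) / baseline

print(f"BEV today: {reduction_pct(ICE_G_PER_KM, BEV_TODAY_G_PER_KM):.0f}% lower")      # ~60%
print(f"BEV, renewable grid: {reduction_pct(ICE_G_PER_KM, BEV_RENEWABLE_G_PER_KM):.0f}% lower")  # ~96%
```

So even on today's grid, switching cuts lifetime per-km emissions by roughly 60%, rising to about 96% on a fully renewable system.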
Dept of surveillance technologies
I’ve been researching biometrics for the BBC and my forthcoming book on technology ethics. Biometrics, which include our face, voice, eye (iris, retina), gait, footprint, veinprint, are fascinating because they represent things that seem so separate from technology—our bodies, existing in the physical world—and translate them into data, thereby embodying us in the digital world.
Biometrics are big business, especially when paired with AI and surveillance technology.
Amazon, for example, received a patent for ‘surveillance as a service’ this month. In 2018, the AI video analytics market was already estimated to be worth more than $3.2 billion, a value that’s projected to balloon to more than $8.5 billion in the next four years. The behavioural biometrics market is predicted to reach $3.9 billion by 2025, with the largest share of the market dominated by voice recognition (see Dr Rita Singh’s groundbreaking research on voice).
Governments are going big on biometrics, too: the global government biometric market is set to reach $7.9 billion by 2027, with more than 35% of that market led by fingerprint recognition, followed by facial recognition. China’s surveillance and safety industry is expected to grow strongly to 800 billion yuan ($116 billion) by 2020.
While the use of biometrics to facilitate mass surveillance is problematic, there are more benign uses (although even these are not so clear-cut). They can provide a more secure digital identity, including for people who do not have traditional forms of ID; coupling biometrics with AI is driving innovation in forensic science; and customers seem to prefer contactless transactions, even if they only speed up the process by a couple of seconds.
Yet a backlash is growing, particularly against facial recognition technology due to its track record of misidentifying women and people of colour (scrutinised recently in the US Congress and the UK Parliament).
The technical problem of facial recognition technology’s bias and inaccuracy might be solved eventually, given enough training data. It’s less clear how to solve the social problem of facial recognition technology, as this requires us to ask: do we even want this technology at all? If we do, then we need to decide under what conditions, and whether we need additional protections and rights.
One of the problems is that surveillance is following the business model of the internet, which depends on taking our data and compromising our privacy. We face a hard U-turn as we navigate questions of consent and of data extracted without user knowledge—practices we are only now beginning to tackle, decades after the first computers came online (we have yet to tackle consent successfully even for junk mail!).
No one prepared us for a world where we would post our personal photos one day on Flickr, only to later learn that IBM had used them to train facial recognition systems. (Earlier this month, Microsoft deleted a database of 10m images of 100,000 people which was used to train a system operated by military researchers and Chinese firms linked to the crackdown on ethnic minorities).
One of the questions I hear regularly is: We already have CCTV everywhere, so how is using facial recognition technology any different? This is often accompanied by the all too common argument: I have nothing to hide, so I have nothing to fear. This misses the point, as security expert Bruce Schneier explains: ‘Too many wrongly characterize the debate as “security versus privacy.” The real choice is liberty versus control.’ And researcher danah boyd points out that ‘[p]eople often feel immune from state surveillance because they’ve done nothing wrong.’
But access to our data translates to real dangers for people who are vulnerable, which includes anyone who is in an abusive relationship or is trying to escape one; is LGBTQIA, particularly in a country where anything other than heterosexuality is illegal (for example, Grindr was a victim of US-China tensions); or needs to protect their information due to their profession, such as lawyers, judges, doctors, politicians, government officials, police officers, members of the armed forces or security services, as well as journalists, activists, trade unionists; or is associated with anyone who is in any of these categories (in other words, most of us!).
Furthermore, living under mass surveillance has a chilling effect that threatens our most cherished freedoms. This has inspired a landmark legal challenge in the United Kingdom, while in Hong Kong protesters have had to take extraordinary precautions to avoid leaving digital trails that would prove their attendance. The shifting framework for human rights in the age of surveillance is presenting us with some thorny questions: we do not allow the police to enter and search our property without a warrant, so why would we let them take our face, voice, fingerprint or DNA data when this can be far more revealing? Do we want our phones to have greater protections than our faces? [🐦 TWEET THIS] We do not allow the police and data scientists to follow us around, recording us indefinitely as we go about our daily business, day in, day out, so why would we let them do so via facial recognition and other surveillance technologies?
With incentives for governments and private companies pointing towards the proliferation of biometric surveillance, what legal protections, oversight and additional rights can we put in place?
A number of different approaches have emerged.
In the United States, some cities, states and companies are taking action.
San Francisco, home of America’s tech elite, banned the use of facial recognition technology by the police and other agencies (but not by commercial entities). Somerville, in Massachusetts, just became the first city on the East Coast to do the same. Several states are considering similar action (California and Massachusetts among them), while Illinois already has the nation’s strongest biometric protections.
Axon, which supplies 47 of the 69 largest US police agencies, banned facial recognition technology this week because its ethics board decided the technology is not reliable enough to justify its use. This shows that companies do not have to wait for the government to regulate them—they can decide their own ethical position.
The United Kingdom is pursuing a national strategy, but given the Brexit gridlock, change may stand a better chance of coming from legal action by Liberty and Big Brother Watch, or from the Information Commissioner’s investigation, than from anything that could come out of Parliament. That said, some members of parliament across the political parties have called for a ban or a moratorium on facial recognition (David Davis, Conservative, and Jo Swinson, Liberal Democrat) and have demanded a new Biometrics Strategy, as the one published last year did not cover facial recognition technology and is ‘not fit for purpose and needs to be done again’ (Darren Jones, Labour).
By contrast, bigger organisations such as the European Union and the United Nations (see the UN Special Rapporteur on freedom of opinion and expression David Kaye’s report in which he called for an immediate moratorium on the sale, transfer and use of surveillance technology) are proposing actions that, while welcome, will likely take years to translate into action—if at all.
The efforts to question this technology are welcome. But if we do nothing, most companies will do nothing. When it comes to technology ethics, most companies are not like Axon. Most companies are like Amazon: they will sell their technologies to equip police and military forces faster than norms and ethics can adapt—unless we act.
❓ What do you think we should do? I’d love to hear your views on Twitter.
💡 Meanwhile, I’ve put together a Twitter list of some of the best researchers and analysts of facial recognition, surveillance and biometrics topics.
📚 For your summer reading, here’s a list of books on technology and ethics that I recommend.
Short morsels to appear smart at dinner parties
Good kids? Today's teenagers drink less, smoke less, take fewer risks, and are even more likely to wear bike helmets.
🎓 Doing well on standardised testing correlates with being wealthier, whiter and more male. That has implications for college admissions.
🎮 Alibaba is using gamification to modify behaviour in China.
A disinformation expert walks into a public library... If this were the set-up to a joke, it wouldn't be very funny—the punchline is that the same methods used to combat disinformation are further marginalising disadvantaged people.
👀 Near-death experiences may have more in common with ketamine trips than you’d think.
😟 North Korea’s HIV epidemic emerges from the shadows.
As we head into the summer break, I want to let you know that EV will appear in your inbox as usual. Including Stephanie, we have three superb guest curators who will be taking over the Sunday newsletter while I am away. They are all world experts with insightful, lateral brains, so I am confident you’ll enjoy it.
As always, take a moment to thank this week’s guest curator, Stephanie Hare. Twitter is best. And she is happy to get into a discussion about some of the issues raised.
Sent from a sunny campsite north of London!
P.S. Scroll down for updates from your fellow EV readers!
This issue has been supported by our partner: Ocean Protocol.
Interested in more protocol details?
Learn how metadata (that is, data about the data) works on Ocean Protocol
What you are up to—notes from EV readers
Congrats to Zaid Al-Qassab for his appointment as Chief Marketing Officer of Channel 4.
Nicolas Colin writes in Forbes that today's employees are more like hunters than settlers, and that businesses need to adapt accordingly.
Ian Hogarth just co-published the latest State of AI report.
Yann Ranchere’s insightful piece on financial services for Gen Z.
Jan Erik Solem’s Mapillary just released the world's largest publicly available dataset for teaching cars to understand traffic signs globally.
Paul Thagard: ‘AI should be based on human needs, not on greed’.
Milly Shotter shares Bethnal Green Ventures Learning and Impact report: one of the highlights revealed that, in 2018, £0.44 in every £1 invested by BGV went to all female founding teams and £0.21 to mixed teams (vs. industry average of £0.01 in every £1 and £0.10 in every £1 respectively).
Lysandre Debut writes how to scale a massive state-of-the-art deep learning model in production.