🔮 The privacy commons; mirror world; AI and geopolitical competition; morality, π and conscious fish++ #205
Feb 17 | Public post
Azeem Azhar’s Weekly Wondermissive: Future, Tech & Society
This issue has been supported by our partner: Ocean Protocol.
Ocean Protocol is launching their network soon! See their exciting Roadmap ahead.
You are reading the free version of Exponential View. While this newsletter will stay open to all, you are missing out on at least 25 special issues and exclusive conversations with some of the most brilliant minds shaping our future.
Our upcoming members-only briefings will address the future of transportation, climate change and intangible economy; dates to keep in mind:
March 1, Horace Dediu: How is micromobility transforming urban transit and how does it impact the future of cars?
March 22, Reilly Brennan: The state of autonomy: electric trucks & light vehicles.
If you're a student or an educator in academia, contact our support at firstname.lastname@example.org for details of academic programmes.
Dept of the near future
👬 Kevin Kelly on the coming of “mirrorworld”, the digital, augmented reality twin of the real world. Like Borges’ cartographers, advances in augmented reality are building a 1:1 scale map of the world which could become the next computing platform. (Azeem's comment: the simulacra we are building are in some cases more high-fidelity than the real world because “digital twins” often have attributes that don't exist in their physical manifestations. I see a chaotic rather than well-ordered progression in this direction.)
🔏 Jon Evans: “Privacy is like voting. An individual’s privacy, like an individual’s vote, is usually largely irrelevant to anyone but themselves […] but the accumulation of individual privacy or lack thereof, like the accumulation of individual votes, is enormously consequential [...] if we need to defend privacy as a commons — and we do — then we can’t start thinking of it as an individual asset to be sold to surveillance capitalists. It, and we, are more important than that.” (See also, private firms now have more than 26m DNA profiles, courtesy of home ancestry tests. It is an incredibly valuable resource consolidated in a few hands. And Fitbit launches a fitness tracker for employees and insurers.)
☠️ Drones of mass destruction: how drone swarms impact the balance of nuclear, chemical and biological warfare. The British say they want drone swarms for the air force by 2022, and are holding a hackathon in March on the topic. Azeem’s comment: Drone swarms will be a potent new tool for military arsenals and together with high-speed precision missiles, render large-scale power projection assets, like aircraft carriers, rather vulnerable.
🧩 Kevin Werbach: Is regulation one way to restore faith in blockchain systems? #assumptionshaker
⭐️ Computer programming once had much better gender balance than it does today. What went wrong? I’m not going to try to do justice to Clive Thompson’s beautiful writing and summarise his argument, but this is worth a read.
Dept of national strategies
Understanding China's national strategy on AI. We’ve long pointed out that China views dominance in AI as crucial for national security. The country is moving fast to reduce dependence on foreign suppliers. While there is a growing awareness that this may become an arms race (and so some understanding that arms-control protocols may be needed), that won’t stop China from putting the pedal to the metal. Semiconductors will play a key part in the future focus of China’s national AI strategy, but so will the declared “National AI Champions”: Baidu, Alibaba, Tencent, iFlytek, and SenseTime. (Six of the world’s eleven AI unicorns are Chinese.)
Europe continues to lag behind China and the US in AI, suggests a detailed study by McKinsey. Weaker than the US on core research, patents, startup formation and venture capital investments, and smaller in scale with less co-ordinated state focus than China. (My comment: Europe's picture is spotty. Some governments, like France and the UK, have put together explicit AI plans. The UK also has players right across the stack, from Graphcore in chips to Prowler (odd choice of name!) in advanced algorithms. The Nordics have a strong entrepreneurial basis and well-educated, resilient populations. So, yes, Europe is a bit of a plum pudding.)
Donald Trump, the President of the US, signed an Executive Order on artificial intelligence. Light on specifics, heavy on bombast, was one conclusion.
Elsewhere: Ant Financial, the massive Chinese financial services player that spun out of Alibaba, is acquiring the UK’s WorldFirst, in its biggest overseas expansion deal. (Ant Financial is an AI-driven financial services firm without compare. To get a flavour of it, watch this presentation by Alan Qi, their chief data scientist, on the EV stage at the CogX Festival last year.)
Dept of artificial intelligence
OpenAI, a private research group, demonstrated GPT2, an AI text generator, which the group reckoned was too dangerous to be released publicly. Some of the text generated can seem quite realistic. The following was created by GPT2. It is pretty good:
The first thing that strikes you about GPT2 is its simplicity. First, the system is built on unsupervised learning from text, which essentially means the software is trained to spot instances of certain words that, when paired with other words, trigger the system to give it a human review of them. The system also learns through example and repetition, but does not have the capability to recognize itself.
Yes, the above was generated by a machine. For another example, read this realistic article about talking unicorns produced through GPT2.
So this clearly creates many more risks. The Web is already awash with low-quality human-generated spam and misinformation, but this sort of thing could dramatically lower the cost. And it could play potently into the confirmation bias of the conspiracy-prone, like anti-vaxxers.
I have some misgivings about the hyperbolic manner in which GPT2 was announced. Nvidia’s Anima Anandkumar says it better than me: “What [OpenAI is] doing is opposite of open. It is unfortunate that you hype up + propagate fear + thwart reproducibility + scientific endeavor.”
Barney Pell, whose firm Powerset built a deep text parser and was subsequently acquired by Microsoft, said:
What is worrying (and fascinating) here is that the generated text is coherent. Our ability to rapidly discard an article based on bad spelling, bad grammar, lack of structure and lack of coherence are all addressed here. Much harder to filter!
These technologies will diffuse rapidly. It only took a few months for DeepMind’s Go-playing wonder to inspire an open-source clone, Leela.
While it doesn’t hurt to have some methods or policies of review, reflection and attenuation in place, the question is less about preventing the diffusion. It is about how we start making the systemic changes to build resilience into our systems.
And systemic means that. Not just technical solutions, like blockchain-based approaches, that could ensure provenance and authority, but also investments in building social capital and trust across societies that help them withstand disinformation. It also stretches to investments in improving the media literacy and analytical toolkits of citizens. For example, globally about two-thirds of people do not know that the Facebook newsfeed is algorithmically generated. It also means reducing economic incentives for publishers, platforms and advertisers to recirculate pap and lies.
Also, see This Person Does Not Exist, which uses generative adversarial networks to create realistic looking faces of non-existent people. Hit reload for another shot.
🍟 The golden age of chip architecture. Developments often framed as problems (end of Moore's Law, deceleration of performance gains, the end of Dennard scaling) are inescapable facts but also potential opportunities for the chip industry.
High-level, domain-specific languages and architectures, freeing architects from the chains of proprietary instruction sets, along with demand from the public for improved security, will usher in a new golden age for computer architects. Aided by open source ecosystems, agilely developed chips will convincingly demonstrate advances and thereby accelerate commercial adoption.
This is quite technical but interesting for any historians of the industry. See also, Nvidia, the first of the post-Intel chip giants, announced terrible results, driven by the crash in crypto prices. Jack Wang breaks these results down cleanly. SoftBank has offloaded its entire $4bn stake in Nvidia. Nvidia once seemed like such a runaway winner as demand for compute exploded.
Google researchers outline in more detail one proposed approach to federated machine learning. This allows the algorithm to be trained on decentralised data: for example, data held on a multitude of different devices (like phones). The model ‘addresses the fundamental problems of privacy, ownership, and locality of data’ because the data does not actually leave the person’s device.
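The core of this approach, federated averaging, can be sketched in a few lines. This is an illustrative toy, not Google's implementation: a one-parameter linear model, three simulated "devices" with made-up data, and a server that averages locally trained weights. The point is structural: only model weights travel; the raw data never leaves each client.

```python
# Toy sketch of federated averaging. Everything here (model, learning
# rate, client data) is illustrative, not Google's actual system.

def local_step(w, data, lr=0.01):
    """One gradient step of a 1-D linear model y = w*x, on one device's data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, client_datasets):
    """Each client trains locally; the server averages the resulting weights.
    Only weights are communicated -- raw data stays on the clients."""
    local_weights = [local_step(global_w, data) for data in client_datasets]
    return sum(local_weights) / len(local_weights)

# Three simulated phones, each privately holding samples of y = 3*x.
clients = [
    [(1, 3), (2, 6)],
    [(2, 6), (3, 9)],
    [(4, 12), (5, 15)],
]

w = 0.0
for _ in range(50):
    w = federated_round(w, clients)
print(round(w, 2))  # converges toward the true slope, 3.0
```

Real deployments add the pieces this sketch omits: secure aggregation so the server cannot inspect individual updates, client sampling, and compression of the transmitted weights.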
💯 The Leverhulme Centre for Future Intelligence & the Nuffield Foundation released a report on the ethics of artificial intelligence. There are many such reports being produced by companies, countries and non-profits, but most are simply lists of principles; we’re drowning in them. This report takes a different approach. It is actually a set of questions, often framed as dichotomies, to be a jumping-off point for further exploration. I believe it’s a more useful analytical tool for this set of questions, and this period in time. I spoke at a debate launching this report last week. (See also Morsels this week.)
Online pricing algorithms may collude with each other: “these autonomous pricing algorithms may independently discover that if they are to make the highest possible profit, they should avoid price wars. That is, they may learn to collude even if they have not been specifically instructed.”
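The setup behind that finding can be sketched minimally. This is a hedged illustration, not the study's actual experiment (which used stateful Q-learners with memory of past prices): two independent bandit-style learners each pick a price, observe only their own profit, and update, with no channel for communication or any instruction to coordinate.

```python
import random

random.seed(0)

# Illustrative sketch: two independent price-setting learners.
# Demand model (made up for the sketch): the cheaper firm sells one
# unit (split if tied); cost is zero, so profit = price for the winner.
PRICES = [1, 2, 3, 4]   # possible price levels
EPS, ALPHA = 0.1, 0.2   # exploration rate, learning rate

# One Q-row per firm: estimated profit for each price (stateless bandit).
q = [[0.0] * len(PRICES) for _ in range(2)]

def choose(qrow):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.randrange(len(PRICES))
    return max(range(len(PRICES)), key=lambda a: qrow[a])

for _ in range(5000):
    actions = [choose(q[0]), choose(q[1])]
    p = [PRICES[actions[0]], PRICES[actions[1]]]
    if p[0] == p[1]:
        profit = [p[0] / 2, p[1] / 2]
    else:
        winner = 0 if p[0] < p[1] else 1
        profit = [0.0, 0.0]
        profit[winner] = p[winner]
    for i in (0, 1):
        # Each firm updates from its own profit only -- no communication.
        q[i][actions[i]] += ALPHA * (profit[i] - q[i][actions[i]])
```

Memoryless bandits like these tend toward price competition; the study's result is that once the learners condition on the recent price history, they discover that undercutting triggers retaliation and settle on supra-competitive prices, without ever being told to.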
🔥🔥 Climate catastrophe 412.28ppm
Each week, I’m going to remind us of the level of CO2 in the atmosphere. We must avoid a level of 450 parts per million.
EV reader Bill Gross talks about EnergyVault, his new renewable-energy storage solution, which comes in at 3.5¢/kWh and, when paired with utility-scale solar, is within 0.5¢/kWh of being competitive with any new fossil-fuel plant. It is quite impressive.
Short morsels to appear smart at dinner parties
🧭 An Oxford study suggests there are seven universal rules of morality. “People everywhere face a similar set of social problems, and use a similar set of moral rules to solve them.”
Interoception: Sarah Garfinkel on her research demonstrating how we think and feel “is shaped by this dynamic interaction between body and brain.”
An accessible introduction to Polkadot, which aims to build a global network of blockchains.
🚗 🚗 🚗 Volvo's car subscription service is off to a cracking start, but dealers are trying to kill it.
💕 Rabbinical wisdom for Valentine’s week: all love is fish love. 💕
🌀 Video: A remarkable way to calculate π.
Gorgeous photo essay of Chernobyl three decades after the accident. “Tell people Chernobyl is not such a horrible place.”
Our private expert directory now hosts almost 500 readers who’ve offered to be close by when we go looking for expert commentary, interviews, podcast guests, or event speakers.
If you haven’t shared your profile yet, please do so here.
P.S. Scroll down to see what EV readers are up to—this community is keeping busy!
This week’s Exponential View has been supported by our partner: Ocean Protocol.
Ocean Protocol is kickstarting a Data Economy by breaking down data silos and equalizing access to data for all.
Get involved and see how you can benefit from Ocean.
What you are up to — notes from EV readers
Dame Frances Cairncross’ review on the sustainability of high-quality journalism reported back to the British Government. It recommends measures to maintain the sustainability of high-quality journalism in the age of social media (Azeem and other EV readers were part of the advisory panel). BBC report here.
Julia Ross invites entrepreneurs from across the world to take part in Zinc’s company-building programme, focusing on products and services to provide quality of life for the elderly.
From Alex Housley's Seldon team: anomaly detection in a machine-learning pipeline using the Seldon Core open-source deployment platform.
Sam Smith makes some incisive points about the role of OKRs in guiding addictiveness in digital products.
Pascal Finette’s critique of Steve Blank’s assessment that McKinsey’s Three Horizons Model no longer applies to corporate innovation.
George Bevis’ Tide has won 16% of the SMB bank accounts in the UK.
Congrats to many readers at Lux Capital: Johnson & Johnson will acquire their portfolio company Auris, becoming the largest robotics and largest medtech private M&A deal in history.
Phil Hayes-St Clair is kickstarting a predictive healthcare company in Australia. As he puts it: all from a drop of blood, and nothing like Theranos.
Olly Buston is hiring Head of Think Tank to lead the programme for socio-economic and political impact of AI at Future Advocacy.
Rebecca Choong Wilkins comments on why Lampert’s bid won the contest for Sears. Hint: it’s not to save 50,000 jobs.
Will life in 2045 be one long gap year? Calum Chace will offer an answer on Monday, 19th, at this event in London.
Joshua Leigh shares a project his team and Hitachi did in collaboration: Trust2030 explores three different futures, and brings them to life with artefacts.
Sam Gilpin on how businesses can stay resilient in times of uncertainty.
Neal Freyman’s Morning Brew is 1 million readers strong. Congrats!
Mark Schaefer published a new book “Marketing Rebellion: The Most Human Company Wins”.
Scott Smith shares his podcast Underfutures which looks at the contrarian sides of future issues.
Well done to Anne Boden for netting £75m in further funding for Starling Bank.
If you want to share your project or news, email Marija: email@example.com.