🔮 DeepMind’s grand challenge; Tencent gaming & the cost of AI; Yamanaka factors; ageing, hydrogen airplanes & clean meat ++ #299

Hi, I’m Azeem Azhar. I convene Exponential View to help us understand how our societies and political economy will change under the force of rapidly accelerating technologies.

This Friday, I asked the members… If you were given $20 million to execute the most effective strategy to mitigate the climate crisis, what would you do? Brilliant answers.

🎧 Investor Kai-Fu Lee is back on my podcast, discussing how Covid-19 has accelerated automation and consumer robotisation in China. He’s optimistic that technology could help build bridges between China and its Western counterparts. Listen on Apple Podcasts or your platform of choice.

A couple of weeks ago I discussed SPACs, investing in climate change mitigation, and the future of Silicon Valley with Chamath Palihapitiya. The transcript is available here for members; you can listen to the full episode here.

Dept of the near future

🧬 DeepMind’s protein folding breakthrough is a real triumph for artificial intelligence. “It’s one of biology’s grand challenges – proteins curl up from a linear chain of amino acids into 3D shapes that carry out vital tasks.” The AlphaFold system solved it using deep learning and a comparative strategy to determine how a protein folds into its 3D shape. Even more remarkable in this historic announcement: training the model seems to have cost around $20,000, according to EV reader Jack Wang (compare that with Tencent’s recent announcements below). See also EV member Vishal Gulati’s considered view on AlphaFold 2 and what it will take to get drugs into development. My view: this is a classic example of “AI as a tool”, helping us improve the quality and speed of scientific discovery.
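
For the curious, here’s a toy sketch of what the prediction problem looks like. This is not DeepMind’s code – the sequence and coordinates are made up – but it shows the kind of target (a residue-to-residue distance map) that AlphaFold-style models learn to predict from a one-dimensional sequence.

```python
# Toy illustration (not DeepMind's method): the folding problem maps a 1D
# amino-acid sequence to 3D structure. One common learning target is the
# pairwise distance map ("distogram") between residues.
import numpy as np

sequence = "MKTAYIAKQR"            # a short, hypothetical amino-acid sequence
n = len(sequence)

# Stand-in coordinates: in reality, these are what a model must predict.
rng = np.random.default_rng(0)
coords = rng.normal(size=(n, 3))   # one (x, y, z) position per residue

# Pairwise residue-residue distances -- the kind of map a model learns to
# predict from sequence (plus evolutionary context).
dist_map = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
print(dist_map.shape)              # (10, 10)
```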

🕹 Speaking of Tencent… The company is turning its AI systems to strategy games. While its systems have been defeating top esports players in games like Honor of Kings, the cost is breathtaking in comparison with AlphaFold 2. The Tencent training cluster involves 250,000 CPU cores and 2,000 Nvidia V100 GPUs, compared with other benchmarks like Dota 2, which used 150,000 CPUs and 3,000 GPUs. Jack Clark offers a key perspective on the sheer power (and cost) of Tencent’s gaming AI: “These computer infrastructures [are] like telescopes – the larger the set of computers, the larger the experiments we can run, letting us ‘see’ further into the future of what will one day become trainable on home computers.”

📚 The National Bureau of Economic Research found that machine readers and artificial intelligence programmes might be skewing corporate reporting. How so? Companies expecting high levels of machine readership manage the “sentiment and tone of their disclosures to induce algorithmic readers to draw favorable conclusions about the content”, and they may prepare their disclosures in ways that are easier for machines to parse. (Interesting fact: 15 years ago, I was Head of Innovation at Reuters, where one of our teams launched the world’s first newswire service designed to be processed and ‘read’ by the early algorithmic systems operated by upmarket hedge funds.)
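
To make the gaming concrete, here’s a minimal sketch of the kind of tone-scoring an algorithmic reader might do. The word lists are hypothetical and real systems use far richer lexicons and language models, but this is the sort of signal firms are now writing for.

```python
# A minimal sketch of an "algorithmic reader": score a disclosure's tone by
# counting words from positive/negative lexicons. The word lists here are
# hypothetical; real systems use far richer lexicons and language models.
POSITIVE = {"growth", "improved", "strong", "record", "exceeded"}
NEGATIVE = {"decline", "impairment", "litigation", "weak", "restructuring"}

def tone_score(text: str) -> float:
    words = [w.strip(".,;:()").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)

disclosure = "Record revenue and strong growth offset a modest decline in margins."
print(tone_score(disclosure))  # 0.5 -- the kind of score firms may optimise for
```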

🧑‍💻 The UK is planning to create a digital competition regulator focused on digital advertising, general search, and social media markets. It is a much overdue move. One interesting proposal is for a “pro-competition unit”, which I read as an ex ante regulator with the power to enforce interoperability and separation remedies. This is a difficult space, the kind of thing that can easily go wrong (cookie notices, anyone?).

👀 Using Yamanaka factors, researchers have found ways to restore eyesight in mice with virtually no errors. This could lead to restoring lost sight in humans and extend to other restorative treatments for the effects of ageing. Go deeper on the science of ageing in my conversation with Harvard Professor David Sinclair.

📊 Our approach to technical risk and market risk in the VC context needs some fresh thinking, Jerry Neumann argues in a thoughtful blog post. It’s not risk that matters; uncertainty is the key. He explores the idea by looking at the companies that have done well in clean tech (Tesla and Beyond Meat) precisely because they pursued new markets, complexity and uncertainty. Long read.

🔋 Dept of decarbonisation: 412.78ppm | 3,463 days

Each week, I’m going to remind you of the CO2 levels in the atmosphere and the number of days until we reach the 450ppm threshold.

The latest measurement (as of December 3): 412.78 ppm; December 2019: 411.13 ppm; 25 years ago: 360 ppm; 250 years ago, est: 250 ppm. Share this reminder with your community by forwarding this email or tweeting this.
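
For anyone who wants to replicate the countdown, here’s a rough sketch of the arithmetic, assuming a constant annual rise. The growth rate below is an illustrative assumption, not a measured value, and the headline figure above may be based on a different rate or model.

```python
# Back-of-the-envelope countdown to the 450 ppm threshold, assuming a constant
# annual rise. The assumed growth rate is an illustrative placeholder.
from datetime import date, timedelta

CURRENT_PPM = 412.78          # reading cited above (3 December 2020)
THRESHOLD_PPM = 450.0
ASSUMED_RISE_PER_YEAR = 2.5   # ppm per year -- assumption, not a measurement

years_left = (THRESHOLD_PPM - CURRENT_PPM) / ASSUMED_RISE_PER_YEAR
days_left = round(years_left * 365.25)
print(f"{days_left} days, i.e. around {date(2020, 12, 3) + timedelta(days=days_left)}")
```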

EV reader Sophie Purdom has put together an excellent map of the key venture capitalists focusing on the climate tech domain. An incredible resource.

✈️ Zero-emission passenger planes will be commercially viable in the 2030s, according to Glenn Llewellyn, who’s the VP of zero-emissions technology at Airbus.

Short morsels to appear smart during the next lockdown

Over $10.8 billion was spent on Cyber Monday. A handsome figure, but it doesn’t hold a candle to the $74 billion spent on Alibaba’s Singles’ Day.

🚙 Comma.AI has created one of the best advanced driver-assistance systems available, outranking Tesla and Audi, according to Consumer Reports. (Well done. I was sceptical about Comma.AI after its initial hype. Proved wrong, yet again.)

⛹️‍♀️ How does a collective work? Complexity science is revealing the answers, bit by bit.

Erik Engheim explores why Apple’s M1 chip is so fast. It’s one whole computer on a single chip, which raises some interesting questions concerning the need to upgrade RAM.

🔥 South Korea’s magnetic fusion device KSTAR has set a new world record in maintaining super-hot plasma – it successfully operated at a temperature of 100 million degrees Celsius for 20 seconds. 🔥

No patients who took Moderna’s vaccine in its trial developed severe Covid-19. See also: a few doses of the experimental drug ISRIB can reverse age-related declines in memory and mental flexibility in mice, and may have potential as a Covid-19 treatment.

🐓 Singapore has given approval for ‘clean meat’ – lab-grown chicken that doesn’t come from slaughtered animals. (I’m on the fence about clean meat. Why not just eat vegetables?)

No surprise here: more high-net-worth individuals than ever are seeking residency and citizenship in other countries – an increase of 50% on last year.

🤷‍♀️ Who could forget the Gatwick drone? Samira Shackle has a great long read on the phenomenon in the Guardian.

👩‍💻 Google’s retaliation against organising workers was illegal, finds the NLRB. (Also see the endnote.)

End note

Timnit Gebru has a well-deserved reputation in the field of AI. A pioneering and active researcher in ethical AI, her work has identified many problems with current AI implementations. I’ve read some of her research output, including her work on model cards and, of course, Gender Shades, her key paper on intersectional accuracy problems in facial recognition. (You can read EV on her work here, here, and here.)

Gebru also co-led the Google Ethical AI team. Earlier this week, she said she had been fired by Google. The details of the email exchanges that led to her firing were revealed by Casey Newton. (Google says she resigned. You can read the emails to make your own judgement.)

The dispute was triggered by a research paper that Gebru and others had written, which “highlighted the risks of large language [AI] models”. Such models are increasingly used by Google in its core search business. Karen Hao, one of the few outsiders to have read the research, has a superb review of the paper.

This raises a bunch of complex issues, and we don’t know enough of the details. But here is why I think it is worth commenting on.

Gebru’s treatment does matter. Firms like Google (and many others) are building the digital infrastructure in which we will lead our lives for the coming decades. Understanding their culture is important. Technology is not neutral: its designers shape it with their values. Ethical AI work really matters if we want our digital infrastructure to be sensitive to, and accountable for, bias in its worst forms, particularly long-standing, structural and historical problems.

This episode highlights a number of challenges:

  • Industry-driven research may be lucrative and comfortable, but you are ultimately employed by a company with a profit motive, not the academy charged with furthering the frontiers of knowledge.
  • Regardless of a researcher’s best efforts, firms will still defend their bottom line.
  • Self-regulation in this area will not work. It didn’t work with oil companies and climate change. It won’t work with tech firms.
  • That Google dealt with Gebru by email is astonishingly bad form. As someone who has hired employees and had to let some go (more than 200 in the first camp and about 50 in the latter, mostly through restructuring), I’ve some experience of hiring, team- and company-building. I was surprised that Google ended her employment by email rather than bringing her in for a virtual face-to-face conversation. Nothing warrants treating such a respected leader in the organisation this way: “Don’t let the door hit you on the way out.” (Jeff Dean, who runs the team, published a longish explanation late on Friday night.)
  • Academia itself has suffered a significant brain drain, with ML experts heading to industry. An increasing amount of AI research is now produced by the biggest three or four firms. And university ML departments struggle for budgets to explore the most complex and pressing issues. Compute is expensive. Access to data could also be difficult.

My suggestion is straightforward:

  • Google says that the work that its Ethical AI team does is important. I agree with that view.
  • However, an in-house team is made up of employees who can’t really expect the same freedoms as an independent group. This circle doesn’t square.
  • Since the work is important to Google, can’t credibly be done by an in-house team, and academia is stretched, why doesn’t Google simply endow an independent research group, with sandbox access to the firm’s data, staffed by a diverse set of researchers? A non-repudiable endowment of $500m to $1bn, eminently affordable for Alphabet (Google’s parent), would suffice to create a decent research institution.
  • Separately, rules must change so that firms can focus on the business of responsibly and sustainably serving customers, rather than being forced to legislate for themselves, de facto, on the hoof.

Much to think about.

Azeem


What you’re up to – notes from EV readers

Congrats to Stewart Butterfield on selling Slack to Salesforce for $28 billion.

EV researcher Sanjana Varghese helped put together WIRED UK’s cover story for January/February, which features many EV readers as both nominators and nominees, including Carly Kind, Daniel Ek, Marietje Schaake, and more.

Lisa Solomon is moderating a fascinating discussion about the rise of mischievous materials and design’s role to shape a more inclusive future.

Abishek Gupta is quoted extensively in this good report on how the ethical AI field is handling the Google firing I discuss above.

Congrats to Samanth Subramanian and Carissa Veliz, whose books were shortlisted in the Economist’s books of the year!

Genna Barnett and Nesta have a new report out on mapping career causeways for employees whose jobs are at risk of automation.

Pete Dulcamara discusses humanity-centric innovation on The Artificial Podcast.

Markus Schorn co-published 10xDNA: Mindset For a Thriving Future with Frank Thelen. They’d like to gift a dozen books to readers in Europe and N. America; fill out this form to receive a copy on a first-come-first-served basis.

Aidan Fitzpatrick and his team have spent 2020 building Camo, an app that transforms your iPhone camera into an ideal remote work webcam. (It is really awesome!)

In case you missed my short explainer on how I webcast, the piece is live on LinkedIn. Look out for a longer essay next week about building my remote setup.

Finally, on December 16, I will speak at Media & Culture Fast Forward about the Exponential Age and its impact on our business models. Register here for free and I hope to see you there!

Email marija@exponentialview.co to share your news and projects with other members.
