🔮 💥 The AI issue: progress, economy, jobs & learning; Tezos tantrum; paperclips, concrete, Soviet Woodstock++ #136

Go & reinforcement learning; humans, robots & factories; ridesharing and cities; and a whole lot more. Have some great conversations.

💌  Would your colleagues enjoy this issue? Forward it!

✏️  Please share on Twitter or LinkedIn.

🚀  Supported by Arlo Skye and WorkShape.


😲  DeepMind blew my mind again. They announced AlphaGoZero (AGZ), a new approach to playing Go which relies

solely [on] self-play reinforcement learning, starting from random play, without any supervision or use of human data. Second, it uses only the black and white stones from the board as input features. Third, it uses a single neural network, rather than separate policy and value networks. Finally, it uses a simpler tree search that relies upon this single neural network to evaluate positions and sample moves.

In summary, it’s a single neural net, trained without any human game data. (You’ll recall previous versions of AlphaGo were trained on millions of human games.) The results are impressive: within three hours AGZ played as well as a human beginner, and within a few days of self-play it became the world’s best Go player.

Additionally, AGZ achieved this using only four of Google’s tensor processing units (TPUs), chips dedicated to this type of neural net; the original AlphaGo needed 176 GPUs, a less optimised technical architecture. AGZ was whipping AlphaGo’s silicon derriere three days after it was instantiated. DeepMind’s blog post is a very clear read.
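The self-play idea at the heart of AGZ can be sketched in a few lines. The toy below is emphatically not DeepMind’s algorithm (AGZ pairs a deep network with Monte Carlo tree search); it is a tabular illustration of learning purely from self-play, starting from nothing, on the trivial game of Nim. All names and hyperparameters are illustrative assumptions:

```python
import random

def self_play_train(start=10, episodes=20000, alpha=0.2, eps=0.2, seed=0):
    """Tabular self-play value learning for Nim (take 1 or 2 stones; last stone wins)."""
    rng = random.Random(seed)
    # V[n] = estimated win probability for the player about to move with n stones left
    V = {n: 0.5 for n in range(start + 1)}
    V[0] = 0.0  # no stones left: the player to move has already lost
    for _ in range(episodes):
        n = start
        while n > 0:
            moves = [m for m in (1, 2) if m <= n]
            if rng.random() < eps:      # explore occasionally
                m = rng.choice(moves)
            else:                       # otherwise leave the opponent worst off
                m = min(moves, key=lambda mv: V[n - mv])
            # temporal-difference update: my value is one minus the opponent's
            V[n] += alpha * ((1.0 - V[n - m]) - V[n])
            n -= m
    return V

V = self_play_train()
```

After training, positions divisible by three carry low value for the player to move (they are theoretical losses) and the rest carry high value — knowledge the agent acquires without ever seeing an expert game, which is the essence of the self-play result.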

🤔  One observation made by Pedro Domingos, a professor at the University of Washington:

AlphaGo Zero is great, but hold on: self-play is one of the oldest ideas in ML, and humans take far less than 5 million games to master Go.

Honestly, three days is a pretty quick time to become the world’s best anything. And playing 5m games of Go is hardly expensive in computing resources, while it would be prohibitively so for a human.

🏭  Dark factory: how robots and humans work together in factories. This stunning essay by Sheelah Kolhatkar provides a glimpse into automated workplaces and the dynamics that drive them. For workers:

[a]utomation was bringing greater and greater efficiency, even though, at a certain point, the logic of increasing efficiency would catch up with [them], and [they] wouldn’t be around any longer to witness it. One day, the factory might go dark. In the meantime, [they were] enjoying the advantages of work that involved less work.

🚶  Sheelah’s piece is multidimensional, so do read. But one aspect that rings out ominously is what Tim O’Reilly characterises as:

the thrall of an economic theory that says that wages and working conditions are entirely subject to inevitable laws of supply and demand, not recognising the rules and incentives we have created that ruthlessly allocate the benefit of increased productivity to the owners of capital and to consumers in the form of lower prices, but dictate that human workers are a cost to be eliminated.

👨‍💻  Related: Nature has a good special on the future of work, which hints that the ongoing training needed to help workers constantly upskill might come from MOOCs. Which is a pity, because Clarissa Shen, a big cheese at Udacity, one of the instigators of the MOOC, has declared them “dead”.

🤚  Tezos, a cryptocurrency beloved by anarcho-capitalists because of its supposed self-governing affordances, facepalmed. Having raised more than $230m in a rambunctious ICO earlier this year, its founders have fallen out, threatening the entire project. Anna Irrera does a superb job unpicking the story. (Tezos’ futures have dropped 75%.) (Read this for a simple intro to Tezos.)

🚦  Uber and Lyft may add to traffic congestion in cities while moving people away from mass transit, says a fascinating new study. Londoners dodging fleets of Uber Priuses will not be surprised, although more research is needed. The study also identifies that ride-hailing has appealed to a much larger segment of the urban population than previous sharing models, like Zipcar. UBS reckons that by 2035 using robotaxis will be half the cost of owning a car, but that today’s ride-sharing is not competitive with car ownership. (h/t Reilly Brennan.) My maths tells me it isn’t cost-effective to replace my primary vehicle with an Uber, but Uber has certainly replaced the second car (which I no longer have).

🕰️  Yuval Noah Harari calls for new models for the post-work economy and political system, warning that we don’t have much time left:

The challenges posed in the twenty-first century by the merger of infotech and biotech are arguably bigger than those thrown up by steam engines, railways, electricity and fossil fuels. Given the immense destructive power of our modern civilization, we cannot afford more failed models, world wars and bloody revolutions. We have to do better this time.

🗳️ A lengthy but worthwhile commentary on what it means to be a citizen in a democracy:

Democracy, instead, requires treating people as citizens – that is, as adults capable of thoughtful decisions and moral actions, rather than as children who need to be manipulated. One way to treat people as citizens is to entrust them with meaningful opportunities to participate in the political process, rather than just as beings who might show up to vote for leaders every few years.


I’m always trying to grow the audience of interested people receiving Exponential View. The single best way to do this is through personal recommendation. If you know people who would enjoy reading it, please take a moment to forward this email to a few of them.

If you’ve already done so in the last few months, then please take a moment to Tweet or post to LinkedIn. These social shares really help!


Two interesting reports came out in the UK this week. (Apologies if this section is UK-focussed; I think it's sufficiently interesting for international readers, too.)

They were Professor Dame Wendy Hall and Jerome Pesenti's Independent report on Growing the Artificial Intelligence Industry in the UK, and Olly Buston's report on the regional impact of AI across the UK.

There is some good material in the Hall report (mostly around encouraging wider sharing of and access to data), and it is encouraging that a storied academic like Wendy and an industry leader like Jerome were drafted in by the government to raise awareness of AI. The report recognises that AI might drive the economy’s growth rate from 2.5% to 3.9% within 20 years.

Yet the recommendations felt rather conservative and somewhat muted, like a weak cup of tea.

I wasn't the only one to think so.

Benedict Dellot at the Royal Society of Arts writes:

[T]he Review does not go far enough in extolling the virtues of AI. There are few challenges which it cannot help address, from extending lifespans to tackling climate change. GlaxoSmithKline believes its investment in AI will cut average drug development time from 5.5 years today to a single year.

The Hall report had sensible things to say about encouraging open data across the UK, but I was struck by a key recommendation to create a few hundred graduate AI posts across the country. These recommendations centred on training people with a narrow set of AI skills. Herein lies a pair of related problems.

Most importantly, designing and implementing AI systems, except in the most fundamental basic research, is about designing economically viable products that are going to play a role in human systems. Lots of great PhDs in machine learning do not necessarily make great products that further humanity, or products devoid of prejudice, distraction or other negative consequences. Take Facebook, which has some of the best applied AI research in the world, but uses it to build the warty Facebook we know today. The machine learning expertise was applied to build a highly addictive user experience, as well as automated systems to promote advertising quality. But few cycles were applied to the impact all that 'addiction' would have on our psyches, or to the wide lacuna that gave malevolent state actors unparalleled access to democracies around the world.

Or take the whole stream of ad tech, where machine learning expertise has been squandered on pushing the limits of personal privacy and data.

So no. AI PhDs alone don't cut the mustard unless teamed with product managers, UXers and business people who understand how to deliver useful, meaningful, human-centred applications. AI is, after all, just software. Don't produce hundreds of AI PhDs unless the curricula change to include a Hippocratic oath, and unless the ancillary team members required to deliver AI beneficially are also being trained up.

If you want a vivid example of this, then think back to Nick Bostrom’s Superintelligence. He imagines someone designing an AI to:

Manage production in a factory [with] the final goal of maximizing the manufacture of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips.

(If you are brave, you can try out the PaperClip game, which lets you play out this exact scenario. Warning, this will suck your time. Thanks Tom Standage for pointing this out to me.)

The second problem with this report is how those trained PhDs actually create value, and to what extent that value will accrue to the UK economy over the next few years. The biggest commercial AI centres in the UK are owned by American firms (Google DeepMind, Microsoft Research, and so on). Those with AI PhDs will be in high demand in those firms, as well as in nations which welcome migrating talent. As it is, many of the UK AI teams I have met are made up of broad swathes of European talent, whose status in the UK post-Brexit is not guaranteed by the Government. How will the UK retain people with some of the most desirable technical skills in the world? And if we retain them, how do we prevent all the value drifting to the GAFA megaliths?

But really, my sense of this review was that it was watered down, skirting most of the ethical challenges of AI (though, curiously, the report does make recommendations on the ethical issues around explicability).

We ought to read it in the context of China wanting to reinvent its economy around intelligence by 2030. As reported in the FT this week, they intend to “vigorously use governmental and social capital” to dominate the industry. (We also recommend you check out our EV Special on China's ambitions, or Dubai's ambitious plans, including a cabinet-level Minister for AI.)

A more exciting analysis came from Olly Buston and colleagues who looked in detail at the impact of AI and job losses in British constituencies. The typical British constituency has about 70,000 adults in it: this survey is pretty granular.

Olly reckons that the most at-risk constituencies will see around 40% of their jobs at risk from automation within 15 years, and the least at-risk constituencies will see about a fifth of jobs at risk. The key drivers are the nature of the industries represented. Areas with high transport, logistics and warehousing face the largest risks, given the substantial progress being made in warehouse automation, such as Symbotic, Amazon or Ocado. Those areas with strengths in health and education are relatively less at risk.

I quibble with the time frame: two decades seems too distant. Two decades represents about 13 Moore's Law doublings: processor power would be 8,192 times greater on a per-dollar basis than today. And two decades has been plenty long enough for new programming and platform paradigms to take hold (like NoSQL, cloud computing or virtualisation). We are in the deployment phase of these technologies, and two decades feels like an eternity. Just look at the massive leap DeepMind has made in one year from AlphaGo to AlphaGoZero.
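The back-of-the-envelope arithmetic behind that 8,192 figure, assuming the classic 18-month doubling period (itself only a rule of thumb, and one that is arguably slowing), runs as follows:

```python
# Moore's-law doubling arithmetic over a two-decade horizon.
doubling_period_years = 1.5                      # the classic 18-month rule of thumb
years = 20
doublings = int(years / doubling_period_years)   # 13 complete doublings in 20 years
factor = 2 ** doublings                          # 2^13 = 8,192x per-dollar improvement
```

The point is not the precise multiplier but the order of magnitude: compounding over two decades dwarfs any single year’s progress.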

📈  Earlier this week, I presented at All Party Parliamentary Group on AI. The topic was the impact of the technology on inequality, work & skills. My short presentation is available here.

🔎  For an absolutely wonderful view of AI learning theory, I strongly recommend this debate between Gary Marcus and Yann LeCun on how much innate machinery AI really needs. Marcus is best known for taking a nativist stance: that we must have some sort of innate machinery in our brains to learn as efficiently as we do, and so general AI systems will need the same. LeCun, who has driven the development of convolutional nets, is more closely associated with the Lockean view that data from experience will get you a long way.



If trends continue, 50% of US workers will be freelancers within a decade. What might that mean?

A solid history of concrete.

👏  Xi Jinping has received around a billion virtual claps for his recent Congress speech through this WeChat game.

🚴  An 8-metre-long 3D-printed bridge opens for use in the Netherlands.

43% of tech workers in the US fear they will lose their jobs due to age.

💔  Heart rate during layoff, as recorded through Apple Watch.

“We see Trello as a feature, not a product.” Hiten Shah on lessons from Trello on building a SaaS product that can’t be copied.

🤘 The Russian Woodstock of 1989: a fascinating recollection of the first time the curtain lifted for Soviet fans of rock and roll.

Does having a sense of purpose support better sleep? Seems like it.

💨  Pollution kills more people than war or disease, according to new epidemiological research in the Lancet, summarised here. (Also: Oxford plans to become the world’s first zero-emissions city by 2035.)

➗ This will make you smart: dual n-back is the only brain-training exercise that works. (I’ve played these games. They aren’t a heap of fun.)

Squirrels have a system for organising food so they don’t waste much energy remembering where they put it. 🐿️


It ended up being a longer than normal Exponential View this week. I’m reading a lot because I’m building some hypotheses about “where next” over the next 10-15 years.

In short, there is a lot to think about. Which often means: a lot to write about.

Hope it all stimulates your conversations this week.


👋  P.S. If your company is looking to reach an audience of investors, successful entrepreneurs, critical thinkers and decision-makers, send us, well Marija, a note.

This week's issue is brought to you with support from our partners Arlo Skye and WorkShape.


The next generation of luxury luggage - TIME Magazine


Helping companies hire software engineers.