🔮⚡️ Accelerationism; WannaCry & cybersecurity; innovation in AI; unbiasing machine learning; Macron's honeypots, Brexit hiring, Nvidia++ #113
Tackling biases in machine learning; the creed of accelerationism; understanding cyberattacks; superstar firms; how Nvidia plans to power the AI economy; two ways Macron gets digital. Have exceptional conversations!
❤️ Want to support EV? Share this via Twitter, on Facebook & LinkedIn.
🚀 Forwarded this from a friend? Sign up to Exponential View here.
👍 Supported by SVB, Silicon Valley Bank
Dept of the near future
The first three articles are long reads. Best enjoyed with a hot drink.
🤡 Physiognomy’s new clothes: A brilliant essay on how “machine learning can also be misused, often unintentionally. Such misuse tends to arise from an overly narrow focus on the technical problem.” (We’ve covered this topic many times in _Exponential View_. It’s another instance of technology being too important to be left to technologists.)
👟 On the intellectual origins of the accelerationists who “argue that technology, particularly computer technology, and capitalism, particularly the most aggressive, global variety, should be massively sped up and intensified – either because this is the best way forward for humanity, or because there is no alternative.” Connects the extreme right, the extreme left, post-structuralists, cyberpunks and jungle music.
🌡️ The threat. Fascinating interview with Ross Anderson, professor of security engineering at Cambridge University, on the digital revolution, cybersecurity, hacking, crime, network effects, game theory, inequality & politics. (Video also available.) EXCELLENT for the multi-decade context of today's cybersecurity issues.
🚨 Nicole Perlroth on the growth of ransomware: "hackers are discovering it is far more profitable to hold your data hostage than it is to steal it." (See also: Zeynep Tufekci on how we can fix the coming cybersecurity crisis. More on #wannacry below.)
⚠️ Superstar firms have been better for investors than for employees, with the "risk that the dominance of superstars will eventually contribute to a fall in economic dynamism and productivity that will further entrench their power."
Dept of artificial intelligence
Two takes on innovation in different AI domains caught my eye this week.
The first is this review on AI chatbots and the importance of the human trainer to push these services across the ‘uncanny valley’. It is certainly true that many startups ‘fake it until they make it’. Or to put it another way, do things that don’t scale until they learn how to scale them. Training an AI system falls into that category.
The question really is whether this current crop of startups, like Amy, are chasing a tractable problem that they can solve in the time available (before cash runs out) through the heavy use of human trainers. This fascinating Bloomberg analysis takes us into the detail of just how much human assistance these AI systems need for the rather complex task (for a computer) of scheduling a meeting over email. It’s also a reminder of Moravec’s paradox: scheduling meetings is a quotidian human activity but tough for AI systems.
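If you want to picture what that human-in-the-loop arrangement looks like, here is a minimal, hypothetical sketch of the confidence-threshold routing pattern such services are reported to rely on. Everything in it, the names, the threshold, the toy model, is my own illustrative assumption rather than x.ai's actual pipeline.

```python
"""Hypothetical sketch of human-in-the-loop routing for an AI scheduler:
the model acts alone only when it is confident; everything else goes to
a human trainer, whose answer is logged as fresh training data."""
from dataclasses import dataclass
from typing import Callable, List, Tuple

CONFIDENCE_THRESHOLD = 0.90  # illustrative; below this, a human takes over


@dataclass
class Prediction:
    intent: str        # e.g. "propose_time", "decline", "reschedule"
    confidence: float  # model's estimated probability for that intent


training_log: List[Tuple[str, str]] = []  # (email, correct intent) pairs


def route_email(email: str,
                model: Callable[[str], Prediction],
                human: Callable[[str], str]) -> str:
    pred = model(email)
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return pred.intent                   # cheap, automated path
    label = human(email)                     # slow, expensive path
    training_log.append((email, label))      # fuel for the next retrain
    return label


# Toy usage: a dummy model that is only confident about one phrasing.
def toy_model(email: str) -> Prediction:
    if "tuesday at 3pm" in email.lower():
        return Prediction("propose_time", 0.97)
    return Prediction("unknown", 0.40)


print(route_email("Does Tuesday at 3pm work for you?", toy_model,
                  lambda e: "reschedule"))   # lambda stands in for a trainer
print(route_email("Ugh, something came up again...", toy_model,
                  lambda e: "reschedule"))
print(f"{len(training_log)} example(s) queued for retraining")
```

The economics of the whole category hinge on how quickly the second, expensive branch shrinks relative to the cash in the bank.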
The second piece is this incredibly smart take from Sam Lessin at The Information on self-driving cars:
New technologies usually take longer than initially expected to be introduced because rapid improvement eventually hits a point where the next stage of work becomes very expensive. That’s what will happen to self-driving cars, which is why their introduction is many years away.
The point with self-driving cars is:
they need to be very, very close to perfect before they are valuable at all. There is no 50% credit. A self-driving car that works 90% of the time, or even 99% of the time, might be a nice safety addition, but it doesn’t deliver the true dream of not needing a driver at the wheel.
Worth reading this piece, and also EV reader Sutha Kamal’s comment on it, which you can find here too.
Separately, Alphabet's self-driving cars have driven more than three million miles autonomously (and billions more in virtual environments). The last million of those miles took less than seven months, compared to over a year for the previous million. At the same time, the rate of disengagements (when the human needs to take over) has declined. Interestingly, long-term data shows that it typically takes three decades for new car safety features to become ubiquitous on the road.
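To put rough numbers on Lessin's "no 50% credit" argument, here is a crude back-of-envelope sketch. The annual mileage and the one-takeover-per-unhandled-mile assumption are mine, purely illustrative, and are not Waymo's or Lessin's figures.

```python
"""Illustrative arithmetic only: how often a human would have to take
over per year at different levels of 'miles handled autonomously'."""

ANNUAL_MILES = 13_000  # assumption: roughly a typical US driver's year

for handled_fraction in (0.90, 0.99, 0.9999, 0.999999):
    # Crude assumption: every mile the system cannot handle costs the
    # human (at least) one takeover.
    takeovers_per_year = ANNUAL_MILES * (1 - handled_fraction)
    print(f"{handled_fraction:.4%} of miles handled -> "
          f"~{takeovers_per_year:,.2f} takeovers per year")
```

Even at 99.99% of miles handled, the driver still faces a takeover or two a year at unpredictable moments, so they can never actually stop paying attention; that last stretch of nines is where the expensive engineering lives.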
What connects these distinct analyses of the AI-bot and autonomous vehicle market is ultimately the observation that the path to successful innovation is not a known one. How far are we really from Amy taking over all our scheduling or autonomous vehicles achieving level five control in the streets of Paris, Abuja or Manila? And to what extent are we as consumers going to be part of the training data? (And is "being the training data" even more disarming than "being the product"?)
The major news in the AI domain this week came from Nvidia, the chipmaker. Nvidia is one of the closest things you can get to an AI hardware stock. Deep learning algorithms are hungry for its GPUs. Highlights of the announcements are (a) the Volta GPU chip, 21.1bn transistors on 815mm² of silicon, capable of more than 30 teraflops, and (b) a teaser for an Nvidia GPU cloud. Funny how Google is getting into silicon with its own TPUs. And Nvidia is getting into the cloud.
Tom Simonite has a short interview with Jensen Huang, CEO of Nvidia, which is insightful. In it, Jensen doesn't confirm whether the Nvidia Drive PX-2 fitted in all new Teslas will have the computational muscle for full autonomy. Says Huang, "I'm not exactly sure, but we'll find out."
My sense is that, short of some significant improvement in computational strategy (either more efficient classifiers or ways of reducing the actor-state space complexity without loss in predictive performance/safety), the PX-2 seems unlikely to take us all the way to level five autonomy.
Nvidia's own announcement with some nice performance charts and technical info is here. (Nvidia also unveiled a partnership with Toyota for the Drive PX for its autonomous vehicles.)
Elsewhere:
Jeff Bezos' clear articulation of what AI means for Amazon.
Professionals and the apprenticeship problem. As AI takes over grunt work, how will professional services train their junior staff?
Using machine learning on Apple Watch data to improve cardiac health. (Nice application.)
"There’s a tipping point at which autonomous driving technologies will actually create more danger for human drivers rather than less." SUPER INTERESTING
Citymapper, a British urban navigation app, is launching a commuter bus service.
EV reader, Jan Erik Solem, releases the world's largest street-level imagery dataset.
Deep learning at scale at Twitter. Moderately technical but quite a good read if you are a product manager.
A deep reinforcement model for abstractive summarisation: Fascinating progress on machine generated summaries.
Understanding deep learning requires rethinking generalisation. Insightful technical blog post.
Improbable, a British startup that builds a platform for simulating massive virtual worlds, raises $502m from Softbank. (Consider what Improbable must be planning if they enticed Masayoshi Son to invest.)
Will "AI" inferencing move to the edge or the cloud? Answer my twitter poll.
Short morsels to appear smart at dinner parties
💸 Darien Huss spent £8.79 to activate the kill switch that stopped WannaCry spreading. (See also Graham Cluley's reasonable early take on WannaCry. A nice compilation of updates is here, including links to the 3 BTC payment addresses.)
How power grids should prepare for cyberattacks.
Emmanuel Macron's campaign used honeypot accounts to ensnare would-be hackers.
Oxford Internet Institute on how Macron's campaign used technology to counteract computational propaganda.
📷 How secure are IoT devices? The Persirai botnet has infected 120k cameras.
🛑 Uber suffers a setback as the ECJ is advised to treat it as a transport company (not a software one).
Brexit is biting hiring in many UK sectors & especially tech.
Indians make 50m minutes of WhatsApp video calls per day.
Manipulating food's appearance is the norm, particularly in cultures obsessed with the colour of their food.
20 years ago this week, Apple launched the PowerBook 2400c, the smallest laptop of its day.
⌛ A Delaware-sized chunk of ice from one of Antarctica's largest ice shelves is hanging by a thread.
End note
Yes, you'll have noticed this week's Exponential View has a sponsor. The newsletter is at a point where it has meaningful costs and so I'm excited that Tuan, Josh and Jack from SVB have stepped up to support us.
We've also changed our mailing platform. This email will now come from azeem.azhar@exponentialview.co, so please update your address books.
Finally, I'm attending CogX London 2017, a conference which will explore the impact of AI across industry. I have a 50% discount off the Early Bird tickets for the Exponential View community. (A 75% discount overall.)
Just visit and use the code 4z33m4z50
Ciao!
Azeem
This week's issue is brought to you with support from our partner, SVB.