⭐️ Mobile meets AI; AI superpowers; the end of left and right; filter bubbles; dinosaurs, opiates, fusion++ #91
When mobile meets machine learning, new avenues open up. What comes after left and right? Globalisation’s next phase. On Amazon’s new store. Why AI favours a few superpowers. Magic Leap not so magic. Giving the poor money. Heroin. Fusion power. Dinosaur tails.
Hope this sparks some good conversation…
🚀 Forwarded this from a friend? Sign up to Exponential View here.
Dept of the near future
📲 Benedict Evans: Mobile is eating the world. “What’s the state of the smartphone, machine learning and ‘GAFA’, and what can we build as we stand on the shoulders of giants?” MUST READ
⚖ The left and right are dead; it’s now reason & openness vs closed & ideological, argues Tobias Stone
🗞 On Google, democracy and the truth of internet search. EXCELLENT (See also Frederick Filloux: Facebook’s walled wonderland is no substitute for the news & super research visualising the Democrat/Republican filter bubble on Twitter.)
🌏 The most radical phase of globalisation is just beginning as technologies provide a substitute for just being there, argues Richard Baldwin. THOUGHT-PROVOKING
🛍 Amazon Go is quite a remarkable store, entirely automated. It uses sensors and machine vision to deliver a very convenient user experience, with clear implications for retail workers. (Promotional video is worth a watch.)
Dept of artificial intelligence
Artificial Intelligence is becoming a game for the big dogs. To succeed in genuine artificial intelligence efforts, beyond trivial prediction, you need three things.
The first is people. AI is hard, and the field is still emerging, so you need to be able to recruit and attract a sufficient core of people and create the culture that allows them to explore. You also need mechanisms to exploit their progress commercially.
The second is training environments, which include training data but also systems which let you train and test your systems. The web now offers tons of labelled training data for images and text, with all of its attendant biases. There are data sets for self-driving cars available too.
This week researchers thinking about reinforcement learning (AI systems that can plan ahead) received a double fillip. OpenAI, the non-profit backed by Elon Musk, released Universe, a wide-ranging environment for researchers to develop generalised game-playing systems. DeepMind released a similar environment. Both of these will lower the cost of research and experimentation.
The third plank is processing. Moore’s Law has been our friend for 40 years, and in recent years architectural improvements have helped drive down the cost of computing, particularly the matrix operations so crucial to neural-net algorithms. However, the computational capacity needed to achieve modern breakthroughs is increasing substantially.

Training a simple classifier might only take a few GPU-hours. But newer, more capable systems require far more computational horsepower. And those horses come at a cost.

Discussing this with one AI researcher, we noted that it isn’t uncommon for the computational cost of training a modern set of algorithms to run into the tens, if not hundreds, of thousands of dollars.
Example: DeepMind’s AlphaGo required 1,202 CPUs and 176 GPUs to play Lee Sedol. Making some very crude estimates of cost… if you were to rent that level of processing from Amazon Web Services at 10 cents per hour, a very rough mid-point for a single reasonable processor – then this rig would cost roughly $140 per hour to run. And that is just the ‘inferencing’ step, which is typically computationally cheaper than training AI systems, so building AlphaGo would have cost considerably more than running it.
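The back-of-envelope arithmetic can be sketched in a few lines of Python. The flat $0.10 per processor-hour is the crude assumption from the text, not a real cloud price list; actual GPU instances rent for considerably more:

```python
# Crude hourly cost of renting AlphaGo-scale processing, using the
# newsletter's own assumptions. The flat per-processor rate is a very
# rough mid-point; real cloud GPU instances cost far more per hour.
cpus = 1202            # CPUs AlphaGo used against Lee Sedol
gpus = 176             # GPUs AlphaGo used against Lee Sedol
rate_per_hour = 0.10   # assumed $/processor-hour (crude flat rate)

hourly_cost = (cpus + gpus) * rate_per_hour
print(f"~${hourly_cost:,.0f} per hour to run the rig")  # → ~$138 per hour to run the rig
```

At that rate, a training run lasting days or weeks on comparable hardware climbs into the tens of thousands of dollars – the order of magnitude the researcher quoted.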
Yes – the estimates are only directionally correct; we should work on more accurate ones.
But the point is that each of those ingredients – people, data and compute – is hard to assemble. And it could increasingly be the case that compute is the hardest & most expensive. As Andrew Ng, lead AI researcher at Baidu, recently said: “There are multiple experiments I’d love to run if only we had a 10x increase in performance.” A 10x performance improvement will take 3-5 years to drop to today’s already high prices.
So a handful of superpowers is emerging in real frontier artificial intelligence, among them Google, Amazon, Apple, Baidu, Facebook, Microsoft and, increasingly, Uber.
Apple may be struggling to get adoption of its deep-learning toolkits. It took the unusual step of starting to publicise some of its research, and announced a willingness to collaborate. (Some juicy details on Apple’s technical progress.)
Baidu has been making great strides under Andrew Ng’s leadership. Here Ng argues that an AI winter is not coming.
Uber is proving rather visionary. By some estimates about a thousand of its 3,000 engineers are involved in machine learning. This week it acquired Geometric Intelligence, an AI startup that focusses on solving the problem of learning without needing scads of data.
I’ll pen some thoughts on what 2017 might bring to AI before the end of the year. I’m particularly interested in understanding ecosystem effects (greater adoption of custom hardware, research toolkits, wider availability of easy-to-use APIs like Amazon’s, and the availability of data). Tweet me your ideas.
Paul Daugherty: Artificial intelligence could double economic growth rates in developed economies over the next twenty years.
Deep learning reinvents the hearing aid
Short morsels to appear smart at dinner parties
How Tesla reinvented the car (nice tale)
Google will run on 100% renewable energy next year
🙋🏽 Good things happen when you give the poor money. (Interesting data that supports positive effects of UBI)
Paul Stephenson: How to win a referendum. (Stephenson ran comms for the pro-Brexit faction.)
🐓 Entire chunk of feathered dinosaur preserved in amber
The Wendelstein 7-X fusion reactor reaches a milestone
Gun fires stem cells to heal scars. Remarkable.
Truckers think automation won’t take their jobs for 40 years.
We’re looking for speakers for our dinners or guests on the podcast. So if you’d like to be one of them, please just fill out this form.