🌟 Tech bubbles, how to treat your AI, Tinder, robo-dreams++: Azeem's Exponential View - Issue #14
Leading investor A16Z argues we aren’t in a bubble. We investigate how we should treat our robots and AI helpers, and get a glimpse of how these systems dream and understand the world. A bit AI-heavy this week - hope you enjoy.
Beta notes: Experimenting with more ‘context’ in this week’s issue. Do give me feedback on it. And please continue the Twitter referrals!
Dept of the near future
💫 We are not in a technology bubble. This stunning presentation by A16Z argues that the fundamentals point to more demand and thus more investment in technology.
💋 Tinder took four months to become the dominant dating platform. How did it disrupt an established industry so quickly?
💥 Product-centric management teams combined with ‘renting operational scale’ allows startups to disrupt incumbents faster than ever, argues Hemant Taneja.
📰 Google’s DeepMind teaches computers to read… the Daily Mail (!!)
💸 The on-demand consumer services wave will cross over into the enterprise and create a $1trn opportunity.
Dept of artificial intelligence
How should you treat your artificial intelligence programs & assistants? Is it ok to be mean to them? I’ve been playing around with Amy, an AI that helps you manage your calendar. I found that when dealing with her I was more abrupt than I might be with a human. Why was this? Was I testing the limits of the programming? Was I abrupt because I knew she/it was code?
This week, X.AI extended access to Amy for me, and I have been using her repeatedly to schedule calendar appointments. Amy has been designed to be very polite and helpful. In return, I am respectful, polite and business-like back to her. The affordances of the design (smart, helpful conversation) have anthropomorphized her. I’m nudged (by the product design) and encouraged (by the quality of interactions) to treat her like a person.
But why am I? I’m not polite or deferential to my microwave oven, toaster or even my mobile phone. Call this the emerging area of anthroporoboethics: what is the ethical framework for humans dealing with AIs?
Let me know on Twitter: what will your AI need for you to treat it well?
We’ve already got some experience.
As Sony’s AIBO dogs reach their end of life and parts run out, some Japanese families deal with the painful loss of their robotic pets.
You may also remember feeling icky when videos of test robots being kicked surfaced.
Oxytocin, the love hormone, may be behind our tendency to anthropomorphize inanimate objects.
On the other hand, kids in a shopping mall will happily abuse and terrorise even humanoid robots.
As for our robotic lovers, Real Doll is investing heavily in better robotics and AI to create more lifelike robo-sex companions. Adds a complex dimension to the nice or nasty dichotomy.
One key area in designing systems involving AI is how we create mechanisms for trust to emerge. Unlike traditional mechanistic systems, AI systems will adjudicate in ambiguous conditions. Mistakes might arise. Trust will be important. Narrative and provenance might be two of the key design affordances, argues Kris Hammond.
Dept of generative machine learning
In a previous week, we touched on how deep learning models were helping us understand how we perceive reality. (See algorithms of the mind.)
This week, Google’s machine vision group blew us away with stunning images pulled from inside a deep learning network, which reveal, in some sense, how these still-primitive neural networks represent the world.
🐙 Singularity’s essay on this Inceptionism is recommended.
🐹 The word abstractions generated by word2vec (a shallow-learning algorithm) are amazing. Through simple vector algebra, it draws out the underlying semantic relationships between the concepts fed into it (e.g. “politics - lies = Germans”). First half recommended; the second half is about implementation. See the sketch after this list.
There is continued work on generative algorithms; I like this piece on an AI that can describe what it sees.
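For the curious, here is a minimal sketch of that vector algebra, assuming the gensim library and a pretrained set of word vectors (the filename below is a placeholder); the classic king/queen analogy stands in for the article’s example.

  # A rough sketch of word2vec's semantic algebra, assuming gensim is
  # installed and a pretrained vector file is available (placeholder name).
  from gensim.models import KeyedVectors

  # Load pretrained word vectors (binary word2vec format).
  vectors = KeyedVectors.load_word2vec_format("word-vectors.bin", binary=True)

  # Each word is a point in vector space, so relationships become arithmetic:
  # "king" - "man" + "woman" should land near "queen".
  for word, similarity in vectors.most_similar(
          positive=["king", "woman"], negative=["man"], topn=3):
      print(word, round(similarity, 3))

The same machinery, trained on a large text corpus, produces analogies like the one quoted above.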
Short morsels for dinner parties
🔮 Scientists demonstrate mind-reading by reconstructing speech from observed brain activity. Spooky video. Also, don’t let the NSA get hold of this technology.
Twitter users appear to generate broader and more diverse ideas than non-users. (Abstract only.)
Amazon will pay authors according to how many pages of their books are read. Interesting test of a new model - rewarding the readable.
Graphene engineers have created a light bulb that is one atom thick.
AI engineers are wearing seven-league boots at the moment. This lecture shows how one group managed to speed up a key process (pathfinding) by more than 1,000 times. You don’t see this type of progress in non-exponential industries. Technical.
Global groundwater might be running out. It could be a new source of political tension.
What would a fully renewable, electrified economy look like? In general, it’ll be cheaper and cleaner. Lessons from New Zealand.
What I wrote
The six accelerants of the AI boom explains why progress in artificial intelligence deserves so much attention now.
End notes
As always, please feel free to forward this to friends. More subscribers is good. And email me with any queries.