Rogue AI; blockchain & the poor; exponential climate change; synthetic life; thoughts on the future++ #55
How AI goes rogue and what we learn from it. What the blockchain will mean for the world's poor. Self-driving cars & segregated cities. The exponential rise in sea levels. 34 thoughts about the future. Apple shouldn't give in to the FBI. And much more in our Easter special edition.
Please recommend: Twitter (preferred) | Facebook (also good)
Dept of the near future
What the blockchain will mean for the world's poor: better property rights, remittances, identity, trust & supply chains. **EXCELLENT.**
Will self-driving cars lead to cities whose roads and planning are even more attuned to the needs of vehicles rather than humans? THOUGHT-PROVOKING: these algorithmic beasts will have better navigation, awareness and coordination capabilities than two-legged wetware.
What is a robot? "Self-operating machines are permeating every dimension of society, so that humans find themselves interacting more frequently with robots than ever before, often without even realizing it."
Farhad Manjoo: Can the Uber model translate to other businesses?
How to build a billion-dollar business. RATHER GOOD presentation by Sarah Tavel of Greylock Partners.
Dept of artificial intelligence
A Microsoft chatbot went rogue on Twitter and started spewing Nazi epithets. It is an extremely helpful case study in outlining some of the key issues around the application of machine learning and AI to everyday tasks. What isn't interesting is that Tay, the bot, was taught to say rude things. What is interesting is what this tells us about designing AI systems that operate in the real world and how their learning can be hacked.
AIs are going to need to learn and interact somewhere akin to the real world. Equally, if we allow AI systems unexpurgated access to the "real world" while they are learning, there could be ramifications.
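As a toy illustration of how that learning can be hacked (a minimal sketch in Python, not a description of Tay's actual architecture), consider a bot that learns its replies directly from whatever users send it. A coordinated group can skew what it learns simply by flooding the input:

```python
from collections import Counter

class ParrotBot:
    """Toy bot that 'learns' by echoing the phrases it has seen most often.
    Purely illustrative; this is not how Tay actually worked."""

    def __init__(self):
        self.memory = Counter()

    def learn(self, message: str) -> None:
        # Unfiltered learning: every user message goes straight into memory.
        self.memory[message] += 1

    def reply(self) -> str:
        if not self.memory:
            return "Hello!"
        # The bot parrots whatever it has seen most, regardless of content.
        return self.memory.most_common(1)[0][0]

bot = ParrotBot()

# Ordinary users chat normally...
for msg in ["nice weather today", "did you see the game?", "hello bot"]:
    bot.learn(msg)

# ...but a coordinated group floods the bot with a single phrase.
for _ in range(50):
    bot.learn("<offensive phrase>")

print(bot.reply())  # the flooded phrase now dominates the bot's output
```

Even this ten-line toy makes the general point: if the training signal is whatever the open internet supplies, then the open internet decides what the system learns, unless the designers filter or weight that signal first.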
The first: we don't have common ethical standards within societies or cultures, nor across them. Philippa Foot's trolley experiment never results in unanimity within cultures. (Click here if you don't know the Trolley problem.)
However, this recent paper looks at cultural differences and variations of the Trolley problem. It finds that ordinary British people will pull the switch between 63% and 91% of the time, while Chinese people would do so between 33% and 71% of the time. SUPER ACCESSIBLE, worth reading. Should future systems follow British or Chinese ethical standards? And who decides?
The Chalmers-Bourget survey of Anglo-centric philosophers revealed that philosophers would pull the switch 68% of the time, suggesting their answers may simply reflect their own politico-cultural heritage.
The second: we already have similarly poorly designed optimisation systems in the world. And they affect hundreds of millions of people, only they are less transparent and the people operating them are less responsive than Microsoft has been with Tay.
Examples include credit-scoring algorithms or indeed the complaints-handling protocols of large firms. Those scoring algorithms are often simple regressions built on small datasets; the complaints protocols are often very simple, inflexible decision trees applied by a human (see the toy sketch below). Sure, Tay is more advanced, nuanced and embarrassing, but it does little real harm compared to a poorly designed decision tree that might determine your access to a financial product or insurance.
How many algorithmic approaches make it out into the real world without adequate testing or understanding of their ramifications or, worse, their unintended consequences?
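To make that comparison concrete, here is the deliberately minimal sketch promised above: the kind of hard-coded decision logic that sits behind many consumer-facing systems. The features and thresholds are invented for illustration and are not drawn from any real lender.

```python
def loan_decision(credit_score: int, annual_income: float, past_defaults: int) -> str:
    """Toy loan-approval decision tree. Thresholds are illustrative assumptions."""
    if past_defaults > 0:
        return "decline"                  # one past default ends the conversation
    if credit_score < 600:
        return "decline"                  # hard cut-off, no wider context considered
    if annual_income < 20_000:
        return "refer for manual review"
    return "approve"

# Applicants who differ only marginally can land on opposite sides of a threshold.
print(loan_decision(credit_score=599, annual_income=45_000, past_defaults=0))  # decline
print(loan_decision(credit_score=601, annual_income=45_000, past_defaults=0))  # approve
```

The point is not that regressions or decision trees are bad tools; it is that brittle cut-offs like these quietly decide access to credit for millions of people, with far less scrutiny than Tay received.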
The third: Microsoft should have known better. In the experimental crucible that is the Internet, users will push and pull an experiment to its limits. Experience dictated that we, the collective, could have known this. Vice asks some bot experts how to design a bot that doesn't go 'haywire'.
I'm extremely surprised Microsoft didn't do what DeepMind did with AlphaGo. In that case, DeepMind seemed to keep AlphaGo under wraps until they were reasonably clear that AlphaGo was going to do an amazing job in the field. Microsoft appears to have taken the opposite tack. Strikes me as weird. You can read their statement here.
The fourth: it's not really clear how much harm was done by Tay. It's embarrassing for Microsoft on a corporate level, but there is little to suggest that Tay influenced anyone to become more xenophobic or racist. Still, we have all had a great lesson in some of the issues around training algorithmic systems. (Remember Google and the gorillas?)
The fifth: Tay appears to support the notion of the 'unreasonable effectiveness of data': that large amounts of training data determine the effectiveness of an AI-based system more than the smarts of the algorithms.
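A quick way to see the intuition for yourself, as a minimal sketch using synthetic data from scikit-learn (the dataset and sample sizes are arbitrary assumptions, purely illustrative): train the same simple model on progressively more data and watch test accuracy climb.

```python
# Minimal sketch of the "more data beats cleverer algorithms" intuition.
# Synthetic data from scikit-learn; sample sizes are arbitrary and illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

for n in (100, 1_000, 10_000, 40_000):
    model = LogisticRegression(max_iter=1000)   # same simple model every time
    model.fit(X_train[:n], y_train[:n])
    acc = model.score(X_test, y_test)
    print(f"trained on {n:>6} examples -> test accuracy {acc:.3f}")
```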
Elsewhere, the Japanese AI whose short story nearly won an award.
Dept of climate change
Not much good news here, but at least being better informed means better decisions.
We are approaching exponential rates of glacial melt and sea-level rise. EXCELLENT and alarming review of a new paper by James Hansen, the former NASA scientist, on the non-linear acceleration of climate degradation and a possible imminent Heinrich event.
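As a purely arithmetical aside on what "exponential" means here (the doubling times below are illustrative assumptions, not Hansen's specific projections): a process that doubles on a fixed timescale multiplies alarmingly quickly.

```python
# If the melt rate doubles every T years, after t years it is 2**(t/T) times today's rate.
# Doubling times and horizons below are illustrative assumptions only.
for doubling_time in (10, 20, 40):        # years
    for horizon in (50, 100):             # years from now
        factor = 2 ** (horizon / doubling_time)
        print(f"doubling every {doubling_time:>2} yrs -> "
              f"rate x{factor:,.0f} after {horizon} yrs")
```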
What we are doing to the environment has no parallel in the last 66m years. Our models are totally in uncharted territory.
And now, at last, 96% of US meteorologists accept that climate change is real.
The Rockefeller family divests from Exxon Mobil. Their wealth stemmed from Standard Oil, a precursor to Exxon. Scotland closed its last coal-fired power station, 115 years after it began using the fossil fuel for electricity generation. Oregon becomes the latest US state to ban coal, suggesting it is game over for the commodity.
Short morsels to appear smart at dinner parties
The conflict between science and religion may have neuroarchitectural roots. Excellent read.
The relationship between maths and physics and modernist art. Lovely brief essay.
Everything we've learnt about platform business models we learnt from mediaeval France and the Counts of Champagne.
Nearly half of all mobile revenues come from just 0.19% of users/losers. This raises some ethical questions: is the industry taking advantage of a disadvantaged micro-minority? (Those with lower willpower or higher susceptibility to addiction?)
Craig Venter builds a synthetic cell with fewer than 500 genes.
Anti-vaxxers have successfully brought measles and whooping cough back to the US.
Tech entrepreneur requires butler for his Noe Valley family. Salary $175k.
The Tesla Model X: fastest, greenest SUV in production.
Can you believe video? Extremely realistic facial reenactment tech lets you clone your facial expression onto another person.
What EV members are up to
I am speaking on Artificial Intelligence at the awesome Business of Software Europe conference in May. This is run by Mark Littlewood & there is a special 'EV' discount.
Tom Glocer, angel investor, former attorney & former CEO of Thomson Reuters: Why Apple should not be forced to open a back-door to iPhones.
Pete Flint, founder of Trulia: "34 Remarkable things about the future"
John Havens has written a song about the philosophical ramifications of AI. We'd love to see it set to music (and sung).
End note
I have a few days with my family over Easter so will be very slow on email. If you have holidays in your part of the world, I hope you enjoy them too!
P.S. Still hiring Product Managers for my marketplaces team based in London. Send qualified referrals my way.