🔥 Unicorns; Reid Hoffman's global ambition; AI & algorithms meet ethics; gender equality; plant consciousness & dead unicorns++ #30
Oct 11, 2015 · Public post
Algorithmic ethics: how can we define the ethical parameters for AI if we don’t know them ourselves? Reid Hoffman and the global economic graph. Decacorns, unicorns, financial wizardry and the coming bloodbath. The equality dividend.
Dept of the near future
💫 The Network Man: Deep profile of Reid Hoffman, founder of LinkedIn, philanthropist & EV subscriber. Great read
👮 A future without jobs doesn’t mean a future without work. EV reader, Scott Santens: “There are a lot of people out there getting paid to do absolutely nothing. There are also a lot of people out there getting paid nothing to do everything.” Recommended
💰 Anatomy of a decacorn: WeWork is a modern ‘exponential organisation’: an asset-light, platform-based marketplace riding the secular trend of millennial entrepreneurship (buzzwords used without irony). Here is how it sells its multi-billion dollar valuation. Must read.
📰 Ad blockers will elevate digital media argues Frederic Filloux by forcing publishers into better practices around privacy, tracking, the quality of their advertising and new micropayment models. Yay.
🌆 Imagining the driverless city: less traffic, more walking, better housing or longer commutes & streets buzzing like video games. What does the driverless city look like? Secretly hoping for utopia. (Also, autonomous car crash reports.)
Dept of ethics in AI
“How will machines know what we value if we don’t know ourselves?” asks John Havens. (Recommended). It is a difficult and important point. Ultimately, the AI systems we are designing are optimised towards some goal. Today’s basic machine-learning implementations might optimise the recommendations on an e-commerce site, but only to maximise the retailer’s profit, not the temperance of the browsing human. Or our emissions control systems might be optimised to maximise the automaker’s profit rather than social welfare.
More complex systems will be making more open-ended decisions. Let’s assume these systems are not maliciously or idiotically designed. Idiotic design might be, for example, the autonomous bus that is optimised for punctuality rather than actually delivering passengers and determines it is quicker not to stop to pick them up or drop them off. (Weirder computer systems have been designed.)
So even if:

- we have systems which are well thought through, and
- by exceptional observation of human behaviour we manage to encode the nuance of human and societal judgement into some objective function (ha, ha, not easy),

we’ll run into the problem that we (as humans) don’t agree what the ‘correct outcome’ is in a large number of situations. Those situations are the domain of ethics and ethical thinking.
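The objective-function point above can be made concrete with a toy sketch. Everything here is hypothetical and illustrative, not a real recommender system: the point is that the moment we write a scoring function down, we are forced to put an explicit weight on values (like a user’s later regret) that we rarely articulate, and that different people would weight differently.

```python
# Toy illustration of a misaligned vs. a (slightly) better-aligned objective.
# All names, fields and numbers are invented for the example.

def misaligned_objective(purchases):
    """Maximise revenue only -- the user's wellbeing never enters the score."""
    return sum(p["price"] for p in purchases)

def aligned_objective(purchases, wellbeing_weight=0.5):
    """Revenue minus a penalty for purchases the user later regrets.

    Choosing wellbeing_weight *is* the hard ethical question: the code
    forces us to put a number on a value we don't agree on as humans.
    """
    revenue = sum(p["price"] for p in purchases)
    regret = sum(p["price"] for p in purchases if p["regretted"])
    return revenue - wellbeing_weight * regret

basket = [
    {"price": 30.0, "regretted": False},   # a considered purchase
    {"price": 120.0, "regretted": True},   # an impulse buy
]

print(misaligned_objective(basket))  # 150.0 -- the impulse buy looks great
print(aligned_objective(basket))     # 90.0  -- the same basket, penalised
```

Both functions are trivially easy to implement; deciding which one the system *should* maximise, and with what weight, is the part no amount of engineering resolves.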
In some respects the ethical issues surrounding AI are less about killer robots (although we are deploying thousands of killer-capable drones 24x7 to fight wars remotely) than about how we enumerate our ethics in a way that can be translated into the AI systems that may form the future interface to the services we need to use daily.
Society is more at risk from capitalism than killer robots, says Stephen Hawking. (Short)
Micro-services are an essential component in facilitating the growth of AI systems. (Excellent intro by Sequoia Capital).
Gartner reckons algorithmically deep firms will trump those that aren’t. (My comments: this is why understanding the ethics of algorithms now is important.)
Exploring Polanyi’s Paradox that ‘we know more than we can tell’ and how that may or may not be a blocker in designing AI systems. (Don’t agree with this author, really.)
Dept of equality
👫 “Gender inequality is not only a pressing moral and social issue but also a critical economic challenge” argues McKinsey. Taking steps to address it would also release $12 trillion in annual global income, or 11% of 2025 global GDP. (You’ll remember from AEV #26 that Citi estimated the cost of proactively addressing climate change at only 1% of global GDP, or less than a tenth of the equality premium that McKinsey estimates. Think about that.)
Bros funding bros: Silicon Valley is funding lookalike, trivial startups because of its frat-bro, selfsame networks, and the biggest problems are being left untackled because of it, argues Chamath Palihapitiya. Better diversity will help. Recommended
Natural language processing and other AI tools are being applied by several firms to uncover gender biases at work. (I love this application, part of a series of ways of exposing latent relationships in the everyday.)
🔬 Physicist Stephanie Meyer tackles the origins of the gender gap in science.
Short morsels for dinner parties
😢 I accidentally deleted the entirety of this section, with my original mellifluous, beautiful prose. Apologies. I’ve quickly assembled some of the links. Hope you enjoy it.
What Exxon knew about melting Arctic Ice, and when. (Great reportage).
The Silicon Valley funding cycle has turned. Unicorns are de-horning. (Great Dan Primack)
More data on the turning of the Unicorn cycle.
😗 The importance of empathy in our people-centric, services economy by Irving Wladawsky-Berger, former Chief Scientist of IBM. (If you are an engineer prone to skip articles on empathy, give this one a go.)
Why does the US keep a giant blimp floating above Kabul?
A quarter of the world’s energy will be renewable by 2020. Excellent data.
What people in 1900 thought the year 2000 would be like. Mostly off target, but worth understanding why. Given the pace of technical change since 1900, the equivalent gap for us is probably only 10–20 years away.
Meat-free meat comes closer to the mainstream as Impossible Foods raises $100m.
🌺 Conference review of Plant Consciousness 2015. (Yes - plant intelligence is an emerging field.)
Despite the radiation, and without humans, the Chernobyl ecosystem is thriving.
We need to consider what happens in the event of a Saudi collapse.
What are you up to?
This is a section to brief you on what other participants in Exponential View are up to. We’ve already talked about Reid and Scott (above).
EV reader, James Bromley’s SwiftKey has released a new AI-powered predictive keyboard that is “creepily good”.
Remember: if you are doing something exponentially interesting, let me know :)
Thanks to all contributors this week. It was a busy week for me and I had great help from Rick Liebling recommending content.
Also, if you want to connect with me on LinkedIn, please do so.