🔮 Autonomous weapons; software & machine learning; the church of AI, gene editing and ant zombies ++ #140
Nov 19, 2017 · Public post
The changing nature of software development. Digital skills do make a difference. Can Google and Facebook even control their own algorithms? How to regulate autonomous weapons? Why the transition to self-driving vehicles needs to happen soon. IBM's 50-qubit quantum computer is working.
💌 Would your colleagues enjoy this issue? Forward it!
✏️ Please share this issue on Twitter or LinkedIn
DEPT OF THE NEAR FUTURE
⚡ Neural nets and blockchain may change the nature of software development. Neural nets are a new stack, Software 2.0, argues Andrej Karpathy.
No human is involved in writing this code because there are a lot of weights […] Instead, we specify some constraints on the behavior of a desirable program and use the computational resources at our disposal to search the program space for a program that satisfies the constraints.
Pete Warden agrees that machine learning is changing the way we build software:
Instead of writing and maintaining intricate, layered tangles of logic, the developer has to become a teacher, a curator of training data and an analyst of results. This is very, very different than the programming I was taught in school, but what gets me most excited is that it should be far more accessible than traditional coding, once the tooling catches up.
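Karpathy's and Warden's point can be made concrete with a toy example: instead of hand-writing the logic for an AND gate, we state the desired behaviour as training examples (the "constraints") and let a search procedure, here the classic perceptron rule, find weights that satisfy them. A minimal sketch in Python; the function name and parameters are mine, not from either post:

```python
def train_and_gate(epochs=50, lr=0.1):
    """Learn the AND function by searching weight space, rather than coding it.

    The 'program' is just three numbers (two weights and a bias); the
    perceptron update nudges them until every training example is satisfied.
    """
    # Desired behaviour, specified as examples rather than as if-statements.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in data:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred          # zero once the constraint is met
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    # The learned weights *are* the program.
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0
```

No line of the returned function encodes "AND" explicitly; the behaviour was found, not written, which is exactly the shift from developer to "teacher and curator of training data" that Warden describes.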
🔒 Blockchain & its decentralised apps will also change the art of software development. Muneeb Ali makes the case that blockchain enables a new stack which has both security and collaboration at its core.
⛑️ The share of US jobs requiring digital skills has more than tripled in the 14 years to 2016, according to the Brookings Institution. Digitalisation grew fastest in previously low-digital jobs, such as personal care or truck driving, and more than 33m jobs (about a quarter of the pool) moved from requiring low digital skills to medium or high ones. Tom Simonite has a brief summary of the findings and recommendations of the Brookings report. But there is more: Erik Brynjolfsson and colleagues have a new paper which reckons that it is the novelty of reconfiguring businesses around machine learning that explains why we haven't yet seen substantial job destruction from this new technology. Yet.
⚕️ Andrew Ng shows off an algorithm that detects pneumonia with higher specificity and sensitivity than practising radiologists. (Raj Jena, a childhood friend who taught me to code in Pascal, has developed a machine vision system for automatically detecting cancers. You can hear him talk about it with EV reader, Rory Cellan-Jones, at the 24m30s mark of this radio show.)
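For readers unfamiliar with the two metrics in that claim: sensitivity is the fraction of genuinely ill patients the algorithm catches, specificity the fraction of healthy patients it correctly clears. A minimal illustrative helper (my sketch, not the paper's code):

```python
def sensitivity_specificity(preds, labels):
    """Compute sensitivity and specificity from binary predictions.

    preds and labels are sequences of 0 (healthy) and 1 (pneumonia).
    """
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    sensitivity = tp / (tp + fn)  # true positive rate: sick cases caught
    specificity = tn / (tn + fp)  # true negative rate: healthy cases cleared
    return sensitivity, specificity
```

Beating radiologists on both at once is the notable part: the two usually trade off against each other as you move the decision threshold.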
⌛ America has its ‘left behinds’. In the UK we have “shit life syndrome”. Sarah O’Connor has an exceptional piece of reporting on the British town of Blackpool, the profiles of those who have missed out on the recent tech-driven wealth creation, the impact of longevity and quality of life & how the town, and its social mechanisms, are adjusting to cope. (Behind the FT reg barrier, but worth it.)
Martin Wolf: Six key questions for taming the technology giants. (FT) MUST READ
Melody Kramer wonders whether Google and Facebook even have control of their algorithms anymore. This is:
problematic for journalism. We cannot write about what we cannot see, but we increasingly write about what we think will be surfaced by these algorithms.
🗞️ From the dept of good ideas, several high-end publishers teamed up to provide quality scores for their output. The sensible next step is for these scores to be reflected in content ranking. Google intends to do this. Facebook, so far, does not.
DEPT OF AUTONOMOUS WEAPONS
Straight out of defence labs, autonomous and semi-autonomous weapons are already in use, but there’s no overarching agreement among key stakeholders on how to control their implementation and diffusion. Unlike nuclear or biological weapons, whose proliferation has been largely controlled, autonomous weapons pose some tricky problems.
The first is the absence of an international treaty. The second is the comparative ease by which autonomous weapons can be developed. Nuclear weapons are hard. The nine countries with nuclear weapons have achieved this with multi-decade projects backed substantially by state resources and administrative capacity.
Not so for autonomous weapons. The hardware is getting cheaper and cheaper. Drones in use by insurgent forces cost only a few hundred dollars. And the software engineering know-how (machine vision, autonomous navigation, collision detection) is widely understood and rapidly dispersed through legitimate channels (such as ArXiv and GitHub), and increasingly in commercial products. An autonomous weapon does not need to be a T-1000. It could be a $500 drone carrying a shaped charge. (Longish review of autonomous drone weapons currently in use by state military and non-state actors here.)
In sum: the knowledge to build these subsystems is widely available. And the tools to turn them into reality are cheap. This 8-minute video published by Future of Life Institute depicts one possible scenario, too realistic to be ignored. (An immediate threat is on the horizon, however, in the form of machine learning-enabled malicious software that learns normal behaviour patterns and uses them to get past security gates. WSJ paywall.)
Paul Scharre, a security analyst, warns that there's not much time left for negotiating proper regulations:
Four years ago, the first diplomatic discussions on autonomous weapons seemed more promising, with a sense that countries were ahead of the curve. Today, even as the public grows increasingly aware of the issues, and as self-driving cars pop up frequently in the daily news, energy for a ban seems to be waning. Notably, one recent open letter by AI and robotics company founders did not call for a ban. Rather, it simply asked the UN to “protect us from all these dangers.”
AI researcher Subbarao Kambhampati questions, too, whether a ban is worth supporting, arguing that it is likely to be ineffective and "a pyrrhic victory" for the proponents of peace.
I do think there are two things to bear in mind. First, by calling these powerful machines “killer robots” (as the Campaign to Stop Killer Robots does) we eliminate the most important variable in today's systems—the humans who design, maintain and manage them. As this excellent, comprehensive study by Maaike Verbruggen and Vincent Boulanin has shown, the largest militaries do not immediately envision fully-autonomous systems taking over the battlefield:
The focus on full autonomous systems is somewhat problematic as it does not reflect the reality of how the military is envisioning the future of autonomy in weapon systems, nor does it allow for tackling the spectrum of challenges raised by the progress of autonomy in weapon systems in the short term. Autonomy is bound to transform the way humans interact with weapon systems and make decisions on the battlefield, but will not eliminate their role. [...] What control should humans maintain over the weapon systems they use and what can be done to ensure that such control remains adequate or meaningful as weapon systems’ capabilities become increasingly complex and autonomous?
✈️ The failure of system designers to account for human error has been particularly exposed in air travel, another industry with high stakes. In 2009 Air France 447 disappeared into the ocean after the autopilot transferred control back to its befuddled pilots. (See also this story of an Airbus whose control systems went rogue, resulting in several passenger injuries.)
The second is that, yes, the technologies & raw materials are widely dispersed. Yellow cake and centrifuges, it isn't. Cheap drones and free software, it is. Rather than throwing our hands up in despair, we should spend some time seriously raising awareness of these issues, which, in turn, might generate creative solutions to the problem. Any workable solution is likely to be multi-factor, ranging from the technical to the ethical, regulatory, administrative and even social.
Worth looking into: the co-founding organisation of Campaign to Stop Killer Robots, PAX, put out two reports in the wake of the latest UN discussion: an overview of trends and weapons under development and a report on the positions of European states.
Boston Dynamics' Atlas robot has grown up. It can now really terrify with perfect backflips;
Shadow contacts & linking: How Facebook figures out everyone you might possibly be connected to;
Brain researchers make a convincing case for new class of privacy ethics for our mental states;
What do the neurons in a deep neural network really represent? (Nice visuals.)
DEPT OF FUTURE OF TRANSPORT
🔥 Bob Lutz, the 85-year-old former senior exec at GM and Chrysler, wises up to the tornado hitting the automotive value chain:
Now we are approaching the end of the line for the automobile because travel will be in standardized modules.
The end state will be the fully autonomous module with no capability for the driver to exercise command. You will call for it, it will arrive at your location, you'll get in, input your destination and go to the freeway.
[...] Of course, there will be a transition period. Everyone will have five years to get their car off the road or sell it for scrap or trade it on a module.
When a guy who bleeds the old model says it is over within 20 years, then it’s probably over. The EV view is that the transition will happen faster than that (although, like so many other transitions to the future, it won’t be evenly distributed).
🕛 RAND researchers suggest that we need to put autonomous vehicles on the road before they’re perfect, so they can learn better, faster, and get on with saving lives sooner.
The researchers used the model to examine 500 different future scenarios. Under most scenarios, introducing autonomous vehicles sooner rather than later saved more lives in the near term, and under all scenarios, introducing AVs early would save more lives over the long term.
Why doesn’t Tesla include Lidar, the advanced laser-based sensor used by other autonomous vehicle firms, like Waymo?
While the high cost of LIDAR technology will hold other companies back, Tesla will be gathering data from its entire fleet of cars daily. By 2020, Tesla will have collected information from more than 11 billion miles of driving. Other companies will struggle to amass one tenth as much.
🚚 Tesla unveils its electric heavy-duty truck and its Roadster, the fastest car in the world.
[...] Ford has spent the past several years quietly snapping up tech talent. It struck a partnership with Lyft and acquired Chariot, a San Francisco-based start-up that runs group shuttles for commuters. It set up an office in Palo Alto that now has 205 employees and created Greenfield Labs, a business incubator. It invested in Argo AI, a Pittsburgh-based artificial-intelligence start-up, and several other companies, including Velodyne, which makes Lidar sensors, and a connected-car software outfit called Autonomic.
💨 EVs have a lower carbon footprint, even when you consider their manufacturing costs, than traditional vehicles. (FT)
SHORT MORSELS TO APPEAR SMART AT DINNER PARTIES
IBM says it has a 50-qubit quantum computer working. (And is releasing access to a 20-qubit one via the cloud)
⛪ The first church of AI. This is truly bizarre, and is probably a PR move to deflect media attention from Waymo’s trade-secrets lawsuit over Levandowski. But at least it’s a big idea.
Crapsules and other medical treatments built on a deeper understanding of our guts. Quite fascinating.
🚨 Uber drivers in Lagos are cheating the system with fake GPS.
Israeli researchers have developed an invisibility cloak. Can't show it to us yet.
November 1971, Electronic News announces the first commercially available microprocessor, Intel 4004 with 2300 transistors.
🐜 Zombie fungus takes over ants’ bodies to control their minds.
Architect to use the darkest material on Earth to create an “impossibly black building” for the Winter Olympics.
🐶 Our best friends: dog owners have a substantially lower risk of death and cardiovascular disease than non-owners, a large-scale survey has found.
I was lucky to spend last weekend at the Global Future Councils of the World Economic Forum in Dubai. This group explores many of the issues we review in _Exponential View_ and does so by bringing together some of the world's most interesting thinkers and doers from many disciplines. Quick thanks to Jeremy Juergens and Olivier Woeffray for facilitating that.
There is some impressive experimentation going on in Dubai. The governance structure, wealth and size of the state make it an interesting sandbox for innovation. The UAE has a minister for AI in the cabinet, which makes sense given that AI impacts so many different parts of society. Dubai also has the world's largest 3D-printed building, which I, unfortunately, wasn't able to visit.
"The OnePlus 5 smartphone is spectacular on every level" - Mashable
Solve intelligence. Use it to make the world a better place.