Autonomous weapons; software & machine learning; the church of AI, gene editing and ant zombies ++ #140
The changing nature of software development. Digital skills do make a difference. Can Google and Facebook even control their own algorithms? How to regulate autonomous weapons? Why the transition to self-driving vehicles needs to happen soon. IBM's 50-qubit quantum computer is working.
Would your colleagues enjoy this issue? Forward it!
Supported by OnePlus & DeepMind
DEPT OF THE NEAR FUTURE
Neural nets and blockchain may change the nature of software development. Neural nets are a new stack, Software 2.0, argues Andrej Karpathy.
No human is involved in writing this code because there are a lot of weights [...] Instead, we specify some constraints on the behavior of a desirable program and use the computational resources at our disposal to search the program space for a program that satisfies the constraints.
Pete Warden agrees that machine learning is changing the way we build software:
Instead of writing and maintaining intricate, layered tangles of logic, the developer has to become a teacher, a curator of training data and an analyst of results. This is very, very different than the programming I was taught in school, but what gets me most excited is that it should be far more accessible than traditional coding, once the tooling catches up.
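For the technically curious, here is a minimal sketch of the contrast (a toy example of mine, not Karpathy's or Warden's code): version 1 is a hand-written rule; version 2 recovers the same behaviour with no human-authored logic, just an optimiser searching weight space against curated examples.

```python
# Software 1.0: a human writes the logic by hand.
def is_positive_v1(x1, x2):
    return x1 + x2 > 1.0  # an explicit, human-authored rule

# Software 2.0: the human curates labelled examples, and an optimiser
# searches weight space for a program that satisfies them.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 2))        # curated training inputs
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)  # labels: the behavioural "spec"

w, b = np.zeros(2), 0.0
for _ in range(2000):                         # gradient descent = the search
    p = 1 / (1 + np.exp(-(X @ w + b)))        # current program's predictions
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

print(w, b)  # the resulting "code" is just weights no human wrote
```

The developer's job in the second half is exactly Warden's: curate the data, inspect the results.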
Blockchain & its decentralised apps will also change the art of software development. Muneeb Ali makes the case that blockchain enables a new stack which has both security and collaboration at its core.
The share of jobs requiring digital skills has more than tripled in the US in the 14 years to 2016, according to the Brookings Institution. The fastest growth in digitalisation was amongst previously low-digital jobs, such as personal care or truck driving. And more than 33m jobs (about 25% of the pool) had moved from requiring low digital skills to medium or high skills. Tom Simonite has a brief summary of the findings and recommendations of that Brookings report. But there is more: Erik Brynjolfsson and colleagues have a new paper which reckons that it is the novelty of reconfiguring businesses around machine learning that explains why we haven't yet seen substantial job destruction from this new technology. Yet.
Andrew Ng shows off an algorithm that detects pneumonia with higher specificity and sensitivity than practising radiologists. (Raj Jena, a childhood friend who taught me to code in Pascal, has developed a machine vision system for automatically detecting cancers. You can hear him talk about it with EV reader, Rory Cellan-Jones, at the 24m30s mark of this radio show.)
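If those two metrics are unfamiliar: sensitivity is the share of true pneumonia cases the model flags; specificity is the share of healthy patients it correctly clears. A generic illustration (nothing to do with the actual CheXNet code; the numbers are made up):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Compute the two screening metrics from binary labels and predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # sick, correctly flagged
    fn = np.sum((y_true == 1) & (y_pred == 0))  # sick, missed
    tn = np.sum((y_true == 0) & (y_pred == 0))  # healthy, correctly cleared
    fp = np.sum((y_true == 0) & (y_pred == 1))  # healthy, false alarm
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels/predictions, for illustration only:
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0, 0, 0, 1],
                                     [1, 1, 0, 0, 0, 1, 0, 1])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```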
America has its "left behinds". In the UK we have "shit life syndrome". Sarah O'Connor has an exceptional piece of reporting on the British town of Blackpool: the profiles of those who have missed out on the recent tech-driven wealth creation, the impact on longevity and quality of life, and how the town, and its social mechanisms, are adjusting to cope. (Behind the FT reg barrier, but worth it.)
Martin Wolf: Six key questions for taming the technology giants. (FT) MUST READ
Melody Kramer wonders whether Google and Facebook even have control of their algorithms anymore. This is:
problematic for journalism. We cannot write about what we cannot see, but we increasingly write about what we think will be surfaced by these algorithms.
From the dept of good ideas: several high-end publishers have teamed up to provide quality scores for their output. The sensible idea is that these scores would be reflected in content ranking. Google intends to do this. Facebook, so far, does not.
DEPT OF AUTONOMOUS WEAPONS
Straight out of defence labs, autonomous and semi-autonomous weapons are already in use, but there's no overarching agreement among key stakeholders on how to control their implementation and diffusion. Unlike nuclear or biological weapons, whose proliferation has been largely controlled, autonomous weapons pose some tricky problems.
The first is the absence of an international treaty. The second is the comparative ease with which autonomous weapons can be developed. Nuclear weapons are hard. The nine countries with nuclear weapons achieved them through multi-decade projects backed substantially by state resources and administrative capacity.
Not so for autonomous weapons. The hardware is getting cheaper and cheaper. Drones in use by insurgent forces cost only a few hundred dollars. And the software engineering know-how (machine vision, autonomous navigation, collision detection) is widely understood and rapidly dispersed through legitimate channels (such as ArXiv and GitHub), and increasingly in commercial products. An autonomous weapon does not need to be a T-1000. It could be a $500 drone carrying a shaped charge. (Longish review of autonomous drone weapons currently in use by state militaries and non-state actors here.)
In sum: the knowledge to build these subsystems is widely available. And the tools to turn them into reality are cheap. This 8-minute video published by the Future of Life Institute depicts one possible scenario, too realistic to be ignored. (An immediate threat is on the horizon, however, in the form of machine learning-enabled malicious software that learns normal behaviour patterns and uses them to get past security gates. WSJ paywall.)
Paul Scharre, a security analyst, warns that there's not much time left for negotiating proper regulations:
Four years ago, the first diplomatic discussions on autonomous weapons seemed more promising, with a sense that countries were ahead of the curve. Today, even as the public grows increasingly aware of the issues, and as self-driving cars pop up frequently in the daily news, energy for a ban seems to be waning. Notably, one recent open letter by AI and robotics company founders did not call for a ban. Rather, it simply asked the UN to "protect us from all these dangers."
AI researcher Subbarao Kambhampati also questions whether any ban is worth supporting, because a ban is likely to be ineffective and "a Pyrrhic victory" for the proponents of peace.
I do think there are two things to bear in mind. First, by calling these powerful machines "killer robots" (as the Campaign to Stop Killer Robots does) we eliminate the most important variable in today's systems: the humans who design, maintain and manage them. As this excellent, comprehensive study by Maaike Verbruggen and Vincent Boulanin has shown, the largest militaries do not immediately envision fully-autonomous systems taking over the battlefield:
The focus on full autonomous systems is somewhat problematic as it does not reflect the reality of how the military is envisioning the future of autonomy in weapon systems, nor does it allow for tackling the spectrum of challenges raised by the progress of autonomy in weapon systems in the short term. Autonomy is bound to transform the way humans interact with weapon systems and make decisions on the battlefield, but will not eliminate their role. [...] What control should humans maintain over the weapon systems they use and what can be done to ensure that such control remains adequate or meaningful as weapon systems' capabilities become increasingly complex and autonomous?
The failure of system designers to account for human error has been particularly exposed in air travel, another industry with high stakes. In 2009, Air France 447 disappeared into the ocean as its pilots were befuddled after the autopilot transferred control to the humans. (See also this story of an Airbus whose control systems went rogue, resulting in several passenger injuries.)
The second is that, yes, the technologies & raw materials are widely dispersed. Yellow cake and centrifuges, it isn't. Cheap drones and free software, it is. Rather than throwing our hands up in despair, we should spend some time seriously raising awareness of these issues, which, in turn, might generate creative solutions to the problem. Any workable solution is likely to be multi-factor, ranging from the technical to the ethical, regulatory, administrative and even social.
Worth looking into: PAX, a co-founding organisation of the Campaign to Stop Killer Robots, put out two reports in the wake of the latest UN discussion: an overview of trends and weapons under development, and a report on the positions of European states.
Elsewhere:
Boston Dynamics' Atlas robot has grown up. It can now really terrify with perfect backflips;
Apple's uncrackable Face ID has been cracked. (Younger brothers can also crack Face ID, apparently);
Shadow contacts & linking: How Facebook figures out everyone you might possibly be connected to;
Brain researchers make a convincing case for a new class of privacy ethics for our mental states;
What do the neurons in a deep neural network really represent? (Nice visuals.)
Making AI explainable and liable. Harvard recently published a paper reviewing current norms around explanation and legal requirements.
DEPT OF FUTURE OF TRANSPORT
Bob Lutz, the 86-year-old former senior exec at GM and Chrysler, wises up to the tornado hitting the automotive value chain:
Now we are approaching the end of the line for the automobile because travel will be in standardized modules.
The end state will be the fully autonomous module with no capability for the driver to exercise command. You will call for it, it will arrive at your location, you'll get in, input your destination and go to the freeway.
[...] Of course, there will be a transition period. Everyone will have five years to get their car off the road or sell it for scrap or trade it on a module.
When a guy who bleeds the old model says it is over within 20 years, then it's probably over. The EV view is that the transition will happen faster than that (although, like so many other transitions to the future, it won't be evenly distributed).
RAND researchers suggest that we need to put autonomous vehicles on the road before they're perfect, so they can learn better, faster, and get on with saving lives sooner.
The researchers used the model to examine 500 different future scenarios. Under most scenarios, introducing autonomous vehicles sooner rather than later saved more lives in the near term, and under all scenarios, introducing AVs early would save more lives over the long term.
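The underlying intuition is easy to reproduce in a toy model (entirely my invention and crude; the RAND model is far richer, and every parameter below is an assumption): if AVs improve mainly through real-world miles, deploying at "merely 10% safer" beats waiting until they are 90% safer.

```python
# Toy version of the RAND argument. All parameters are invented.
HUMAN_FATALITY_RATE = 1.0  # fatalities per year of driving (normalised)
YEARS = 30
IMPROVE = 0.85             # AV risk shrinks 15%/year once deployed (assumed)

def cumulative_fatalities(deploy_year, initial_av_rate):
    total, av_rate = 0.0, initial_av_rate
    for year in range(YEARS):
        if year < deploy_year:
            total += HUMAN_FATALITY_RATE  # humans still driving
        else:
            total += av_rate              # AVs driving, and learning
            av_rate *= IMPROVE
    return total

early = cumulative_fatalities(deploy_year=2, initial_av_rate=0.9)   # 10% safer
late = cumulative_fatalities(deploy_year=15, initial_av_rate=0.1)   # 90% safer
print(f"deploy early: {early:.1f}  deploy late: {late:.1f}")
# Early deployment wins (~7.9 vs ~15.6 normalised fatalities here),
# because the waiting years are spent at the full human fatality rate.
```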
Why doesn't Tesla include Lidar, the advanced laser-based sensor used by other autonomous vehicle firms, like Waymo?
While the high cost of LIDAR technology will hold other companies back, Tesla will be gathering data from its entire fleet of cars daily. By 2020, Tesla will have collected information from more than 11 billion miles of driving. Other companies will struggle to amass one tenth as much.
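A back-of-the-envelope check on how a fleet gets to a number of that order (all inputs are my assumptions for illustration, not figures from the article):

```python
# Rough fleet-learning arithmetic. Every input here is an assumption.
fleet_size = 500_000         # Teslas on the road (assumed)
miles_per_car_per_day = 30   # typical daily usage (assumed)
days = 2 * 365               # two years of logging

total_miles = fleet_size * miles_per_car_per_day * days
print(f"{total_miles / 1e9:.1f} billion fleet miles")  # ~11 billion
```

The point of the arithmetic: fleet data compounds with fleet size, which is why a mass-market carmaker can out-collect a small test fleet by orders of magnitude.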
Tesla unveils its electric heavy-duty truck and its new Roadster, billed as the fastest car in the world.
Can Ford become a tech company?
[...] Ford has spent the past several years quietly snapping up tech talent. It struck a partnership with Lyft and acquired Chariot, a San Francisco-based start-up that runs group shuttles for commuters. It set up an office in Palo Alto that now has 205 employees and created Greenfield Labs, a business incubator. It invested in Argo AI, a Pittsburgh-based artificial-intelligence start-up, and several other companies, including Velodyne, which makes Lidar sensors, and a connected-car software outfit called Autonomic.
EVs have a lower carbon footprint than traditional vehicles, even once you account for their manufacture. (FT)
SHORT MORSELS TO APPEAR SMART AT DINNER PARTIES
IBM says it has a 50-qubit quantum computer working. (And is releasing access to a 20-qubit one via the cloud)
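Why 50 qubits is a meaningful threshold: simulating such a machine classically means storing one complex amplitude per basis state, which already exceeds the memory of any single computer. The arithmetic (standard textbook accounting, not IBM's figures):

```python
# Memory needed to hold the full state vector of an n-qubit register.
n_qubits = 50
amplitudes = 2 ** n_qubits      # one complex amplitude per basis state
bytes_needed = amplitudes * 16  # complex128 = 16 bytes per amplitude
print(f"{bytes_needed / 2**50:.0f} PiB")  # ~16 PiB for a brute-force simulation
```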
The bitcoin network now uses as much electricity as Slovakia (more than Nigeria, and the equivalent of 2% of all US households). Power usage is growing at about 30% monthly.
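Taking that 30% monthly figure at face value (a naive extrapolation, not a forecast), the compounding is startling:

```python
import math

monthly_growth = 0.30  # the article's growth rate
doubling_months = math.log(2) / math.log(1 + monthly_growth)
one_year_multiple = (1 + monthly_growth) ** 12
print(f"doubles every {doubling_months:.1f} months; "
      f"x{one_year_multiple:.0f} after a year")  # ~2.6 months; ~x23
```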
The first church of AI. This is truly bizarre, and probably a PR ball thrown at the media to divert attention from Waymo's lawsuit against Levandowski. But at least it's a big idea.
CRISPR seen chewing up DNA. See also: using an older alternative to CRISPR, scientists edited DNA inside the human body for the first time.
Crapsules and other medical treatments built on a deeper understanding of our guts. Quite fascinating.
Uber drivers in Lagos are cheating the system with fake GPS.
Israeli researchers have developed an invisibility cloak. Can't show it to us yet.
What would happen if the US went vegan?
In November 1971, Electronic News announced the first commercially available microprocessor, the Intel 4004, with 2,300 transistors.
Zombie fungus takes over ants' bodies to control their minds.
Architect to use the darkest material on Earth to create an "impossibly black building" for the Winter Olympics.
Our best friends: dog owners have a substantially lower risk of death and cardiovascular disease than non-owners, a large-scale survey has found.
END NOTE
I was lucky to spend last weekend at the Global Future Councils of the World Economic Forum in Dubai. This group explores many of the issues we review in _Exponential View_ and does so by bringing together some of the world's most interesting thinkers and doers from many disciplines. Quick thanks to Jeremy Juergens and Olivier Woeffray for facilitating that.
There is some impressive experimentation going on in Dubai. The governance structure, wealth and size of the state make it an interesting sandbox for innovation. The UAE has a minister for AI in the cabinet, which makes sense given that AI impacts so many different parts of society. Dubai also has the world's largest 3D-printed building, which I, unfortunately, wasn't able to visit.
One more thing. Please take a moment to share this newsletter with your friends. Do it by email. Or, if you want to bring a smile to the collective _EV_ face, share this on Twitter and LinkedIn.
Cheers,
Azeem
This week's issue is brought to you with support from our partners, OnePlus and DeepMind.
"The OnePlus 5 smartphone is spectacular on every level" - Mashable
Solve intelligence. Use it to make the world a better place.