🔮 After neoliberalism, new economics; where AI works; on artificial consciousness; the end of employees; emoji, jetstreams & medieval trade++ #206

Asking the wrong questions about AI

Exponential View

Azeem Azhar’s Weekly Wondermissive: Future, Tech & Society

This issue has been supported by our partner: Ocean Protocol.
Ocean Protocol is launching their network soon! See their exciting Roadmap ahead.

You are reading the free version of Exponential View.

While this version will stay open for all, please consider supporting my work by going Premium or recommending EV to your network.

Simply retweet this tweet if you’re on Twitter, or forward this email to at least three of your friends if you don’t use that social network. Thanks!

Dept of the near future

💯 After neoliberalism. Dani Rodrik & colleagues argue economics as a discipline is undergoing a period of self-reflection and change:

Many of the dominant policy ideas of the last few decades are supported neither by sound economics nor by good evidence. Neoliberalism—or market fundamentalism, market fetishism, etc.—is not the consistent application of modern economics, but its primitive, simplistic perversion... [there is a] sense that economies are operating well inside the justice-efficiency frontier, and that there are numerous policy “free-lunches” that could push us towards an economy that is morally better without sacrificing (and indeed possibly enhancing) prosperity.

☝️ Daniel Dennett: artificial consciousness is the wrong question. “[W]hat we are creating are not—should not be—conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates, no personality... we can take the tiller and steer between some of the hazards if we take responsibility for our trajectory.” Dennett also suggests we should consider “[l]icensing and bonding the operators [and creators] of these systems.” (For an understanding of how ethical standards might work amongst automated systems, I recommend Alan Winfield’s brief survey.)

🔎 Frank Chen’s extremely clear overview of progress in AI over the past two years. One key theme is that of AI tools giving humans superpowers. This aligns with prediction six from my 2018 outlook. Very accessible read. (For a view on how enterprises are struggling with AI, see Dept of AI below.)

🤨 Safe artificial intelligence requires input from social scientists, argue researchers from OpenAI. Building machine learning systems that align with “human values will require resolving many uncertainties related to the psychology of human rationality, emotion, and biases. These can only be resolved empirically through experimentation — if we want to train AI to do what humans want, we need to study humans.” My comment: I broadly agree with this paper, but I find it really disappointing that we’re still having this discussion—about the importance of involving social scientists in the design of socio-technical systems—in 2019. It suggests how far behind the ML community, both in research and in industry, remains. I also take issue with one assumption of the paper: that we’ll be able to adequately generalise from data about human values, given differences in local context and community. The challenge is that we’re more likely to deploy this software globally, representing one underlying set of values, than to tackle the harder question of how to handle differences in values. I touch on these points in two of my recent podcasts, with Professor Gina Neff and author Elif Shafak. (See also a fascinating discussion about consent and electronic contracts as a form of “techno-social engineering”, creating an “environment [which] disciplines us to go on autopilot and, arguably, helps create or reinforce dispositions that will affect us in other walks of life that involve similar technological environments.”)

🕳️ Lauren Weber chronicles “the end of employees” ($). Firms across sectors are increasingly dispensing with full-time employees and using contractors, often provided by temporary-staffing agencies. Alphabet, which owns Google, has as many contractors, 70,000 or so, as it does employees. For firms, this provides flexibility and lower costs; for workers, it often means fewer benefits, lower pay and less security. In the US, about 1-in-6 employees fall into this category. Some bosses view “contracting as a stopgap until more jobs are automated, freeing firms to dispense with some workers altogether.” (See also, some details of the ‘human-robot symphony’ in Amazon’s highly automated warehouses. And, US farmers are increasingly turning to robots to make up for labour shortages. Not so good at berry picking, quite good at indoor farming.)

🧬 Did China’s CRISPR babies have their cognition enhanced inadvertently? The boundary between lowering disease risk and human enhancement is narrow. The CCR5 gene, which is linked to HIV risk, has also been shown to improve cognition and recovery after stroke.

🔥🔥 Climate catastrophe 410.92ppm

Each week, I’m going to remind us of the level of CO2 in the atmosphere. We must avoid a level of 450 parts per million.

This week’s level: 410.92ppm (12 months ago: 408ppm; 250 years ago, est: 250ppm; likely to exceed the target in 10-15 years.) Please share with your network on Twitter, LinkedIn or via email.
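The “exceed the target in 10-15 years” estimate follows from the figures above. A minimal back-of-the-envelope sketch, assuming the rise stays linear at the past year’s pace (a simplification — the actual growth rate fluctuates and has been accelerating):

```python
# Rough linear extrapolation of atmospheric CO2 towards the 450ppm threshold,
# using the figures quoted above.
current = 410.92   # ppm, this week's reading
year_ago = 408.0   # ppm, 12 months ago
target = 450.0     # ppm, the level we must avoid

annual_rise = current - year_ago               # ~2.92 ppm per year
years_to_target = (target - current) / annual_rise

print(round(years_to_target, 1))  # roughly 13 years at the current pace
```

At a steeper growth rate, the threshold arrives sooner; at the decade-average rate, slightly later — hence the 10-15 year range.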

🙀 Terrifying. Thawing permafrost is releasing huge amounts of methane into the atmosphere. Hideous feedback loops, anyone?

Martin Wolf on how to build a broad American climate coalition between the Green New Deal and the recent recommendations of 3,333 economists on ‘carbon dividends’. (Moderately uplifting.)

Dept of AI

Frank Chen (above) describes the progress made with AI applications. Here is a counterpoint which argues uptake is patchy amongst businesses: why AI has yet to reshape most businesses. “10% of the work is AI... ninety percent of the work is actually data extraction, cleansing, normalizing, wrangling.” Equally, in other cases, getting employees to trust or support an AI system required “patience, meticulousness, and other quintessentially human skills.” As a result, many firms are seeing modest, if any, productivity gains. My comment: what I’ve seen is that firms which are ‘digital-native’, think startups and the internet giants, have little trouble making use of AI. They are familiar with data-driven management, which is the heart of the ‘test-and-learn’ lean startup approach, and have management teams far more comfortable with computer science. Old industrial firms contend with complex business models and legacy IT, likely pushing them further behind in deploying this key technology. The opportunity in using AI is not so much about productivity gains as it is to redefine your value proposition, business model, customer promise & sustainability. It’s a pity that many laggards are missing out.

🇬🇧 The UK Government announced £115m in funding to support 16 new AI centres of excellence to train up to 1,000 PhD students in various disciplines in artificial intelligence. EV reader, Geraint Rees, is running one of the centres, and he tells me that it “is nationally a significant number” which will actually “underestimate the number of new doctoral training places being created.” It’s quite impressive as the centres include research in foundational AI technologies, as well as applications like healthcare, the environment, transport and bio-medicine.

Good profile of Facebook’s lead AI researcher, Yann LeCun. The firm intends to develop its own AI chips.

Researchers in China have trained an AI using NLP on 600,000 children's health records. The AI went on to out-perform junior doctors at diagnosing children’s illnesses. Senior paediatricians still did better than the AI. Seems like a great tool for a quick second opinion.

A very practical application of generative adversarial networks (GANs): to undertake complex photo edits using simple sketches.

GANs are getting really good at generating fakes. Can you tell which is fake and which is real? (EV reader, Tom Hoag also challenges you to decide which images are real and which are fake.)

🎤 AI-based autotune can improve your karaoke. (As many readers know, I am a karaoke purist and so need to make clear that I do not condone this type of application.)

😆 Finnish is so complex it defeats most machine translation systems.

I asked EV reader Michael Sulmeyer, Senior Fellow at Georgetown’s School of Foreign Service, to comment on the new US Department of Defense AI strategy for me. His brief points:

  • The AI strategy seems disconnected from the National Defense Strategy;
  • The US military will need to consider how, not if, technology may change its workforce;
  • The formation of the Joint Artificial Intelligence Center is a prudent step that should enable coordinated policymaking and investment across the DOD;
  • While the DOD embraces partnership with academia, private companies and international allies, it will need to balance that against the concern that many promising AI researchers are foreign nationals whose collaborations with the US military, absent nuanced controls, could one day end up serving the U.S.’s strategic adversaries.

Michael’s expanded comments are available to Premium subscribers this week. (And to his last point, Microsoft employees are mounting a rebellion against supporting the firm’s $479m Pentagon contract to use the HoloLens for military purposes. The contract has an explicit goal to “increase lethality by enhancing the ability to detect, decide and engage before the enemy”.)

Elsewhere: Russia is planning to create a national internet, ultimately controlled by the government. Another signal of geo-technical fracture.

Dept of new economics

📣 Rethinking capitalism. Fantastic 50-minute lecture by Eric Beinhocker on understanding the economy as a complex adaptive system. He presents a new purpose for capitalism: to “provide solutions to human problems”. He reconceptualises wealth, growth, prosperity, the goal of business, governments and markets in these terms.

A very interesting paper from Pangallo et al. demonstrates that the notion of “general equilibrium” which underpins neoclassical economics is unrealistic (and even dangerous).

📈 EV reader, Diane Coyle, on what might succeed GDP as an indicator of prosperity.

Steve Stewart-Williams on culture as a form of collective intelligence which makes us smarter than our own individual capabilities. Relevant to read in the context of Eric’s lecture above.

Cultural commentator, Douglas Rushkoff, on what connects the children’s climate strike and the rejection of Amazon’s proposed New York office. “This is what it looks like when we speak truth to power.”

Elsewhere on the attention economy:

Review of apps allowing people to monetise their own data. I’m sceptical of this approach of marketising one’s personal data. It isn’t so much about changing the paradigm, as taking a tiny cut of the profits. And many of the issues that emerge from the use of consumer data are not exclusively reflected in the dollar price Facebook makes or that we pay. And when we talk about selling our own personal data, I’m always reminded of the horrible story of the Golden Palace Casino tattoo.  (California’s Gavin Newsom is also mooting a ‘data dividend’ for citizens so they can “share the wealth created from their data.”)

A detailed overview of the UK House of Commons’ 180-page report on Facebook’s business practices. See also: Facebook is ending some of its controversial market research programs and pulling its Onavo VPN product.

Media buyers are shrugging their shoulders over YouTube’s latest crisis—about paedophile content. It is easy to blame the platforms exclusively, but buyers have massive influence over them and could step up and insist on change if they wanted to. Guillaume Chaslot, who was involved in the design of YouTube’s recommendation system but has now switched sides to work on algorithmic transparency, is pithy in his assessment.

Short morsels to appear smart at dinner parties

🎯 Simply brilliant: Julia Galef’s map of Silicon Valley’s sub-cultures. (Two years old, but great reading.)

Podcasting as a business is likely to be undervalued, even as revenues for the medium are set to double between 2018 and 2020 to $659m annually in the US alone. (But still only a tenth of China’s.)

💩 A dog pooped in the board room. Elizabeth Holmes’ last months at the helm of Theranos.

😂 Emojis are showing up in more court cases. Interpreting them can be a challenge.

Another Facebook data scandal. The WSJ reveals that several apps share data with Facebook even when you haven’t authorised it. (One example: an ovulation calendar.)

🤥 Google, a firm whose mission is to organise all the world’s information, “forgot” to mention that it had put a microphone in a range of Nest products.

Uber and the ongoing erasure of public life.

Detecting rare proteins in blood with your phone camera. 📱

🌬️✈️ A 787 flying from LA to London hit 800mph thanks to a fast tailwind.

Lovely map of medieval trade routes.

🎶 🧠 The part of the brain that recognizes music never gets lost to Alzheimer’s.

End note

There is an increasingly loud chorus of economists thinking about how to reframe the purpose of our economies. There is a sense that, in the wake of the global financial crisis, the model of neoliberalism, or market fetishisation, has had its turn. In the four years of chronicling the intersection of technology and society in EV, I’ve come across more and more economists asking, “what is the purpose of the economy?”

The most exciting theoretical research is that of complex systems, which Eric Beinhocker talks about above. This takes the view that the economy is not something you can model as a closed system which achieves a general equilibrium. Rather, it is a complex adaptive system which undergoes co-evolution, changing its boundary conditions over time and never settling into a steady-state equilibrium. Instead, there is an ongoing set of processes and emergent structures, which afford analysis and intervention at different scales.

Equally, there are economists who explicitly argue for different normative conceptions for the economy. One is Diane Coyle, who I link to above, and has guest edited this newsletter. Others are Bill Janeway and Kate Raworth, to whom I have spoken on my podcast.

One prediction that I’ll make is that we’ve got a powerful trifecta coming together. One is a better theoretical model. The second is better evidence, from recent history and disciplines like behavioural economics, which unpick the axioms of neoclassical economics (the rational man, markets and prices exclusively matter, equilibrium). The third is a set of compelling narratives, such as Carlota Perez’s work on the new green economy and Mariana Mazzucato’s on value and the mission-driven state, which bring practical and appealing visions to the table.

My hunch is that it creates a good enough story that people can rally around.

It will have to contend with vested interests. Those include many in the academy who have built their tenure on neoclassical tenets. Others will resist because the normative aspects don’t align with their personal politics, or because the shift threatens their financial or political asset base.

And while these vested interests won’t go quietly, I don’t think this matters much... we’re seeing a phase change in the complex system that is the political economy.

Finally, I’ll be attending South by Southwest (SXSW) this year. I’m giving a couple of talks. If you’d like to invite me to your event or do an informal fireside chat, please drop me a mail. And if you are just braving SXSW, drop me a mail too. We might have an impromptu get-together for EV readers somewhere casual.


💌 P.S. If you enjoy reading Exponential View, and you’re on Twitter, please show your support by retweeting this!

As usual, scroll down for notes from EV readers.

This week’s Exponential View has been supported by our partner: Ocean Protocol.
Ocean Protocol is kickstarting a Data Economy by breaking down data silos and equalizing access to data for all.

Get involved and see how you can benefit from Ocean.

What you are up to — notes from EV readers

Congrats to Alice, Matt and the team at Entrepreneur First for raising $115m for their seed fund.

Roger Nolan, one of the best engineering leaders with whom I've been lucky to work, on why he will not entertain employment requests from Facebook.

Max, Yacine and Levin have rebranded their VC firm to Heartcore, as they double down on consumer startups.

Heather Redman, managing partner at Flying Fish Partners, comments for Bloomberg on New York’s outlook after Amazon pulled out of the HQ2 deal.

Chris Wigley discusses how companies can ethically deploy AI in this McKinsey podcast episode.

Viktor Vecsei’s guide to fighting the surveillance economy.

Chris Yiu is hiring for a number of technology policy roles at the Tony Blair Institute in London. Check them all here.

John Bracaglia is organizing an event looking at the use of machine learning in finance and healthcare at the Harvard Business School.

Calum Forsyth shares his experience of building a pre-seed accelerator in this interview for Forbes.

Amit Patel, founder of Experience Haus, invites you to the 5th edition of A Conversation about Design in London on Saturday March 9th, exploring the Future of Learning. Use discount code exponentialview for 50% off tickets.

Arif Khan discusses the breakup of big tech firms and goes deeper into the current order of things in his speech at the World Web Forum.

Rob Blackie: Why smart speakers are overhyped.

Nikolai Rozanov: where is the intelligence in conversational AI?

And finally, thanks to the Wired team for selecting EV as one of your favourite newsletters!

Got exciting news to share with EV readers? Email marija@exponentialview.co so we can spread the word.


Sign in or become an Exponential View member to join the conversation.