11 Comments
Kate Hammer:

“And the only answer then might be government, the state, the regulator, unfashionable though it may be to say. And knowing that, a decent strategy for any large tech firm is the ‘woke’ one, focus on emotive and important issues early, such as facial recognition. So that we don’t ask the really hard questions.” I believe *we* do need to ask the really hard questions, Azeem, which takes courage because governments are seen as unfashionable. Otherwise, the critical spirit is blunted and weakened much the same way as the ‘woke’ strategy you outline is weak.

The postscript is interesting. I don’t think it’s sufficient for the tech companies and their satellites/dependents alone to have the conversation about scale. I’d like us to take very seriously the threat implied in Thiel’s depiction of monopoly, because I’m not convinced that all dynamic states will engender competition.

Azeem Azhar:

They won't engender competition. The M&A appetite and capability of large tech firms mean that potential threats can be turned into internal assets (see Nest, Alexa/Evi and Instagram for examples). This blunts the market's ability to come up with new solutions that unpick that power.

The nature of the power these firms have is more integrative, infrastructural and substantive than that of tech firms in previous eras, like 90s Microsoft or 80s IBM.

Kate Hammer:

I agree. So our meta-story about markets has to change. Time for a hard, clear-headed look at monopolies, including how they have been helped along by so-called laissez-faire models. If we stick with Hayek's fairytale, we limit our ability to hold meaningful debates about government's possible roles.

Kevin Werbach:

The key question is, “what regulation?” One pernicious impact of the libertarian reshaping of the regulatory debate that you describe is that we talk about companies as being “regulated” or “unregulated.” All the digital platforms are subject to a host of regulation; as implemented today, it is obviously ineffectual at addressing important problems. The real problem, though, isn’t the lack of rules; it’s the intellectual poverty of regulatory visions. Even GDPR is a refinement of the basic approach staked out in the 1995 Data Protection Directive. In the US, we’re grasping back to Progressive-era antitrust, which is all well and good. It’s just not going to be sufficient. We’ll need new conceptual approaches to regulation for the novelty of this environment, just as we did in earlier periods.

Also, you’re obviously right that the embrace of regulation by big tech CEOs is strategic. However, something has changed quite substantially in the zeitgeist over the past five years. The rhetoric that regulation and innovation are inherently incompatible (and innovation is obviously superior) is no longer viable in polite society. You may still hear it from Peter Thiel, but even Zuckerberg makes concessions to reality. So there is reason for hope.

Azeem Azhar:

Beautifully phrased: "The real problem, though, isn’t the lack of rules; it’s the intellectual poverty of regulatory visions."

The idea of agile regulation, perhaps starting in sandboxes, has been doing the rounds for years. Brad Smith talks about it as well. I can see that being a useful part of the toolkit but it still requires some key novel insight.

I deliberately brought up the point about increasing returns to scale. Much historical antitrust existed in a world of diminishing marginal returns. Any monopolist in those times needed to have done something out of the ordinary--and ultimately became a rent seeker. Software, intangibles and a host of other exponential-age dynamics give us increasing marginal returns. That should create an entirely novel framing for how we think about monopoly (or bigness).
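A toy numerical sketch of that contrast (all figures below are invented, purely to show the shape of the argument): under diminishing returns, average cost per unit flattens and eventually rises with volume, so scale stops paying; under increasing returns, a large fixed cost spread over near-zero-marginal-cost units means average cost keeps falling, and the biggest player keeps getting cheaper to run.

```python
# Toy sketch (illustrative numbers only, not real data): how average cost per
# unit behaves under diminishing vs. increasing returns to scale.

def avg_cost_physical(units: int, unit_cost: float = 10.0,
                      congestion: float = 1e-6) -> float:
    """Physical goods: marginal cost is roughly constant and eventually rises
    (capacity constraints), so average cost stops falling and turns upwards."""
    return unit_cost + congestion * units

def avg_cost_software(users: int, fixed_cost: float = 50_000_000.0,
                      marginal_cost: float = 0.05) -> float:
    """Software/intangibles: a large fixed cost plus near-zero marginal cost,
    so average cost keeps falling as the user base grows -- scale compounds."""
    return fixed_cost / users + marginal_cost

for scale in (10_000, 1_000_000, 100_000_000):
    print(f"{scale:>11,} units: physical ≈ {avg_cost_physical(scale):10.2f}, "
          f"software ≈ {avg_cost_software(scale):10.2f}")
```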

What would that be? I can see a "sloughing" of layers of large companies being a possibility: where you look like a regulated utility, could we turn you into one? Or could we simply limit the size a company can reach? (Although there are so many second-order problems with the latter approach; it would likely create a shadow governance system and a set of hard-to-track patterns of control and beneficial ownership.)

What else?

John Elkington:

Over the years, I have repeatedly asked people in Silicon Valley and its global hinterland whether they have heard of Thomas Midgley, Jr. (https://interestingengineering.com/thomas-midgley-jr-the-man-who-harmed-the-world-the-most). Only one had. In my mind, he is the patron saint of unintended consequences. So who is now working on AI-based regulatory design to keep the AI sector on the rails? Maybe they'd pay serious attention to such an approach?

Gerd Leonhard:

Good stuff

Katie Lips:

And regulation is coming; the Competition and Markets Authority is "carrying out a market study into online platforms and the digital advertising market in the UK”.

https://www.gov.uk/cma-cases/online-platforms-and-digital-advertising-market-study#interim-report

Some of the suggestions are indeed rather blunt (I’ll not go as far as "intellectually impoverished"), e.g. around interoperability: recommending that Facebook data should be interoperable so it can be used “elsewhere” (without imagining where or how), hoping that new entrants will build (similar) systems to compete with the incumbents (and not then just be acquired), and, to echo other readers’ comments, perhaps accidentally stifling the innovation that would enable new entrants in the process.

The CMA is inviting comments by the way.

My hope is that companies will self-regulate, and collaborate with regulators to develop regulation that works for everyone: consumers first. And they’ll need to do this before the natural tide of consumer opinion turns anyway.

If they don’t they’ll find themselves on the back foot, not because of regulation, but because of the wave of consumer opinion. Most of us have no idea what kind of data these companies hold on us and how they and their customers use it. Who knows what will happen when we all find out!

Louise Shaxson:

I hadn't read this post or the comments before my colleague and I wrote this letter to the FT about the carbon footprint of the AI revolution: https://www.ft.com/content/f94e271c-3d46-11ea-b232-000f4477fbca (sorry, it's behind a paywall). In it, we argue that we can choose how we use AI, which I think links to Kevin's excellent point about 'the poverty of regulatory visions'. What does this mean in practical terms? We suggest that it's not good enough to leave it to 'the regulators': we need tech companies to engage with economic and development planners (particularly in the low- and middle-income countries where I work) in a way that is fully informed about the social, economic, environmental and normative risks of disruptive technologies and how they intersect with each other. Diana FC's comments on Davos in your latest newsletter would seem to bear this out: it's an issue of what strategic choices we make, so fundamentally a planning question, not just a regulatory one.

Kate Hammer:

Louise, I'm so relieved to see you highlight that strategic choices are at the heart of this. Does it seem to you that regulation is always trailing invention? I wonder what happens if regulation occupies a seat at the table from a far earlier stage?

Louise Shaxson:

I think it is, because we often don't know what we need to regulate until we've had a good look at what the invention is and what it does, which is often unclear in the early stages. Also because governments don't want to limit innovation that could be beneficial. Regulators should absolutely have an earlier seat at the table, because they will be well versed in the precautionary principle (or should be), but I think it's also about having these strategic conversations regularly, with a wide range of people, and in full view.
