🔮 Sunday commentary: Deep learning’s next decades

Stanford University has published the latest edition of its AI Index, and a number of exponential trends are visible in it. Today’s commentary is dedicated to assessing the near future of deep learning.

AI patent filings are growing at 76% per year. (All the charts below are sourced from the Stanford University AI Index.)

Technical progress on various lab benchmarks continues, but it is best seen in longitudinal data. For example, in facial recognition, the best algorithms achieve close to 100% accuracy on the most challenging datasets. In some areas, like question answering, state-of-the-art models exceed human performance (again, in lab tests).

Most illuminating, investment levels in AI companies have rocketed. In 2021, $175bn went to such firms via private investment, IPOs, M&A or minority stakes. This is an eight-fold increase in five years. AI infrastructure layers (meaning data management and cloud computing), healthcare and finance were the top three sectors.

The market is maturing and yet still immature at the same time. We are witnessing the rapid deployment of capital to apply this new general-purpose technology, but as I talk to firms using these techniques, it's clear that most are still at the start of their journey.

Deep learning’s past and its future

Andrej Karpathy, one of the second-generation pioneers of deep learning and author of the first deep-learning course delivered at Stanford University, got me thinking in an essay published in the same week as the Stanford research. He looked back on the past 33 years of deep learning, starting from Yann LeCun's important 1989 paper, which applied neural nets to handwriting recognition.

Karpathy points out that the science in the 1989 paper "reads remarkably modern", but the technology has moved on. The dataset was tiny, just 7,291 grayscale images of merely 16 by 16 pixels. And the neural network was minute, with only 1,000 neurons and 9,760 tunable parameters. In 1989, LeCun had access to a state-of-the-art Sun workstation. When Karpathy ran the same network on his consumer-grade MacBook Air, he found a roughly 3,000x increase in speed.

The same, but faster

Over more than three decades, Karpathy observes, "not much has changed on the macro level. We’re still setting up differentiable neural net architectures made of layers of neurons and optimising them end-to-end with backpropagation and stochastic gradient descent. Everything reads remarkably familiar, except it is smaller."
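That recipe is compact enough to sketch in a few lines. The following is a minimal illustration, not LeCun's 1989 convolutional network or Karpathy's reproduction: a tiny one-hidden-layer network built from NumPy, trained end-to-end with backpropagation and gradient descent on a toy XOR task. The layer sizes, learning rate and task are all illustrative choices.

```python
import numpy as np

# Toy dataset: the XOR function, which a single layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer of 8 neurons
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
lr = 0.5                                          # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass through differentiable layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent update (full-batch here for brevity).
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # predictions should approach [0, 1, 1, 0]
```

The point of the sketch is how little has changed: swap the toy arrays for images, make the layers convolutional and far larger, and the loop above is recognisably the same procedure LeCun ran in 1989 and labs run today.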
