💡 Grading AI: The Hits and Misses
A decade into the artificial intelligence boom, scientists in research and industry are making incredible breakthroughs. Increases in computing power, theoretical advances and a rolling wave of capital have revolutionised domains from biology and design to transport and language analysis.
Progress has mostly been astonishing. But where has research fallen short – and how might we fix those shortcomings to make the next ten years even better?
This week’s podcast guest is perfectly placed to answer those questions.
Murray Shanahan is a senior research scientist at DeepMind, perhaps the most important AI company in the world. He’s also a professor of Cognitive Robotics at Imperial College London. Murray works at the cutting edge of industry and academia, and has a deep understanding of the field’s history – as well as where it might go next.
We discussed the recent history of AI, the current state of play, and where the major breakthroughs of the next decade might come from. You can listen to our conversation (or read a full transcript) here.
The Big Idea
A lot of the progress in AI can be traced to two linked phenomena. One is captured by the idea of the “unreasonable effectiveness of data,” a phrase coined in a famous 2009 paper by AI researchers at Google. Here’s how Murray describes it, using the example of image-recognition software that isn’t working as well as expected:
[I]t really doesn't work very well. So you think, OK, let's scale up… so you double the amount of data you've got, and you double the amount of compute and it's still [terrible]... time goes by and even more data becomes available, even more compute becomes available, and suddenly, everything seems to work. Suddenly things take off and work amazingly well. And that's what happened in 2012.
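The scaling effect Murray describes can be illustrated with a toy experiment – a hedged sketch using scikit-learn’s small bundled digits dataset, not the 2012 image-recognition systems themselves: the same simple model, given progressively more training data, keeps getting better.

```python
# Toy illustration: the same model improves as training data grows.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

accuracies = []
for n in (50, 200, 800):
    # Identical model each time; only the amount of data changes.
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    acc = model.score(X_test, y_test)
    accuracies.append(acc)
    print(f"{n:>4} training examples -> test accuracy {acc:.3f}")
```

The real story of 2012 involved far bigger models and datasets, but the shape of the curve is the same: accuracy climbs as the data does.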
The second breakthrough was in hardware, and underpinned that data tsunami: the repurposing of graphics processing units – computer chips originally designed to render graphics quickly – for rapid numerical computation:
[I]t's a sort of hack. It's repurposing of a technology that was meant for something completely different.
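Why does a graphics chip suit machine learning? Because the core workload of a neural network is mostly large matrix multiplications – the same dense, parallel arithmetic GPUs were built to do for rendering. Here is a minimal sketch of that workload (plain NumPy on the CPU, standing in for what a GPU library would accelerate):

```python
import numpy as np

rng = np.random.default_rng(0)

# A single dense layer: output = activation(input @ weights + bias).
# On a GPU, this matrix multiply is what gets massively parallelised.
batch = rng.standard_normal((32, 784))     # 32 flattened 28x28 images
weights = rng.standard_normal((784, 128))  # learned parameters
bias = np.zeros(128)

hidden = np.maximum(0.0, batch @ weights + bias)  # ReLU activation
print(hidden.shape)  # (32, 128)
```

Stack a few of these layers and you have, in essence, the arithmetic that GPU hardware was “hacked” into accelerating.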
Those are simple ideas, but they set the stage for what amounts to an entirely new scientific method. Advances in computing power, and the AI breakthroughs they enable, are pretty much universally applicable. Developments in AI become developments in all fields.
The Slow Death of Expertise
Murray made the fascinating point that advances in machine learning mean cutting-edge research is, in a sense, becoming less reliant on domain experts. Take the example of AlphaFold, DeepMind’s system for predicting the shape of a protein from its amino acid sequence:
[T]he AlphaFold team had to bring in experts on proteins and protein folding. But it really is the brute force engineering and compute and data that has solved the problem.
The same applies to AI language translation; linguistics expertise might help, but the heavy lifting will be done by computing power and ever-improving machine learning models.
The Road to the Big Brain
Described like that, AI sounds every bit the general-purpose technology, a sort of skeleton key for our trickiest intellectual locks. It most likely is one – I certainly argue as much in my book, though the economists probably won’t have their final say on that for years. But AI is not yet generally intelligent. The models that can be put to work across industries are still, essentially, very narrow tools.
As Murray explains, there’s an increasing recognition that deep learning alone won’t get us to general intelligence. To make progress towards an artificial general intelligence, we may need to do more work on embodiment: placing AIs in physical spaces and real-world situations (or virtual versions thereof) where they can sense their surroundings and learn from them:
[I]t's only through embodied interaction with the physical world that we can learn that basic layer of common sense; understanding the ordinary, everyday world of physical objects; [understanding] objects are things that occupy space and are still there when you go away from them, and have backs, and undersides, and you can pick them up and put them down.
That goal doesn’t look close. But then again, we’re already further along the path than most of us expected to be by now.
Murray and I also discussed:
🧬 Why transformer models were a surprise breakthrough [13.10]
🦊 How animal cognition can help us understand AI [28.47]
🧠 Whether electric cars can “think” [41.24]
Listen to this, too
Last year, I spoke to Sam Altman, CEO of OpenAI, about the arrival of GPT-3, OpenAI’s astonishingly impressive language-generating AI. GPT-3 represents one of the most significant milestones in recent AI history. Sam and I discussed the path towards “true” artificial intelligence, how to ensure AI leads to an equitable future, and how governance needs to change to keep pace with tech. You can listen to the podcast here.