
🔮 Opening the AI black box with Richard Socher

Exploring the technological foundations of truthful and verifiable AI

One of the biggest concerns surrounding the rollout of large language models is their complicated relationship with the truth, exemplified by their tendency to hallucinate. To gain a deep understanding of this technological, philosophical and business challenge, I spoke to one of the most-cited AI researchers turned founder, Richard Socher. One of Richard's most impactful early contributions was working alongside Fei-Fei Li and other researchers on the ImageNet paper, and on the development of that vast, structured image database that significantly advanced computer vision.

More recently, Richard founded an AI chatbot search engine at the forefront of truthful and verifiable AI. When Apple recently announced that iPhone users in the EU would be asked to select their default browser on iOS 17.4, it made the cut as the only AI search assistant.

I’m excited about what Richard is building, but in our conversation we go beyond the latest venture. You’ll hear us discuss…

  • The nature of intelligence: making sense of the I in AI,

  • The potential of AI's capabilities beyond human evolution,

  • The commoditisation of AI and its integration into the software industry,

  • The mission of revolutionising search behind his latest venture,

  • The evolution of LLMs and their future trajectory.


Full transcript

[00:00:00] Azeem Azhar: Richard, welcome to the show.

[00:00:03] Richard Socher: Thanks for having me.

[00:00:06] Azeem Azhar: Well, 168,000 citations. I mean, that's quite something. Is your family proud of that?

[00:00:12] Richard Socher: Actually, it's funny that you say that, but yes, my dad especially is very proud of that. He even brought it up in his wedding speech, right.

[00:00:22] Azeem Azhar: At your wedding. Wonderful. Wonderful. Just to really, you know, put the pressure on your partner as to the expectations that they have to achieve, right?

[00:00:38] Richard Socher: Mostly for fun, you know. But he was also an academic for a few years and understands that that is not very common.

[00:00:47] Azeem Azhar: That stems mostly, I guess, from the paper that in a way started this all off, which was more than a decade ago: the ImageNet paper with Fei-Fei Li, who has of course been one of my guests; I've spoken to her a few times. That was the critical bit of kindling that, I suppose, kicked off the deep learning wave.

Do you think that's right if we look historically? Is that a reasonable place to start?

[00:01:19] Richard Socher: Yeah, so I think there's actually one event that happened even before ImageNet, and that was George Dahl and Geoff Hinton working on speech recognition and neural nets.

There were still some probabilistic pre-training models in there, but that was sort of the first time people said, wow, if we have more training data, speech recognition is actually best done with a neural network. And then the ImageNet wave came. ImageNet was, of course, the data set, and Alex Krizhevsky, Hinton again, and Ilya Sutskever actually used that data set to train a large convolutional neural net.

That was the watershed moment, I think, for most people to understand, wow, it was enabled by having this data set. So the data set is a necessary condition for that success, but of course the model is absolutely crucial. And then when you look at my second most highly cited paper, it's a word vector paper.

And word vectors were kind of the necessary ingredient to get natural language processing into the neural network field as well, because speech is fairly straightforward to put into a neural network, and images are very easy. Neural networks want numbers as inputs. Think of a function like f(x) = x squared: x is the input and you get a number out. A neural net is a much more complex function, and its input is not just one number but often thousands or even millions of numbers, and then you get some output. Words, though, aren't naturally a list of numbers.

And so representing a word as a vector was a very crucial step. Of course, there are other ways to put words into vectors, including the other famous word vector work. But those two papers kind of helped everyone start using neural nets for natural language processing too.

And that accounts for most of the rest of my citations.
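The idea Richard describes, that a network can only consume numbers, so each word must first be mapped to a fixed-length vector, can be sketched in a few lines. This is an illustrative toy (the vocabulary, dimension, and random initialisation are my own assumptions, not from any specific paper); in practice these vectors are learned from data.

```python
# Minimal sketch of the word-vector idea: map each word in a small
# vocabulary to a fixed-length list of numbers a network could accept.
import random

random.seed(0)
DIM = 4  # real word vectors are typically 50-300 dimensions

vocab = ["neural", "network", "speech", "image"]

# Embedding table: one randomly initialised vector per word.
# In a real system these values are learned, not random.
embeddings = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in vocab}

def encode(sentence):
    """Turn a sentence into a list of vectors (one per known word)."""
    return [embeddings[w] for w in sentence.split() if w in embeddings]

vecs = encode("neural network")
print(len(vecs), len(vecs[0]))  # prints: 2 4
```

The point is only the interface: once every word is a vector of numbers, text becomes the kind of input (thousands or millions of numbers) that a neural network function can process.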

[00:03:19] Azeem Azhar: They're pretty foundational. Now there's a key phrase that you said, which was, if we had enough data… And we're going to return to the question of data during our conversation, but maybe let's zoom out a little bit.

We talk a lot about the artificial intelligence wave, the artificial intelligence boom. Dot AI is the hottest domain name you can find these days, but we often skirt over what we mean by the I in that: the intelligence. So what is intelligence?

[00:03:57] Richard Socher: That is a great question. I'll try to keep it short because we could talk about that for hours.


Exponential View by Azeem Azhar