How GPT-3 is shaping our AI future

My conversation with OpenAI's Sam Altman

When OpenAI released GPT-3 in July, it marked the first time a general-purpose AI saw the light of day. In an overview of the history of knowledge technologies that preceded it, I wrote:

Creating a generalisable tool for actually answering the question you wanted answered was hard. People believed it required lots of knowledge about the world to be explicitly encoded in specific ways. The Cyc project (now Cycorp) tackled this area for 30 years before launching commercially.

What GPT-3 has achieved is to encapsulate knowledge, billions of words of it, in a sufficiently parameterised model that it can give granular answers to very different types of queries, across multiple domains. It can also follow quite complex instructions across those domains.

In a sense, it does something the search engine doesn't do, but which we want it to do: it is capable of synthesizing the information and presenting it in a near-usable form. (This demo of an answer engine that goes one step beyond Google makes the point.)

I got together with OpenAI's boss, Sam Altman, to discuss GPT-3 from an insider's point of view. Sam is a realist: GPT-3 is powerful, but a small step towards the holy grail of AI research, artificial general intelligence.

We discuss this at length, and go into:

  • How AGI could be used both to reduce and exacerbate inequality
  • How governance models need to change to address the growing power of technology companies
  • How Sam's experience leading Y Combinator informed his leadership of OpenAI

All members will have access to the podcast transcript next week.

Listen on your platform of choice

P.S. We have some annoying delivery issues, and you can help us overcome them: add to your contact list, and move this email to your main inbox to signal you like us. Thank you!

