When OpenAI released GPT-3 in July, it marked the first time a general-purpose AI saw the light of day. In an overview of the history of knowledge technologies that preceded it, I wrote:
Creating a generalisable tool for actually answering the question you wanted answering was hard. People believed it required lots of knowledge about the world to be explicitly encoded in specific ways. The Cyc project (now Cycorp) tackled this area for 30 years before launching commercially. […] What GPT-3 has achieved is to encapsulate knowledge, billions of words of it, in a sufficiently parameterised model that it can give granular answers to very different types of queries, across multiple domains. It can also follow quite complex instructions across those domains. […] In a sense, it does something the search engine doesn’t do, but which we want it to do. It is capable of synthesizing the information and presenting it in a near usable form. (This demo of an answer engine that goes one step beyond Google makes the point.)
I got together with OpenAI’s boss, Sam Altman, to discuss GPT-3 from an insider’s point of view. Sam is a realist. GPT-3 is powerful, but a small step towards the holy grail of AI research: artificial general intelligence.
We discuss this at length, and go into…
How AGI could be used both to reduce and exacerbate inequality
How governance models need to change to address the growing power of technology companies
How Sam’s experience leading Y Combinator informed his leadership of OpenAI
All members will have access to the podcast transcript next week.
P.S. We have some annoying delivery issues, and you can help us overcome them: add azeem.azhar@exponentialview.co to your contact list, and move this email to your main inbox to signal you like us. Thank you!
