Hi all,
I know that this past year of AI developments can feel like a lot, so we've assembled five key reads that will help you enter 2024 with clarity.
The key reads cover:
Why be bullish on today’s AI?
How to tackle the problem of trust and factuality in a society of AI?
All you need to know about the most powerful AI company.
Putting extinction risk into context.
And, finally, a practical guide to utilising genAI in 2024.
Enjoy!
1. A society of AI — why I am bullish on AI today
In this piece, I lay out my case for today's LLMs and how they will take us to generalised intelligence. I discuss what matters and what doesn't as we evaluate the impact of the technology in the new year.
[W]e’re on the cusp of being a society of AI. That is a society where artificial cognitive assistance, of various capabilities, is widely available to billions of us. Given the current limitations of LLMs, and the many challenging theoretical steps that builders have identified, we might be sitting at this point for a while, even with GPT-choose-your-big-number.
🧠 AI’s first flight
When Orville Wright covered 36 metres above a field at Kitty Hawk 120 years ago, it was a milestone. A moment. One that we can all recognise today as the point where heavier-than-air flight became a reality. But what could one achieve with a single-seat, bicycle-wheeled device? Not much, in truth. No cargo capacity, no range, no passengers, and it was fragile, damaged beyond repair after its first day.
2. Trust and factuality in the world of synthetic propaganda
As chaos merchants deploy AI tools to produce propaganda, influence elections and encourage low-rung tribalism in 2024, we have to find new ways to encapsulate the problem of trust and factuality in a way that is helpful for action and policy. Earlier this year, I proposed the concept of epistemic integrity.
It is a measure or a public good that reflects the degree to which a society maintains robust and resilient systems for ensuring the integrity of facts and knowledge.
[…]
As a public good, high levels of epistemic integrity would drive or reflect informed decision-making, interpersonal trust and civic cohesion. It would also foster higher levels of inter-state trust.
🧐 AI and epistemic integrity
There is the fog of war, and then there is a world where many different forces collide: the need to break stories, the need for clicks, the shaping of distribution algorithms, and the ease with which any side (or just random chaos merchants) can use AI tools to produce any kind of propaganda.
3. The most powerful AI company
The fallout at OpenAI laid bare many of the internal workings of the organisation, the ideologies driving its leadership, and its relationships with investors and strategic partners. What OpenAI — Sam Altman and the Board of Secrets — does in the new year, and how, will continue to be important. So, I'd recommend you listen to my conversation with the brilliant Karen Hao, who has been working to understand OpenAI for years.
🔥 Dissecting the OpenAI schism with Karen Hao
Karen Hao, whose reporting on OpenAI has been the light in the fog of the OpenAI crisis in the past week, joins me to unravel what happened at OpenAI. She’s kept a close eye on AI developments over the past several years...
After this, listen to my first conversation with Sam from 2020, in which he said:
When there is no public oversight, when you have these companies that are as powerful as, or more powerful than, most nations, and are governed by unelected, and to be perfectly honest, unaccountable leaders, I think that should give all of us pause as these technologies get more and more powerful.
[…]
Let’s say we really do create AGI. Right now, if I’m doing a bad job with that, our board of directors, which is also not publicly accountable, can fire me and say, “Let’s try somebody else”. But it feels to me like at that point, at some level of power, the world as a whole should be able to say, hey, maybe we need a new leader of this company, just like we are able to vote on the person who runs the country.
4. Let’s talk about extinction
The question of existential AI risk is a complex one — and in this essay I make my case for why existential risk is *not* our biggest worry by far, and which real risks we are oblivious to.
And yes, there could be a small chance that some risk emerging from today's AI systems proves to be of really large scale and severe impact. Let's put some resources into those areas — proportionately. This research should guide AI development and institutional scaffolding. It is undeniably important, but it is not the only agenda item. And, for reasons unclear to me, it has become the most important one.
5. Generative AI for exponentialists
We did a lot of work this year to equip our community with practical guidance and peer learning on generative AI. We've encapsulated all the learnings and experiments in a single end-of-year Promptpack.
🐙 Promptpack: Your ultimate end-of-year AI guide
Getting started • The latest science of prompting • 10 of our favourite prompts of 2023 • How to tackle hallucinations and errors
And as a bonus, here's how I used ChatGPT to create a board game — a process applicable to any innovation work you may do in the New Year!
🧪 Using ChatGPT in the innovation process
I'm continuing my experiments with ChatGPT. In particular, I'm trying to figure out how it can synthesise ideas from different domains and help in the creative process. I play board games with my kids; among them are Jaipur, Azul, Pandemic, Innovation, Forbidden Island, and others. We were on the hunt for a new one (half-term is coming) and I was struggling to filter via BoardGameGeek. Heading over to ChatGPT, I figured I would see where a discussion would take me.
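If you'd rather script this kind of back-and-forth than work in the chat interface, here's a minimal sketch using the OpenAI Python SDK (v1+). The model name and prompt wording are my illustrative assumptions, not a record of the exact conversation:

```python
# Minimal sketch of the board-game brainstorm via the OpenAI API.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment.
# Model name and prompts are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system",
     "content": "You are a knowledgeable board-game curator."},
    {"role": "user",
     "content": ("We love Jaipur, Azul, Pandemic, Innovation and "
                 "Forbidden Island. Suggest five games the family might "
                 "enjoy next, and say what each borrows from the games "
                 "we already like.")},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
reply = response.choices[0].message.content
print(reply)

# The value is in the discussion, so carry the state forward and iterate:
messages.append({"role": "assistant", "content": reply})
messages.append({"role": "user",
                 "content": ("Combine two of those suggestions into a concept "
                             "for a brand-new game we could prototype.")})
follow_up = client.chat.completions.create(model="gpt-4o", messages=messages)
print(follow_up.choices[0].message.content)
```

The appending step is the whole trick: each turn goes back into `messages`, which is how the chat interface keeps a conversation going under the hood.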