The Sunday edition #511: GPT-5 roadmap; AI politics; new GLP-1s; BYD vs Tesla, jobs for agents & PC-sha++
An insider's view on AI and exponential technologies
Hi, it's Azeem.
I'm back from a busy week in the Bay Area meeting with founders, investors and builders. The Bay is way ahead of the rest of the world yet again. It feels like a different world. I will be spending more time there in the coming months. OK, now onto the latest weekend edition: GPT-5 is coming, GLP-1 drugs are evolving and AI agents have a job board. Let's go!
Today's edition has an audio component to it. We're working with the team behind PocketPod to bring you this newsletter in the form of a conversation with me. The conversation is AI-generated, but the meaning behind it is real. Let us know what you think in the comments!
GPT-5 is coming
Sam Altman confirmed OpenAI's near-term release of GPT-4.5 (codenamed "Orion"), calling it the "last non-reasoning model". GPT-5 will arrive as a "meta" model within months, dynamically deciding how much reasoning power or specialisation to use on the fly. Pricing will tier by "intelligence level".
Rather than just scaling capabilities, OpenAI seems focused on integrating all of its different models and features. Some think this is proof that the company has run out of ways to boost sheer model performance, but personally, I disagree; there's still room to scale more sophisticated reasoning.
OpenAI's latest reasoning model, o3, has shown impressive results, ranking among the top 200 competitors in CodeForces, a popular programming contest. The catch is that you don't always need that level of heavyweight computation for every problem. This is where OpenAI's new focus comes in: figuring out how to let each model dynamically assign just the right amount of reasoning to a given task, rather than running at "top 200 in the world" intensity all the time.
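A crude way to picture what "dynamically assigning the right amount of reasoning" could mean in practice is a dispatcher that scores each request and only routes the hard ones to an expensive reasoning model. The sketch below is purely illustrative: the model names, cost tiers and difficulty heuristic are my assumptions, not anything OpenAI has described about how GPT-5 will work.

```python
from dataclasses import dataclass


@dataclass
class Route:
    model: str                  # which underlying model handles the request
    max_reasoning_tokens: int   # how much "thinking" it is allowed to do


# Illustrative tiers: a fast non-reasoning model for easy prompts,
# a heavyweight reasoning model reserved for genuinely hard ones.
FAST = Route(model="fast-chat", max_reasoning_tokens=0)
DEEP = Route(model="deep-reasoner", max_reasoning_tokens=32_000)


def estimate_difficulty(prompt: str) -> float:
    """Toy difficulty score in [0, 1] based on cheap surface signals.

    A real system would use a learned classifier or the model's own
    self-assessment; this heuristic just makes the routing idea concrete.
    """
    signals = [
        "prove" in prompt.lower(),
        "debug" in prompt.lower(),
        "step by step" in prompt.lower(),
        len(prompt.split()) > 200,
    ]
    return sum(signals) / len(signals)


def route(prompt: str, threshold: float = 0.25) -> Route:
    """Send easy prompts to the fast tier and hard ones to the reasoning tier."""
    return DEEP if estimate_difficulty(prompt) >= threshold else FAST


print(route("What is the capital of France?"))            # -> fast tier
print(route("Debug this race condition step by step"))    # -> reasoning tier
```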
This adjustable intelligence level could become a design pattern. Anthropic's forthcoming AI model will similarly be a hybrid that can switch between deeper reasoning and instantaneous responses. Developers will be able to use a slider to select the level of intelligence they want, expressed as the number of tokens the model is allowed to reason with. It is expected to outperform OpenAI's models at practical coding tasks, an area that has been a consistent strength for Anthropic.
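To make the "slider" concrete, here is a minimal sketch of what a caller-selected reasoning budget might look like. The `model` name, the `reasoning_budget_tokens` field and the tier values are hypothetical placeholders, not a documented API from Anthropic or anyone else; the point is simply that the caller, not the provider, picks how many tokens the model may spend thinking.

```python
from typing import Any, Dict

# Illustrative reasoning budgets, expressed as the maximum number of
# "thinking" tokens the model may spend before it answers.
REASONING_BUDGETS = {
    "instant": 0,        # no extended reasoning: fastest, cheapest
    "standard": 2_000,   # light deliberation for everyday tasks
    "deep": 16_000,      # heavyweight reasoning for hard problems
}


def build_request(prompt: str, effort: str = "standard") -> Dict[str, Any]:
    """Assemble a completion request with an explicit reasoning budget.

    The model name and `reasoning_budget_tokens` field are hypothetical
    stand-ins for whatever dial a provider ultimately exposes.
    """
    return {
        "model": "hybrid-model-v1",  # placeholder, not a real model name
        "prompt": prompt,
        "reasoning_budget_tokens": REASONING_BUDGETS[effort],
    }


# A trivial lookup gets the cheap tier; a competition-grade coding
# problem justifies paying for deep reasoning on the same model.
print(build_request("What is the capital of France?", effort="instant"))
print(build_request("Fix the deadlock in this thread pool", effort="deep"))
```

Pricing by "intelligence level", as OpenAI has suggested, follows naturally from this kind of dial: the bigger the reasoning budget, the more you pay per call.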
See also:
OpenAI is getting close to launching its own chip.
Elon is hyping up Grok-3 as "scary smart" and weeks away from launch. Leaked rumours from a (now former) employee suggest it is still behind OpenAI's reasoning models on coding.
Political compass for AI
Large language models may be developing coherent "value systems" as they scale, rather than merely reflecting human biases, and they seem to lean towards what we used to consider the American left. A couple of observations here: I don't think there is a robust political-theoretic basis for this analysis; "The Political Compass" itself is little more than a parlour game.
It's weirdly parochial, with a tip of the hat to the culture wars, to map AI systems onto today's political positions rather than a more robust, persistent framing of values (perhaps from the World Values Survey or something similar). What is interesting is the notion that AI systems may converge on a similar set of values.