Exponential View

šŸ”® The AI consciousness illusion

Anil Seth on ā€œAI Welfare,ā€ human minds and suffering machines

Azeem Azhar
Apr 30, 2025

Hi all,

On Sunday, we wrote about an intriguing claim from Anthropic researcher Kyle Fish, who suggests there’s a 15% chance that Claude, or similar AI systems, might be conscious today.

I’ve invited a good friend of EV and one of the world’s leading neuroscientists, Anil Seth, to give us an informed perspective on this, drawn from his decades of studying consciousness.

Anil Seth is Professor of Cognitive and Computational Neuroscience at the University of Sussex and author of Being You: A New Science of Consciousness. If you haven’t read his book yet, you should – and I recommend Anil’s TED Talk on the ā€œcontrolled hallucinationā€ of perception as a follow-up to this piece.

Thank Anil by sharing this post with your network.

– Azeem

The dream of sentient machines

By Anil Seth
Director, Sussex Centre for Consciousness Science
University of Sussex

Ever since humanity began telling stories, we’ve been enthralled by the possibility of bringing inanimate things to life. Golems, automata, robots… the fascination is timeless. Now, our technologies can carry on conversations so smoothly that, in some cases, it can be difficult to resist the feeling of being in the presence of a real, conscious mind. With some of the rapidly improving AI language models, it’s no surprise that people wonder, ā€œIs there a ā€˜there’ there?ā€

Kevin Roose’s article in the New York Times on Anthropic’s ā€œAI welfareā€ research is just one recent signal that this is no longer a purely academic question.

From my perspective, we have to distinguish carefully between an AI that behaves intelligently or even empathically and an AI that experiences anything at all. In my work – including my book Being You and a forthcoming article in Behavioral and Brain Sciences – I stress that intelligence alone doesn’t amount to consciousness. An algorithm can solve problems or produce human-sounding dialogue without any felt awareness behind it. Anthropic’s researcher Kyle Fish, however, has a different view, as Roose wrote:

It seems to me that if you find yourself in the situation of bringing some new class of being into existence that is able to communicate and relate and reason and problem-solve and plan in ways that we previously associated solely with conscious beings, then it seems quite prudent to at least be asking questions about whether that system might have its own kinds of experiences.

Put differently, language in, language out – even if very sophisticated – doesn’t necessarily yield, or indicate, a stream of subjective experience.

My skepticism about near-term conscious AI comes partly from neuroscience and biology. In my view, consciousness in biological organisms is likely to be deeply tied to the fact that we’re alive. We have bodies that must be fed and protected; we regulate our internal states through homeostatic processes; we evolved to respond to pain and pleasure for survival. Every neuron, every cell in our body, is a tiny metabolic furnace, continually regenerating the conditions for its own continued existence, as well as ensuring the integrity of the entire organism. Through these deeply embodied mechanisms, the brain generates a constantly updated sense of ā€œwhat it is like to be me.ā€

AI, by contrast, is fundamentally a pattern-recognition and generation system, running on silicon hardware. Whether you call it ā€˜computational functionalism’ or something else, there’s no solid proof that the kinds of operations AIs perform – statistical predictions, or indeed computations of any kind – will produce felt sensations. As I argue in my paper, the idea that computation is sufficient for subjective experience likely reflects an overextension of the metaphor of the ā€˜brain as a computer’, rather than an insight into the nature of consciousness itself.

I’m not alone in thinking this way. Philosopher Peter Godfrey-Smith, to take one example, argues that consciousness probably emerged through the rich interplay of bodies and electrochemical nervous systems in natural environments. If that’s the case, a large language model could keep advancing in its abilities, yet remain devoid of any subjective spark. Computation of any kind just wouldn’t be up to the job.

But what if we’re missing something?

Another perspective would be that, once you reach sufficient complexity in an AI system, consciousness ā€œswitches on.ā€

[Illustration: Fritz Kahn (1888–1968)]

I don’t have any reason to believe this is happening with the models that we have now, like those behind Claude, ChatGPT and Gemini – but the possibility cannot be definitively ruled out.

In 2023, a group of researchers, led by Patrick Butlin and Robert Long, examined a range of existing AI models, looking for what they called ā€˜indicator properties’ of consciousness. These indicator properties were derived from current neuroscientific theories of consciousness, reflecting things like ā€˜recurrent processing’ and ā€˜global information broadcast’. The strategy they argued for was to look inside AI networks (using mechanistic interpretability tools) for features predicted by theories of consciousness, rather than being fooled by their outward behavior.

This is a good idea, but – crucially – their approach still assumes that computation of some kind is sufficient for consciousness. And even if you do make this assumption, the researchers still concluded that no current AI systems are conscious. But they also suggested that future AIs could display all the necessary indicator properties.
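
To make the logic of this approach concrete, here is a toy sketch in Python – not code from the Butlin and Long paper, and the indicator names, judgments and scoring are purely illustrative assumptions. The point it captures is that each indicator property is judged from a system’s internal organisation rather than its outward behavior, and that the tally can only shift a credence, not settle the question.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str        # property suggested by a theory of consciousness
    satisfied: bool  # judged from the system's internals, not its outputs

def credence_shift(indicators: list[Indicator]) -> float:
    """Return the fraction of indicator properties judged present.

    A heuristic only: satisfying every indicator would not prove
    consciousness, and the exercise still assumes that computation
    could be sufficient in the first place.
    """
    if not indicators:
        return 0.0
    return sum(i.satisfied for i in indicators) / len(indicators)

# Hypothetical, made-up assessment of a present-day language model.
assessment = [
    Indicator("recurrent processing", False),
    Indicator("global information broadcast", False),
    Indicator("higher-order self-monitoring", False),
    Indicator("agency and embodiment", False),
]

print(f"Indicators satisfied: {credence_shift(assessment):.0%}")
```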

In the absence of a rigorous and empirically established theory of consciousness, we will not be able to know for sure whether an AI is – or is not – conscious. The best we can do is make informed guesses.

It’s worth noting that this challenge isn’t specific to AI. We face similar conundrums when trying to decide whether non-human animals are conscious, or human beings with severe brain damage, or ā€˜cerebral organoids’. In each case, we have to judge what the relevant similarities and differences are, from the benchmark of a conscious adult human being. My colleagues and I have been exploring strategies for doing this in a recent paper in Trends in Cognitive Sciences.

There is an emerging consensus among AI researchers, neuroscientists and philosophers that real evidence for AI consciousness must come from careful, theoretically guided empirical investigation, rather than from anecdotal impressions. And, equally importantly, that in the absence of the holy grail of a full scientific explanation of consciousness, the best we can do is shift our credences rather than make definitive statements.

Illusions of mind

So why do people like Kyle Fish or Blake Lemoine (the Google engineer who, way back in 2022, claimed a chatbot was sentient) seem so convinced otherwise?
