Notes from the Middleground

Above the Fray

Understanding Consciousness

We have good reason to suspect LLMs fall short of it (and will continue to do so)

Damon Linker
May 08, 2026
[Image: “Artificial intelligence brain in network node,” stock photo by Yuichiro Chino (Getty Images)]

Thanks to evolutionary biologist and outspoken New Atheist Richard Dawkins, a lot of people have been talking about consciousness over the past week, including my friend and fellow Substacker Noah Millman.

For those who haven’t been following along, Dawkins suggested in an essay for Unherd that as far as he could determine Anthropic’s Claude (which he renamed Claudia) is conscious—or rather that he didn’t know how he could determine that it isn’t conscious. Now, proving negatives is notoriously challenging, and one imagines it would be especially difficult in the case of a machine (as Millman notes) that is specifically designed to appear to be conscious to human beings. But even leaving that aside, I think that with this intervention Dawkins has mainly demonstrated that he hasn’t thought deeply about what we mean when we describe ourselves as conscious, and therefore what it would mean to attribute consciousness to a non-human entity.

This will be a post in which I try to do that work in a more satisfactory way. In my view, the reason Dawkins, like most other armchair and professional philosophers and some cognitive scientists, gets this wrong is that they all treat a broadly empiricist model of the mind as a given. This is the model in which consciousness is a function of the interaction between a subjective thinking mind and an external world accessed via sense organs. So: If we can build a sufficiently sophisticated artificial mind or intelligence (an LLM) and enable it to interact with external stimuli (less through perception via sense organs than by digesting mountains of human writing and by a person feeding it prompts and posing questions to it), then we’ve established the conditions for consciousness. If we fail to achieve it, that’s because the artificial mind still isn’t sufficiently sophisticated. We need to make it more sophisticated, either by updating its hardware or software, or by feeding it more information about the world so that it can “evolve” to become conscious, which should happen when the artificial mind approaches (or surpasses) the human mind in processing speed and complexity.

I want to suggest below that this empiricist model of the mind and its interactions with the world leaves out something crucially important—something an artificial mind like Claude doesn’t possess, is nowhere close to possessing, and most likely will never possess. And without that added element, Claude and other LLMs cannot possibly attain consciousness.
