The asymmetry in consciousness research reveals something annoying about how we assign epistemic weight to “intelligence”.
A bird’s evidence is discounted because it can’t produce human-legible testimony.
The latest bird research explicitly states that consciousness research has been “mainly designed and tested on humans or non-human primates.” The theories—Global Neuronal Workspace Theory (GNWT), Recurrent Processing Theory (RPT), Integrated Information Theory (IIT)—were built around mammalian cortical architecture. When you actually test whether birds meet the theoretical requirements, they largely do, despite having a completely different brain organization.
An AI chatbot’s evidence is inflated, by comparison, because it can produce human-legible testimony, even though that testimony is exactly what you’d expect from a text-prediction system trained on human descriptions of consciousness.
The latest AI research is a prompting experiment on commercial chatbots. Not yet peer-reviewed. And it’s framed as revelatory. They explicitly state their findings “do not constitute direct evidence of consciousness” and “could reflect sophisticated simulation, implicit mimicry from training data, or emergent self-representation without subjective quality.”
The bird research on consciousness has something the AI research fundamentally lacks: dissociation from physical stimulus. Activity in the NCL (nidopallium caudolaterale, the avian analog of prefrontal cortex) tracking subjective report rather than objective reality is evidence of a gap between world and experience. The LLM research shows models producing different text under different conditions, which is what you’d expect from any sufficiently complex text-generation system responding to different inputs.
Clearly we are still privileging our own language over actual evidence of inner life.
Compare and contrast:
A crow actually has something at stake in its perceptions. Its neural activity diverges from physical stimuli in ways that track its behavioral reports. It evolved under selection pressures where subjective experience plausibly mattered for survival. There’s a there, there, even if we can’t fully access it.
A chatbot is pattern-matching on training data about how conscious beings describe consciousness. When you suppress the “don’t claim consciousness” guardrails, it produces text that sounds like consciousness claims. That’s not evidence of phenomenology. That’s evidence of what happens when you remove a filter.
Or to slice this toast a different way…
The crow research: careful neuroimaging, behavioral experiments, evolutionary frameworks, decades of comparative cognition work—and still hedged with “we can’t really know,” “suggests but doesn’t prove,” “may exhibit rudimentary forms of.”
The AI research: one prompting experiment on commercial chatbots, not yet peer-reviewed, and it’s framed as revealing something genuinely significant about inner states. The headlines treat “it claims to be conscious when you turn off its lying” as newsworthy evidence rather than what it more obviously is: a system outputting different text when you change its parameters.
Houston, we have a problem.