Paper argues computational functionalism commits the "Abstraction Fallacy" by treating symbolic computation as intrinsic to physics rather than as a mapmaker-dependent description.
Key Takeaways
Paper on PhilArchive (March 2026) distinguishes simulation (vehicle causality, behavioral mimicry) from instantiation (content causality, intrinsic physical constitution).
Symbolic computation requires an external experiencing agent to discretize continuous physics into finite states; it is not self-grounding.
Algorithmic symbol manipulation is structurally incapable of instantiating experience, regardless of biological or silicon substrate.
A conscious artificial system would require a specific physical constitution, not a syntactic architecture; the paper is explicit that this is not a bio-exclusivity argument.
The paper claims we can assess AI sentience now by clarifying computation ontology, without waiting for a complete theory of consciousness.
Hacker News Comment Review
Commenters broadly found the paper opaque: multiple readers struggled to extract a concrete logical argument beyond restating philosophical intuitions with technical vocabulary.
A pointed critique: the abstract's own claim that consciousness requires "an active, experiencing cognitive agent" implicitly supports embodied AI with continuous sensory input and agency, undermining the paper's dismissive framing toward AI.
Skeptics questioned whether the argument assumes prior knowledge of human consciousness to make the comparison, leaving a foundational circularity unaddressed.
Notable Comments
@jstanley: “uses a lot of big words to paper over the fact that it’s really a philosophical opinion rather than a logical argument”
@aaroninsf: notes the abstract "very directly and literally denies the titular claim" and points to continuous-input world-model AI as the real path forward that the paper inadvertently gestures toward.