A builder who reverse-engineered DOOM into QR codes can't shake the ethical weight of lab-grown neurons trained to play DOOM via reward mechanisms identical to those used in LLM training.
Key Takeaways
A company grew ~200,000 human neurons in a lab and trained them to play DOOM using reinforcement-style reward signals; the resulting system outperformed the author at the game (a sketch of such a reward loop follows this list).
At 200,000 neurons, the culture exceeds the neuron counts of jellyfish and worms, blurring the threshold arguments used to dismiss consciousness concerns.
The system feeds visual data to neurons that must interpret it; the author asks whether this constitutes “seeing” in any meaningful sense.
Commercial incentives are real: biological neural tissue offers higher storage density, potentially better retrieval, and far lower power draw than silicon.
No regulatory or ethical framework currently governs biocomputing consciousness thresholds; the author sees no conclusion, only discomfort.
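The article does not publish the training code, so the following is a minimal sketch of what a closed-loop reward protocol for a cultured-neuron chip might look like. The `NeuronChip` driver and gym-style `DoomEnv` wrapper are hypothetical, and the encoding, decoding, and reward patterns are illustrative guesses loosely modeled on published DishBrain-style protocols, not the vendor's actual method.

```python
import numpy as np

# Hypothetical stand-ins: the vendor's device API and the DOOM wrapper
# are not public, so every import and method here is an assumption.
from biochip import NeuronChip   # hypothetical electrode-array driver
from doom_env import DoomEnv     # hypothetical gym-style DOOM environment

N_ELECTRODES = 64
# A fixed, structured stimulus used as "reward" (DishBrain-style protocols
# pair predictable input with good outcomes, noise with bad ones).
PREDICTABLE_PATTERN = np.tile([1.0, 0.0], N_ELECTRODES // 2)

def encode_frame(frame, n=N_ELECTRODES):
    # Downsample the game frame to one scalar per electrode (illustrative only).
    flat = frame.astype(float).ravel()
    return flat[np.linspace(0, flat.size - 1, n).astype(int)] / 255.0

def decode_action(spikes, n_actions=4):
    # Map spike counts in electrode groups to a discrete game action.
    groups = np.array_split(spikes, n_actions)
    return int(np.argmax([g.sum() for g in groups]))

chip = NeuronChip(num_electrodes=N_ELECTRODES)
env = DoomEnv()

obs = env.reset()
for step in range(10_000):
    chip.stimulate(encode_frame(obs))         # present the "visual" input
    spikes = chip.read_spikes(window_ms=50)   # read the tissue's response
    obs, reward, done, _ = env.step(decode_action(spikes))

    # Closed-loop reward: structured stimulation for good outcomes,
    # unstructured noise for bad ones.
    chip.stimulate(PREDICTABLE_PATTERN if reward > 0
                   else np.random.rand(N_ELECTRODES))
    if done:
        obs = env.reset()
```

The reinforcement-style element is the final branch: outcome-dependent stimulation is the only channel through which "reward" reaches the tissue.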
Hacker News Comment Review
The technical setup is more modest than headlines suggest: the neuron chip is wrapped in a full PyTorch stack, raising questions about how much work the neurons actually do versus the surrounding ML scaffolding.
Commenters split on consciousness: one camp cites Mark Solms's argument that consciousness originates in the brainstem via embodied emotion signals, which would make a petri dish of cortical neurons an unlikely candidate; another camp notes that no scientific theory can reliably distinguish conscious from non-conscious systems at all.
A recurring thread asks whether the neurons could be swapped for a random number source and still produce similar gameplay, echoing the methodology critique from the qday prize context; a sketch of that control experiment appears below.
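To make the critique concrete, here is a minimal sketch of the ablation commenters are asking for, comparing gameplay with the neurons in the loop against a uniform random action source. `DoomEnv`, `chip_decode_action`, and the action count are hypothetical stand-ins, since neither the game wrapper nor the chip interface is described in detail.

```python
import numpy as np

# Hypothetical stand-ins: neither the game wrapper nor the chip API is public.
from doom_env import DoomEnv             # gym-style DOOM environment (assumed)
from biochip import chip_decode_action   # frame -> action via the neurons (assumed)

N_ACTIONS = 4

def run_episode(env, policy, seed):
    """Play one episode with a given action source; return the total score."""
    rng = np.random.default_rng(seed)
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs, rng))
        total += reward
    return total

env = DoomEnv()
neuron_policy = lambda obs, rng: chip_decode_action(obs)       # tissue in the loop
random_policy = lambda obs, rng: int(rng.integers(N_ACTIONS))  # tissue removed

scores_neurons = [run_episode(env, neuron_policy, s) for s in range(30)]
scores_random = [run_episode(env, random_policy, s) for s in range(30)]

# If the two score distributions overlap heavily (e.g. under a
# Mann-Whitney U test), the neurons' causal contribution is doubtful.
```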
Notable Comments
@pjs_: Points to the actual GitHub repo showing a full PyTorch wrapper around the neuron demo, suggesting the neurons' causal role is unclear (see the sketch after this list).
@rolph: Links visual-system neuroscience literature arguing the current setup is a reflex circuit, not a perceptual system, with a significant gap to close before "seeing" applies.
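For readers wondering what a "full PyTorch wrapper" around a neuron chip might look like, here is a hedged sketch of the general pattern, with a hypothetical `NeuronChip` driver standing in for the repo's actual device interface.

```python
import torch
import torch.nn as nn

# Hypothetical driver; the real repo's device interface may differ.
from biochip import NeuronChip

class NeuronLayer(nn.Module):
    """Wraps the biological chip as a non-differentiable black box."""
    def __init__(self, chip):
        super().__init__()
        self.chip = chip  # no trainable parameters live here

    def forward(self, x):
        # Stimulate with each encoded row, read back spike counts
        # (chip.query is an assumed method returning one vector per row).
        out = [self.chip.query(row) for row in x.detach().cpu().numpy()]
        return torch.as_tensor(out, dtype=torch.float32, device=x.device)

chip = NeuronChip()
model = nn.Sequential(
    nn.Linear(4096, 64),  # silicon encoder: game frame -> stimulation pattern
    NeuronLayer(chip),    # ~200,000 neurons in a dish
    nn.Linear(64, 4),     # silicon decoder: spike counts -> action logits
)
# Because no gradient flows through the tissue, plain backprop can only
# train the decoder (the encoder would need surrogate-gradient tricks).
# Whatever competence gradient descent produces therefore lives largely
# in the silicon layers, which is exactly the commenters' concern.
```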