The AI Zombification of Universities


TLDR

  • A UChicago philosophy student documents campus-wide LLM capture: take-home tests, problem sets, student newspapers, and now faculty lectures show signs of AI generation.

Key Takeaways

  • A 40-point gap between take-home and in-person Logic exam scores indicates that take-home assessments no longer provide a meaningful measure of learning.
  • AI use spread from low-rigor business-econ classes outward to standard econ, humanities, and student journalism before reaching faculty lecture prep.
  • The University of Chicago’s Maroon published two fully AI-written articles that went undetected for months; the author observes similar patterns across other campus publications.
  • UChicago and peer institutions (Harvard, Yale, Columbia) are committing $50M-$150M+ to expand AI in pedagogy, accelerating the dynamic the essay critiques.
  • The Scott Alexander “Whispering Earring” model frames progressive LLM offloading as a dependency loop, not discrete cheating events.

Hacker News Comment Review

  • Commenters broadly agreed the root problem predates AI: credential-seeking over learning means the cheating vector shifts but the incentive structure stays broken.
  • Several engineers pushed back on the framing as an AI-specific crisis, noting that in-person exams with no technology were the norm and remain a straightforward structural fix professors can choose today.
  • Debate split on whether offloading cognitive work is categorically different from prior shortcuts like cramming; no consensus on where the line is between tool use and skill atrophy.

Notable Comments

  • @dgellow: Notes the 40-point gap makes take-home interviews, not just take-home exams, unreliable, extending the problem beyond academia into hiring.
  • @paulorlando: Reports that no-laptop in-class handwritten quizzes on readings have meaningfully reduced the problem in his own courses.
