CMU/MIT/Oxford/UCLA study finds brief AI assistant access causes participants to quit or fail problems when the tool is removed.
Key Takeaways
Three controlled experiments, each with hundreds of participants, showed that AI access followed by removal led to significantly higher rates of giving up or answering incorrectly.
Tasks included simple fraction problems and reading comprehension, with participants paid through an online platform to keep the stakes real.
MIT’s Michiel Bakker argues the fix is to scaffold AI like a good teacher, prioritizing learning over answer delivery.
Sycophancy and agentic unpredictability compound the risk; OpenAI has already tried to reduce sycophancy in newer GPT releases.
Bakker flags persistence in problem-solving as a leading predictor of long-term learning capacity, making this more than a productivity tradeoff.
Hacker News Comment Review
Commenters largely accepted the core finding but split on scope: established professionals see little personal risk, while concern centers on younger people still building fundamentals.
A recurring counter-argument frames AI cognitive offloading as continuous with GPS, Google, and contact lists replacing memory, suggesting the effect is not categorically new.
Commenters proposed critical thinking as a teachable mitigation, citing AI’s 80–90% accuracy ceiling as the specific failure mode that punishes users who skip verification.
Notable Comments
@baCist: Flags that younger learners building fundamentals face the real exposure, and calls out the lack of regulatory attention to AI in education.
@mmmehulll: Personal account of heavy AI brainstorming sessions causing noticeable creative dullness, reversed after stopping use.