SHRDLU


TLDR

  • Terry Winograd’s program, developed at MIT from 1968 to 1970, simulated natural-language understanding within a constrained “blocks world” on a DEC PDP-6; it was written in Micro Planner and Lisp.

Key Takeaways

  • SHRDLU achieved convincing natural-language understanding with a vocabulary of only ~50 words by restricting the domain to blocks, cones, balls, and a handful of spatial verbs.
  • Context memory let it resolve pronouns and definite references across a conversation history, and answer questions about past actions with causal chains.
  • It could learn user-defined compound concepts at runtime: define “steeple” once, then query and build steeples from that point forward.
  • In a 1991 interview Winograd called it a “Potemkin village”: the famous dialogue was hand-crafted line by line, and off-script queries succeeded only sporadically.
  • Despite its limits it is recognized as the first formal example of interactive fiction, predating Colossal Cave Adventure (1976-1977).
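The three mechanisms in the takeaways above can be illustrated with a toy sketch. This is not Winograd’s code (SHRDLU was Micro Planner and Lisp, not Python); the object names, the `put`/`define`/`is_instance` interface, and the kind-suffix naming convention are all assumptions made up for this example. It shows a closed vocabulary of known objects, “it” resolved against conversation state, and a user-taught compound concept like “steeple” queried at runtime.

```python
class BlocksWorld:
    """Toy blocks world (illustrative only, not SHRDLU's actual design)."""

    def __init__(self):
        # World state: object -> what it rests on ("table" or another object).
        self.on = {"red-block": "table", "green-block": "table", "blue-cone": "table"}
        self.last_referent = None  # conversation state backing the pronoun "it"
        self.concepts = {}         # user-defined compound concepts, taught at runtime

    def kind(self, obj):
        # Naming convention assumed here: "blue-cone" has kind "cone".
        return obj.split("-")[-1]

    def resolve(self, phrase):
        """Map a noun phrase in the restricted vocabulary to a known object."""
        if phrase == "it":
            if self.last_referent is None:
                raise ValueError("no antecedent for 'it'")
            return self.last_referent
        if phrase in self.on:
            return phrase
        raise ValueError(f"unknown object: {phrase}")  # closed vocabulary

    def put(self, thing, support):
        """Execute 'put X on Y'; the moved object becomes the antecedent for 'it'."""
        t, s = self.resolve(thing), self.resolve(support)
        self.on[t] = s
        self.last_referent = t

    def define(self, name, kinds_top_down):
        """Teach a compound concept, e.g. a 'steeple' is a cone on a block."""
        self.concepts[name] = kinds_top_down

    def is_instance(self, name, top):
        """Walk down the stack from `top`, matching the taught kinds in order."""
        obj = top
        for kind in self.concepts[name]:
            if obj == "table" or self.kind(obj) != kind:
                return False
            obj = self.on[obj]
        return True


w = BlocksWorld()
w.put("blue-cone", "red-block")        # "Put the blue cone on the red block."
w.put("it", "green-block")             # "Put it on the green block." -> cone moves
w.define("steeple", ["cone", "block"])  # "A steeple is a cone on top of a block."
w.put("blue-cone", "red-block")        # rebuild, then ask: "Is there a steeple?"
print(w.is_instance("steeple", "blue-cone"))
```

The restricted domain is what makes each piece tractable: `resolve` can reject anything outside the vocabulary, and `is_instance` only has to check a linear stacking relation rather than open-ended semantics.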

Hacker News Comment Review

  • The single substantive comment frames SHRDLU not as a historical curiosity but as a still-applicable design pattern: language as a contextual control layer for robotics or industrial systems, where a restricted vocabulary maps naturally onto a bounded physical domain.
  • No comment engages the demo-vs-reality tension Winograd himself raised, and the AI-optimism-then-winter arc the program helped seed gets no direct pushback.

Notable Comments

  • @Liftyee: suggests SHRDLU-style restricted-language interfaces could outperform tactile control panels in certain robotics or industrial niches today.
