Scoring Show HN submissions for AI design patterns


TL;DR

  • A scoring rubric identifies visual AI-generation tells in Show HN submissions; the 301-point, 214-comment thread debates whether HN’s proof-of-work signal has collapsed in 2026.

Key Takeaways

  • The rubric catalogs recurring AI frontend patterns (icon-topped feature-card grids, rounded-rectangle dashboards) into a scorable, reusable taxonomy.
  • 301 upvotes and 214 comments signal that this “AI tell” detection problem resonates broadly across builders, judges, and platform maintainers.
  • The proof-of-work value of shipped code is degrading: AI output now makes volume and polish unreliable proxies for effort or correctness.
  • The scoring approach has direct application for hackathon judges and technical recruiters trying to assess genuine authorship.
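The rubric described above could plausibly be operationalized as a weighted pattern check over a page’s markup. The sketch below is a minimal illustration of that idea, not the post’s actual rubric: the tell names, regexes, and weights are all hypothetical, chosen only to echo the patterns the summary mentions (feature-card grids, rounded-rectangle panels).

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class Tell:
    """One visual 'AI tell': a name, a regex over the markup, and a weight."""
    name: str
    pattern: re.Pattern
    weight: int

# Illustrative tells only; the post's real taxonomy and weights differ.
RUBRIC = [
    Tell("icon-topped feature-card grid",
         re.compile(r'class="[^"]*\bfeature-card\b', re.I), 3),
    Tell("rounded-rectangle dashboard panel",
         re.compile(r"\brounded-(?:xl|2xl|3xl)\b"), 2),
    Tell("gradient hero heading",
         re.compile(r"\bbg-gradient-to-\w+\b"), 1),
]

def score(html: str) -> tuple[int, list[str]]:
    """Sum the weights of every tell whose pattern appears in the markup."""
    hits = [t for t in RUBRIC if t.pattern.search(html)]
    return sum(t.weight for t in hits), [t.name for t in hits]
```

A hackathon judge could run `score()` over each submission’s landing page and sort by total, e.g. `score('<div class="feature-card rounded-2xl">')` matches the first two tells for a total of 5.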

Hacker News Comment Review

  • Commenters broadly agree AI-generated UIs have recognizable, scorable visual signatures; rounded-rectangle grids are the most-cited pattern not fully captured by the post’s own list.
  • Sharp debate on evaluation standards: the core tension is applying 2016-era effort expectations to 2026 AI-assisted output, with no community consensus on where the bar should move.
  • dang links HN’s showlim policy (restricting new-account posting) as a distinct cause of the submission downtick visible in the post’s chart, separating it from AI quality degradation.

Notable Comments

  • @simonw: headline misuses “vibe coded”; actual post content is a taxonomy of visual design traits in AI-generated frontends, not a coding workflow critique.
  • @onetimeusename: non-working, unattributed AI Show HN submissions have made GitHub “basically gone” as a resume signal.
  • @seism: “Every hackathon should use this” – the rubric has an immediate practical audience beyond HN itself.
