A 2025 Stanford study of 2.2 billion posts found that only 3% of users post severely toxic content, yet Americans estimate the share at 43% – a distortion the study attributes to engagement-driven ranking algorithms.
Key Takeaways
On Twitter/X, toxic tweets receive ~86% more retweets; 0.3% of users share 80% of contested news; 6% of users produce 73% of political tweets.
The misperception causes five compounding failures: majority self-censorship, false consensus in the vocal minority, cross-partisan caricature, politicians optimizing for perceived extremes, and inflated hostility.
Both Democrats and Republicans overestimate the other side’s support for political violence by 3–4×; correcting that single misperception measurably reduced partisan hostility for a full month.
The proposed fix, “Community Check,” is an open-source design layer showing statistically representative poll data beneath contentious posts – distinct from fact-checking or engagement-based audience polls.
Common knowledge problem: individual awareness of the distortion is insufficient; the correction must be publicly visible so everyone knows that everyone knows.
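The common-knowledge point can be illustrated with a toy two-level belief model. Only the 3% figure comes from the study; the amplification factor, the speaking threshold, and the decision rule below are hypothetical illustration choices, not anything specified by Community Check:

```python
# Toy two-level belief model of the common-knowledge problem.
# TOXIC_SHARE is the study's measured figure; AMPLIFICATION and
# THRESHOLD are invented parameters for illustration only.

TOXIC_SHARE = 0.03    # measured share of severely toxic posters
AMPLIFICATION = 20    # hypothetical feed over-representation of toxic content
THRESHOLD = 0.25      # hypothetical: moderates post only below this estimate

def perceived_toxic_share() -> float:
    """Toxic fraction of a feed that over-samples toxic posts."""
    toxic_weight = TOXIC_SHARE * AMPLIFICATION
    return toxic_weight / (toxic_weight + (1 - TOXIC_SHARE))

def speaks_up(own_estimate: float, estimate_of_others: float) -> bool:
    """A moderate posts only if she believes the climate is moderate AND
    believes everyone else believes it too (second-order belief)."""
    return own_estimate < THRESHOLD and estimate_of_others < THRESHOLD

feed = perceived_toxic_share()                    # ~0.38, far above the true 0.03
private_fix = speaks_up(TOXIC_SHARE, feed)        # False: only first-order belief corrected
public_fix = speaks_up(TOXIC_SHARE, TOXIC_SHARE)  # True: a visible correction fixes both levels
```

The sketch shows why a privately learned correction fails: a user who knows the true 3% still expects everyone else to see the amplified feed, so the second condition stays false until the correction is publicly visible.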
Hacker News Comment Review
Commenters debated whether the “silent majority” framing is accurate: some argued the majority stays quiet not out of fear but for lack of motivation to broadcast moderate views, which undermines the self-censorship narrative.
Platform incentive misalignment drew skepticism – the distortion is a product feature, not a bug, so grassroots interventions like Community Check face structural resistance without regulatory pressure or a browser-extension end-run.
Technical reviewers flagged two open problems: bot accounts could corrupt the “random sample” baseline, and poll trustworthiness depends heavily on who controls sampling and question phrasing.
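The bot-corruption concern can be made concrete with a small sampling sketch. Everything here – pool sizes, opinion values, and the bot-score distributions – is invented for illustration; real bot detection is far harder than thresholding a score:

```python
import random

# Sketch of how coordinated bots can skew a "random sample" poll baseline.
# All numbers (pool sizes, opinions, bot scores) are hypothetical.
random.seed(42)

# Humans: opinions centered on 0; low hypothetical bot scores.
humans = [(random.gauss(0.0, 1.0), random.random() * 0.4) for _ in range(9000)]
# Coordinated bots: one shared opinion; high hypothetical bot scores.
bots = [(2.5, 0.7 + random.random() * 0.3) for _ in range(1000)]
pool = humans + bots

def poll_mean(accounts, k=2000, max_bot_score=None):
    """Mean opinion of a uniform random sample, optionally dropping
    accounts whose bot score exceeds a threshold."""
    if max_bot_score is not None:
        accounts = [a for a in accounts if a[1] <= max_bot_score]
    sample = random.sample(accounts, k)
    return sum(opinion for opinion, _ in sample) / k

naive = poll_mean(pool)                        # pulled toward the bots' shared opinion
filtered = poll_mean(pool, max_bot_score=0.5)  # close to the human mean of 0
```

The gap between `naive` and `filtered` is exactly the reviewers' second worry in disguise: whoever sets the threshold (and phrases the question) controls what the “representative” number says.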
Notable Comments
@energy123: proposes banning recommendation engines on large platforms in favor of chronological follow-feeds as a structural alternative to overlay interventions.
@MatrixMan: “Could be a browser extension” – notes Community Check needs no platform permission to ship.
@robot-wrangler: flags the spec’s open-questions section as simulation and theorem targets, pointing to camel-ai/agent-trust and model checkers with common-knowledge primitives as relevant tooling.