Algorithmic visibility now rewards confident performance over competence, inverting professional incentives and making honest work economically uncompetitive.
Key Takeaways
Frankfurt’s 1986 “On Bullshit” framing applies directly: bullshitters optimize for appearing competent, not for truth, making them more corrosive than liars.
A 2024 study of 6,500+ U.S. state legislators found distributing low-credibility information correlated positively with platform attention.
LLMs collapsed the marginal cost of producing convincing, on-topic text, industrializing grift the way shipping containers industrialized trade.
Graeber’s “bullshit jobs” thesis is empirically contested (only ~8% of European workers reported pointless work), but the artifact-for-artifact-audience dynamic holds.
Careful professionals who refuse to fake demos or overclaim expertise are out-competed for funding, speaking slots, and visibility by louder performers.
Hacker News Comment Review
Discussion is thin but pointed: one commenter immediately extended Frankfurt’s definition to LLMs, noting that under that definition all language models are structurally bullshitters.
Commenters propose retreating into smaller, reputation-ranked communities such as Discord servers, rather than pursuing platform-level reform.
Notable Comments
@dmitrygr: “by definition, all LLMs are bullshitters” – applies Frankfurt’s framework directly to language models.