A Canadian gov-tech worker lays out a personal framework for opting out of ChatGPT, Copilot, Claude, and Gemini entirely, borrowing a metaphor from vegetarianism.
Key Takeaways
“Generative AI vegetarianism” means disabling optional AI settings (Copilot, Gemini, Apple Intelligence), not re-sharing AI-produced content, and preferring products that skip generative AI features.
The author distinguishes generative AI from spam filters, facial recognition, OCR, and algorithmic playlists: those remain acceptable; only LLM-based text and image generation is the target of the boycott.
Nine distinct objections are listed: bias with no user control, skill atrophy, a tendency toward cliché, job displacement for writers and artists, power concentration, scraping that disincentivizes open knowledge sharing, vendor lock-in, accountability sinks, and energy and resource use.
The author cites “accountability sink” as a systemic risk: harmful decisions get obscured behind AI systems, making it harder for civil society or media to trace responsibility.
Choosing a categorical opt-out is framed as simpler than case-by-case evaluation, the same ergonomic logic vegetarians apply to food choices.
Hacker News Comment Review
The naming took the most heat: commenters rejected the vegetarian metaphor as inaccurate and sanctimonious, proposing cleaner labels such as “slop-free,” “organic software,” and “GenAI-free.”
One commenter asked whether the framework implies room for “ethically-sourced AI” (peer-to-peer compute contributions for open models), a question the article does not address.
No commenter engaged with the accountability sink or vendor lock-in arguments; discussion stayed almost entirely at the branding level.
Notable Comments
@feral_coder: proposes “slop-free” with a dry usage example, the sharpest alternative label in the thread.
@orangebread: asks whether the framework leaves room for ethically-sourced, p2p-trained open models, a gap the article ignores.