Meta’s aggressive AI push, which reportedly includes keystroke-level surveillance to generate training data for internal AI systems, is creating widespread employee dissatisfaction.
Key Takeaways
Meta has deployed systems that track every employee keystroke and task to generate AI training data.
A Meta spokesperson claimed safeguards protect sensitive content and the data has no secondary use.
Employee morale is reportedly suffering as AI integration reshapes roles and oversight at the company.
Hacker News Comment Review
Commenters are skeptical of Meta’s data-use assurances; the gap between PR claims and actual surveillance scope is seen as obvious.
One commenter notes that Meta generated roughly 10% of its 2025 revenue via AI while reportedly enabling scam operations targeting users, framing internal worker concerns within a broader pattern.
Broader debate emerged on whether LLMs structurally centralize power, with the argument that self-hosting pre-trained models is the only realistic check for individuals and small companies.
Notable Comments
@aresant: “the rank and file are somehow astounded that the great white sucker-shark they aligned themselves with has turned on them”
@ahartmetz: argues LLMs are uniquely centralizing because training at scale is infeasible outside large organizations; downloading pre-trained weights is the only individual recourse.