Ars Technica has published its AI editorial policy: AI is permitted for research navigation and background summarization but barred from generating claims or attributing them to named sources.
Key Takeaways
Reporters may use vetted AI tools to navigate large document volumes and search datasets, but must disclose all AI use to editors before publication.
AI must not generate, extract, or summarize any material attributed to a named source: no direct quotes, paraphrases, or characterizations of someone's views.
AI-generated images, audio, and video are prohibited as authentic documentation; synthetic media used in AI-topic reporting must carry a disclosure adjacent to the material.
Accountability is explicitly non-transferable: authors bear full responsibility for accuracy regardless of which tools assisted them.
The policy predates its public release; Ars is publishing it so readers can see the rules directly rather than simply trusting that they exist.
Hacker News Comment Review
The policy dropped weeks after Ars fired a reporter over fabricated AI-generated quotes and issued a formal retraction; commenters read the disclosure as reactive damage control, not proactive standards-setting.
The research-summarization carve-out drew sharp criticism: allowing AI to summarize background documents reintroduces the same chain-of-custody failures LLMs are known for, even if the resulting claims are never formally "attributed."
The "human-directed AI visuals" carve-out was broadly dismissed as meaningless; commenters argued that editorial prompting does not operationally distinguish Ars's output from that of any other AI-image publisher.
Notable Comments
@legitster: Frames the systemic risk as AI consuming original content while reducing the economic incentive to produce it, a training-data feedback loop problem.
@defrost: Flags Crikey as a parallel: the outlet banned AI in 2024, then retracted an AI-assisted article in 2026, showing that publishing a policy does not prevent policy failure.