AI-assisted code faces three simultaneous legal exposures: no copyright protection, employer ownership via work-for-hire clauses, and invisible GPL contamination inherited from training data.
Key Takeaways
The US Copyright Office (Jan 2025) and the Supreme Court (Mar 2026, Thaler appeal denied) settled that AI-generated code without meaningful human authorship is not copyrightable.
“Meaningful human authorship” means directing the architecture and iteratively rejecting and revising output, not merely prompting toward an objective; accepting verbatim Claude Code output likely leaves that code unprotected.
Broad IP assignment clauses that include “company-licensed tools” may give employers a claim over side projects built with Claude Code, Cursor, or Copilot even on personal time.
GPL contamination is a live risk: the chardet rewrite dispute (2026) tested whether an AI rewrite of an LGPL library produces a clean MIT-licensed output; the emerging consensus says no.
Doe v. GitHub (Ninth Circuit, ongoing as of Apr 2026) is pushing acquirers to run license scans of AI-assisted codebases with FOSSA, Snyk Open Source, or Black Duck before closing.
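To make the scanning step concrete, here is a minimal sketch of a naive pre-check that flags files whose headers mention copyleft licenses. This is a hypothetical illustration only: the function name, marker list, and file extensions are assumptions, and real diligence relies on FOSSA, Snyk Open Source, or Black Duck, which match code against indexed open-source corpora rather than grepping for license strings.

```python
"""Naive copyleft-marker pre-check for a codebase (illustrative only).

Not a substitute for a commercial scanner: this merely flags files
whose first 50 lines mention a common copyleft license string.
"""
from pathlib import Path

# Strings that commonly appear in copyleft license headers.
COPYLEFT_MARKERS = ("GNU General Public License", "GPL-2.0", "GPL-3.0", "LGPL")


def flag_copyleft_headers(root: str, exts=(".py", ".js", ".c", ".go")) -> list[str]:
    """Return sorted paths whose first 50 lines contain a copyleft marker."""
    flagged = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in exts:
            continue
        head = "\n".join(path.read_text(errors="ignore").splitlines()[:50])
        if any(marker in head for marker in COPYLEFT_MARKERS):
            flagged.append(str(path))
    return sorted(flagged)
```

A hit from a check like this only signals that a human (or a commercial scanner) should look closer; absence of a hit proves nothing, since AI-reproduced code typically arrives without its original license header.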