Who Owns the Code Claude Wrote?

Tags: ai-coding · copyright · open-source-licensing · work-for-hire · gpl · claude-code · ip-assignment · due-diligence
Originally from legallayer.substack.com

My notes

Summary

Code generated by AI coding tools (Claude Code, Cursor, Copilot) sits in three legal grey zones: it may be uncopyrightable due to lack of human authorship, it may already belong to your employer through broad IP-assignment clauses, and it may carry hidden GPL contamination from training data. The gaps mostly bite during M&A due diligence and institutional fundraising rather than day-to-day operations, but documentation habits set today determine outcomes later.
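The "hidden GPL contamination" risk is what due-diligence license scans look for. Real diligence uses dedicated tools (ScanCode and similar); the sketch below is only a toy illustration of the idea, matching a couple of common GPL markers across a source tree. The pattern list and function name are my own, not from any particular scanner.

```python
import re
from pathlib import Path

# Toy heuristics for GPL-family licensing markers. A real scanner matches
# code fingerprints, not just license text, and covers far more licenses.
GPL_PATTERNS = [
    re.compile(r"GNU (?:Lesser |Affero )?General Public License", re.IGNORECASE),
    re.compile(r"SPDX-License-Identifier:\s*(?:A|L)?GPL", re.IGNORECASE),
]

def scan_for_gpl(root: str) -> list[str]:
    """Return paths under `root` whose text matches a GPL marker."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        if any(p.search(text) for p in GPL_PATTERNS):
            hits.append(str(path))
    return sorted(hits)
```

A hit does not prove contamination, and a clean scan does not prove its absence; the point is that acquirers now run this class of check as a matter of course.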

Key Insights

  • Human authorship doctrine (settled). The US Copyright Office and the DC Circuit (Thaler) consistently hold that purely AI-generated work is uncopyrightable. SCOTUS denied cert in March 2026, so the position is stable but not nationally final. No case has yet applied this directly to AI-generated code.
  • The “meaningful human authorship” line is unquantified. Specifying an objective (“build a rate limiter”) is not enough. What counts: choosing the architecture, rejecting outputs, restructuring code to fit a design. Allen v. Perlmutter (pending), involving 600+ prompts plus Photoshop edits, will likely set the bar.
  • Zarya of the Dawn precedent: human-authored elements of a mixed work can be separately protectable. So ADRs, prompt logs, design docs, and substantive commit messages may be protected even if the generated code is not.
  • Work-for-hire absorbs everything by default. Employer ownership doesn’t care whether a human or Claude wrote it. Watch for clauses like “any software created with the assistance of company-licensed tools”; that is the phrase that captures personal side projects built on a work-licensed Claude or Cursor seat.
  • GPL contamination is the silent killer. Verbatim copying of GPL code violates the license regardless of source. Whether AI reproducing training-data patterns counts as “verbatim” is unsettled, but lawyers advising on M&A assume it does. License scans are now standard in due diligence.
  • The chardet community dispute (2026): a developer rewrote the LGPL library via Claude, re-released it under MIT, and claimed a “clean room.” It was never resolved legally but framed the open question.
  • Doe v. GitHub (Ninth Circuit): a live appeal on whether Copilot reproduces licensed code. It has already triggered Copilot’s duplicate-detection filters and standardized AI license scans in due diligence.
  • Anthropic’s plan tiers matter. Free/Pro = narrow indemnification. API/Enterprise = output assignment plus defense against copyright claims. Neither covers downstream GPL contamination; that’s your governance problem.
  • Striking detail: Anthropic’s lead engineer publicly stated his Claude Code contributions were AI-written. The 31 March 2026 leak (512k lines, mirrored to GitHub, “claw-code” hit 100k stars) compressed every open question into one news cycle. If Anthropic can’t cleanly assert copyright over its own AI-assisted code, neither can anyone else.