2023 – 2026
Anthropic Lawsuits
The lawsuits filed against Anthropic over Claude's training and operation — case captions, courts, filing dates, status, key rulings, and settlement terms. Bartz v. Anthropic produced the first settlement in an AI training-data copyright case, a $1.5 billion deal (August 2025); the music publishers filed a second Anthropic suit in January 2026 alleging mass piracy of 20,000+ songs; Reddit v. Anthropic was remanded to California state court in March 2026 after the federal court held its contract claims are not preempted by copyright.
Sibling page: Claude Versions — release timeline with the lawsuits surfaced inline where they shaped a release.
Background
The training-data copyright theory
The dominant first-wave theory against generative-AI labs has been straightforward: training a large language model requires ingesting tens of billions of words, the cleanest sources of high-quality text are copyrighted books and articles, and copying those works into a training corpus — even temporarily — is reproduction within the meaning of the Copyright Act. Plaintiffs argue the training itself is therefore an infringing use; the labs respond that training is transformative fair use under Authors Guild v. Google (the Google Books decision) and Sony v. Universal (the Betamax decision).
The theory shows up in Bartz v. Anthropic (books), in Concord Music Group v. Anthropic (song lyrics), and in the OpenAI docket on the GPT side (NYT v. OpenAI, Authors Guild v. OpenAI). What's distinctive about the Anthropic docket is that it produced the first federal ruling on the merits — the Alsup summary-judgment opinion in Bartz — before any of the OpenAI cases got past the pleading stage.
The Alsup ruling and the piracy distinction
On June 23, 2025, Judge William Alsup of the Northern District of California granted partial summary judgment in Bartz v. Anthropic. The opinion split the copyright question along a line that had been theoretical until the ruling landed and is now load-bearing: training versus acquisition.
On training, the court held that running a model over lawfully acquired text to learn statistical patterns is transformative fair use. The model does not output the underlying works (Claude does not, on demand, recite one of the Bartz plaintiffs' novels verbatim), the use is "spectacularly transformative" relative to what the books are for, and any market-harm theory has to be grounded in something more than the speculative claim that a more capable Claude makes book sales harder.
On acquisition, the court held that Anthropic's downloading of pirated book copies from sites including LibGen to build a permanent in-house corpus is not fair use, regardless of what the corpus is later used for. The acquisition is itself the infringing act — the same way that buying a stolen book is illegal regardless of whether you later read it for a permitted purpose. That holding teed the case up for a damages trial covering more than seven million books Anthropic had pirated during corpus construction.
The line the Alsup ruling drew — train freely on what you have legitimate access to; do not source through piracy — is the most-cited single passage in LLM-copyright law as of early 2026. It is the lodestar every subsequent training-data complaint and answer reads against.
The Bartz settlement
Rather than try the piracy-damages question, Anthropic settled in August 2025 for $1.5 billion — the largest copyright settlement in U.S. history by a wide margin. The structure is four installment payments, totaling roughly $3,000 per eligible work, to authors of the approximately 482,460 books in scope at settlement: October 2, 2025; April 30, 2026; September 25, 2026; and September 25, 2027.
The settlement-administration site at anthropiccopyrightsettlement.com is the authoritative source for the claim mechanics: who qualifies, how to file, how the per-work amount is calculated, and how the installment schedule plays out. The claim deadline was March 30, 2026, and at the deadline 440,490 of the 482,460 eligible works (91.3%) had been claimed by approximately 120,000 authors and rightsholders — an unusually high participation rate for a class settlement of this scale.
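The headline figures above are internally consistent and easy to sanity-check. A minimal sketch (the rounding conventions are my assumption; the real per-claimant distribution will deduct fees and costs from the gross fund):

```python
# Sanity-check of the reported Bartz settlement arithmetic.
settlement_total = 1_500_000_000   # $1.5 billion gross fund
eligible_works = 482_460           # books in scope at settlement
claimed_works = 440_490            # works claimed by the March 30, 2026 deadline

# Implied gross payout per eligible work, before any fee/cost deductions.
per_work = settlement_total / eligible_works   # ≈ 3109.1, i.e. "roughly $3,000"

# Participation rate at the claim deadline.
claim_rate = 100 * claimed_works / eligible_works   # ≈ 91.3 (percent)

print(round(per_work), round(claim_rate, 1))
```

The gross per-work figure of about $3,109 is where the "roughly $3,000 per eligible work" shorthand comes from; the claimed-works ratio reproduces the 91.3% participation rate.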
The final approval (fairness) hearing has been moved to May 14, 2026 at 2:00 p.m. PT before Judge Araceli Martínez-Olguín in the San Francisco federal courthouse. Objections have been unsealed and cover the exclusion of foreign / non-U.S.-registered works from the class, the publisher-versus-author allocation, the adequacy of class notice, and concerns about class-counsel conflicts. After the hearing, the settlement administrator is expected to calculate per-claimant distributions by June 11, 2026, with payment disbursement beginning in June 2026 or later (subject to any appeals).
Two things the settlement notably does not do. It does not undo or vacate the Alsup ruling on training fair use — that part of the opinion stands and is now precedent. And it does not require Anthropic to delete the trained-model weights; the settlement is about compensation for the pirated-acquisition stage, not the training output. Both points are deliberate. A small group of authors who opted out of the class (six, as of early 2026) have filed individual lawsuits seeking $150,000 per title under the Copyright Act — not just against Anthropic but against OpenAI, Google, Meta, xAI, and Perplexity AI in the same complaints — a parallel track outside the settlement that will play out separately.
Music-publisher coordination — the two Concord cases
Concord Music Group, et al. v. Anthropic was filed in October 2023, predating the broader wave of LLM-copyright litigation. The plaintiffs — Concord Music Group, Universal Music Publishing Group, and ABKCO Music — are major music publishers that hold rights in song lyrics. The original complaint runs on two tracks: a training-input claim (Anthropic ingested copyrighted lyrics without licensing them) and an output-reproduction claim (Claude, when prompted for the lyrics to specific copyrighted songs, returns the lyrics verbatim or near-verbatim).
Both tracks took on water in March 2025. On March 25, Judge Eumi K. Lee (N.D. Cal., where the case had transferred from the Middle District of Tennessee) denied the publishers' motion for a preliminary injunction, holding that allegations that unidentified users "might" prompt Claude to produce copyrighted lyrics were not enough to establish the third-party direct infringement that contributory and vicarious theories require. One day later, the court granted Anthropic's motion to dismiss the contributory and vicarious counts and the DMCA copyright-management-information count, with leave to amend. The direct-infringement claim on the training side survived. The early framing — that the output-reproduction theory was the novel and vulnerable surface for AI labs — held up: two consecutive rulings narrowed the case substantially.
On January 28, 2026, the same publisher coalition filed a second case — informally Concord II — applying the Bartz piracy template to music. The new complaint alleges that in June 2021, before Anthropic's first product launch, cofounder Benjamin Mann personally used BitTorrent to download approximately five million pirated books from LibGen and PiLiMi — books that contained the publishers' song lyrics, sheet music, and musical compositions — after discussing with CEO Dario Amodei and CSO Jared Kaplan whether to source through piracy rather than license. Counts include direct, contributory, and vicarious infringement plus DMCA § 1202 violations; damages sought are roughly $3 billion. The complaint names Amodei and Mann individually as defendants alongside Anthropic PBC, an aggressive procedural choice. The case is the first major LLM piracy-acquisition case filed after the Alsup ruling and the Bartz $1.5 billion settlement, and the publishers are explicitly using Alsup's training-versus-acquisition line as the theory of the case.
The two Concord cases now run in parallel. The original asks whether the Alsup training-fair-use line carries over to music compositions and whether the publishers can re-plead the output-reproduction track with named users. The 2026 piracy follow-on asks whether the Bartz piracy framework will produce a music-publisher settlement at scale.
Platform data licensing — the Reddit theory and the March 2026 remand
Reddit v. Anthropic, filed in San Francisco Superior Court in June 2025, runs on a different track from Bartz and Concord. There is no copyright claim. Reddit's theory is contract: Anthropic accepted Reddit's terms of service when it accessed Reddit content programmatically, those terms forbid bulk training-data scraping without a paid license, and Anthropic continued scraping after Reddit's licensing program demanded that scrapers either pay (as OpenAI and Google did) or stop. The complaint pleads breach of contract, unjust enrichment, trespass to chattels, tortious interference, and unfair competition under California's Unfair Competition Law (Cal. Bus. & Prof. Code § 17200).
Anthropic removed the case to the Northern District of California in July 2025 on the theory that Reddit's claims were preempted by the federal Copyright Act and federal-question jurisdiction therefore existed. The remand fight dominated the case for nine months. On March 28, 2026, Judge Trina L. Thompson signed an order remanding the case to state court (filed March 30); the court held that none of Reddit's five claims is preempted by copyright. The court's reasoning: Reddit's user-agreement obligations are qualitatively different from rights granted by copyright law — they restrict scraping for commercial use, regulate technical-safeguard bypassing, and impose access conditions copyright does not. Reddit's allegations that Anthropic "bypassed technical safeguards, violated contractual access restrictions, misrepresented its compliance, and exploited Reddit's platform without authorization" sit outside the copyright preemption zone.
The remand ruling is the first significant federal opinion holding that platform-TOS / data-licensing claims are not preempted by copyright when they're built around scraping conduct rather than the underlying content's copyright status. AI labs that hoped to fold platform-licensing exposure into the broader fair-use battle have less ground to stand on as a result, and the order is being read across to other platform vs. AI-lab disputes.
Why the case matters separately from the copyright cases: contract liability is not reachable by fair-use defenses. If Reddit prevails on the breach-of-contract theory, every AI lab that crawled a major social platform has parallel exposure under the same theory, regardless of how the underlying training-fair-use question resolves. Reddit v. Anthropic is the leading test of that proposition. Reddit has separately signed paid-licensing deals with several other AI vendors; the litigation is leverage as much as a damages claim. The case now proceeds in San Francisco Superior Court, where California's Unfair Competition Law has more bite than it does in the federal forum.
What this docket means for the broader AI bar
Between them, Anthropic's four lawsuits touch every major flavor of AI training-and-operation theory: copyright on the training input (Bartz, Concord I), copyright on the output (Concord I's lyric-reproduction track), copyright on piracy-sourced acquisition (Bartz, Concord II), and contract on the platform-licensing question (Reddit). The Alsup ruling resolved the training-input question on the books side in the labs' favor and resolved the piracy-acquisition question against them. Concord II is now the first major test of whether the piracy-acquisition framework will repeat for music compositions; Concord I is testing the output-reproduction track on a narrower record after the March 2025 dismissals; Reddit is testing the contract theory in a state-court forum where the federal-court preemption argument has now been rejected.
The broader effect, as of April 2026: the LLM-copyright bar is concentrated in the Northern District of California (with the Reddit contract case as a notable state-court exception); fair-use defenses on properly acquired training data are stronger after Alsup than before; piracy-sourced corpora are uniquely exposed and now seeing their first follow-on settlement-template case at music-publisher scale; output-reproduction theories are narrower after the Concord I dismissals than the early framing suggested; and the contract-and-TOS theory has graduated from "open frontier" to a settled non-preemption rule that platforms are likely to use against every AI lab that crawled them.
Sources:
Bartz v. Anthropic N.D. Cal. docket (3:24-cv-05417, Alsup J. on the merits; Martínez-Olguín J. on settlement approval) and the settlement administration site;
Concord Music Group v. Anthropic N.D. Cal. docket (5:24-cv-03811, Lee J., transferred from M.D. Tenn.);
Concord Music Group v. Anthropic (II) N.D. Cal. docket (5:26-cv-00880, filed January 28, 2026);
Reddit, Inc. v. Anthropic, PBC — San Francisco Superior Court (active) and N.D. Cal. (3:25-cv-05643, Thompson J., remanded March 30, 2026);
CourtListener (Free Law Project) docket mirrors;
Anthropic news / blog;
contemporaneous reporting in NPR, NYT, WSJ, Reuters, Bloomberg, Bloomberg Law, Courthouse News, Billboard, Music Business Worldwide, and The Information; client alerts from Crowell & Moring, Loeb & Loeb, Quinn Emanuel, BakerHostetler, McKool Smith (AI Litigation Tracker), and the Authors Alliance / Authors Guild.
Court records are public domain; reporter coverage is cited under fair use (linked, not republished). Last updated April 29, 2026.