Mungomash LLC
Claude Lawsuits

2023 – 2026

The lawsuits filed against Anthropic over Claude's training and operation — case captions, courts, filing dates, status, key rulings, and settlement terms. Bartz v. Anthropic produced the first $1.5 billion settlement in an AI training-data copyright case (August 2025); Concord Music Group and Reddit v. Anthropic are working through related but distinct theories.

Sibling page: Claude Versions — release timeline with the lawsuits surfaced inline where they shaped a release.

Status

Settled — case has ended in a settlement; payments or terms specified
Active — pending; in motion practice, discovery, or trial-track
On Appeal — judgment entered but under review
Dismissed — closed without recovery (voluntary or involuntary)

Anthropic litigation timeline

Case
Bartz v. Anthropic
Docket: N.D. Cal. · 24-cv-05417 (Alsup, J.)
Claim: Copyright
Status: Settled
Filed: Aug 2024
Authors' class action over training-data sourcing, including pirated copies. Settled August 2025 for $1.5 billion — the largest copyright settlement in U.S. history.

Plaintiffs. Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, on behalf of a class of book authors whose works were ingested.

Theory of liability. The complaint alleges Anthropic copied millions of copyrighted books from sources including pirate repositories like LibGen ("Library Genesis") to train Claude, infringing the authors' exclusive reproduction right under the Copyright Act. Anthropic raised fair use as its primary defense.

The Alsup summary-judgment ruling (June 23, 2025). Judge William Alsup split the question in two: training and acquisition. He held that training a generative model on lawfully acquired books is fair use — the use is transformative, the model does not output the underlying works, and the market harm to authors is speculative. He held that acquisition of pirated copies is not fair use — downloading from a pirate library to build a permanent in-house corpus is straightforward infringement, regardless of what the corpus is later used for. The ruling set the case up for a piracy-damages trial covering more than seven million books Anthropic had acquired through pirated sources.

Settlement (August 2025). Anthropic agreed to pay $1.5 billion — the largest copyright settlement in U.S. history — structured as four installments through September 2027 at roughly $3,000 per eligible work. Approximately 500,000 books were initially identified as in-scope, with a claims process administered through the settlement-administration vehicle at anthropiccopyrightsettlement.com. The claim deadline was March 30, 2026.

Why it matters. The Alsup fair-use ruling on training is the most-cited piece of LLM-copyright law to date. It drew a line, at the federal trial-court level, that training on properly acquired works is fair use while sourcing through piracy is not — a distinction every other AI-copyright defendant now reads for guidance.

What to watch next. The four-installment payment schedule runs through September 2027; settlement-administration milestones (claim verifications, payment distributions, any objections) will continue through that window. The ruling itself is being cited in active dockets including Concord Music Group, NYT v. OpenAI, and the Authors Guild v. OpenAI consolidated litigation.

Case
Concord Music Group v. Anthropic
Docket: M.D. Tenn. · transferred to N.D. Cal.
Claim: Copyright
Status: Active
Filed: Oct 2023
Music publishers including Universal Music Publishing Group sued over Claude reproducing copyrighted song lyrics. Plaintiffs seek $75 million plus injunctive relief.

Plaintiffs. Concord Music Group, Universal Music Publishing Group, and ABKCO Music. Filed October 18, 2023 in the Middle District of Tennessee.

Theory of liability. The complaint advances two distinct copyright theories. First, that Anthropic ingested copyrighted song lyrics during training without licensing them (the same training-corpus theory as Bartz, but for musical compositions rather than books). Second, that Claude as deployed reproduces those lyrics in response to user prompts — an output-side claim, harder to make on the books side because Claude does not typically reproduce books verbatim, but reachable for short-form lyrics. Plaintiffs sought $75 million plus an injunction.

Procedural posture. Early in the case, the parties stipulated to a court-entered order requiring Anthropic to maintain guardrails on Claude's responses to lyric prompts; the publishers' broader preliminary-injunction request was denied. Venue was contested; the case has moved to the Northern District of California, where the broader AI-copyright bar is concentrated.

Why it matters. The lyrics theory reaches the same fair-use question the Bartz ruling resolved on the training side, but the output-reproduction claim is novel: it tests whether a model that can emit verbatim copyrighted text on demand is itself an infringing instrument. The answer matters for every model with a memorization surface.

What to watch next. Whether the court reads the Alsup distinction (training fair use, piracy not) into the music context, and whether the output-reproduction theory survives summary judgment. Settlement is plausible but not signaled.

Case
Reddit v. Anthropic
Docket: Cal. state ct. · removal-and-remand fight
Claim: Contract / TOS
Status: Active
Filed: Jun 2025
Breach-of-contract suit alleging Anthropic continued scraping Reddit user posts after access was revoked. The leading test of platform-data-licensing theories against AI labs.

Plaintiff. Reddit, Inc. Filed June 4, 2025 in California state court.

Theory of liability. Not copyright. Reddit alleges Anthropic violated Reddit's user agreement and developer terms by continuing to scrape Reddit content after Reddit's commercial data-licensing program (which OpenAI and Google have signed onto) demanded that scrapers either pay or stop. The complaint pleads breach of contract, unjust enrichment, and unfair competition under California's Unfair Competition Law (Cal. Bus. & Prof. Code § 17200).

Procedural posture. Anthropic removed the case from state court to federal court; Reddit moved to remand. The remand fight has been the dominant procedural action since filing — the venue question matters because California state courts apply California's Unfair Competition Law more aggressively than the federal forum, and a remand keeps the case in Reddit's preferred court.

Why it matters. This is the highest-profile of the platform-data-licensing suits to date. The Bartz / Concord theories run through copyright; Reddit v. Anthropic runs through contract. If Reddit prevails, the result is a parallel track of liability that fair-use defenses do not reach — every AI lab that crawled a major social platform has terms-of-service and rate-limit-evasion exposure under the same theory.

What to watch next. The remand ruling. After that, motion-to-dismiss practice on the contract and unjust-enrichment counts. Reddit has separately settled with several other AI vendors on a paid-licensing basis — the litigation is leverage as much as it is a damages claim.

Background

The training-data copyright theory

The dominant first-wave theory against generative-AI labs has been straightforward: training a large language model requires ingesting tens of billions of words, the cleanest sources of high-quality text are copyrighted books and articles, and copying those works into a training corpus — even temporarily — is reproduction within the meaning of the Copyright Act. Plaintiffs argue the training itself is therefore an infringing use; the labs respond that training is transformative fair use under Authors Guild v. Google (the Google Books decision) and Sony v. Universal (the Betamax decision).

The theory shows up in Bartz v. Anthropic (books), in Concord Music Group v. Anthropic (song lyrics), and in the OpenAI docket on the GPT side (NYT v. OpenAI, Authors Guild v. OpenAI). What's distinctive about the Anthropic docket is that it produced the first federal ruling on the merits — the Alsup summary-judgment opinion in Bartz — before any of the OpenAI cases got past the pleading stage.

The Alsup ruling and the piracy distinction

On June 23, 2025, Judge William Alsup of the Northern District of California granted partial summary judgment in Bartz v. Anthropic. The opinion split the copyright question along a line that had been theoretical until the ruling landed and is now load-bearing: training versus acquisition.

On training, the court held that running a model over lawfully acquired text to learn statistical patterns is transformative fair use. The model does not output the underlying works (Claude does not, on demand, recite a Bartz novel verbatim), the use is "spectacularly transformative" relative to what the books are for, and any market-harm theory has to be grounded in something more than the speculative claim that a more capable Claude makes book sales harder.

On acquisition, the court held that Anthropic's downloading of pirated book copies from sites including LibGen to build a permanent in-house corpus is not fair use, regardless of what the corpus is later used for. The acquisition is itself the infringing act — the same way that buying a stolen book is illegal regardless of whether you later read it for a permitted purpose. That holding teed the case up for a damages trial covering more than seven million books Anthropic had pirated during corpus construction.

The line the Alsup ruling drew — train freely on what you have legitimate access to; do not source through piracy — is the most-cited single passage in LLM-copyright law as of early 2026. It is the lodestar every subsequent training-data complaint and answer reads against.

The Bartz settlement

Rather than try the piracy-damages question, Anthropic settled in August 2025 for $1.5 billion — the largest copyright settlement in U.S. history by a wide margin. The structure is four installment payments running through September 2027, distributed at roughly $3,000 per eligible work to authors of the approximately 500,000 books in-scope at settlement.
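The headline figures are internally consistent, as a quick arithmetic check shows. The numbers below are the approximate ones reported above; the actual payout formula and installment split are set by the settlement agreement, not modeled here.

```python
# Rough consistency check of the reported Bartz settlement figures.
# Approximate values only; the real per-work amount depends on the
# final claims pool and the court-approved settlement terms.
eligible_works = 500_000   # approximate number of in-scope books
per_work = 3_000           # approximate payout per eligible work, in dollars
total = eligible_works * per_work
print(f"${total:,}")       # prints $1,500,000,000, matching the $1.5B headline
```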

The settlement-administration vehicle at anthropiccopyrightsettlement.com is the authoritative source for the claim mechanics: who qualifies, how to file, how the per-work amount is calculated, and how the four-installment schedule plays out. The claim deadline was March 30, 2026.

Two things the settlement notably does not do. It does not undo or vacate the Alsup ruling on training fair use — that part of the opinion stands as persuasive authority for other courts. And it does not require Anthropic to delete the trained model weights; the settlement compensates for the pirated-acquisition stage, not the training output. Both points are deliberate.

Music-publisher coordination — the Concord case

Concord Music Group, et al. v. Anthropic was filed in October 2023, predating the broader wave of LLM-copyright litigation. The plaintiffs — Concord Music Group, Universal Music Publishing Group, and ABKCO Music — are major music publishers that hold rights in song lyrics. The complaint runs on two tracks: a training-input claim (Anthropic ingested copyrighted lyrics without licensing them) and an output-reproduction claim (Claude, when prompted for the lyrics to specific copyrighted songs, returns the lyrics verbatim or near-verbatim).

The output-reproduction track is the more novel of the two. Bartz-style training claims have been heavily litigated; output-side claims have been harder to bring against general-purpose LLMs because the models do not, in the ordinary case, emit copyrighted long-form text. Lyrics are an exception: short, well known, and within the model's memorization window. A stipulated order entered early in the case addresses the model's lyric-emission behavior — Anthropic tightened its guardrails in response.

Venue moved from the Middle District of Tennessee toward the Northern District of California, where the broader AI-copyright bar is concentrated. The case remains active. Whether the Alsup training-fair-use line carries over to music compositions, and whether the output-reproduction theory survives summary judgment, are the two questions to watch.

Platform data licensing — the Reddit theory

Reddit v. Anthropic, filed in June 2025, runs on a different track from Bartz and Concord. There is no copyright claim. Reddit's theory is contract: Anthropic accepted Reddit's terms of service when it accessed Reddit content programmatically, those terms forbid bulk training-data scraping without a paid license, and Anthropic continued scraping after Reddit's licensing program demanded that scrapers either pay (as OpenAI and Google did) or stop.

The complaint pleads breach of contract, unjust enrichment, and unfair competition under California's Unfair Competition Law (Cal. Bus. & Prof. Code § 17200). The procedural fight since filing has centered on whether the case stays in California state court (Reddit's preferred venue, where the UCL has more bite) or moves to federal court (Anthropic removed; Reddit moved to remand). The substantive merits questions are paused on that procedural posture.

Why this case matters separately from the copyright cases: contract liability is not reachable by fair-use defenses. If Reddit prevails on the breach-of-contract theory, every AI lab that crawled a major social platform has parallel exposure under the same theory, regardless of how the underlying training-fair-use question resolves. Reddit v. Anthropic is the leading test of that proposition. Reddit has separately signed paid-licensing deals with several other AI vendors; the litigation is leverage as much as a damages claim.

What this docket means for the broader AI bar

Between them, Anthropic's three lawsuits touch every major theory of AI training-and-operation liability: copyright on the training input (Bartz, Concord), copyright on the output (Concord's lyric-reproduction track), and contract on the platform-licensing question (Reddit). The Alsup ruling resolved the training-input question on the books side in the labs' favor and resolved the piracy-acquisition question against them. The other two cases are testing whether those distinctions hold up across the music and platform contexts.

The broader effect, as of early 2026: the LLM-copyright bar is concentrating in the Northern District of California, fair-use defenses on properly acquired training data are stronger after Alsup than before, piracy-sourced corpora are uniquely exposed, output-reproduction theories are alive but not yet tested at trial, and the contract-and-TOS theory is the open frontier.

Follow these cases

Court records are public. The links below are the authoritative places to read the dockets and settlement materials directly — what appears in news coverage is downstream of these primary sources.

Bartz v. Anthropic

N.D. Cal. docket; the August 2025 settlement and its administration.

# Settlement administration — claim status, payment schedule, FAQ
https://www.anthropiccopyrightsettlement.com/

# Free Law Project (CourtListener) docket mirror
https://www.courtlistener.com/  # search: "Bartz v. Anthropic"

# Anthropic's own response posts
https://www.anthropic.com/news

Concord Music Group v. Anthropic

M.D. Tenn. (transferred to N.D. Cal.); music-publisher copyright over song lyrics.

# CourtListener — search the case caption
https://www.courtlistener.com/  # search: "Concord Music Group v. Anthropic"

# PACER — authoritative federal docket access (fee-based)
https://pacer.uscourts.gov/

Reddit v. Anthropic

California state court; breach of contract, unjust enrichment, unfair competition.

# California courts case-search portal
https://www.courts.ca.gov/  # search: "Reddit, Inc. v. Anthropic"

# If removed to federal court — CourtListener mirror
https://www.courtlistener.com/