The class-action lawsuit centers on Anthropic’s use of potentially pirated books to train its large language model, Claude, and could leave the company on the hook for billions of dollars in damages.
According to court filings, the company downloaded millions of copyrighted works from shadow libraries like LibGen and PiLiMi to train AI models and build a “central library” of digital books that would include “all the books in the world” and preserve them indefinitely. The plaintiffs—who include authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson—allege that millions of these works were obtained from piracy websites in direct violation of copyright law.
The judge presiding over the case, William Alsup, has recently ruled that training AI models on lawfully acquired books qualifies as “fair use,” and that AI companies do not need a license from copyright holders to conduct such training, a decision that was viewed as a major win for the AI sector.
However, the still-unresolved issue is how Anthropic obtained and stored the copyrighted books. The judge drew a distinction when it came to the use of pirated materials, advising Anthropic that a separate trial “on the pirated copies” and “the resulting damages” would be forthcoming.
“The problem is that a lot of these AI companies have scraped piracy sites like LibGen … where books have been uploaded in electronic form, usually PDF, without the permission of the authors, without payment,” Luke McDonagh, an associate professor of law at LSE, told Fortune.
The plaintiffs are unlikely to prove direct financial harm, such as lost sales, and are likely to instead rely on statutory damages, which can range from $750 to $150,000 per work. That range depends heavily on whether the infringement is deemed willful. If the court rules that Anthropic knowingly violated copyright law, the resulting fines could be enormous, potentially in the billions, even at the lower end of the scale.
The number of works included in the class action and whether the jury finds willful infringement are still open questions, but potential damages could range from hundreds of millions to tens of billions of dollars. Even at the low end, Lee argues, damages in the range of $1 billion to $3 billion are possible if just 100,000 works are included in the class action. That figure rivals the largest copyright damage awards on record and could far exceed Anthropic’s current $4 billion in annual revenue.
Lee estimated that Anthropic could be on the hook for up to $1.05 trillion if a jury decides the company willfully pirated 7 million copyrighted books.
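As a rough back-of-the-envelope sketch of how those per-work figures compound (using the statutory range and the class sizes cited above, not any finding or projection from the case), the arithmetic looks like this:

```python
# Illustrative statutory-damages arithmetic, not a projection from the case.
# U.S. copyright law allows $750 to $150,000 per infringed work; the totals
# below simply multiply that range by the class sizes discussed in this article.

def statutory_range(num_works: int, low: int = 750, high: int = 150_000) -> tuple[int, int]:
    """Return the (minimum, maximum) total statutory award for a given number of works."""
    return num_works * low, num_works * high

for works in (100_000, 7_000_000):
    lo, hi = statutory_range(works)
    print(f"{works:>9,} works: ${lo:,} to ${hi:,}")

#   100,000 works: $75,000,000 to $15,000,000,000
# 7,000,000 works: $5,250,000,000 to $1,050,000,000,000
```

The $1.05 trillion figure corresponds to the statutory maximum of $150,000 applied to every one of the roughly 7 million works at issue; a jury finding of non-willful or innocent infringement, or a smaller certified class, would shrink the total dramatically.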
Anthropic did not immediately respond to a request for comment from Fortune. However, the company has previously said it “respectfully disagrees” with the court’s decision and is exploring its options, which might include appealing Alsup’s ruling or offering to settle the case. The trial, the first for a certified class action against an AI company over the use of copyrighted materials, is currently scheduled for Dec. 1.
The verdict could determine the outcomes of similar cases, such as a high-profile ongoing battle between OpenAI and dozens of authors and publishers. While the courts do appear to be leaning toward allowing fair use arguments from AI companies, there’s a legal divergence regarding the acquisition of copyrighted works from shadow sites.
Judges have also diverged on whether AI-generated outputs can be deemed to compete with the original copyrighted works in their training data. Judge Vince Chhabria, who presided over a similar suit brought by authors against Meta, acknowledged that if such competition were proved it might undercut a fair use defense, but found that the plaintiffs in that case had failed to provide sufficient evidence of market harm, whereas Judge Alsup concluded that generative AI outputs do not compete with the original works at all.
The legal question around AI companies and copyrighted work has also become increasingly political, with the current administration pushing to allow AI companies to use copyrighted materials for training under broad fair use protections, in an effort to maintain U.S. leadership in artificial intelligence. McDonagh said the case against Anthropic was unlikely to leave the company bankrupt, as the Trump administration would be unlikely to allow a ruling that would essentially destroy an AI company.
Judges are also generally averse to issuing rulings that could lead to bankruptcy unless there is a strong legal basis and the action is deemed necessary. Courts have been known to consider the potential impact on the company and its stakeholders when issuing rulings that could result in liquidation.
“The U.S. Supreme Court, at the moment, seems quite friendly to the Trump agenda, so it’s quite likely that in the end, this wouldn’t have been the kind of doomsday scenario of the copyright ruling bankrupting Anthropic,” McDonagh said. “Anthropic is now valued, depending on different estimates, between $60 and $100 billion. So paying a couple of billion to the authors would by no means bankrupt the organization.”