Managing AI litigation risk
Last Thursday, Anthropic resolved perhaps the most significant ongoing AI litigation by securing preliminary approval of its settlement with the Bartz class action plaintiffs in California. While many enterprise clients were relieved that the case settled, it leaves open several critical questions that may determine the future of the largest frontier models, and their viability as businesses with rapid revenue growth but stalling enterprise adoption. There are several steps firms can take now to mitigate risk, and even build a competitive advantage, as this technology spreads.
The Bartz litigation settlement.
Anthropic settled a class action lawsuit over alleged copyright infringement in its use of copyrighted books to develop Claude. The $1.5B settlement, approximately 30% of the company’s estimated 2025 revenues, is reportedly the largest copyright settlement in history. While some of the adverse facts in the case are specific to Anthropic (such as the documented use of BitTorrent to collect large files of books after seeing the difficulty of licensing them individually), the other frontier models were also trained using similar wide-reaching scraping and copying. And the relatively small per-work settlement value (roughly $3,000, versus potential statutory damages of up to $150,000 per work) is unlikely to satisfy authors or advocates for the publishing, media, and entertainment industries. So we expect more litigation, with plaintiffs’ lawyers testing creative new theories of liability that will not be limited to copyright. The OpenAI/NY Times case remains ongoing, though it seems the sort of dispute that should be settled with a creative licensing deal.
AI risk vs. data risk.
Data teams are now often tasked with managing AI tools, but the risks are quite different. Although there are fundamental similarities in sourcing risk (e.g., training on third-party data, distributing datasets), AI tools present more intensive operational risks to the firms that depend on them. Consider, for example, the expansive risk profile of an enterprise Anthropic account: had Claude been shut down or materially changed in response to the Bartz case, which teams would have been impacted? And how much of the data budget was allocated to that single tool?
Proactive steps to mitigate future AI risk.
We acknowledge that the race to adopt new AI tools in financial services seems to have slowed in recent months (outside of software development teams), as businesses focus on proven returns and value. That said, we often see businesses regularly using at least two different frontier models under high-cost enterprise accounts, typically from Anthropic, OpenAI, and Google (Gemini) in the United States. To avoid the worst-case scenarios that could result from litigation such as the Bartz case, there are a few straightforward vendor management strategies to follow:
Spread usage and integration across (vetted) models and reduce firm-wide dependencies on any single provider, which may require coordinating with teams that do not regard AI tools as a priority or responsibility,
Set contractual protections such as SLA terms with clear refund rights that anticipate major disruptions to these models, and
Train users to follow firm policies and procedures on AI tools, including instructions to respect copyrighted material and not to use these tools as a workaround for generating such material (see the recent Disney suit against Midjourney).
We note that one additional factor to consider is geography and server location. As countries pursue different approaches to regulation and case law develops, one major market may be affected more or less than others. For example, on September 17, 2025, the Italian Parliament passed legislation that generally prohibits the scraping and copying of copyrighted material, though perhaps with important differences from the rules being articulated in litigation in the US.
Each of these risks and mitigations will require coordination across teams – not just within data teams that may be expected to manage AI provider relationships – making a firm’s AI policies and procedures uniquely important for periodic training.
If you’re interested in seeing samples of our recent scenario analyses of litigation in data and AI, please reach out via our site or email info@glaciernetwork.co with the subject line “Sample.”
Don D'Amico
Founder & CEO, Glacier Network
Further reading
©2025 Glacier Network LLC d/b/a Glacier Risk (“Glacier”). This post has been prepared by Glacier for informational purposes and is not legal, tax, or investment advice. This post is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. This post was written by Don D’Amico without the use of generative AI tools. Don is the Founder & CEO of Glacier, a data risk company providing services to users of external and alternative data. Visit www.glaciernetwork.co to learn more.