A US federal judge has ruled that AI giant Anthropic can use the works of authors to train its models without consent.
The ruling comes in response to a lawsuit filed by three authors: Andrea Bartz (We Were Never Here, The Last Ferry Out), Charles Graeber (The Good Nurse: A True Story of Medicine), and Kirk Wallace Johnson (The Feather Thief).
The authors claimed that their works were used to train Anthropic’s AI chatbot Claude, without legal permission or payment.
Judge William Alsup rejected Anthropic’s request to dismiss the case but, in ruling in favour of the AI firm, said the use of the books to train its models was “exceedingly transformative” and so permissible under US law.
If the authors had claimed that the model training led to knock-offs of their books, the judge said, that “would be a different case”.
Despite this, Judge Alsup said Anthropic had violated the authors’ rights by saving pirated copies of their books in a ‘central library’ of more than seven million books.
The First In a Series of Lawsuits
Since the introduction of AI chatbots to the public, there has been a contentious relationship between creators and AI companies – particularly over the issue of copyright.
This recent verdict in favour of Anthropic paves the way for a number of high-profile cases over the use of creative content by AI companies, with numerous lawsuits likely in the months and years to come.
This month, Disney and Universal Studios said they would be filing a lawsuit against the image-generation tool Midjourney, claiming it enables the illegal recreation of their copyrighted characters.
The BBC may also file a lawsuit against the AI chatbot Perplexity, claiming the company reproduces BBC content word for word in its answers to queries. In response, Perplexity accused the BBC of attempting to uphold Google’s “illegal monopoly”.