AI | The Courts

Thomson Reuters Wins First Major AI Copyright Case In the US

An anonymous reader quotes a report from Wired: Thomson Reuters has won the first major AI copyright case in the United States. In 2020, the media and technology conglomerate filed an unprecedented AI copyright lawsuit against the legal AI startup Ross Intelligence. In the complaint, Thomson Reuters claimed the AI firm reproduced materials from its legal research firm Westlaw. Today, a judge ruled (PDF) in Thomson Reuters' favor, finding that the company's copyright was indeed infringed by Ross Intelligence's actions. "None of Ross's possible defenses holds water. I reject them all," wrote US District Court of Delaware judge Stephanos Bibas in a summary judgment. [...] Notably, Judge Bibas ruled in Thomson Reuters' favor on the question of fair use.

The fair use doctrine is a key component of how AI companies are seeking to defend themselves against claims that they used copyrighted materials illegally. The idea underpinning fair use is that sometimes it's legally permissible to use copyrighted works without permission -- for example, to create parody works, or in noncommercial research or news production. When determining whether fair use applies, courts use a four-factor test, looking at the reason behind the work, the nature of the work (whether it's poetry, nonfiction, private letters, et cetera), the amount of copyrighted work used, and how the use impacts the market value of the original. Thomson Reuters prevailed on two of the four factors, but Bibas described the fourth as the most important, and ruled that Ross "meant to compete with Westlaw by developing a market substitute."
"If this decision is followed elsewhere, it's really bad for the generative AI companies," says James Grimmelmann, Cornell University professor of digital and internet law.

Chris Mammen, a partner at Womble Bond Dickinson who focuses on intellectual property law, adds: "It puts a finger on the scale towards holding that fair use doesn't apply."


Comments Filter:
  • by linuxguy ( 98493 ) on Tuesday February 11, 2025 @04:56PM (#65160209) Homepage

    If the lawsuits block US AI companies, what is to prevent Chinese AI companies from completely ignoring them?

    I can see the US AI companies facing serious challenges in the not-so-distant future. The Chinese already have a labor and cost advantage. I am reminded of what the Japanese did to the US electronics industry.

    • If we're now allowed to punch neighbors we disagree with, then what prevents the mafia from doing this? In other words, the attitude that one should be allowed to break the law because other people are doing it doesn't hold up.

  • by evanh ( 627108 ) on Tuesday February 11, 2025 @04:57PM (#65160211)

    Either AI companies are in bullshit dreamland, or copyright falls.

    • I think you're right but I'm still sad about it.

      This is the beginning of the end for chatbots that simply tell you what you want to know, instead of constantly skewing the results or trying to distract you in order to turn a buck.

  • by dgatwood ( 11270 ) on Tuesday February 11, 2025 @04:59PM (#65160219) Homepage Journal

    This involved human beings taking summaries from a database, rewriting them negligibly, and then loading that modified content into generative AI, and doing so in explicit violation of rules saying not to use the database for that purpose, with the explicit intent to build a service that competes against the service provided by the original database. This isn't a particularly broad decision.

    This is basically the phone book copying case, but without the factuality defense. Had they started with the original case notes instead of the other company's summaries, they probably would have been in the clear, but they did not.

    • by thczv ( 541683 )
      I think the mistake this judge made is inferring copying because the summaries are similar. Access plus similarity equals copying. But similarity is inevitable whenever multiple parties are summarizing the same content. The shorter the summary, the more similar it will be to other summaries (as long as they are all accurate). I don't think this is the kind of thing the Copyright Act was trying to protect against. If you call this infringement, you basically give preference to the first summarizer, which ten [...]
  • by Shaitan ( 22585 ) on Tuesday February 11, 2025 @05:01PM (#65160227)

    To companies that release fully uncensored and unrestricted open models and weights for free use and redistribution.

  • by dfghjk ( 711126 ) on Tuesday February 11, 2025 @05:03PM (#65160235)

    The article didn't even say what the alleged copyright violation was. The fair use doctrine doesn't matter until we know what the use actually is.

    Artificial neural networks are trained by "reading" text, just like our brains are trained by reading text. If the claim is that training an AI by reading text is not "fair use" because training the AI is an intention to create a competitor, then that text cannot be read by humans either. Now, it could be argued that humans do not intend to "create a competitor" by reading text, but law students certainly do. Can law students, or attorneys, NOT read legal documents because it is not "fair use"? The usage here is very important, if the argument is simply that training is not fair use then there is going to be a big problem.

    • by dfghjk ( 711126 )

      I looked at the actual decision; it seems really crappy to me. It doesn't appear to me to say what the "use" actually was. It would seem that the actual "copyright violation" was made by a third party prior to involvement by any AI at all. No consideration of the AI itself is even made. Instead, the judge only seems to care that copyrighted materials were fed into a system intended to be a commercial product. Sure, but attorneys practicing law do that too. My comment remains: can law students read cop [...]

    • So read a lot of copyrighted works, memorize them exactly, then regurgitate them almost exactly and try to sell that. This is not fair use, even for a human. At the very least, summarize without copying. And yes, in legal arenas this means summarizing the summaries if the summaries are copyrighted.

    • I would suggest that once you process with automation there is a difference.

      Adding the automation fundamentally changes the scale of how far and wide and fast an idea can go.

      I think it's pretty obvious that harvesting information on an industrial scale with intent to monetize is not fair use.
  • Fair use has never been intended for companies, businesses of any kind, or any individual redistributing to another party.

    Perhaps libraries and such also have fair use protections, but that is about it.
    • Fair use is very often about something bigger than the individual. Fair use does cover parodies, even if those parodies are made for other people and make money. It also covers copying a couple of sentences into a news article or your term paper.

  • by supabeast! ( 84658 ) on Tuesday February 11, 2025 @05:36PM (#65160331)

    Elon Musk is the biggest donor to the Republican Party. He also owns an AI company. He's best buddies with the President and the GOP congresscritters defer to him. There will be a law passed allowing AI companies to do whatever they want with copyrighted works. They'll tell us that it's to keep China from gaining AI supremacy blah blah national security blah blah and people will lap it up.

    • I fear this is all very accurate. Why would Trump care about fair use?

      Funny thing is, Elon's AI will probably be a dud and get stomped by the better companies.

    • by gweihir ( 88907 )

      And, let me think, you expect this "law" to apply globally? Why would it? That is, if it is found to be constitutional in the first place. And they are up against Disney here.

      This ruling essentially predicts what is going to hold everywhere else: that training data was obtained through a massive criminal piracy campaign by almost all model creators.

  • by Mirnotoriety ( 10462951 ) on Tuesday February 11, 2025 @08:28PM (#65160563)
    Quite right too. The AI companies claim to only train their models using publicly available data, but in fact they are free-riding on the works of others.
  • It's obviously long past time to eliminate the entire intellectual property and patent system, or at least reduce its use case to ART AS WAS INTENDED, and stop letting these greedy fuckheads run roughshod over us for decades now. Them getting rich is not worth us getting fucked out of the ability to do things right.
