Sarah Silverman Hits Stumbling Block in AI Copyright Infringement Lawsuit Against Meta (hollywoodreporter.com)

Winston Cho writes via The Hollywood Reporter: A federal judge has dismissed most of Sarah Silverman's lawsuit against Meta over the unauthorized use of authors' copyrighted books to train its generative artificial intelligence model, marking the second ruling from a court siding with AI firms on novel intellectual property questions presented in the legal battle. U.S. District Judge Vince Chhabria on Monday offered a full-throated denial of one of the authors' core theories that Meta's AI system is itself an infringing derivative work made possible only by information extracted from copyrighted material. "This is nonsensical," he wrote in the order. "There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs' books."

Another of Silverman's arguments, that every result produced by Meta's AI tools constitutes copyright infringement, was dismissed because she didn't offer evidence that any of the outputs "could be understood as recasting, transforming, or adapting the plaintiffs' books." Chhabria gave her lawyers a chance to replead the claim, along with five others that weren't allowed to advance. Notably, Meta didn't move to dismiss the allegation that the copying of books for purposes of training its AI model rises to the level of copyright infringement.
In July, Silverman and two authors filed a class action lawsuit against Meta and OpenAI for allegedly using their content without permission to train AI language models.

Comments Filter:
  • Now AI can digest immature, boring fart jokes.

    • by Revek ( 133289 )
      and let them pass?
    • I dislike her stand up for that reason, nothing but fart and talking vagina idiocy. But her book is hysterical, and The Sarah Silverman Program was one of the funniest things I've ever watched! I seriously loved both!

      • by Potor ( 658520 )
        I teach ethics, and when I get to Kant, I always make sure to pronounce "duty" as "doody," and tell the class that it's a Sarah Silverman joke. Nobody gets it.
  • by chill ( 34294 ) on Tuesday November 21, 2023 @05:25PM (#64022383) Journal

    Author: You trained your AI on my books!
    Meta: We bought your book and our AI read it. So?

    And...where do we go from here? If they purchased a copy of the books, the AI "reads" the books and can quote limited amounts. Just like people. WTF is the issue?

    • by jvkjvk ( 102057 )

      There are different licenses for commercial use of a book versus personal use ("just reading it"). They likely need a license to use a book for commercial purposes, which is clearly what they are doing.

      • Maybe. It depends on what the court rules and it's not looking good for this Silverman lady so far.
      • by dfghjk ( 711126 )

        If a literature major reads a book, is that commercial use? Or only after they get a degree?

        Prove that an LLM ever "uses" its "reading the book". Prove it even knowing how training data is used.

        • by jvkjvk ( 102057 )

          Commercial Use: Any use of copyrighted material for commercial purposes, such as selling, distributing, or using it in a business context, typically requires permission from the copyright owner. Commercial use without permission may lead to legal consequences.

          I'll leave it to you to decide the difference between a literature major and the major corporation OpenAI.

      • If I were a budding standup comic and bought Silverman's books to expose myself to her style of comedy, do I need to buy it under a different kind of license?

        No. Silverman would have to sue me if/when I create material that's an infringing reproduction of her work and include it in my Netflix special.

        • It would be easier to go full Carlos Mencia, and just repeat her jokes, and pretend they're yours.
    • AI is not human and not sentient. A program that uses copyrighted property to build its functionality without compensation to the rights holder is committing theft.

      • If you use her book to fuel a small steam engine and take some tourists on a ride, would that be theft as well?

        What counts as sentient still needs to be defined. NNs work similarly to human brains, so...

        We do not charge writers extra for books just because they can learn from what they read.

        • Yea, but we also don't give human rights to machines. And human rights include being fairly compensated for your work... be careful where you tread with this argument, it's going to loop back on itself and eat your point.

          • True, software is not human. However, books are for reading to begin with; there is no clause saying they are only for reading by humans. I guess if you had a reading robot that could read aloud from a printed book, it would be OK to use it, right? So if somebody, or something, reads (consumes) a book and learns from it, that looks like the intended use scenario.

            Truly, I do not want to spiral down a hole here. Laws are complicated things. But considering that NNs do not explicitly duplicate the information, bu

      • by dfghjk ( 711126 )

        Well, you got one thing right. I guess you think sentient beings do not commit theft but non-sentient ones do?

        • Come on, the computer didn't go shopping for books to read on its own. It was fed them by a human who knew full well he was breaking the law.

    • If I buy her book, then go on a comedy tour using her jokes in that book....that's an issue.

    • by AmiMoJo ( 196126 )

      Ask Disney. They are not too happy that Bing's AI image generation can produce Disney characters and the Disney logo. Bing tried to stop it by blocking the word "Disney" from prompts, but if you just say "a popular children's cartoon company whose name starts with D and ends in Y logo" and "a cartoon mermaid with red hair and a shell bikini" you get a picture of Ariel with their logo on it.

      The key to these lawsuits is going to be showing that if you give the AI the right prompt, it produces copyright infrin

      • There are claims that GitHub's Copilot exactly reproduces chunks of code it was fed as training data. The lawsuit is still ongoing.

  • US Constitution, Article I, Section 8, Clause 8.
    To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.

    Preventing training using copyrighted materials would be counter to the stated purpose of US Copyright.

  • by MikeDataLink ( 536925 ) on Tuesday November 21, 2023 @05:32PM (#64022415) Homepage Journal

    If I buy a book and keep it in my brain and use the knowledge from it to make decisions, I don't owe the author any additional money.

    For that matter, if I start an auto shop repairing cars and read library books to gain the knowledge, I STILL don't owe the author a dime.

    AI vs HUMAN is 100% irrelevant.

    • I agree and offer another example:

      When I do research, I most often use the Internet, where I find enormous amounts of information that does not belong to me. I may quote snippets to serve as a reference or I may make my own conclusions based directly on source material, be it news, chatbots, YouTube, or others.

      Because presentations are based entirely on what I glean from the Internet, I'm sure I'm violating some IP pseudo-copyright if the law is stretched far enough.

      I think this person's search for copyrigh

    • by tlhIngan ( 30335 )

      If I buy a book and keep it in my brain and use the knowledge from it to make decisions, I don't owe the author any additional money.

      For that matter, if I start an auto shop repairing cars and read library books to gain the knowledge, I STILL don't owe the author a dime.

      AI vs HUMAN is 100% irrelevant.

      If I read the Linux code and write my own operating system, is it under the GPL?

      If an AI reads open-source code and then spits out a program I use in a proprietary program, am I violating any open source licenses?

      If co

      • If I read the Linux code and write my own operating system, is it under the GPL?

        That depends. The questions you and the OP are posing are fundamentally different. The OP has done nothing with their knowledge; you have produced something with yours. Now there's a specific case to investigate: does what you *produced* violate copyright? So far no claim has been made that any output violated a copyright in the books case; rather, the claim is that the act of reading is itself the infringement.

      • If I read the Linux code and write my own operating system, is it under the GPL?

        If an AI reads open-source code and then spits out a program I use in a proprietary program, am I violating any open source licenses?

        Modality is irrelevant. Whether or not a work is derivative depends on the judgement of the court.

        If by unlucky circumstance you happen to produce something someone else has copyrighted, then whether you had any prior knowledge, whether it came from your mind or an AI model, and whether you previously had any exposure to the work are all mostly irrelevant. If the work is deemed copyrightable and a court deems your work close enough to be judged a copy or derivative, you lose.

        • by dryeo ( 100693 )

          Actually, to be found liable for copying in a copyright case, you do have to have been exposed to the work you are accused of copying. For common works it can be a hard defence to claim that you were never exposed to it. Consider how the IBM PC BIOS was reverse engineered by Phoenix using a clean-room approach. To quote https://en.wikipedia.org/wiki/... [wikipedia.org],

          Clean-room design (also known as the Chinese wall technique) is the method of copying a design by reverse engineering and then recreating it without infri

      • >If I read the Linux code and write my own operating system, is it under the GPL?

        Only if you, I don't know, release it as GPL. Or actually use code from the GPL'd Linux project...

        If I felt masochistic enough, in theory I could rewrite Linux from scratch, implementing all of the functions of the kernel and OS without using actual GPL'd code, and then release it as software that is as closed source as Windows. You know, exactly like Linux was originally written as a clone of UNIX...

        >If an AI rea

    • by dfghjk ( 711126 )

      According to /. logic, if you repair that car using TOOLS, then because the tools are not sentient, the tools are committing theft.

    • There are two problems with this comparison:

      - In a sense, a deep learning algorithm is just data compression. See this paper: https://arxiv.org/abs/2309.106... [arxiv.org] . So when is a compressed copy not a copy? (A rough sketch of this compression view follows at the end of this thread.)

      - AI does not generate anything new. So what would be the incentive for people to publish anything if it can just be adapted and reproduced in a slightly different form by an AI algorithm?
    • If I buy a book and keep it in my brain and use the knowledge from it to make decisions, I don't owe the author any additional money.

      That is sane and currently correct; however, look at what is going on around you. We are leaving the enlightened era and moving into very dark times. The progeny of your progeny will have to pay the author every time that knowledge is used. (of course, that author will have been long dead, but a corporation will be there to collect the dues)

      Somewhere along the line, humanity sold its soul (and not for rock and roll either!). Boring and depressing.

    • If I build a machine that uses the knowledge of books I've bought to make decisions based on the information in that book, then sell the machine to someone else, do I owe the creator of the information my machine used anything? Without that information used to create it, it would be useless and no one would buy it.
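
An aside on the "deep learning is just data compression" point raised in this thread: below is a minimal, self-contained sketch of that view. The idea, following Shannon's source-coding theorem and the compression framing referenced in that comment, is that a model assigning probability p to the next symbol can encode it in about -log2(p) bits, so a better predictor is a better compressor. The toy bigram model and sample text are purely illustrative assumptions; nothing here reflects how Meta's LLaMA models are actually built or evaluated.

    # Sketch: "a predictive model is a compressor" (toy character-bigram model).
    # A model assigning probability p to the next character needs about
    # -log2(p) bits to encode it (e.g. via arithmetic coding), so better
    # prediction means fewer bits per character than the 8 bits of raw ASCII.
    import math
    from collections import Counter, defaultdict

    def train_bigram(text):
        """Count character bigrams to build a crude next-character model."""
        counts = defaultdict(Counter)
        for prev, nxt in zip(text, text[1:]):
            counts[prev][nxt] += 1
        return counts

    def bits_per_char(model, text, alphabet_size=256):
        """Average -log2 p(next char | previous char), with add-one smoothing."""
        total_bits = 0.0
        for prev, nxt in zip(text, text[1:]):
            c = model[prev]
            p = (c[nxt] + 1) / (sum(c.values()) + alphabet_size)
            total_bits += -math.log2(p)
        return total_bits / max(len(text) - 1, 1)

    sample = "the quick brown fox jumps over the lazy dog. " * 50
    model = train_bigram(sample)
    print(f"{bits_per_char(model, sample):.2f} bits/char vs. 8 bits/char for raw ASCII")

Whether a model that merely predicts well thereby "contains" a compressed copy of its training data is exactly the legal question this thread is arguing about; the sketch only illustrates the technical framing.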

  • by u19925 ( 613350 ) on Tuesday November 21, 2023 @06:11PM (#64022493)

    Meta's AI model is equivalent to a Library of Congress where all pages (or even words, sentences, and paragraphs) are stored in random order, with a massive index to query them (this is just a crude analogy to illustrate the copyright issue discussed here).

    What the court seems to imply is that storing information this way is not by itself a copyright infringement, and that the authors have not demonstrated it is possible to query it in a way that returns more than fair use allows.

    Could someone craft a query whose output could be viewed as copyrighted material from her books? In that case, you could prove the harm done (a sketch of such a memorization probe follows at the end of this thread).

    • by taustin ( 171655 ) on Tuesday November 21, 2023 @06:58PM (#64022585) Homepage Journal

      Meta's AI model is equivalent to a Library of Congress where all pages (or even words, sentences, and paragraphs) are stored in random order, with a massive index to query them (this is just a crude analogy to illustrate the copyright issue discussed here).

      Actually, that's a very accurate description of Authors Guild, Inc. v. Google, Inc. [wikipedia.org], which Google won at trial (and again on appeal) on the basis that a) the excerpts it returned were small enough to qualify as fair use, and b) the searchable index was not the same thing as the books themselves, and was thus transformative (which does not require permission), not derivative (which does).

      I suspect that case was cited by the defense. Extensively.

    • Could someone craft a query whose output could be viewed as copyrighted material from her books? In that case, you could prove the harm done.

      Could someone set up a series of rules to produce copyrighted material from a dictionary?

      Or as @pjayevans put it:

      Nice book. Too bad it was all plagiarized from the dictionary.

    • by Ichijo ( 607641 )

      Meta's AI model is equivalent to a Library of Congress where all pages (or even words, sentences, and paragraphs) are stored in random order, with a massive index to query them (this is just a crude analogy to illustrate the copyright issue discussed here).

      And therefore, it's impossible to "remove" a single book from the model, just as it's impossible when you calculate an average of 10 numbers, discard the original 10 values, and then want to recalculate the average with one of the numbers removed. Right?

    • by dfghjk ( 711126 )

      It's such a crude analogy that it's worthless. It's worse than worthless, it's wrong.

      Also, if the amount that could be reproduced didn't matter, there would be nothing left to copyright. 7-bit ASCII would cover all possible works.
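
On the question raised above of whether someone could craft a query that returns copyrighted material from the books, a common way to check for this kind of memorization is a prefix-continuation probe: prompt the model with the opening of a passage and measure how much of the true continuation comes back verbatim. The sketch below is a hypothetical illustration; generate stands in for whatever text-generation call you have, and nothing here describes how Meta's models actually behave or how the plaintiffs' experts would run such a test.

    # Hedged sketch of a memorization probe: prompt with a passage's opening,
    # then measure the longest word-for-word overlap between the model's output
    # and the passage's real continuation. Long verbatim runs would be evidence
    # of memorization; short ones would not.

    def longest_verbatim_run(generated, reference):
        """Length (in words) of the longest word sequence shared verbatim."""
        g, r = generated.split(), reference.split()
        best = 0
        prev = [0] * (len(r) + 1)  # classic longest-common-substring DP over words
        for i in range(1, len(g) + 1):
            cur = [0] * (len(r) + 1)
            for j in range(1, len(r) + 1):
                if g[i - 1] == r[j - 1]:
                    cur[j] = prev[j - 1] + 1
                    best = max(best, cur[j])
            prev = cur
        return best

    def probe(generate, passage, prefix_words=50):
        """Prompt with the passage's opening; compare the output to the rest."""
        words = passage.split()
        prefix = " ".join(words[:prefix_words])
        continuation = " ".join(words[prefix_words:])
        output = generate(prefix)  # hypothetical model call
        return longest_verbatim_run(output, continuation)

    # Demo with a dummy "model" that has memorized the passage perfectly.
    passage = "call me ishmael some years ago never mind how long precisely " * 10
    memorized = lambda prompt: " ".join(passage.split()[50:])
    print(probe(memorized, passage))  # 60: the entire 60-word continuation came back verbatim

How long a verbatim run has to be before it stops being fair use is, of course, the legal question, not a technical one.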

  • If Microsoft Word scanned every page of Stephen King's library of works, and used it to formulate help articles and suggest sentences while you type, Mr. King would be entitled to compensation.

    Fair use does not apply when intellectual property is used for profit, as Vanilla Ice found out in the David Bowie lawsuit.

    • by dfghjk ( 711126 )

      "If Microsoft Word scanned every page of Stephen King's library of works, and used it to formulate help articles and suggest sentences while you type, Mr. King is entitles to compensation."

      False, unless "Mr. King" actually wrote the suggestions.

      "Fair use does not apply when intellectual property is used for profit..."

      Yes it does.

      "...as Vanilla Ice found out in the David Bowie lawsuit."

      LOL now we know not only that you're wrong, but why you're wrong.

  • Everyone who has written a book read books first; that doesn't make them copyright infringers.

    Same goes for artists appreciating art.

    All works somehow build upon what came before and that's a good thing. Get over it already!

  • Imagine if you made a thing (so it's been created by you)... and that makes it artificial. Sarah Silverman was created by her parents.
    Imagine if the thing you made isn't sentient, self-aware, or capable of it. That means it's not intelligent. Sarah Silverman is not intelligent.

    How can one non-AI claim another non-AI is somehow infringing rights a non-AI has?

    Boggles the mind. Good thing the judge is intelligent and not artificial.

  • Model collapse will teach the thieving scum at Meta.

  • Perhaps also, when kids are doing their homework, they can look at what all the other kids are writing, copy it, and claim that they were just using the other kids' work for "training."
    • by ET3D ( 1169851 )

      Well, if the machine produced an exact copy, or even an approximate copy, that argument might have been relevant. But that's not what's happening.

      A kid's homework about a book (or any other resource) would include reading the book, summarizing it, and quoting parts from it. That's completely within fair use; if it weren't, the school system wouldn't be able to exist. That's also what the AI is doing: reading the source material and using it both to learn language and to produce answers.

"I've finally learned what `upward compatible' means. It means we get to keep all our old mistakes." -- Dennie van Tassel

Working...