AI | The Courts

OpenAI Offers To Pay For ChatGPT Customers' Copyright Lawsuits (theguardian.com) 27

Blake Montgomery reports via The Guardian: Rather than remove copyrighted material from ChatGPT's training dataset, the chatbot's creator is offering to cover its clients' legal costs for copyright infringement suits. OpenAI CEO Sam Altman said on Monday: "We can defend our customers and pay the costs incurred if you face legal claims around copyright infringement and this applies both to ChatGPT Enterprise and the API." The compensation offer, which OpenAI is calling Copyright Shield, applies to users of the business tier, ChatGPT Enterprise, and to developers using ChatGPT's application programming interface. Users of the free version of ChatGPT or ChatGPT+ were not included. [...] Getty Images, Shutterstock and Adobe have extended similar financial liability protection for their image-making software. The announcement was made at the company's first-ever developer conference today, where Altman said there are now 100 million weekly ChatGPT users. The company also announced a platform for making custom versions of ChatGPT for specific use cases -- no coding required.
  • by Joe_Dragon ( 2206452 ) on Monday November 06, 2023 @06:37PM (#63985850)

    must use lawyerGPT to take advantage of this offer!

  • by HBI ( 10338492 ) on Monday November 06, 2023 @06:47PM (#63985872)

    Absent the word "indemnity," this is a scam.

  • Huge advancement (Score:5, Insightful)

    by mysidia ( 191772 ) on Monday November 06, 2023 @06:49PM (#63985876)

    A software vendor agreeing to take on liability for the operation of its product... that is huge. If only more software vendors would do the same for security issues and the like.

    Rather than remove copyrighted material from ChatGPT's training dataset,

    They talk like that's even an option. ChatGPT would not be useful if it didn't come as a model trained on such broad data.

    The training process should all fit within Fair Use, and people who aren't reproducing copyrighted works, or clear derivatives of them, ought not to face such legal claims in the first place.

    • Re:Huge advancement (Score:4, Interesting)

      by evanh ( 627108 ) on Monday November 06, 2023 @07:25PM (#63985934)

      "The training process should all fit within Fair Use"

      I'd agree with this if the DMCA, or equivalents, didn't exist. As it stands, these are entirely opposing positions. The two can't coexist.

      • by tlhIngan ( 30335 )

        "The training process should all fit within Fair Use"

        I'd agree with this if the DMCA, or equivalents, didn't exist. As it stands, these are entirely opposing positions. The two can't coexist.

        Open source too.

        Because an AI trained on F/OSS code raises similar complexities. Would the AI generate code covered by the GPL? If it's fair use for the AI to be trained on Linux, then the code it spits out, despite deriving from Linux, could be used freely.

        This would mean that if I wanted a driver Linux has, but don't want to obey the GPL, I could feed that driver into an AI as "fair use" and have it emit something similar, but since it was fair use, it's not covered by the GPL.

        • by gweihir ( 88907 )

          Exactly. Case law and statutes need time to catch up, but AI cannot be used to launder copyrighted material into free material of a similar nature. The copyright always stays in place if the "cleaned" material is similar enough to the original in purpose and characteristics. So "hey, ChatGPT, write me a kernel like the Linux kernel" will get you a GPLv2 result, even if it may take a court to say so at this point. But it will happen. For smaller parts, that may or may not hold, depending on size and complexity.

          • by mysidia ( 191772 )

            The copyright always stays in place if the "cleaned" material is similar enough to the original in purpose and characteristics.

            That's not a given. If the cleaned material does not contain protected elements of the original work, then it won't be infringing. The method courts use to determine whether works are similar for copyright purposes is called Abstraction-Filtration-Comparison.

            The original work is abstracted, and non-protected elements are filtered out and excluded from the comparison.

            Which means only the protected expression that survives the filtering step is compared against the allegedly infringing work.

        • by mysidia ( 191772 )

          if I wanted a driver Linux has, but don't want to obey the GPL, I could feed that driver into an AI as "fair use" and have it emit something similar, but since it was fair use, it's not covered by the GPL

          Yes. If you don't obtain a patent covering the driver you invented, then you run this risk. You could also just read the code for each unit of the existing driver (to gain factual knowledge of what that unit does) and write new code for that unit that accomplishes the same result.

          It's obviously legal to do that.

      • by AmiMoJo ( 196126 )

        It's in AI developers' interests to come to some kind of truce with the people they rely on for training data, because those people are starting to fight back by poisoning their media in ways undetectable to humans but problematic for AIs.

        The real issue, though, is not the source of the training data; it's that AIs aren't aware of copyright rules the way most humans are. Most humans know that you aren't allowed to copy large sections of text verbatim, or to copy someone else's composition and style.

    • Comment removed based on user account deletion
      • by Anonymous Coward

        What part of "Fair Use" covers using the entirety of someone else's work without attribution or remuneration?

        Actually, US copyright law as written does allow that: the amount of the work used is only one of several factors, and nothing prevents an entire work from being used, without compensation or even attribution, while still qualifying as fair use if the other factors weigh in its favor.

        Read the actual text carefully and you'll see this. Make sure you understand the exact meaning of the words used or you'll get it wrong ...

        Hence, in practice the law is really vague as to what it allows.

        That's why fair use disputes end up being decided case by case in court.

      • by mysidia ( 191772 )

        What part of "Fair Use" covers using the entirety of someone else's work without attribution or remuneration?

        Fair Use does allow you to disseminate and analyze entire works.

        Nobody has shown that an AI regularly publishes reproductions of the entirety of anyone else's work.

        Attribution and remuneration are irrelevant to the question of infringement.

    • It's not huge. It's a PR stunt.

      The big cost to a company in court is not the few million in legal costs. It's the potential hundreds of millions when they lose a case.

      The huge word you're looking for is "indemnity," which is not what they're offering.

  • OpenAI's margins must be off the charts for them to shoulder liability for users. They would not be doing this if it didn't help their bottom line. It seems like the data they are collecting from users is so valuable that they want to make sure people use ChatGPT no matter what.
    • You could also read it as an expression of OpenAI's confidence in their practices: they think they're on solid legal ground.

      The notion that the data is "so valuable" is countered by the fact that they let you turn it off at will: "Conversations that are started when chat history is disabled won't be used to train and improve our models, and won't appear in the history sidebar," the company said in the announcement. "When chat history is disabled, we will retain new conversations [...]"

      • Or you could read it as a PR stunt because they are offering to pay a few bucks in legal fees but not providing true indemnity if the case is lost.

        The big expense in court for a company isn't a few million in legal fees. It's when they lose that the costs can hit hundreds of millions, and certainly tens of millions.

      • They have statistics on what tiny fraction of people exercise that option. Again, they are doing this because it helps their bottom line.
  • Endemic corporate corruption isn't a bug, it's a feature!

    Purdue Pharma and the Sackler family's murder-for-profit scheme is clearly a winning business model, and OpenAI is following in their footsteps.

    1. Find a loophole that enables vast exploitation. Illegal activity is no problem.

    2. ???

    3. Profit at the billionaire level with no accountability.

    Lawsuits over copyright are chump change, an easily absorbed cost of doing business. What comes after that will be the real dumpster fire.

  • by gweihir ( 88907 ) on Monday November 06, 2023 @09:58PM (#63986272)

    Your lawyers being paid does not mean much if you are on the hook for damages, and damages are not legal costs. You will also be required to _stop_ using the infringing data, and that will not be covered either.

    There is no way they will actually cover everything.

  • ... the better lawyers and everyone can kick out epic lawsuits for a few pennies. LOL!

    Perhaps OpenAI is close to this, which is why they can be comfortable making this offer.

  • The reason this is offered is that they expect to win handily. Current court cases do not look good for the rent-seekers who derive their parasitic income from copyright and IP laws: most claims have been dismissed, and the remaining claims now face a judge who says "please demonstrate where the AI produces art that is a copy of your work." The plaintiffs can't do that, and they know it, so the court cases are going to vanish in a few months.

    And that's not even counting specific exemptions from copyright law.
