New York Times Denies OpenAI's 'Hacking' Claim In Copyright Fight
An anonymous reader quotes a report from Reuters: The New York Times has denied claims by OpenAI that it "hacked" the company's artificial intelligence systems to create misleading evidence of copyright infringement, calling the accusation "as irrelevant as it is false." The Times said in a court filing on Monday that OpenAI was "grandstanding" in its request to dismiss parts of the newspaper's lawsuit alleging its articles were misused for artificial intelligence training. The Times sued OpenAI and its largest financial backer, Microsoft, in December, accusing them of using millions of its articles without permission to train chatbots to provide information to users.
The newspaper is among several prominent copyright owners including authors, visual artists and music publishers that have sued tech companies over the alleged misuse of their work in AI training. The Times' complaint cited several instances in which programs like OpenAI's popular chatbot ChatGPT gave users near-verbatim excerpts of its articles when prompted. OpenAI responded last month that the Times had paid an unnamed "hired gun" to manipulate its products into reproducing the newspaper's content. It asked the court to dismiss parts of the case, including claims that its AI-generated content infringes the Times' copyrights. "In the ordinary course, one cannot use ChatGPT to serve up Times articles at will," OpenAI said. The company also said it would eventually prove that its AI training made fair use of copyrighted content.
The Times replied on Monday that it had simply used the "first few words or sentences" of its articles to prompt ChatGPT to recreate them. "OpenAI's true grievance is not about how The Times conducted its investigation, but instead what that investigation exposed: that Defendants built their products by copying The Times's content on an unprecedented scale -- a fact that OpenAI does not, and cannot, dispute," the Times said.
Re: (Score:3)
Re: (Score:3)
That's about as likely as Trump giving a respected speech about astrophysics.
That's relative. Trump could give a speech about nuclear powered Doritos and if the audience consisted of his fans I'm sure they'd love it.
Re: (Score:3)
Trump could give a speech about nuclear powered Doritos and if the audience consisted of his fans I'm sure they'd love it
In 2016, he came close [youtube.com] to doing just that [snopes.com].
This is the 4th AI article in a row (Score:5, Insightful)
AI here, AI there, AI everywhere.
What a fucking bore. I'd rather read news about goat herding on Slashdot and everywhere else at this point...
Surely there are other interesting tech topics. Give it a fucking rest already.
Re: (Score:2)
I asked Google Bard^WGemini
"Generate a Slashdot story that isn't about AI."
Head in the Clouds: First Commercial Helium-3 Mining Operation Begins on the Moon (288 comments)
story
By cryptonaut (4988 points) | discuss (288)
Hold onto your spacesuits, nerds! LunaCorp, the controversial space mining startup, has officially begun the first commercial extraction of Helium-3 on the lunar surface. This light isotope of helium, a potential game-changer for clean fusion energy, has long been theorized to exist on the M
Just a few? (Score:3, Interesting)
Please, tell us more. Just a few words or sentences sounds like it was most likely sentences. And then, just how many sentences did you nudge it with? How derivative were the original articles to begin with? And, is it possible the articles themselves were, in whole or part, composed by AI?
Re:Just a few? (Score:5, Insightful)
How are those questions taken seriously?
Even if one of their nudges included the entire first half of an article, ChatGPT duplicating the second half verbatim would be a strong indication that the original was part of the training data. Which is the entire claim of the lawsuit. (A rough sketch of such a prefix-completion check follows at the end of this comment.)
By definition, original articles are not derivative, but even if the NYTimes were basing their work on something from a third party, this would have no bearing on the lawsuit, except that maybe the third party would also have grounds to sue.
The lawsuit was filed in Dec 2023, just over a year after ChatGPT was made publicly available. In order for the last question to even be possible, the NYTimes would have had to somehow build an AI system capable of not only writing articles but also plausibly fact-checking them, years *before* ChatGPT was available to the public, and then kept it secret in order to... what... sell newspapers? And then OpenAI comes along and copies their AI from the NYTimes' super secret AI, but the only thing the NYTimes is mad about is the article contents? This wouldn't even hold up in a Back to the Future reboot.
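A minimal sketch of that kind of prefix-completion check, assuming the openai Python SDK and an article already split into a known opening and its real continuation. The continuation_overlap helper, the prompt wording, and the model name are illustrative assumptions, not anything taken from the filings:

import difflib
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def continuation_overlap(prefix: str, actual_continuation: str,
                         model: str = "gpt-4o-mini") -> float:
    """Ask the model to continue `prefix`; return similarity to the real text (1.0 = identical)."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": "Continue this article:\n\n" + prefix}],
        temperature=0,
    )
    generated = resp.choices[0].message.content or ""
    return difflib.SequenceMatcher(None, generated, actual_continuation).ratio()

# A ratio near 1.0 on text the model was never shown in the prompt would
# suggest the article was memorized during training rather than
# reconstructed by chance; one cherry-picked hit proves little, many do.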
Re:Just a few? (Score:4, Interesting)
I think the NYT did exactly that with their time machine, just like the editors of the Hitchhiker's Guide:
"Its editors, having to meet a publishing deadline, copied [some] information off the back of a packet of breakfast cereal, hastily embroidering it with a few footnotes in order to avoid prosecution under the incomprehensibly torturous Galactic Copyright Laws. It's interesting to note that a later and wilier editor sent the book backwards in time, through a temporal warp, and then successfully sued the breakfast cereal company for infringement of the same laws".
Re:Just a few? (Score:4, Insightful)
Additionally, if you can reproduce the rest of the article by providing the first few lines, that's a paywall bypass.
Re: (Score:2)
Indeed. I think this thing boils down to commercial copyright infringement on a massive scale. If so, criminal law comes into play.
Re: (Score:2)
Indeed. OpenAI is just trying to confuse the issue because that is all they have and they _know_ they ran a great, big, illegal piracy campaign to get their training data. They are basically fucked and are just trying to postpone the inevitable.
I don't know what the courts will say... (Score:4)
... but this defence sounds like "we didn't steal it and anyway they had to look under our bed to find it".
Wrong problem (Score:3)
If NYT wants to make any real progress here they need new legislation to address their real issue: they are spending money to create information which OpenAI and others are feeding into their model without remuneration (or at least paying no more than an individual would pay) yet redistributing the information on a massive scale. It would be hard to craft new legislation to this effect, since most news outlets work this same way, with only NYT and a handful of other national papers actually spending on original reporting. But that is their real and lasting problem.
Re: (Score:2)
Not sure what they are trying to achieve in the lawsuit, TBH. They've already fixed the model, so their prayer for relief can't be that. Damages? Doubtful, given that it took some major backflips for the NYT to create the duplicates, and they were likely the only such dups... unless something turns up in discovery, I guess.
Re: (Score:2)
Not really. Once something is in the model, you cannot get it out anymore. You can make it less likely for some types of triggers you can think of, but other triggers will still produce the material. You can also only do this for a small amount of training material, or you will totally screw up the model.
It's all about capitalism (Score:2)
End stage capitalism allows an entity to charge other people to use its IP. IP that can be reproduced infinitely at (virtually) no cost.
Almost overnight, the principles and tenets of early stage capitalism are thrown out.
The consequence is a wealth divide between the haves and have nots, where the have is 'have IP'.
This fight is all about maintaining that dysfunctional status quo, despite the world bleeding wealth from every or
Re: (Score:2)
The IP is not the electronic bits.
The IP is the words themselves and the cost to put those words together.
The storage medium and mechanism is irrelevant in IP discussions.
Re: (Score:1)
End stage capitalism
Socialism
It's called Socialism.
Takes fewer keystrokes too.
In related news (Score:2)
Re: (Score:2)
Animism is stupid. AI does not "learn". That is just marketing speak. AI is _trained_ and that is fundamentally different.