The Nation's Oldest Nonprofit Newsroom Is Suing OpenAI and Microsoft (engadget.com)
The Center for Investigative Reporting (CIR), the nation's oldest nonprofit newsroom, sued OpenAI and Microsoft in federal court on Thursday for allegedly using its content to train AI models without consent or compensation. CIR, founded in 1977 in San Francisco, has evolved into a multi-platform newsroom, with Reveal as its flagship distribution platform. In February, it merged with Mother Jones.
"OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material," said Monika Bauerlein, CEO of the Center for Investigative Reporting, in a statement. "This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it." Bauerlein said that OpenAI and Microsoft treat the work of nonprofit and independent publishers "as free raw material for their products," and added that such moves by generative AI companies hurt the public's access to truthful information in a "disappearing news landscape." Engadget reports: The CIR's lawsuit, which was filed in Manhattan's federal court, accuses OpenAI and Microsoft, which owns nearly half of the company, of violating the Copyright Act and the Digital Millennium Copyright Act multiple times.
News organizations find themselves at an inflection point with generative AI. While the CIR is joining publishers like The New York Times, New York Daily News, The Intercept, AlterNet and Chicago Tribune in suing OpenAI, other publishers have chosen to strike licensing deals with the company. These deals allow OpenAI to train its models on those publishers' archives and ongoing content, and to cite information from them in responses offered by ChatGPT.
"OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material," said Monika Bauerlein, CEO of the Center for Investigative Reporting, in a statement. "This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it." Bauerlein said that OpenAI and Microsoft treat the work of nonprofit and independent publishers "as free raw material for their products," and added that such moves by generative AI companies hurt the public's access to truthful information in a "disappearing news landscape." Engadget reports: The CIR's lawsuit, which was filed in Manhattan's federal court, accuses OpenAI and Microsoft, which owns nearly half of the company, of violating the Copyright Act and the Digital Millennium Copyright Act multiple times.
News organizations find themselves at an inflection point with generative AI. While the CIR is joining publishers like The New York Times, New York Daily News, The Intercept, AlterNet and Chicago Tribune in suing OpenAI, others publishers have chosen to strike licensing deals with the company. These deals will allow OpenAI to train its models on archives and ongoing content published by these publishers and cite information from them in responses offered by ChatGPT.
Boot meet other foot ... (Score:5, Interesting)
This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it."
Well, this is entertaining, watching soulless mega corporations being dragged into court over copyright violations for a change. Usually the boot is on the other foot.
Re: (Score:2)
If it's on the net, it's fair game. Isn't that one of the excuses people use when stealing music, videos, and software?
Re: (Score:2)
This free rider behavior is not only unfair, it is a violation of copyright. The work of journalists, at CIR and everywhere, is valuable, and OpenAI and Microsoft know it."
Well, this is entertaining, watching soulless mega corporations being dragged into court over copyright violations for a change. Usually the boot is on the other foot.
Thing is, the giant corporations just have to shuffle a little of the budget this way rather than that, and all these lawsuits disappear in a puff of lawyer farts. We can be upset about it, but until somebody with the bankroll to really stand these bastards down comes along with the same sentiment, it's just going to keep rolling. Though I must admit, just seeing it happen at all is pretty entertaining in the moment.
It's not illegal if no one stops us! (Score:5, Insightful)
Well, they're not wrong .. (Score:2)
Re: (Score:1)
"OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material,"
Have you heard about YouTube yet?
Re: (Score:3)
YouTube allows fully cited extracts from media as part of a review. ClippyAI uses other people's material without citation or compensation.
Reminds me of Aaron Swartz (Score:1)
https://en.wikipedia.org/wiki/... [wikipedia.org]
Kind of a "yawn" ? (Score:2, Insightful)
I mean, yeah - they could have opted to try to strike a licensing deal with Microsoft. A bunch of news outlets did. Or, obviously, they can go the lawsuit route.
I feel like when you're a non-profit news source to begin with though? You're going to have a tougher time arguing about losses they caused you. It looks to me like they're basically letting people read/access their articles for free under a Creative Commons license. I realize there's a difference between letting people access what you share for per
Re: (Score:2)
This argument always comes up, but there's a fundamental flaw with it that none of the proponents like to talk about. Most of the content that artists and authors complain is being used against their will was not shared by them directly. Some rando shared it illegally somewhere (aka warez or a sloppy licensee) and some spider read it (wink wink).
Re:Kind of a "yawn" ? (Score:5, Informative)
I feel like when you're a non-profit news source to begin with though? You're going to have a tougher time arguing about losses they caused you.
I don't see why. Nothing about being a non-profit implies you don't have revenue streams or that you might wish to protect them, especially if those revenues are what enable you to keep doing what you do.
This is like how the internet changed surveillance (Score:4, Insightful)
Re: (Score:2)
Surveillance used to take time, people, and effort; then the internet came along and now surveillance can be done on a mass scale at low cost. Before AI, people could learn from musical works, learn from authors' books and articles, learn from painters, filmmakers, photographers, etc., but it took effort and it was not scalable: one person could not emulate 100 filmmakers. Now AI has brought scale to mimicking art, so one software program can mimic the art of all artists. Copyright was not made for this situation. Just like with surveillance, we need new laws about what is allowed and what is not allowed. The nature of society will be decided by this. We probably won't get it right the first time; it will have to gradually change over time, and the lawmakers will have to understand what the problems are.
You're right in that scalability is not relevant to copyright law. The intent is to protect the works of people, not ensure their gainful employment. This is merely one in a long line of situations where people are upset about change coming for their jerbs.
I suppose there could be a popular uprising against technology, yet the typical pattern has been some jerbs going out of style while the rest of society looks on indifferently and shrugs. I believe what is most likely here is that the lost court cases pile up a
Napster of Muppets feat. Don't copy that floppy! (Score:3)