OpenAI, Microsoft, Google, Meta and Amazon Pledge To Watermark AI Content For Safety, White House Says (reuters.com)
Top AI companies including OpenAI, Alphabet and Meta Platforms have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer, the Biden administration said on Friday. From a report: The companies -- which also include Anthropic, Inflection, Amazon.com and OpenAI partner Microsoft -- pledged to thoroughly test systems before releasing them and share information about how to reduce risks and invest in cybersecurity.
The move is seen as a win for the Biden administration's effort to regulate the technology which has experienced a boom in investment and consumer popularity. Since generative AI, which uses data to create new content like ChatGPT's human-sounding prose, became wildly popular this year, lawmakers around the world began considering how to mitigate the dangers of the emerging technology to national security and the economy.
Local models? (Score:4, Insightful)
I don't think you can watermark text in a meaningful way that survives copy/paste. So this is probably more about images.
But there are local models for Stable Diffusion, so how is this even relevant?
Re: (Score:3)
> I don't think you can watermark text in a meaningful way that survives copy/paste.
You could create a canary trap that embeds a serial number based on words within the response.
What's more likely though is they will just embed a disclaimer.
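To make the canary-trap idea above concrete, here's a toy Python sketch: it encodes a serial number by picking between synonym pairs at fixed slots in the text. The synonym list and the serial number are invented for illustration; a real system would work at the model's token level rather than with a hard-coded word list.

```python
# Toy canary trap: hide a per-user serial number in ordinary-looking text by
# choosing between synonym pairs. Purely illustrative.

SYNONYM_SLOTS = [
    ("big", "large"),
    ("quick", "fast"),
    ("start", "begin"),
    ("help", "assist"),
]

def embed_serial(text, serial):
    """Rewrite `text`, picking one word from each synonym pair to encode one bit of `serial`."""
    words = text.split()
    bit = 0
    for i, w in enumerate(words):
        for pair in SYNONYM_SLOTS:
            if w.lower() in pair:
                words[i] = pair[(serial >> bit) & 1]
                bit += 1
                break
    return " ".join(words)

def extract_serial(text):
    """Recover the embedded bits by checking which variant of each pair appears."""
    serial, bit = 0, 0
    for w in text.split():
        for pair in SYNONYM_SLOTS:
            if w.lower() in pair:
                serial |= pair.index(w.lower()) << bit
                bit += 1
                break
    return serial

if __name__ == "__main__":
    msg = "A quick way to start is to ask for help with a big task"
    tagged = embed_serial(msg, 0b1011)   # serial number 11
    print(tagged)                        # reads naturally, but the word choices encode 11
    print(extract_serial(tagged))        # -> 11
```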
Re: (Score:2)
You can't actually do that, because generative Big Data ML AI output is not predictable at the backend. That's why censorship efforts at the LLM level proved futile; what is used instead is a top-level database that intercepts all queries, and if a query collides with an entry in the database, the database outputs a canned response. It's basically the pre-LLM chatbot system that sits on top of the LLM.
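For what it's worth, here's a minimal sketch of the kind of pre-LLM filter described here: check the query against a blocklist first and return a canned response on a hit, otherwise hand it to the model. The blocklist and llm_respond() are placeholders, not anyone's actual moderation pipeline.

```python
# Minimal "filter in front of the model" sketch.

CANNED_RESPONSE = "I'm sorry, I can't help with that."
BLOCKED_PHRASES = {"example banned topic", "another banned topic"}

def llm_respond(prompt: str) -> str:
    # Stand-in for the real model call.
    return f"(model output for: {prompt})"

def answer(prompt: str) -> str:
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return CANNED_RESPONSE        # canned reply; the model never sees the query
    return llm_respond(prompt)

print(answer("Tell me about watermarks"))
print(answer("Please explain this example banned topic"))
```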
Re: (Score:2)
> You can't actually do that, because generative Big Data ML AI output is not predictable at the backend.
It's possible to put map traps (deliberate fictitious entries, like the trap streets on maps) into large models. There are already phrases you can use to determine whether a service is using ChatGPT.
But that aside, you just modify the output that comes back from the LLM before it goes to the end user.
There was a /. story some years back about an app that would do this with your email, so you could track leaks in a large organisation. I just don't have the link to hand. :/
Re: (Score:2)
...how is this even relevant?
Simple, it's 'tethics.' Has your company signed the pledge yet?
Re: (Score:2)
I don't watch crappy shows on HBO.
Re: (Score:3)
Re: (Score:2)
For text, there's quite a bit that can be done with Unicode lookalikes [github.com] and invisible characters [invisible-characters.com] that would easily "survive" copying and pasting.
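A toy example of the invisible-character approach: the two zero-width code points below are real Unicode characters, but the payload string and everything else are made up for illustration.

```python
# Hide bits in text by appending zero-width characters.

ZERO = "\u200b"  # ZERO WIDTH SPACE
ONE  = "\u200c"  # ZERO WIDTH NON-JOINER

def hide(text: str, payload: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in payload.encode("utf-8"))
    invisible = "".join(ONE if b == "1" else ZERO for b in bits)
    return text + invisible            # renders identically, survives ordinary copy/paste

def reveal(text: str) -> str:
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="replace")

stamped = hide("This looks like a perfectly normal sentence.", "gen-by-model-x")
print(stamped == "This looks like a perfectly normal sentence.")  # False, yet looks the same on screen
print(reveal(stamped))                                            # gen-by-model-x
```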
But there are local models for Stable Diffusion, so how is this even relevant?
Just having 'watermarking' as the default will make a big difference. Criminals are lazy and stupid, after all. Even malicious state actors can make dumb mistakes like forgetting to disable watermarking on their local setup.
The majority of non-malicious users won't even know about the 'watermark', let alone bother to remove it. This will help researchers easily avo
Re: (Score:2)
What does criminality have to do with any of this? Most of the complaints are about people doing their work with AI and not disclosing it. Nothing criminal about that; it just makes discriminating against AI content harder, which is what most of the whining is about.
As for "invisible characters", cool story bro. Ever used google lens or any number of similar programs for copying and pasteing text?
Because if watermarking becomes widespread enough, tools to defeat it will rapidly be made from existing software.
Re: (Score:2)
You seem to be under the impression that if a solution isn't 100% perfect, then it's useless. You see this same nonsense every time someone mentions anything related to security. It's incredibly foolish.
What does criminality have to do with any of this?
There are malicious actors and non-malicious actors. Criminals fall into the 'malicious actors' category. There is little reason for non-malicious users to want to remove invisible watermarks. It doesn't hurt them, and they benefit from being able to easily identify some generated content. For malicious actors, such as criminals and hostile state actors, having their generated content watermarked isn't in their best interest. Consequently, most people who would want to disable watermarking or remove existing watermarks are going to be criminals and hostile state actors.
Re: (Score:1)
You seem to be under the impression that if a solution isn't 100% perfect, then it's useless. You see this same nonsense every time someone mentions anything related to security. It's incredibly foolish.
You seem to be under the impression that if it's not 100% useless then it's worth pursuing. The idea of watermarking text is just stupid. It will accomplish so little as to be a waste of time. Depending on how they try to do it, it could also just end up hurting the models in question. We already see evidence of human guidance making the models dumber.
There are malicious actors and non-malicious actors. Criminals fall into the 'malicious actors' category. There is little reason for non-malicious users to want to remove invisible watermarks. It doesn't hurt them, and they benefit from being able to easily identify some generated content. For malicious actors, such as criminals and hostile state actors, having their generated content watermarked isn't in their best interest. Consequently, most people who would want to disable watermarking or remove existing watermarks are going to be criminals and hostile state actors.
If you're not doing anything wrong, you have nothing to worry about, wink wink. I don't think you're on the right website here friend. Most of us here v
Re: (Score:2)
you're wrong that people removing watermarks must be criminals or hostile state actors.
Good thing I didn't say that then. Learn to read.
Re: (Score:2)
For text, there's quite a bit that can be done with Unicode lookalikes [github.com] and invisible characters [invisible-characters.com] that would easily "survive" copying and pasting.
Every time I copy/paste something from a document or a web site, I do it in two steps: copy the original into a plain ASCII text editor to get rid of all formatting and Unicode weirdness, and then copy from the plain text editor into the place where I need it.
I have been doing it like this for years (yes, I know some programs allow Shift-Ctrl-V or Ctrl-Alt-V or Super-V (a.k.a. Windows key-V) to paste without formatting, but the simple fact that I have to mention 3 different key-combinations shows that this is not standardized, and not all software supports this; and yes I also know there are dedicated apps for this such as PureText).
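The same "launder it through plain ASCII" step can be done programmatically. This is only a sketch: the zero-width strip list is short and NFKD folding won't catch every possible lookalike.

```python
# Strip zero-width characters and fold lookalikes back to ASCII.

import unicodedata

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def to_plain_ascii(text: str) -> str:
    text = "".join(ch for ch in text if ch not in ZERO_WIDTH)   # drop invisible characters
    text = unicodedata.normalize("NFKD", text)                  # decompose compatibility lookalikes
    return text.encode("ascii", errors="ignore").decode("ascii")

print(to_plain_ascii("Cle\u200ban te\u200cxt with Ｆｕｌｌｗｉｄｔｈ letters"))
# -> "Clean text with Fullwidth letters"
```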
Re: (Score:2)
Anyway, the point I wanted to make is that it is very easy to get rid of text watermarks, and this will probably become standard procedure when text watermarks become prevalent as those would cause many glitches.
Re: (Score:2)
There is more to the world than 7-bit ASCII.
those would cause many glitches.
Only in broken Unicode implementations, which are quickly becoming a thing of the past.
Re: (Score:2)
"There is more to the world than 7-bit ASCII."
"Only in broken Unicode implementations, which are quickly becoming a thing of the past."
You *do* know you're posting on Slashdot, right?
Re: (Score:2)
Re: (Score:2)
I don't think you can watermark text in a meaningful way that survives copy/paste.
You can: https://arxiv.org/abs/2301.102... [arxiv.org] and according to the paper it's fairly resistant to removal (at least if you don't want to destroy the quality of the output). I wouldn't be surprised if they are doing this already, at least to avoid training on their own generated content when scraping the web. I remember A86, an assembler, fingerprinting its generated code by choosing specific sequences of instructions (doing an add with a negative number instead of a sub, that kind of thing). https://en.wikipedia.org/wi [wikipedia.org]
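For the curious, here's a much-simplified sketch of the general idea in that paper: a secret key plus the previous word selects a "green list", generation is biased toward green words, and a detector just counts how often green words show up. The real scheme operates on model tokens and uses a proper statistical test; the key and the word-level hashing below are made up for illustration.

```python
# Simplified "green list" watermark detector.

import hashlib

SECRET_KEY = "not-a-real-key"

def is_green(prev_word: str, word: str) -> bool:
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0          # ~half the vocabulary is "green" for any given context

def green_fraction(text: str) -> float:
    words = text.lower().split()
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# Unwatermarked text should hover around 0.5; text generated with a green-list
# bias would score noticeably higher, which is what the detector looks for.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```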
Re: Local models? (Score:1)
Re: (Score:2)
Haven't you read the Da Vinci Code?
Just embed some secret numeric sequences in the text, and you'll have the QAnon folks running around in circles for decades!
Worthless (Score:5, Interesting)
A 'corporate pledge' isn't worth the PR hack who issued the press release.
Legislate it as a requirement, with significant sanctions for violations, or it's meaningless.
Re: (Score:1)
THINK! The ONLY thing you can do is control legitimate sources, which are limited in number; it's impossible to control the infinite fakes.
All legit sources must digitally sign their identity to their content. If you don't sign your photo or news, then I will assume it's all FAKE until you do so. If you are a liar, I'll know your content came from you and to not trust it because I don't trust you. Simple, if you think about it.
If you want people to believe your content is actually from you, then you have
Re: (Score:2)
This is correct. It's all about trust and always has been, digital or not.
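A minimal sign-then-verify sketch of that idea, using Ed25519 from the third-party `cryptography` package (my choice of library; the commenters didn't name one):

```python
# Publisher signs content; anyone holding the published public key can verify it.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # the publisher keeps this secret
public_key = private_key.public_key()        # this is published alongside the content

article = b"Photo taken by Jane Doe, 2023-07-21, unedited."
signature = private_key.sign(article)

try:
    public_key.verify(signature, article)    # raises if the content was altered
    print("content matches the publisher's signature")
except InvalidSignature:
    print("content has been tampered with or is not from this publisher")
```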
Won't solve any problem (Score:4)
wtf? It's not like the Getty Images watermark prevented those images from being used to train AI, to the point where some generated images included blobs of white in a similar pattern. So... this will do what, exactly?
define "safer" (Score:4, Insightful)
Talk is cheap (Score:3, Informative)
Ouroboros (Score:2)
Re: (Score:2)
While their primary goal is not anti-competitive, that is always a factor skewing their actions -- and the big players have similar tactics (and often share employees over time, or at the same time in the case of a third-party lobbyist). The big players don't want upstart threats, just innovative small players they can crush or buy out who take most of the risks.
How do you watermark text? (Score:2)
The title track on that album was an instrumental.
I may be cynical, but ... (Score:2)
"The companies -- which also include ... Microsoft -- pledged to thoroughly test systems before releasing them"
Well that will be something new !!
"...and share information about how to reduce risks..."
Something like, buy $$$$$ worth of add-ons to patch the 'thoroughly tested' systems we release today !
This will just make the stupid people happy (Score:2)
There is ABSOLUTELY NOTHING these companies can do to put the toothpaste back into the tube and they know it. The idiots in congress don't, and I'm not even sure the president knows what day it is, or even his favorite ice cream flavor anymore...
AI is a boon to the world. It's going to make life a lot better for everyone. And if you're REALLY worried, go become a plumber, electrician, or stone worker. Seriously, people are charging 20,000 dollars for a 12x10 deck off your house these days... Go do somet
Re: (Score:2)
You're rather dramatically overestimating the capabilities of current AI. What we have now isn't going to significantly change the way we live and work.
I'm not even sure the president knows what day it is
He's doing just fine. [independent.co.uk]
AI is a boon to the world.
That's a bold claim. The true believers usually hedge with a "will be".
Re: (Score:1)
No, he's not 'doing just fine'. Have you watched or listened to him try to bumble through 3 sentences? He's an embarrassment. Put it this way, if aliens showed up tomorrow and said 'Take us to your leader', would you want them having a chat with Joe Biden?
And, AI IS a boon to the world, right this second. I use it every day to help with odd things. It's great with git workflows, for example. It's a lot more concise than reading through Stack Overflow. It has internalized the whole internet (up to 2021
Re: (Score:2)
Reality [politifact.com] differs from the selectively edited clips you're being fed.
That's a boon enough for me, personally.
We've gone from "boon to the world" to a "boon" for just you. That's quite a shift!
It has internalized the whole internet (up to 2021) and can intelligently discuss complex things.
That very obviously isn't true. Though if you really believe that, it does explain a few things...
Re: (Score:1)
It's called extrapolation, nitwit.
Re: (Score:2)
It takes a lot to get me to post a link to an XKCD comic [xkcd.com], but your post is just that stupid.
Re: (Score:1)
We'll see it all on display if Biden's handlers let him have a debate with anyone. I'd be happy to see him chat with RFK. I think RFK would make a much better candidate. Biden is walking us right into WW3. We need someone with unquestionable mental faculties to get us out of this bind. And, yeah, AI is a boon to the world. When you combine 'big data' with the ability to textually embed it and put a human-like 'front end' on it, that's a game changer. If I personally can embed mountains of PDF files and
Re: (Score:2)
Oh, are you in for a world of disappointment!
You can't say I didn't warn you.
Worthless (Score:2)
This means nothing. The minute one of the primary AI providers starts embedding watermarks within their content, the very next minute someone will train a GAN to detect and remove watermarks. That's, uhm, the whole point of GANs: overcoming some arbitrary limitation or restriction to generate output that is deemed as having value. Watermark-stripped content will be deemed as having value.
Gmail sentence completion? (Score:2)
When Gmail completes a sentence, that's AI generated text. How will that be watermarked? An X-AI-Generated email header?
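Purely as illustration of what that hypothetical X-AI-Generated header might look like (no mail provider actually emits this, as far as I know), using the standard-library email module:

```python
# Attach a hypothetical AI-disclosure header to an outgoing message.

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "user@example.com"
msg["To"] = "someone@example.com"
msg["Subject"] = "Meeting notes"
msg["X-AI-Generated"] = "partial; smart-compose"   # made-up header name and value
msg.set_content("Thanks for the meeting today. I'll send the notes shortly.")

print(msg.as_string())
```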