OpenAI Supports California AI Bill Requiring 'Watermarking' of Synthetic Content

OpenAI said in a letter that it supports California bill AB 3211, which requires tech companies to label AI-generated content. Reuters reports: San Francisco-based OpenAI believes that for AI-generated content, transparency and requirements around provenance such as watermarking are important, especially in an election year, according to a letter sent to California State Assembly member Buffy Wicks, who authored the bill. "New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content," OpenAI Chief Strategy Officer Jason Kwon wrote in the letter, which was reviewed by Reuters.

AB 3211 has already passed the state Assembly by a 62-0 vote. Earlier this month it passed the Senate Appropriations Committee, setting it up for a vote by the full state Senate. If it passes by the end of the legislative session on Aug. 31, it will advance to Governor Gavin Newsom, who must sign or veto it by Sept. 30.
This discussion has been archived. No new comments can be posted.
  • A good idea (Score:4, Interesting)

    by Baron_Yam ( 643147 ) on Monday August 26, 2024 @03:55PM (#64737174)

    Any time it is potentially ambiguous... AI output should be identified as such. If it's a voice on a phone system, there should be beeps. If it's a video, a disclaimer. If it's text, there should be an attribution.

    • by dbialac ( 320955 )
      And never mind that you can just take real photos and add the watermark to make them appear to be false.
      • by unrtst ( 777550 )

        Personally, I'm starting to look forward to how the youngest generation is going to view all media.

        There was a time when, "It's a photo, so you know it happened for real," was nearly always true, and that wasn't that long ago. For kids growing up now, there is little difference between raw video, video with filters, real video in cinema, CGI enhanced video in cinema, full CGI video in cinema, and now all the AI convenience and absurdities. They're probably going to grow up to assume that all 3rd party media

      • Or, inversely, you get an AI running on a Russian server farm to output your propaganda and then claim "see, no watermark, it must be true!"

    • So... what about people who install open-source AI on their personal computers and generate all they want, unhindered by any need to watermark anything?

      These are getting easily within reach of the average "geek".

      And hell, one can still fall back on good old Photoshop-style editing, if you're good enough... maybe even combining the two... you can generate stuff good enough to fool most people.

      • by narcc ( 412956 )

        "People can break the law, so having a law is pointless!"

        It's not a great argument.

    • And any time you are hacking or doing some other nefarious thing on the internet, make sure you set the Evil Bit [wikipedia.org] on your packets.

      • I'm not talking about stopping that kind of thing, but setting a standard for legal use. We're not that far off from a future where you won't know if you're interacting with a human or not unless you're physically in the same place. I'd like to know.

        • What scenarios do you imagine where this would be beneficial?

        • Fun fact: the signals from your lying eyes and ears will one day be interceptable and changeable in real time by computers. Should the computer be required to tell you that it's altering what you're allowed to see and hear? You and many others will probably say yes to that question, but we all know your answer will fall on deaf ears, turning a blind eye to your plight.
  • Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

    -- First Amendment to the Constitution of the United States

    • This is California we're talking about. The First Amendment only applies to speech the State approves of. Anything that poses a threat to the apparatus of the State, its enablers, and its sycophants is considered "misinformation" and is not protected by the First Amendment.

      • Bruh, the Constitution supersedes state law. Therefore, California could not operate as you fantasize. Even if it tried, any such act would immediately be challenged and overruled, especially by the conservative Supreme Court. Your conclusion and yo mamma therefore lack credibility. But keep brainwashing yourself with lies to rationalize your jealousy of the most awesome state whose GDP is 5th in the world.
      • by narcc ( 412956 )

        Have any evidence for that bizarre fantasy?

  • >OpenAI supports regulation that would hamper their competitors and help them maintain their rapidly diminishing edge, now that they've benefited from growing at a time when there was 0 AI regulation

    Fuck them, and Sam Altman, so hard

  • Any watermarking is easily removed and easily faked. The tools will be out there for both good and bad actors. Even enacting laws that punish falsifying such watermarks won't stop it. Somebody is going to try to solve it with blockchain provenance, but that will just be a complex mess and not catch on. Truth and falsehood are due for some serious scrutiny; let's hope our growing pains aren't too traumatic. We've got plenty of other issues to deal with that handling this poorly could make a lot worse.
    • Any watermarking is easily removed and easily faked. The tools will be out there for both good and bad actors. Even enacting laws that punish falsifying such watermarks won't stop it. Somebody is going to try to solve it with blockchain provenance, but that will just be a complex mess and not catch on. Truth and falsehood are due for some serious scrutiny; let's hope our growing pains aren't too traumatic. We've got plenty of other issues to deal with that handling this poorly could make a lot worse.

      Eventually, we'll have to have our Butlerian Jihad to free us from the "thinking" machines. I know they don't actually think yet, but the spewing of false info at scale is problematic. And the cat is DEFINITELY out of the bag on that front. Unfortunately, nearly every aspect of modern society depends on the machines. Our divorce from the technology we're slowly watching ourselves lose control of may very well be a violent one, simply for the fact that we're totally dependent on them today.

      We're smart enough
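
    The claim above that watermarks are easily removed is easy to demonstrate for the simplest class of invisible watermark. A minimal Python sketch (my own illustrative example, not how any production system works) embeds a mark in the least-significant bits of pixel values and shows that a trivial transform such as brightening every pixel by one destroys it:

    ```python
    # Sketch: fragility of a least-significant-bit (LSB) watermark.
    def embed(pixels, bits):
        """Overwrite each pixel's lowest bit with a watermark bit."""
        return [(p & ~1) | b for p, b in zip(pixels, bits)]

    def extract(pixels, n):
        """Read back the lowest bit of the first n pixels."""
        return [p & 1 for p in pixels[:n]]

    mark = [1, 0, 1, 1, 0, 0, 1, 0]
    img = [200, 13, 77, 54, 91, 120, 33, 250]

    marked = embed(img, mark)
    assert extract(marked, 8) == mark  # watermark survives intact

    # A trivial edit (brightness +1) flips every LSB and erases the mark:
    brightened = [p + 1 for p in marked]
    assert extract(brightened, 8) != mark
    ```

    Robust schemes spread the signal across frequency coefficients instead of raw pixels, but the same arms race applies: anything a detector can find, a sufficiently motivated editor can perturb.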

  • I make art and music. I'll purposely watermark *everything*, ideally with keys that make it appear it was generated by North Korea or other evil empires. That'd be sooooo coool!

  • 1. Zealously enforce watermarking of AI-generated content for the normies/regular businesses/what most of the public will see.
    2. Three-letter agencies all over the planet continue to use it with impunity, with no watermarks, to sway public opinion.
    3. Those accused of spreading "disinformational" videos will be charged with stripping watermarks from AI videos, though the videos are real, and convicted of felonies.
    4. Truth would be dead.
  • But then someone will just train AI to remove watermarks!
  • What counts as AI-generated? The world is full of tools that enhance and modify images, and many of those technologies would count as AI-based. Are we finally getting a means to identify all those photoshopped selfies of celebrities? Thought not.

    Back in the day, one could patent any known process by tacking "...with a computer" onto the methodology. Are we now getting a similar old-world/new-world distinction, with rules applying to things done "...with a large transformer model"?

  • I've wanted this for decades in documentaries, but think it should be ubiquitous: a visible abbreviation in the corner of images/video when something isn't real or was significantly altered. Once CGI became common, actual video of a capsule in space could no longer be distinguished from a mere simulation; so, in a documentary no less, we can't tell what we're looking at!
  • I have already contacted my congressman's office and I have since heard of others voicing similar ideas but for some reason society isn't catching on very quickly...
    We cannot succeed by watermarking fake videos. We must do the opposite: We watermark "verified" videos instead.
    A "verified" video will have data encoded from the camera hardware into the image/frames that can be used as a checksum to verify that the content hasn't been modified post capture.
    If modifications happen post capture, like by YouTube
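
    The capture-time verification idea above can be sketched in a few lines. This is a hypothetical illustration, assuming a symmetric per-device key for simplicity; a real scheme would use public-key signatures (e.g. Ed25519) so that verifiers never hold the signing key, plus a way to chain frame hashes across a whole video:

    ```python
    # Hypothetical sketch: camera signs a hash of each frame at capture time,
    # so any post-capture modification invalidates the tag.
    import hashlib
    import hmac

    DEVICE_KEY = b"secret-key-provisioned-into-camera"  # assumption, illustrative only

    def sign_frame(frame_bytes: bytes) -> str:
        """Hash the raw frame, then tag the hash with the device key."""
        digest = hashlib.sha256(frame_bytes).digest()
        return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

    def verify_frame(frame_bytes: bytes, tag: str) -> bool:
        """Recompute the tag and compare in constant time."""
        return hmac.compare_digest(sign_frame(frame_bytes), tag)

    frame = b"\x00\x01raw sensor data..."
    tag = sign_frame(frame)

    assert verify_frame(frame, tag)                  # untouched frame verifies
    assert not verify_frame(frame + b"edit", tag)    # any edit breaks the tag
    ```

    The hard problems are key management (extracting the key from one camera lets you "verify" fakes) and legitimate transforms like re-encoding, which is exactly the YouTube case the comment raises.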

