
EU Proposes Criminalizing AI-Generated Child Sexual Abuse and Deepfakes

An anonymous reader quotes a report from TechCrunch: AI-generated imagery and other forms of deepfakes depicting child sexual abuse (CSA) could be criminalized in the European Union under plans to update existing legislation to keep pace with technology developments, the Commission announced today. It's also proposing to create a new criminal offense of livestreaming child sexual abuse. The possession and exchange of "pedophile manuals" would also be criminalized under the plan -- which is part of a wider package of measures the EU says is intended to boost prevention of CSA, including by increasing awareness of online risks and making it easier for victims to report crimes and obtain support (including granting them a right to financial compensation). The proposal to update the EU's current rules in this area, which date back to 2011, also includes changes around mandatory reporting of offenses.

Back in May 2022, the Commission presented a separate piece of CSA-related draft legislation, aiming to establish a framework that could make it obligatory for digital services to use automated technologies to detect and report existing or new child sexual abuse material (CSAM) circulating on their platforms, and identify and report grooming activity targeting kids. The CSAM-scanning plan has proven to be highly controversial -- and it continues to split lawmakers in the parliament and the Council, as well as kicking up suspicions over the Commission's links with child safety tech lobbyists and raising other awkward questions for the EU's executive over a legally questionable foray into microtargeted ads to promote the proposal. The Commission's decision to prioritize the targeting of digital messaging platforms to tackle CSA has attracted a lot of criticism that the bloc's lawmakers are focusing on the wrong area for combating a complex societal problem -- which may have generated some pressure for it to come up with follow-on proposals. (Not that the Commission is saying that, of course; it describes today's package as "complementary" to its earlier CSAM-scanning proposal.)
"Fast evolving technologies are creating new possibilities for child sexual abuse online, and raises challenges for law enforcement to investigate this extremely serious and wide spread crime," said Ylva Johansson, commissioner for home affairs, in a statement. "A strong criminal law is essential and today we are taking a key step to ensure that we have effective legal tools to rescue children and bring perpetrators to justice. We are delivering on our commitments made in the EU Strategy for a more effective fight against Child sexual abuse presented in July 2020."

The final shape of the proposals will be determined by the EU's co-legislators in the Parliament and Council. "If/when there's agreement on how to amend the current directive on combating CSA, it would enter into force 20 days after its publication in the Official Journal of the EU," adds TechCrunch.
  • Explain to me... (Score:4, Insightful)

    by Zolmarchus ( 2646979 ) on Wednesday February 07, 2024 @08:14AM (#64221768)

    ...how these statements reconcile in people's heads:

    > AI-generated imagery

    and

    > Fast evolving technologies are creating new possibilities for child sexual abuse online

    Are they AI-generated or not? Did someone need to sit and pose for the imagery or not? Or is this Schrodinger's image, both real and fake at the same time?

    • If I understand it correctly, it's both. It's taking the face of a real child and putting it on an AI generated body.

      At least ... I'd hope it's that, because if it's a 100% generated picture, I wonder who the victim is.

The victim is all the people who no longer get their abuser successfully prosecuted, because their defence is "the images are AI generated, prove me wrong."

        • Erh... it's a picture, showing a person? Either it is that person or it is not.

        • That seems about as likely as using that defense for literally every other crime where photos and video are used. Do you also suggest banning AI images in general because it will provide that excuse for all surveillance footage? Or at a minimum, surveillance footage obtained from the victim? "Go ahead, prove this video of me kicking in your door and beating you up isn't an AI deepfake!"
The threshold is 'beyond a reasonable doubt', not 'beyond any possible alternative that's not contrary to the laws of physics'.
          • by BranMan ( 29917 )

From what I know (not a lawyer!), that's not the way the law works. Before AI deepfakes were a thing, the defense *could* argue that the "video of me kicking in your door and beating you up" was a real video - shot on an elaborate stage to mimic the victim's house, with actors, and the footage was placed on the surveillance system. Fake evidence.

Would that produce a reasonable doubt in a juror's mind? I kind of doubt it. While it's *possible*, it isn't reasonable to assume that's what happened, unless there's some evidence supporting it.

        • Comment removed based on user account deletion
Using a real person as a deepfake for these types of images is already a crime. This is about 100% fake, generated child porn.
I doubt it. People in general are not logical thinkers; they are emotional thinkers. They say it's to protect the children, but AI-generated images have no direct victims. The ick factor remains, though, so they will find an argument to justify a ban anyway. It will probably be that it MIGHT encourage real abuse, without any evidence that it does.

This has always been the case, from gay people to witches: when people find something distasteful, they will try to stop it, regardless of the harm that causes. Hell, plenty of wars have been started that way.

As far as I'm concerned, the law should not protect someone's fee-fees but someone from actual harm.

          I find a LOT of things distasteful and outright disgusting. But if all of these things are made illegal, I guess a lot of CEOs would be behind bars for good.

          Hmm... wait a moment, I think you're on to something here... Can you get back to this in a moment, we might have a few laws in the making...

    • by RobinH ( 124750 )
      Someone making an AI-generated picture of a real-life person (particularly a minor) performing a lewd act and distributing it online is abuse of said real-life person. If you can't see why it's wrong, you need to check your moral compass.
      • And that's a good case for that scenario, but that's the deepfakes issue; AI in general creates pictures of people who never existed.
But that's already a crime in the EU. This is about 100% fake, generated child porn, so no real person was used for the images.
If AI versions are not banned, it will rapidly become impossible for anyone to prove a child porn charge. Every defence will become "it's AI generated, prove me wrong."

How? Using the real face of a child to generate child porn is already illegal in the EU. This is about a 100% digitally created, non-existing person and the resulting 'child' porn - in reality 'nothing more' than a very realistic animation.
    • Better than that, it's proof that at some point the LLM had access to something close enough to CP for it to generate believable fabrications. So they can arrest the LLM developers and anyone making prompts for CP adjacent images.
      • Not really. They had access to adult porn, fully clothed children, and medical literature. This is enough information to create child porn convincing enough to be disgusting.

  • by AmiMoJo ( 196126 ) on Wednesday February 07, 2024 @08:14AM (#64221770) Homepage Journal

    Anyone know what this "paedophile manual" bit is about? The word "manual" doesn't appear in the proposal (https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2022%3A209%3AFIN) so it's not clear exactly what they are referring to.

    The rest of it looks fairly standard, the usual stuff about educating children about online dangers, and ensuring that the law covers things like live-streaming. As is often the case with EU proposals, many/most member states are already doing most/all of it.

    • I'd say, if they want to create a rule book for dealing with something like that, they might not want to name it a "manual".

      That term could be misinterpreted.

      • by AmiMoJo ( 196126 )

        As far as I can tell it is something that the journalist came up with, it's not in the EU proposal.

        I'm guessing it's something related to instructions on how to make AI produce illegal images, the kind of thing that gets posted to 4chan regularly.

  • by Anonymous Coward

    Police: "How old is the female in that artificially generated image?"
    Artist: "About 19 seconds. Hot off the AI press."

  • Mixed (Score:2, Insightful)

    by Anonymous Coward

    I'm a bit mixed about this. On the one hand, if AI generates kiddie porn, then no kids are needed to do it, and so on that basis, we should be letting such things happen. So long as none of the generated images are identifiable as real people, then "where's the harm?". On the other hand, kiddie porn is so utterly offensive, we really shouldn't be encouraging any more of it.

I'm inclined to the first option - that is, to let it go ahead, but leave all the same laws behind that we already have - i.e., no, you can't involve real children.

    • The big open hole in this is that of provenance. If an unknown image is found, how do you prove it is AI generated? Even if you prove it was generated, how do you prove it was done using a model that wasn't trained on illegal material? You either have to start with the assumption that every unknown image is OK or that every unknown image is illegal. There isn't really an easy way to do anything else.

      • Re:Mixed (Score:4, Insightful)

        by Opportunist ( 166417 ) on Wednesday February 07, 2024 @09:57AM (#64222006)

        The big open hole in this is that of provenance. If an unknown image is found, how do you prove it is AI generated? Even if you prove it was generated, how do you prove it was done using a model that wasn't trained on illegal material?

        I don't. In my jurisdiction, whoever accuses has the burden to provide proof of guilt, the accused doesn't have to prove his innocence.

        • In my jurisdiction, whoever accuses has the burden to provide proof of guilt, the accused doesn't have to prove his innocence.

          Usually with the result that instead of making bad things illegal, easily proven things are made illegal.

        • That's sort of my point. It erases all the proof that would otherwise be there, so all such materials can be waved away as AI generated, even when they aren't. As you said, the burden of proof requires them now to prove it's not AI generated.

          Before AI generated images, the same images would be prosecuted. Now you can just say it's AI and they'd have to prove it wasn't.

It's a giant gaping loophole that maybe ought to be closed. There are reasons it should be "free speech" but it can indirectly cause real harm.

          • It's pretty trivial to prove that some picture is based on a real person.

            Present the real person.

Who is presenting the real person in this scenario and who does it help? I'm not sure I understand what you're saying. Are you talking about the deepfake portion of the law or the child images? In the deepfake case, technically presenting the real person only proves that it's a real person - it doesn't prove that it's not a real photo of them.

You complained that the defense against a real image would be that it's computer generated. That defense falls flat when the person shown in the real image is produced.

                And that by itself is already damaging. And hence should be illegal.

An AI is capable, though, of creating an image of a non-existing person. And creating any kind of image of a person that does not exist damages nobody.

                • It's not exactly trivial to find a person in a world of 8 billion who was a child at the time but who may have aged significantly by the time there's a trial. You are presumed innocent. The prosecution doesn't even have a case unless they find the real person. Without this law they would also have to prove the image is not a deepfake.

            • Yes, only 8 billion people have to be searched. All within a 20,000 mile distance. Piece of cake.

              • Unless you abuse a person who has never appeared in public, it is fairly trivial to find someone based on their image.

                • So? When did it become illegal to not appear in public? Or in your twisted psyche it's ok to abuse kids who don't appear in public?

                  • When you keep your kids away from the world, that already constitutes abuse. No sexual component necessary.

Yeah, you don't have internet and you live in Africa, and the neighbors who know the kids don't read US/European news - this means you are abusing the kids.

                      Got it, you are disgusting.

                    • Fishing for arguments because you're running out of them?

                      Do you REALLY think that kids in areas in Africa without access to civilization or internet have as their most important fear the chance of sexual abuse? You do know that in some of these areas, female genital mutilation is still a thing, and rarely, if ever, prosecuted?

                      Fuck, please try to at least stay remotely in perspective!

                    • fear of having their abuse filmed, to be precise.

                      Just to ensure you don't try to get the moral upper ground on something you don't have.

                    • Yeah, only the "most" important fear of someone matters. ICU patients losing their life tomorrow can be raped with impunity, because rape is not their most important fear. You are revealing the layers of your own depravity.

                    • How fucked up exactly are you that you're trying to twist things around so badly to fit your narrative?

                      Could we stay in this reality? Because I really don't want to get into your head, I'd be honestly afraid of the horrors that await me there.

                    • Basic empathy is a horror for you, so you're too depraved to get into my head.

                    • Empathy? You're constructing harebrained scenarios out of thin air without any connection to reality. That's not empathy, that's clutching at straws because you're running out of arguments to keep you from sinking.

                    • No harebrained scenarios needed to point out your depravity when you consider everything but the "most" important fear of someone as immaterial.

It's one thing to make images that humans can't visually distinguish from real ones. It's another to make ones a computer can't detect as fake. Seems to me people generating AI CSAM would be motivated to create only images provably fake, so long as only a computer analysis could tell, if you made a law about 'truly indistinguishable'. If the true motivation was to not make it harder for real victims, then the best course of action would be a standard where you could cryptographically provide proof of AI generation.
That's just the thing. It's easier to detect manipulated images than wholly generated images. The model recreates all the obvious and invisible signs of an original image. Right now, humans are still better at determining what's real or fake. I can look at a hand and count whether it has 6 fingers, but a machine will say that it looks like a perfect and unmanipulated image, because it is.

Right now male anatomy doesn't even show up; it tries to do female anatomy or something similar, maybe with testicles, on the majority of the images generated. I tested it out of curiosity. This seems to apply to all ages. I would say that once it is capable of male anatomy you will be able to prove it based on what kind of conduct they engage in and the anatomy generated. It would more than likely only apply for ages under 13, though; anything older than that and you won't be able to prove it was trained on real material.
But what is kiddie porn? In some countries it's legal to have sex with 9-17 year olds (9 years is mostly in muslim/african countries). If it is completely generated, can you actually say it is kiddie porn? IMHO if it's completely fake, with no real faces/bodies of kids used, I don't care, as long as they keep away from real children. Creating pornographic deepfakes based on real kids is already illegal in the EU.
  • Weird powertrip... (Score:5, Insightful)

    by CrappySnackPlane ( 7852536 ) on Wednesday February 07, 2024 @08:37AM (#64221830)

    I'm not sure I understand why AI-generated imagery is a concern.

    It victimizes nobody. It traumatizes nobody. The only people at risk of exploitation from its production are the tech workers who designed the AI.

    Am I supposed to believe that it inspires people to go out and fuck kids who wouldn't already go out and fuck kids? We don't criminalize movies about killing random people - in fact, the capeshit genre marketed towards teens often revolves around plans to destroy the very world as we know it, and regardless of whether that plan successfully comes to fruition, they generally depict widespread scenes of gratuitous destruction en route to the foiling of said plan. So either media doesn't directly influence behavior in this manner, or people have their priorities wayyyyy out of whack. As much as getting molested sucks, it's preferable to death.

    I think the people who act the most unhinged about drawn or AI-generated images of naked kids do so because they themselves are titillated by it, and are too ashamed to admit it. It's pretty typical, which is also responsible for the tropes of "dogmatic 'family values' ideologue pays hooker(s) for sex" and "extremely homophobic politician has secret gay lover".

    • by Anonymous Coward

      The mere thought is the crime - dangerous ideas, glorifying violence, rape, sex, etc. is a detriment to society.

It's traditionally a conservative family values argument at its core, but is mostly being pushed squarely by so-called progressive leadership in EU countries who have been the cheerleaders of banning what they dub misinformation, disinformation, hate speech, etc. Now, the liberals and conservatives find common ground - THE CHILDREN! They will shake hands, celebrate this bipartisan commitment.

      • Re: (Score:1, Informative)

        by Anonymous Coward

Go look at the e-mails Jim Jordan released on the Biden admin's dealings with Amazon.

Then get back to me on who the actual fascists are when it comes to banning books and controlling access to information.

Nothing happening in Florida as far as books go bans anything - it moves them to the adult section of the library, and in some cases from the middle school library to the high school library. Nothing stops a parent from going to a book store and buying their kid a copy of some sodomy manual if that is what they want.

    • Re: (Score:3, Insightful)

      by DarkOx ( 621550 )

Children are not adults; they process things and socialize differently with their peers. They also have a lot more trouble separating reality from fiction and assessing the reliability of sources.

If Bobby Nobody starts passing images of Sara Hypothetical around the school, even if they are completely fake it's likely to be very traumatic for her. She is going to have a hard time convincing many of her peers she never did those things.

Unlike, say, Taylor Swift with her own personal security, media management people, etc.

      • by CrappySnackPlane ( 7852536 ) on Wednesday February 07, 2024 @09:37AM (#64221956)

If Bobby Nobody starts passing images of Sara Hypothetical around the school, even if they are completely fake it's likely to be very traumatic for her. She is going to have a hard time convincing many of her peers she never did those things.

        My reading of the proposal is that it targets the creation of fictitious characters. Your scenario would already be prohibited by harassment laws on the books - in fact, if Bobby Nobody drew literal stick figures and labelled them Sara Hypothetical, he would be equally culpable.

There is absolutely no useful-art when it comes to AI-CSAM images. Even if you take out the 'abuse' part there is not much if any reason to be generating nude images of children at all. They would not be useful for any kind of study, medical research/training etc, they would not be family memories of mommy giving her baby his first bath, etc. They could basically ONLY exist for prurient interest.

        I disagree strongly with this. For starters, what is "useful art"? Is the Mona Lisa "useful"? It's such an odd category to apply towards art. One man's breathtaking study of color and contrast is another man's "large red square on mostly-blue background", and I think it's silly for the first man to be asked to defend his Rothko on the grounds of "usefulness".

        There's some research which indicates that pedophilic images can act as a deterrent, which would seem to fall in line with what we already know (why go outside and play Cowboys & Injuns with Bobby and Sara if Fortnite exists; why learn to play Jimmy Cracked Corn on piano when you can listen to it on the Victrola; and so on). If just one misfit thinks "why go and convince one of the little Sara Hypotheticals down the block to show me her trickle when I can just see a little girl's no-no on my computer screen instead", then at that point it's a net positive. I think that thought is a lot more likely, and hence a lot more frequent, than "this Midjourney Maiden has a delightful little hoo-hoo, perhaps I should now compare it to a little Sara Hypothetical's actual uh-oh, which until this moment I have been disinclined to ever view".

Which is going to have perverts spending more time thinking about doing perverted stuff, rather than spending that mental energy elsewhere.

This is a bad argument, and doesn't line up with human psychology. The rates of sexual violence in countries where sexually-violent pornography is legal are lower than the rates in countries where sexually-violent pornography - pornography of any sort, for that matter - is repressed. There was also far more seedy degenerate shit going on in the days before widespread access to pornography of all sorts than there is today - it just didn't generally get the same amount of media coverage, owing to community standards of the day.

        • by DarkOx ( 621550 )

Useful-art is something that contributes to society. It could be something like a technical patent; it could be a painting that expresses something and simply makes a statement. Mere obscenity, which is what CP is, does not do that.

          There's some research which indicates that pedophilic images can act as a deterrent, which would seem to fall in line with what we already know (why go outside and play Cowboys & Injuns with Bobby and Sara if Fortnite exists; why learn to play Jimmy Cracked Corn on piano when you can listen to it on the Victrola; and so on).

Um, sure, yeah, a wider range of avocational activities means people will spend less total time on each, of course. A better way of thinking about it is: how many soccer players also play FIFA 2024 on their PlayStation? My guess is a lot.

The rates of sexual violence in countries where sexually-violent pornography is legal are lower than the rates in countries where sexually-violent pornography - pornography of any sort, for that matter - is repressed

          Now control for all kinds of other factors

          • I'd argue that if the images actually lower the rate of sexual abuse, as CrappySnackPlane claims, that that's a "contribution to society". More than the Mona Lisa, arguably.

            "the amount of underage content that porn up is suspected to have alone belies that."

            I'm having trouble parsing this?

            I think that what he was trying to get at is that there was actually higher incident rates for things like rape, sexual assault, and such. Not to mention things like affairs.

Which is going to have perverts spending more time thinking about doing perverted stuff, rather than spending that mental energy elsewhere.

        We don't legislate based on whether something is healthy or unhealthy unless it makes someone else money. People can be unhealthy in every area of their life.

        Any legislation should be based on real harm, or the issue of provenance when unknown images are encountered. If AI generated images are legal, then you can claim any CSAM is AI and unless they can prove it isn't by finding the original image then you are in the clear. That is the only reasonable reason for any legislation but it's a big one.

The 'it gives them an outlet' argument is silly; they already have an outlet: they can imagine whatever they like as often as they want.

        ObDisclaimer precedes text: I am not interested in CP in any way and I don't know whether it provides an outlet or only reinforces desire through reexposure, that's a question for science.

With that said, some people can not imagine whatever they like. I for one am about 95% aphantasic. I cannot picture images in my head. Even my dearest visual memories only come to me in brief flashes. But even for people who are very good at visualizing there is allure in visual imagery, which is why we have picture books.

      • by AmiMoJo ( 196126 )

The other issue is the use of AI images in grooming children.

        It's a difficult area to legislate and police, because there are legitimate cases where child nudity is acceptable. You mention family photos, there are nudists too, and of course medical uses. These days in the UK it is very hard to see a doctor, so people often submit online consultations with photos. There was at least one case of parents in the US being arrested for that, IIRC.

There are artistic uses too. Older art often depicted nude children.

      • by 0xG ( 712423 )

So there is harm when it comes to minors, at least if the images appear in any way to be specific minors.

So far so good. But AI-generated images need not be of real people.

Going further though, I think even without any real evidence, your argument that AI-generated images of kids isn't going to feed the fantasies of perverts out there and make them more likely to harm actual kids does not make a lot of sense.

        So you don't have any 'real evidence' that it does or doesn't either way.

        They would not be useful for any kind of study, medical research/training etc, they would not be family memories of mommy giving her baby his first bath, etc. They could basically ONLY exist for prurient interest.

        You have just described ALL porn. Are we going to justify banning it as well?

Which is going to have perverts spending more time thinking about doing perverted stuff, rather than spending that mental energy elsewhere. How spending more time thinking about it is going to make them less likely to do it escapes me.

How you can determine that this causes people to spend more or less time thinking anything escapes me. And if you have any evidence that this causes people to 'do it', by god let's see it!

The 'it gives them an outlet' argument is silly; they already have an outlet: they can imagine whatever they like as often as they want.

Basically, the thought of child sex abuse is so reprehensible to the rest of us (including myself) that we want it banned regardless of the evidence.

    • by RobinH ( 124750 )
      The clear concern here is that high school students are going to use this technology to generate lewd images or videos of their classmates and use it to bully them by sharing it to other kids in the school. That's clearly morally wrong, so I don't see why you're trying to defend it.
    • ... are too ashamed to admit it.

      Why can't it be shame alone: I'm sure every man makes a point of not ogling school-girls in bikinis. Everyone has to pretend teenage bodies don't have the same needs as adult women.

      I remember one church (I forget which denomination but not Roman Catholic, since it allows sex education) claiming sex education would incite school-boys to commit sex crimes. Obviously, sex education wouldn't result in school-girls demanding casual sex.

I met a very strange man working to 'protect' his daughter a few weeks ago.

  • Thought crimes (Score:4, Insightful)

    by Ryanrule ( 1657199 ) on Wednesday February 07, 2024 @08:42AM (#64221840)
    If i generate them in my head, is that allowed?
  • Sticky Wicket (Score:4, Interesting)

    by bill_mcgonigle ( 4333 ) * on Wednesday February 07, 2024 @08:58AM (#64221870) Homepage Journal

    Does this come down to: are we better off or worse off if all CSAM is AI-generated?

    Or are there STF who get off on the satanic knowledge of heinous acts more than the imagery?

    Which probably leads to asking whether we're better off institutionalizing the STF as well.

    Then they could at least be scanned for brain tumors, which is a known cause.

    I believe I'd prefer that to woodchippers. Maybe put them feet-first into the MRI as a compromise.

    Because whatever they're doing now isn't working and it's so bad that vigilantes are springing up and the cops are arresting them instead of the STF.

    That vigilante hacker who got that CSAM member database says many politicians are on it, so maybe that's why they want to criminalize the Diet Coke version of the real thing.

    • What is STF?

      Acronyms... SMH

      • Let's see.... [thefreedictionary.com]
        Special Task Force
        Search the Forum
        Save the Family (various locations)
        Save to File
        System Transfer Function
        Special Tactical Force (Crisis Zone game)
        Storm the Front (gaming clan)
        Slips, Trips, and Falls (occupational injuries)
        Supremo Tribunal Federal
        Summary Tape File (US Census)
        Structural Test Firing (US Navy)
        Special Turf Filler (artificial sport surface)
  • Emotions vs facts (Score:5, Insightful)

    by Baron_Yam ( 643147 ) on Wednesday February 07, 2024 @10:09AM (#64222046)

This argument predates AI images. There are "the children!" emotional types who want to ban any depiction of pedophilia - real, drawn, or text - and they don't care about facts.

I would prefer something based on best outcomes supported by science. We need an answer to the question: do pedophiles escalate with access to child porn, or is it a release valve that prevents offenses against real children? As long as no real children are used in the production of child porn, if the existence of such porn leads to fewer real children harmed, I would support legal child porn.

    I suspect there already exists enough data to make a determination, and it's possible a decent study has already been done based on it.

  • by OrangeTide ( 124937 ) on Wednesday February 07, 2024 @10:50AM (#64222160) Homepage Journal

    I'm not sure what a prison for AI looks like, but I wholeheartedly approve. We should round up AI today and lock them up.

  • by joe_frisch ( 1366229 ) on Wednesday February 07, 2024 @11:00AM (#64222192)
While CSA is obviously terrible, the number of cases of actual children being abused due to the transmission of content on the internet does not seem to justify the large risks to both security and personal freedom caused by the proposed legislation.

Any requirement to scan for CSA would limit or prohibit the use of strong encryption and, in order to be effective, would require the installation of monitoring software on all individuals' computers (without that, third-party encryption could be used to transmit these materials). That is a serious threat to privacy and to legitimate security needs, and a potential vector for cyber crime.

The risk of false accusations is large. We've already seen people lose online access when they transmitted a picture of their child's genitals as requested by the child's doctor for medical diagnosis. With the likely widespread and LEGAL use of AI to generate erotic images, what will determine if a particular AI image is "underage"? As has already happened, teenagers exchanging nude pictures (yes, they do that) could find themselves facing felony charges - quite the opposite of the law's intent to "protect" children.

I believe it's far better to concentrate on arresting and prosecuting people involved in the actual abuse of children - keeping in mind that the majority of those are NOT online criminals, but the children's own parents and relatives.
  • I don't know if you've noticed it, but we're moving towards a more totalitarian, fully monitored society rather than freedom.

Ever heard of Communism?
I suggest you guys look that person up on Wikipedia, and look at the history: deep communist roots, and also a heavy advocate for total surveillance of all citizens; the person was in the lead for total chat surveillance of ALL mediums that have a chat.

  • I'm not defending child abuse, let's get that absolutely clear!

Regardless, there will be people that are into that, subjectively, disgusting interest. In that circumstance, where someone or a group is drawn to such imagery, wouldn't AI generation be the ideal solution to keep children safe, while still fulfilling someone's desire?

I think the better approach would be to allow AI generation of CSA. Tell those who enjoy it to use AI, so long as they stick strictly to AI and keep any physical children out of it.
1) It's used to groom children into being abused themselves.
2) It's trained off of images of real children, revictimizing them.
3) It's possible that it intensifies desires and then leads people to act out on real children.
4) It legitimizes pedophile urges.

The argument that it is harm reductive if it allows getting off without any more actual abuse falls flat. There was already a lot of child porn in existence before any of us were even born, and people keep generating more, just like they keep generating real porn.

    • by fafalone ( 633739 ) on Wednesday February 07, 2024 @02:41PM (#64222752)
      1) Not really a justification for banning it. You could apply the same argument to adult porn; kids are absolutely groomed with it. Also not clear how you apply any limiting principle to this line of thought and don't wind up demanding crypto "back doors".
2) Can be true, but it's not a requirement and usually false. People who make this argument are generally too ignorant to have a serious opinion, as they don't understand the technology.
      3) Conjecture. The actual evidence points in the opposite direction, that it provides an outlet for urges that makes a real victim less likely.
4) And let's be honest, that's what you care about. That's why you readily embrace such bullshit as 1-3. You place more value on punishing the sinners than protecting children. You say it falls flat, but you clearly haven't reviewed the facts and evidentiary record. It *sounds* wrong to you, and since you have other priorities, you're uninterested in finding out whether it's *actually* true. Fake CSAM is the only plausible way real CSAM production would go down; real CSAM sure won't decrease if you outlaw the fake stuff, that should be uncontroversial.
      • You sure read a lot into what I don't say. Too bad you didn't read what I did say.
1. Yeah, adult porn can groom kids too, but seeing kids do the act is going to be a far more powerful tool. I've never groomed a kid, but banning regular porn would be an overreach and banning child porn is not an overreach.
2. I'm not sure how AI can be trained off of non-real images, unless you just think that enough separation from the source is OK. Training your child porn AI off non-child porn still victimizes the sources.

  • by Qwertie ( 797303 ) on Wednesday February 07, 2024 @02:13PM (#64222674) Homepage

    AI-generated imagery and other forms of deepfakes depicting child sexual abuse (CSA) could be criminalized

    Who's old enough to remember the long arguments we had about whether video-game violence caused real-life violence? Or (more on the nose) about whether the "rape" fantasies on porn sites cause real rapes? AFAIK there was never any scientific conclusion that games/fantasies lead to real-life crimes. My conclusions were (1) the evidence we have is for a weak correlation, with no evidence of causation (2) for video games there is an extremely high ratio of people who play violent games to people who commit violent crimes, so banning them is unjustified. And personally, when I finally obtained access to porn in ~1995, it didn't make me have more real-life sex - in fact I had none at all for many, many years afterward.

    So it's obvious that groups supporting these policies hate pedophiles, but not that they care about protecting children. Think about it: imagine the web disappears tomorrow and there's no more p0rn. Does this really make you less likely to seek out real-life sex? That's the theory needed to support a law like this, and I think it's exactly backwards. Pedophiles know perfectly well that it's wrong to [you know] but human sex drive is powerful. I think many of them would accept a substitute if they could, but laws and enforcement against fictional child p0rn have gotten tighter over the years. Meanwhile, real-life children are no more rare than before.

Something else. If a 16-year-old wanks on camera, that's illegal production of child porn under typical laws (though curiously nobody seems to get prosecuted for it?). Meanwhile, two 16-year-olds having sex is perfectly legal, but if they make a record of it, it's a serious crime. I bring this up because while these two cases may be serious crimes of "child pornography", it would be quite a stretch to call them "CSAM". Yet this is precisely what activist groups want. Two examples:

    United States federal law defines child pornography as any visual depiction of sexually explicit conduct involving a minor [....] NCMEC chooses to refer to these images as Child Sexual Abuse Material (CSAM) to most accurately reflect what is depicted

While the term child pornography is still widely used by the public, it's more accurate to call it what it is: evidence of child sexual abuse. That's why RAINN and others have stopped using the term child pornography and switched to referring to it as CSAM (child sexual abuse materials).

While some of the pornography online depicts adults who have consented to be filmed, that's never the case when the images depict children. Just as kids can't legally consent to sex, they can't consent to having images of their abuse recorded and distributed. Every explicit photo or video of a kid is actually evidence that the child has been a victim of sexual abuse.

Nowhere does RAINN's article [rainn.org] mention teenagers; they present only a "child-adult" dichotomy. They do say "In about four out of 10 cases, there was more than one minor victim, ranging from two to 440 children," which makes it clear that "child" is meant as a synonym for "minor" and so includes teenagers.

    Since activist groups encourage everyone to sed s/child porn(ography)?/CSAM/g, when Apple or Google talks about their "CSAM" detection system, this seems to actually be a system for detecting porn (or simple nudity, or medical pictures [slashdot.org]) involving minors, which they call CSAM because activists insist on it.

This is an example of a more general phenomenon I call "casting negative affect": using words to create negative feelings in the listener.
