EU Encryption Privacy Your Rights Online

New EU Rules Would Require Chat Apps To Scan Private Messages for Child Abuse (theverge.com) 204

The European Commission has proposed a controversial new regulation that would require chat apps like WhatsApp and Facebook Messenger to selectively scan users' private messages for child sexual abuse material (CSAM) and "grooming" behavior. The proposal is similar to plans mooted by Apple last year but, say critics, much more invasive. From a report: After a draft of the regulation leaked earlier this week, privacy experts condemned it in the strongest terms. "This document is the most terrifying thing I've ever seen," tweeted cryptography professor Matthew Green. "It describes the most sophisticated mass surveillance machinery ever deployed outside of China and the USSR. Not an exaggeration." Jan Penfrat of digital advocacy group European Digital Rights (EDRi) echoed the concern, saying, "This looks like a shameful general #surveillance law entirely unfitting for any free democracy." (A comparison of the PDFs shows differences between the leaked draft and final proposal are cosmetic only.) The regulation would establish a number of new obligations for "online service providers" -- a broad category that includes app stores, hosting companies, and any provider of "interpersonal communications service."
Comments Filter:
  • Not a chance. (Score:5, Insightful)

    by splutty ( 43475 ) on Wednesday May 11, 2022 @11:04AM (#62523104)

    Even if this somehow manages to actually get voted into law (and that's already a rather low chance), it's not going to survive the first legal challenge, considering this is illegal in almost all member countries.

    The number of laws that would need to be changed for this not to be illegal is staggering, and I seriously doubt any country is going to do that.

    • Re: (Score:3, Interesting)

      It isn't illegal when the law man is doing it.
      • Re:Not a chance. (Score:4, Interesting)

        by thegarbz ( 1787294 ) on Wednesday May 11, 2022 @12:21PM (#62523414)

        It isn't illegal when the law man is doing it.

        The law man has laws to follow. It's the reason stupid laws get struck down by courts all the time, even in Europe.

        • It's the reason stupid laws get struck down by courts all the time, even in Europe.
          Actually no.
          Laws can only be "struck down", if there is a higher law contradicting them, e.g. a constitution.

        • The EU isn't the US. They already have a tendency to favor safety much more than liberty. In the US, free speech is much more ironclad for this reason. If the EU wants to censor you, they just call it hate speech and say that it's necessary to protect the public.

          Likewise, privacy takes a back seat to safety, or at least perceived safety, in the EU. For example, in Germany you can't play video games without providing ID, because they're afraid that if young people play GTA, then they'll turn cars into improv

    • The point of the EU is to put stuff into law that is laughed out of court in the national courts. Except in France or Sweden.

      • Usually EU law is considered supreme and national courts abide by it. The court that can and sometimes does nullify bad EU rules is the European Court of Justice. Of course, someone has to sue for that to happen, the court does not do it on its own initiative.

    • This is the same EU with the 'court of human rights' that gives itself the power to balance the right to freedom of expression with the rights of others to have their religious feelings protected. [coe.int]

      They will start at the position that the government needs to spy on you because 'think of the children', and do mental double backflips to work backwards from that conclusion, despite whatever laws you think you have that would protect you.

      • by UpnAtom ( 551727 )

        The European Court of Human Rights has nothing to do with the EU.

        Please stop Dunning-Krugering things you know very little about.

    • You seem to not understand how the EU works.
      EU member states have contractually committed themselves to be little more than slave states concerning EU edicts like this. If a new EU edict has been proclaimed, member nations are contractually obliged to make it into national law, whatever way, under threat of punishment.

      • by UpnAtom ( 551727 )

        You seem to not understand how the EU works.
        EU member states have contractually committed themselves to be little more than slave states concerning EU edicts like this.

        LOL, brainwashed by Nigel Farage?

        Elected MEPs can set laws within the EU's jurisdiction as long as they don't breach the ECHR or the Charter of Fundamental Rights. That's what they're elected to do.

        If you don't like it, go to Brexitland.

    • Well, as usual the article is quite bad.
      Here is the original proposal: https://ec.europa.eu/commissio... [europa.eu]
      I'm pretty sure it will be voted into law, and no reasonable person could object to it, unless you are a child abuser yourself.

      • You can't find any reason that privacy is warranted as long as we "think of the children"?
        • You misinterpret my answers.

          I'm neither pro nor against that law proposal.

          I'm against idiots who post bullshit, that is all.

          And the law does not violate privacy, anyway.

          No one reads/sees your messages.

  • by Merk42 ( 1906718 ) on Wednesday May 11, 2022 @11:04AM (#62523106)
    They are no longer Private.

    How about the EU puts a rule to randomly search vehicles on the road, in case they have kidnapped children in them.
    • Comment removed based on user account deletion
      • Amazon is already at it. Maybe write to your MP suggesting they subsidize that stuff.

      • No need. The NSA already did that. Now the EU can buy that data and use it in court.
        • by JustNiz ( 692889 )

          Nah, the NSA are behind the curve. Google, Facebook, Amazon, Tesla and the phone companies are the guys that know everything about you.

    • by dgatwood ( 11270 ) on Wednesday May 11, 2022 @02:01PM (#62523758) Homepage Journal

      They are no longer Private.

      Worse. When you introduce technology that lets companies do this on behalf of one government, nothing can prevent that technology from being used by *any* government to do *any* arbitrary spying.

      But I hope the EU commissioners enjoy having their private messages scrutinized and reported if they are critical of Xi Jinping or Vladimir Putin.

      The sorts of clueless people who can't understand this concept have absolutely no business being in any position of authority, because it is instantly clear that they haven't given more than the most superficial thought to the worst-case consequences of their ideas. The reason so many people screamed when Apple did a much, much smaller version of this is because the worst-case consequences of even a massively scaled-back version of this idea are utterly catastrophic for freedom worldwide. And that's before you even start considering the consequences of false positives on individual safety.

      Frankly, it is downright embarrassing that the EU commissioners are so utterly clueless that they would propose a scaled-up version of the same idea that led to Apple being utterly crucified in the press just a couple of months before.

      If this happens, it will be absolutely necessary for social media to turn off their apps in Europe and ship a separate app for use in Europe, because nobody anywhere else in the world is going to put up with an app that has a giant government wiretapping back door.

      • I think the clueless person is you.

        The proposed law is supposed to check images for child porn.

        Not to spy on your messages, regardless if they are pro Putin or anti Putin or what ever.

        For that: you would need another law

        No idea why we have idiots in the year 2022 who do not even know how a law works.

        • There is particular tech required to implement this law. They will totally use it for other purposes. Once a successor to Snowden blows a whistle on this they'll use the same tech to prosecute all people who spread messages about this.
          • Can't be done, as the law is explicitly about child porn.

            It can't be so hard to grasp that you cannot simply spy on anyone. To "spy" on a target person, one needs a warrant.

            • It will require spying on everyone for child porn. But the hashes they check for are not public knowledge, so in practice they can check for anything they'd like.
              • But hashes they check for are not public knowledge so in practice they can check for anything they'd like.
                No, they can't.
                The system is run by the corporation offering the service.
                No one except their programmers has access to that.

                And they most likely would not stay silent if an unauthorized government agency suddenly demands code changes to search for something different than formulated in the law.

        • by dgatwood ( 11270 ) on Wednesday May 11, 2022 @06:39PM (#62524490) Homepage Journal

          I think the clueless person is you.

          The proposed law is supposed to check images for child porn.

          Unless the summary and the original article are both wrong, you are wrong. It also requires checking for "grooming behavior", which has nothing to do with images.

          Also, you're completely missing the whole point, which is that once you break the encryption by adding code on the device that can apply AI to look for unknown images that are flagged by some arbitrary object recognition neural network, it can look for any arbitrary things matched by whatever neural network some foreign government might insist on supplying to them, whether the thing matched by that neural net turns out to be child porn or photographs of sensitive documents.

          Worse, because this would have to be done by neural networks, you could never be absolutely sure that they aren't spying. They could slip a few images into the training set that match some specific set of interesting keywords, and you'd never know it just by looking at the model or running some reasonable set of test images/text through it, and then one day, boom. Suddenly Russia knows that the U.S. is planning to take some action in Ukraine, because some image or phrase triggered the recognizer. In some cases, governments might not even need to see the document to gain critical insight; the mere existence of something matching a particular recognizer could be enough to leak state secrets.

          And before you say, "Yes, but what if they refuse to allow the government to provide the neural networks?" I would point out that China could easily pass a law saying that if a company wants to operate in China, they have to allow the government to provide their own neural networks. And boom, they bend over. As long as the answer is "We can't do this," it won't happen. The second somebody says, "We could maybe do it this way," the floodgates suddenly open, and every authoritarian government in the world starts making demands.

          The only way to prevent this is to draw a line in the sand and absolutely refuse to allow any back doors to crypto. Folks have been saying this since the 1990s Clipper Chip fiasco, but for some reason, the government folks who want to spy on the public just can't seem to get it through their heads that once a back door exists, there's no way to guarantee that it won't be used in ways that are unintended.

          IMO, you're hopelessly naïve if you think this won't be abused. It isn't a question of "if", but "when" and "how".

          • Unless the summary and the original article are both wrong, you are wrong. It also requires checking for "grooming behavior", which has nothing to do with images.
            Yes, the summary is wrong and the article is wrong too. But the article links to the proposal in the third sentence.

            it can look for any arbitrary things matched by whatever neural network some foreign government might insist on supplying to them,
            And how exactly would a foreign government get its code on my device? You are just silly.

            If a foreign go

      • Europe and ship a separate app for use in Europe, because nobody anywhere else in the world is going to put up with an app that has a giant government wiretapping back door.
        There is no giant government wiretapping back door. Perhaps you should read the proposal before knee-jerking.
        Especially as the government has absolutely nothing to do with the process.

        • by dgatwood ( 11270 )

          Europe and ship a separate app for use in Europe, because nobody anywhere else in the world is going to put up with an app that has a giant government wiretapping back door. There is no giant government tapping backdoor. Perhaps you should read the proposals before knee jerking off. Especially as the government has absolutely nothing to do with the process.

          Of course it's a giant government wiretapping back door. What do you think scanning private messages with a pattern recognizer is? You don't have to have the actual contents of the message for certain messages' existence to be a problem. Merely flagging someone as having expressed dissident statements about the leader of an authoritarian regime could be deadly. And once the support code exists for running these recognizers to detect CSAM, the same code can potentially be used for that, too, either with

          • Of course it's a giant government wiretapping back door.
            No it is not.
            Or you have weird ideas about:
            a) how a government works
            b) what wire tapping actually is

            Merely flagging someone as having expressed dissident statements about the leader of an authoritarian regime could be deadly.
            And that is not possible with this proposed system. Again: learn to read about the topic instead of knee-jerking bullshit.

            • by dgatwood ( 11270 )

              Of course it's a giant government wiretapping back door. No it is not. Or you have weird ideas about: a) how a government works b) what wire tapping actually is

              Wiretapping is government gaining access through some presumably authorized means to the contents of private communications. This is quite literally wiretapping. The communication is private, and the government is able to get their hands on a flag that says that someone's communication contains a photo or text that a neural network says is dangerous. In this particular case, it is because some machine learning model or equivalent says that it might contain underage nude photos or text communication inten

              • Wiretapping is government gaining access through some presumably authorized means to the contents of private communications. This is quite literally wiretapping.
                No it is not.

                Wiretapping is: RECORDING EVERYTHING THAT GOES OVER THE WIRE.

              • But swap in a different neural network, and that exact same code is now snitching on political dissidents with no changes other than swapping out a single data file!
                NO ONE CAN DO THAT. How dumb are you?

                If one could, they would do it right now already.

              • Again, learn to RTFA instead of defending fascism.
                I do not defend fascism, or this proposal even (the one you still have not read yet).

                I argue against your WRONG ARGUMENTS!
                That is all.

      • Firstly, all measures are voluntary.

        As we know, it's impossible to detect grooming without decrypting messages.

        So you have four options:
        1. Don't weaken encryption and warn those who identify as young girls if they're talking a lot to older men.
        2. Weaken encryption for young girls talking a lot to older men.
        3. Weaken encryption for everyone.
        4. Ban children from using it.

        I can't see Apple or Meta doing #3 or #4. Everyone will probably do #1.

        https://www.theverge.com/2022/... [theverge.com]

        • by dgatwood ( 11270 )

          So you have four options:

          1. Don't weaken encryption and warn those who identify as young girls if they're talking a lot to older men.

          Which doesn't detect anything, and thus doesn't meet the requirements because it can't catch anybody.

          2. Weaken encryption for young girls talking a lot to older men.

          And now they send nudes to their teenage boyfriends, and suddenly somebody manages to get past the newly weakened encryption and get their hands on the photos, and there's liability all around. Nobody will do this one, either. Not to mention that the day their boyfriends turn 18, if the CSAM filter flags them as having dirty pics of their 17-year-old girlfriends, suddenly they're dangerous predators and th

  • by SpzToid ( 869795 ) on Wednesday May 11, 2022 @11:20AM (#62523164)
    ...this tired discussion happens again.
  • Gotcha! Door's open boys!

    A bunch of anthropomorphic rolled up sheets of paper containing every fascist's dream law runs up the steps into the house of government.

  • Can't wait for it to flag things like, "Hey, I beat my kid -- at checkers!"

    #ThinkOfTheChildren-LosingAtCheckers

    • by presidenteloco ( 659168 ) on Wednesday May 11, 2022 @11:49AM (#62523268)
      Also, in end-to-end encrypted chats, the scanning would have to run as software on your phone. Depending how sophisticated its language recognition was, that could be a battery consumer. Relatively minor compared to the lack of privacy. How soon before it would be reconfigured to scan for, say, signs of environmental activism?
      • Also, in end-to-end encrypted chats, the scanning would have to run as software on your phone. Depending how sophisticated its language recognition was, that could be a battery consumer. Relatively minor compared to the lack of privacy. How soon before it would be reconfigured to scan for, say, signs of environmental activism?

        The government's favorite, built in security defects that only *they* will ever use. Can we start by testing out the prototypes on government official's private devices?

      • that could be a battery consumer.

        Wow. Clearly you've no idea what actually happens to your images when you open them in a chat app if you think this is a concern. This is a comparison of a checksum to the CSAM database, not some high-impact AI learning algorithm. It's far more of a battery consumer to apply an Instagram filter.

        Can you do us a favour and not pollute the discussion of the actual terrible reasons this law is bad with these useless non-issues?
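
        The checksum comparison being described can be sketched roughly like this (a simplified, hypothetical illustration — deployed scanners such as Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, not plain cryptographic digests):

        ```python
        import hashlib

        def sha256_hex(data: bytes) -> str:
            return hashlib.sha256(data).hexdigest()

        # Hypothetical database of known-bad digests (opaque to the client app).
        # Seeded here with a placeholder so the example is runnable.
        KNOWN_HASHES = {sha256_hex(b"placeholder-known-image")}

        def image_matches_database(image_bytes: bytes) -> bool:
            """Return True if the image's digest appears in the hash database.

            A plain SHA-256 digest only matches byte-identical files; any crop
            or re-encode changes it completely, which is why real scanners use
            perceptual hashing instead. Either way, the point stands: this is
            a cheap O(1) set lookup, not a neural-network pass over the image.
            """
            return sha256_hex(image_bytes) in KNOWN_HASHES

        print(image_matches_database(b"placeholder-known-image"))  # True
        print(image_matches_database(b"some-other-image"))         # False
        ```

        The battery cost of a single digest-plus-lookup per received image is negligible next to decoding and rendering the image itself.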

        • None of this is limited to images. Nor did his comment limit the scanning to images. That is all in your head. Can you please take your own advice and not pollute the conversation with your insulting and uninformed responses. His concerns may be limited, but yours is equally dumb with the added bonus of derision.

      • How soon before it would be reconfigured to scan for, say, signs of environmental activism?

        Some might even get flagged for reporting weather conditions: "Storm in Capital" :-)

      • How soon before it would be reconfigured to scan for, say, signs of environmental activism?

        Reconfiguring would be illegal. You would need a new law for that. Sorry, do you have some weird ideas about how laws work?
        And what is wrong with "environmental activism" anyway? Why would anyone look for such people?

    • I think that is more of a US problem. If you beat your kid at checkers, it means you are a bad loser in a lot of languages.
  • by Opportunist ( 166417 ) on Wednesday May 11, 2022 @11:55AM (#62523292)

    Scanning messages for abuse is going to get a load of false positives in various BDSM chatrooms.

    And I really, really wonder just how many conservative politicians will suddenly have to make embarrassing public statements because of that.

    • really wonder just how many conservative politicians will suddenly have to make embarrassing public statements because of that.
      None.
      As police investigations are not reported to the press.
      So: no one will know about it.

      • by djinn6 ( 1868030 )

        So the police are "no one". Gotcha.

        There will absolutely, 100%, no possible way this will be abused by anyone at any point in time.

        • There will absolutely, 100%, no possible way this will be abused by anyone at any point in time.
          Without breaching other laws? Nope.

  • I've said it a few times, but allowing for self-policing is really the only way to even introduce any filtering

    Create trusted flag hashes (it has to have a cert) and allow you to enable text like a grammar check or images in the regular file system. This could be anything you want - offensive language, double entendres, that "known images" hash they use etc.

    This would allow the paranoid or overly cautious to flag and remove items they may not have the sense to delete normally . . . again, I'm keeping
  • This is 180 degrees out from other EU regulations regarding personal privacy. Furthermore, just like 'net nanny' attempts to block certain types of content on the web, this will also produce false positives, causing disastrous problems for people who don't deserve it, and false negatives, allowing the very people they're trying to catch to get away with it. And just like the word filters some websites used in the past, pedos will just alter their behavior and language to get around the detection algorithms.
    • False positives do not cause a problem, as the police officer reviewing the report will notice them. Or do you think the "supposed victims" get an automatic court verdict and report to jail voluntarily?

  • by PinkyGigglebrain ( 730753 ) on Wednesday May 11, 2022 @12:22PM (#62523426)

    As abhorrent as child abuse is, this legislation at first glance looks like it's pulling the "think of the children" card to get into law.

    In my book that is an automatic fail and I will oppose it the best I can.

    The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all.

    - H. L. Mencken

  • by PPH ( 736903 ) on Wednesday May 11, 2022 @01:04PM (#62523556)

    ... high horse the EU is sitting on w.r.t. child abuse. When are they sending Roman Polanski back here for trial?

  • In my opinion, it's completely impossible to implement an automatic system which does this kind of scanning anyway. That's because in order to tell if a conversation carries indicators of malicious activity the scanner:
    A. Would need to be capable of a nearly perfect understanding of human language, with all its nuances and highly contextual interpretation;
    B. Would need to take into account the relationship between the parties to, for example, differentiate between parents talking to their children, or sex

  • by NotEmmanuelGoldstein ( 6423622 ) on Wednesday May 11, 2022 @05:01PM (#62524232)

    ... general surveillance law ...

    The EU demanding 'prove you're innocent': It reveals even these supposedly practical bureaucrats want absolute power over technology.
