Privacy | The Internet | Communications | Facebook | Network Security | Social Networks

None of Your Pixelated or Blurred Information Will Stay Safe On The Internet (qz.com) 139

Researchers at the University of Texas at Austin and Cornell University say blurred or pixelated images are not as safe as they may seem. As machine learning technology improves, the methods used to hide sensitive information become less secure. Quartz reports: Using simple deep learning tools, the three-person team was able to identify obfuscated faces and numbers with alarming accuracy. On an industry-standard dataset where humans had a 0.19% chance of identifying a face, the algorithm had 71% accuracy (or 83% if allowed to guess five times). The algorithm doesn't produce a deblurred image -- it simply identifies what it sees in the obscured photo, based on information it already knows. The approach works with blurred and pixelated images, as well as P3, a type of JPEG encryption pitched as a secure way to hide information. The attack uses Torch (an open-source deep learning library), Torch templates for neural networks, and standard open-source data. To build the attacks that identified faces in YouTube videos, researchers took publicly available pictures and blurred the faces with YouTube's video tool. They then fed the algorithm both sets of images, so it could learn to correlate blur patterns with the unobscured faces. When given different images of the same people, the algorithm could determine their identity with 57% accuracy, or 85% when given five chances. The report mentions the Max Planck Institute's work on identifying people in blurred Facebook photos. The difference between the two studies is that UT and Cornell's approach is much simpler, and "shows how weak these privacy methods really are."
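The general recipe is simple enough to sketch. Below is a minimal illustration of the idea, not the researchers' code: it assumes PyTorch rather than the Lua Torch setup mentioned above, and the tiny network, the 64x64 image size, and the data handling are hypothetical. The key point is that the classifier only ever sees obfuscated images paired with identity labels, so it learns to map blur patterns straight to identities without deblurring anything.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def pixelate(img, block=8):
        # Obfuscate a (3, 64, 64) image tensor: average-pool it into coarse
        # blocks, then upsample so the mosaic keeps the original size.
        small = F.avg_pool2d(img.unsqueeze(0), kernel_size=block)
        return F.interpolate(small, size=img.shape[1:], mode='nearest').squeeze(0)

    class TinyFaceClassifier(nn.Module):
        # Deliberately small network: it names who is in the picture,
        # it never reconstructs the unobscured face.
        def __init__(self, num_identities):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.head = nn.Linear(32 * 16 * 16, num_identities)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def train(model, faces, labels, epochs=10):
        # faces: list of (3, 64, 64) tensors; labels: matching identity indices.
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            for img, label in zip(faces, labels):
                logits = model(pixelate(img).unsqueeze(0))
                loss = F.cross_entropy(logits, torch.tensor([label]))
                opt.zero_grad()
                loss.backward()
                opt.step()

To target a different obfuscation (Gaussian blur, YouTube's blur tool, P3), only the pixelate step would change; the training loop stays the same.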
  • by Anonymous Coward

    This is my shocked face: https://upload.wikimedia.org/wikipedia/en/d/d8/Mr_Swirl.jpg

    Putting a big black square over your face in Paint is the only surefire method.

    Anyway, using social media and simultaneously demanding privacy is pure silliness to begin with. The real question is how much of an illusion of privacy needs to be maintained to keep you from complaining.

    • Only if you also make sure that the file format doesn't support EXIF or some other metadata that contains your address or an unmodified thumbnail.
    • by Mashiki ( 184564 )

      Putting a big black square over your face in Paint is the only surefire method.

      Only works if you're applying it to the same layer. Otherwise some programs will save it as a new layer, making it trivial to uncover and that's that.

        You...do know that jpeg, gif, and png--the image formats most often seen on the internet--all require you to flatten the layers in the process of saving, right? It's only trivial to remove the black square when it's still in its own layer. Still, the easiest tool for applying that black square is the simplest pixel-pushing program on the machine you're using, which will probably never support layers. Something like GIMP or Photoshop is fussy and overkill.

        Basically, it looks like the article p

  • by Hognoxious ( 631665 ) on Thursday September 15, 2016 @05:17AM (#52891649) Homepage Journal

    The algorithm doesn't produce a deblurred image

    I felt a great disturbance in the force, as if a million Japanese porn fans cried out in disappointment.

    • by Anonymous Coward

      Ah, I was about to comment about Japanese porn. Technically, it's brute-forceable: "this combination of pixels generated this large square with the following color," and at 30 samples per second one could reverse-engineer that combination.

      Not that it'd be worth it. And if it's just some calculated pixels, is it still a photo of that actress' naughty bits?

      • by Anonymous Coward

        A lot of Japanese porn used to be obscured with a few well-known reversible filters. That practice disappeared as the export market grew, since they'd then sell the uncensored versions at inflated prices abroad for grey-import back to Japan.

  • by orlanz ( 882574 ) on Thursday September 15, 2016 @05:18AM (#52891657)

    For a computer, most algorithms for comparing two pictures already work on what amounts to a blurred picture of both. Most of these algorithms take samples/pixels from the pictures and check whether the relationships between the two sets of samples are the same, or within a margin of deviation. There is little value in comparing pixel by pixel for exact matches. It's similar to human fingerprints.

    A blurred picture is similar to taking fewer samples from one picture and setting the margin of deviation wider (a rough sketch of this kind of sampled comparison follows below).

    But for computers, 57% is pretty bad. 85% is also very bad, and that's when you have effectively told the machine the answer. At those rates it is hard to do mass comparisons... the false positives would be far too high for any human to weed through. This applies more to targeted searches where an investigator wants the five most probable matches to a blur. Unlike the researchers here, who know the answer beforehand, he still has to guess which one it actually is.

    In a criminal investigation, if we had a database of likely suspects, this would work. But we are all about mass collection of data data data. With a large population of pictures, the blur will probably match a lot more than 5.
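    A minimal sketch of that sampled-comparison idea, assuming Pillow; the 8x8 hash size and the distance threshold are arbitrary illustration values, not anything from the research:

        from PIL import Image

        def average_hash(path, hash_size=8):
            # Downsample to an 8x8 grayscale grid and record, for each cell,
            # whether it is brighter than the mean: 64 coarse "samples".
            img = Image.open(path).convert('L').resize((hash_size, hash_size))
            pixels = list(img.getdata())
            mean = sum(pixels) / len(pixels)
            return [1 if p > mean else 0 for p in pixels]

        def similar(path_a, path_b, max_distance=10):
            # Two pictures "match" if their sample patterns differ in at most
            # max_distance positions -- the margin of deviation.
            a, b = average_hash(path_a), average_hash(path_b)
            return sum(x != y for x, y in zip(a, b)) <= max_distance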

    • by AmiMoJo ( 196126 ) on Thursday September 15, 2016 @05:31AM (#52891679) Homepage Journal

      That pixelated images are insecure has been known for years. I seem to recall it was even mentioned on Slashdot. There are many other attacks: for example, if you have text (like a number plate) you can run a dictionary of candidate images through the same pixelation filter and select the closest matching result (a rough sketch of that attack follows below).

      Black bars have always been the preferred method.
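      A minimal sketch of such a dictionary attack against pixelated text, assuming Pillow; the plate rendering, block size, and candidate list are hypothetical, and the blurred crop is assumed to be a grayscale image of the same size:

          from PIL import Image, ImageDraw

          def render_plate(text, size=(128, 32)):
              # Draw a candidate plate with Pillow's default bitmap font.
              img = Image.new('L', size, 255)
              ImageDraw.Draw(img).text((4, 8), text, fill=0)
              return img

          def pixelate(img, block=8):
              w, h = img.size
              small = img.resize((w // block, h // block), Image.BOX)
              return small.resize((w, h), Image.NEAREST)

          def crack(blurred, candidates):
              # blurred: a 128x32 grayscale crop of the obscured plate.
              # Pixelate every candidate the same way and keep the closest.
              def distance(a, b):
                  return sum(abs(x - y) for x, y in zip(a.getdata(), b.getdata()))
              return min(candidates,
                         key=lambda text: distance(pixelate(render_plate(text)), blurred))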

      • Somehow I don't think European censorship [reuters.com] is going to really help out here.
      • Yeah, the problem with this research is that you need a small dataset. Characters are pretty easy with just 26. With a small subset of faces it should be easy to identify blurred faces, but start adding variables and this attack vector gets absurd: pixel blur radius, filter type, 6 billion people as options, lighting, etc. If you are deciding "which of these hundred people is this," I guess it's useful, but I can't see this being useful to deanonymize a random person on Facebook.

      • Interpol reversed the (deterministic) Photoshop swirl filter [wikipedia.org] a child molester had been using to hide his face in the child porn pics he published.

        Black bars are the obvious method for hiding someone's identity. But then idiot producers who didn't understand security or math decided they weren't aesthetic, and ordered their media companies to use pixelation or blurring instead. Even recording video of the person in a darkened room isn't enough - the camera can pick up enough low-light data to yield a pas
    • This is just another false-evidence generator. Give me a pixelated image and I'll paint any of an infinite number of images that will average out to that patch of squares.
      There's data missing; if you add data, that's false evidence.
      There's dogs everywhere!

  • Just do like I do. If you put a picture of your car online, put a black bar over the license plate.

    • by RobinH ( 124750 )
      Actually, I was wondering: what kind of risk do you have from your license plate being visible in an online photo? Obviously I see them blurred out all over the place, and I even blurred out the plates in my pictures when I sold my last car, but I'm not really sure why it's so important. What can someone do with my license plate number?
      • With a database (police, credit agencies, etc) they can link that plate number to you and see everything about you that they have collected. You can assume most bad guys have access to that kind of database at this point, considering network security is non-existent.
      • Well somewhere there will be a database that ties the number plate to your name and address. Actually probably several databases not all subject to the same degree of security. Then the absence of the vehicle outside said address would be a good indicator of the premises being empty.

        And security breaches aren't the only problem for these databases, in the UK those permitted to access the official database have been known to access it unofficially [slashdot.org].

        And I would also expect that people well versed in scams and

        • Well somewhere there will be a database that ties the number plate to your name and address. Actually probably several databases not all subject to the same degree of security. Then the absence of the vehicle outside said address would be a good indicator of the premises being empty.

          I have two ways of defeating that - a garage and a second car. If I'm selling a car, I can guarantee you I have at least one other fucking car.

        • by radish ( 98371 )

          Then the absence of the vehicle outside said address would be a good indicator of the premises being empty

          As would knocking on the door and getting no response. In fact, that would be a much better indicator of the premises being empty, given that more than one person may live there.

      • by AHuxley ( 892839 )
        Depends on who is after the info, what contacts they have, and at what price.
        Law enforcement, former law enforcement, and private detectives might all have their contacts.
        The other issue is state police, federal agencies, and the military just collecting all pics online for matching faces, passenger faces, and plate numbers in case they are ever seen near any sensitive site.
        The private sector will often have their own security walk out and take a picture, use facial recognition, try and find a plate number.
        A pro
  • by Anonymous Coward

    Blurring is reversible with Wiener filtering (a rough sketch follows below). Basically, the quality of recovery very much depends on subsequent quantization/compression. Pixelation is a more complete loss of information.

    However, the article talks about video filters. In that case, per-frame pixelation will let a lot of image detail become recoverable through motion compensation (straightforward blurring is less susceptible to this recovery strategy). So if you really want to inhibit recovery, blot out the information hard. Th
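    For reference, a minimal Wiener-style deconvolution sketch, assuming NumPy and a known (or guessed) Gaussian blur kernel; the kernel size and noise constant are illustration values, and recovery degrades quickly once the image has been quantized or recompressed:

        import numpy as np

        def gaussian_kernel(size=9, sigma=2.0):
            # Guess at the blur the camera or editor applied.
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
            return k / k.sum()

        def wiener_deblur(blurred, kernel, noise=1e-3):
            # blurred: 2-D grayscale array. Divide in the frequency domain,
            # damped by a noise term so weak frequencies aren't amplified.
            H = np.fft.fft2(kernel, s=blurred.shape)
            G = np.fft.fft2(blurred)
            W = np.conj(H) / (np.abs(H) ** 2 + noise)
            return np.real(np.fft.ifft2(G * W))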

  • by Anonymous Coward

    It is a fundamental law of computer science that you cannot increase the amount of information in a given dataset -- in this case, the combined dataset of the blurred image and the learned statistical averages of a human face.

    Once an image has been blurred (information has been deleted), it cannot be recreated. What you can do is apply statistical averages in the hope of getting something that might resemble the original information. It will, however, be just that: cosmetic improvements based on statisti

    • by dublin ( 31215 )

      It's not necessary to reconstruct the image to perform a good recognition on it.

  • You know, you raise a good question.

    I've just always been told it's a good practice, but yeah, what would someone actually be able to do with someone else's plate numbers?

  • Can we stop with this "deep learning" bullshit now? It is just algorithms. Every moron has to interject "AI" or "deep learning" or "neural nets" into their program description. This is really stupid "research" anyway. Is this what passes for research in CS now?
    • by MrKaos ( 858439 )

      Can we stop with this "deep learning" bullshit now? It is just algorithms. Every moron has to interject "AI" or "deep learning" or "neural nets" into their program description.

      Aren't you a clever slashbot.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      "Deep learning" is a configuration of a neural network. Historically we couldn't have nested neural networks because we didn't know how to train them in any reasonable amount of time. Then we figured out how, and discovered nested nets work far better than traditional neural networks.

      So you get more specific and descriptive going from: algorithms -> AI -> reinforcement learning -> neural networks -> deep learning.

      • There's no reason to exclude the last three from the old-school generic description of AI, which is trying to teach computers to do things that humans currently do better.
    • So had they said "Using algorithms, the three-person team was able to identify obfuscated faces and numbers with alarming accuracy", that would make the sentence so much more informative, right? :D
    • Is 'machine learning' ok?
    • by clodney ( 778910 )

      Can we stop with this "deep learning" bullshit now? It is just algorithms. Every moron has to interject "AI" or "deep learning" or "neural nets" into their program description. This is really stupid "research" anyway. Is this what passes for research in CS now?

      Of course it is just algorithms - that is what all computer science is. And in some cases those algorithms were known 20 or 30 years ago, but things that were computationally infeasible at that time are now trivial.

      And it is important to note that some of these algorithms work in ways that are very different from human vision, to the point where humans are almost unable to understand how neural nets arrive at an answer.

      One fascinating example I saw on TV (I think it was 60 minutes) was that humans are unable to recogni

  • I remember a Photoshop demo from two years ago where they managed to reverse lens blur on a photo. It was only a matter of time.
    • The lens has a certain unchanging point spread function (how a point of light is spread into a blur) which scales linearly with distance away from (and closer than) the focal plane. You estimate the size of the blur, then apply an inverse of this PSF. The process is similar to tomography [wikipedia.org] used in CT scans (Computer-assisted Tomography) and MRIs. Likewise, camera shake simply adds a linear smear component to this PSF. Heck, technically you don't even need a lens. The light shining through a window which
  • Limited usefulness (Score:4, Insightful)

    by hackertourist ( 2202674 ) on Thursday September 15, 2016 @06:26AM (#52891815)

    They had a photo with an obscured face and the same photo with the face unobscured in their training set. It seems obvious a computer can match those two. The solution would be to use unique photos, not uploaded anywhere else, as the source for obscuration and only publish the obscured version.

  • Most Asian women I see in the US are no longer pixelated.

  • If you care about security, you shouldn't be using blur, you should be putting a nice black circle over it. A yellow smiley face works just as well.
  • If I take a photo that has license plates, street addresses, etc., I CUT them out and PASTE another part of the photo in their place. If you don't just blur it, but REMOVE it, how would they figure it out?
  • It's gone.

    Between social media mining, NSA/CIA/FBI operations, license plate readers, Stingray gadgets, the GPS in your phone, cell tower triangulation, TPMS scanners, and the video cams on every power pole and stop light, the concept of 'privacy' or anonymous behavior is pretty much gone.

    I'd wager it would be nearly impossible to travel between any two major cities or buy anything in a store without leaving a trackable signature.

    You'd basically have to travel by bicycle with your head covered (leaving your

  • Wait... you mean that's an actual thing now?

  • Either mask the face/license plate/whatever entirely or replace it with a "fake blur" that was made from another image.

    For license plates, use a sample plate like ABC-123 to generate the blurred image (a rough sketch of this follows below).

    Faces will be a little harder to do: Either 1) you will only have a few "sample faces" and things will look creepy even if you use the best-matching sample, 2) you will have a few thousand and you will, in effect, leak information, or 3) you will be in between and it will look creepy and leak information.

    Perh
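    A minimal sketch of that fake-blur idea for plates, assuming Pillow; the region coordinates, sample text, and block size are hypothetical:

        from PIL import Image, ImageDraw

        def fake_blur_plate(photo_path, box, sample_text='ABC-123', block=8):
            # Replace the real plate region with a pixelated rendering of a
            # sample plate, so the published "blur" carries no real information.
            photo = Image.open(photo_path)
            w, h = box[2] - box[0], box[3] - box[1]
            decoy = Image.new('RGB', (w, h), 'white')
            ImageDraw.Draw(decoy).text((4, h // 3), sample_text, fill='black')
            decoy = decoy.resize((max(1, w // block), max(1, h // block)), Image.BOX)
            decoy = decoy.resize((w, h), Image.NEAREST)
            photo.paste(decoy, box[:2])
            return photo

        # e.g. fake_blur_plate('car.jpg', (350, 420, 520, 470)).save('car_safe.jpg')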

  • That's what they do on all the crime scene investigation shows!

  • Turns out someone's been fixing blurred lines [google.com] for a while now

  • Let's see their 'deep learning tool' identify something that's got a featureless black box over it, or someone's face that's got a black box or oval over it.
  • Comment removed based on user account deletion
    • by dublin ( 31215 )

      True, the lack of layers in Paint makes it a good choice for this kind of thing - perhaps the only thing it's really good at...

      It's stunning how many people do this kind of thing in Photoshop or Acrobat, but leave the layers intact, so you can remove the obscuration with a little advanced editing...
