AI Privacy

AI Fake-Face Generators Can Be Rewound To Reveal the Real Faces They Trained On (technologyreview.com) 23

An anonymous reader quotes a report from MIT Technology Review: Load up the website This Person Does Not Exist and it'll show you a human face, near-perfect in its realism yet totally fake. Refresh and the neural network behind the site will generate another, and another, and another. The endless sequence of AI-crafted faces is produced by a generative adversarial network (GAN) -- a type of AI that learns to produce realistic but fake examples of the data it is trained on. But such generated faces -- which are starting to be used in CGI movies and ads -- might not be as unique as they seem. In a paper titled This Person (Probably) Exists (PDF), researchers show that many faces produced by GANs bear a striking resemblance to actual people who appear in the training data. The fake faces can effectively unmask the real faces the GAN was trained on, making it possible to expose the identity of those individuals. The work is the latest in a string of studies that call into doubt the popular idea that neural networks are "black boxes" that reveal nothing about what goes on inside.

To expose the hidden training data, Ryan Webster and his colleagues at the University of Caen Normandy in France used a type of attack called a membership attack, which can be used to find out whether certain data was used to train a neural network model. These attacks typically take advantage of subtle differences between the way a model treats data it was trained on -- and has thus seen thousands of times before -- and unseen data. For example, a model might identify a previously unseen image accurately, but with slightly less confidence than one it was trained on. A second, attacking model can learn to spot such tells in the first model's behavior and use them to predict when certain data, such as a photo, is in the training set or not.
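As a rough illustration of the confidence-gap idea described above (not the authors' actual method), here is a minimal sketch in Python; the `target_model` object, its `predict_proba` interface, and the 0.95 threshold are all assumptions made for the example:

    import numpy as np

    def membership_scores(target_model, images):
        # The tell-tale signal: how confident the model is on each input.
        # Assumes predict_proba returns an (n_images, n_classes) array.
        probs = target_model.predict_proba(images)
        return probs.max(axis=1)

    def guess_membership(target_model, images, threshold=0.95):
        # Flag inputs the model is suspiciously confident about.
        # In a real attack the threshold (or a small second "attack model")
        # would be calibrated on data known to be in or out of the training set.
        scores = membership_scores(target_model, images)
        return scores > threshold  # True = likely part of the training data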

Such attacks can lead to serious security leaks. For example, finding out that someone's medical data was used to train a model associated with a disease might reveal that this person has that disease. Webster's team extended this idea so that instead of identifying the exact photos used to train a GAN, they identified photos in the GAN's training set that were not identical but appeared to portray the same individual -- in other words, faces with the same identity. To do this, the researchers first generated faces with the GAN and then used a separate facial-recognition AI to detect whether the identity of these generated faces matched the identity of any of the faces seen in the training data. The results are striking. In many cases, the team found multiple photos of real people in the training data that appeared to match the fake faces generated by the GAN, revealing the identity of individuals the AI had been trained on.
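A hedged sketch of the pipeline the summary describes: sample faces from the GAN, embed them with a separate face-recognition model, and look for training faces with the same identity. The generator and embedding interfaces, the latent size, and the distance cutoff are illustrative assumptions, not details taken from the paper:

    import numpy as np
    import torch

    def find_identity_matches(generator, face_embedding, train_faces,
                              n_samples=1000, latent_dim=512, cutoff=0.6):
        # Embed every known training face once (assumes face_embedding
        # returns a fixed-length NumPy vector per image).
        train_emb = np.stack([face_embedding(img) for img in train_faces])

        matches = []
        for _ in range(n_samples):
            z = torch.randn(1, latent_dim)           # random latent code
            fake = generator(z)                       # synthetic face image
            emb = face_embedding(fake)
            dists = np.linalg.norm(train_emb - emb, axis=1)
            if dists.min() < cutoff:                  # "same person" per the cutoff
                matches.append((fake, int(dists.argmin())))
        return matches  # pairs of (generated face, index of the matching real face)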


AI Fake-Face Generators Can Be Rewound To Reveal the Real Faces They Trained On

  • For example, finding out that someone's medical data was used to train a model associated with a disease might reveal that this person has that disease.

    From the description of how the attack works, this would imply you had the original medical data to check against the model and see how it responds... so wouldn't you know they had that disease from the original medical data itself?

    I'm not seeing an extreme risk here.

    • A black hat may have an older medical record from before the target was diagnosed with the disease. But, yeah, this seems like a contrived example.

      GANs can be trained for any criteria. So if you want the images to be equidistant from multiple people in the training set, make that a criterion, and the exploit described in TFA is no longer possible.

      They haven't discovered an inherent weakness in GANs, just in one particular GAN.

    • by ceoyoyo ( 59147 )

      You're correct. People like to use medical data as an example without having any idea how it actually works. It makes for some pretty hilarious gaffes.

      The actual paper/story is about GANs producing images that are similar to their training data. That's not surprising at all. It's what you trained the thing to do, after all. You could add a penalty for excessive similarity into your cost function if it was a concern (a rough sketch of such a term follows this comment thread). I doubt anybody would do that though, because these things are usually trained on giant datasets anyway.

      • I can't imagine using medical data to train an AI algorithm without deidentifying the data. But who knows, maybe there are some lazy researchers out there...
        • by ceoyoyo ( 59147 )

          The point is that, theoretically, if you trained a model with my deidentified scan, someone could come by later, show my scan to your system, and guess that you had trained it using my scan. It's a bit of a silly, roundabout scenario where the attacker already has all the information.

    • by xalqor ( 6762950 )

      I had a similar thought about this line:

      To do this, the researchers first generated faces with the GAN and then used a separate facial-recognition AI to detect whether the identity of these generated faces matched the identity of any of the faces seen in the training data.

      Until that line, I thought they were talking about deducing what someone looks like from the training dataset using ONLY similar outputs from the tools, and then searching online, or some other database, to find a match. After that line, it's clear they already had the training faces in hand to compare against.

    • by vivian ( 156520 )

      It seems like this attack requires you to have the person's medical records to tell if the neural network was trained on those records - so I don't really see the security risk here.

      Sort of like saying, "Hey, I can tell if this password encryption algorithm has encoded your password, by using your password to see if it encrypted it."

      • In the password example, you are repeating trials to determine the password of a known user.

        In the case of the medical record, are you saying you'd randomly generate medical records out of thin air until one hit, and then look up the name from the randomly generated data that happened to form an entire valid medical record?
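On the suggestion in the thread above about penalising excessive similarity in the cost function: a speculative sketch of what such a term might look like for a GAN generator. The embedding model, margin, and weight are placeholders, and nothing here comes from the paper:

    import torch

    def similarity_penalty(fake_emb, train_emb, margin=0.6, weight=1.0):
        # fake_emb:  (B, D) embeddings of a batch of generated faces
        # train_emb: (N, D) embeddings of (a sample of) the training faces
        dists = torch.cdist(fake_emb, train_emb)      # (B, N) pairwise distances
        nearest = dists.min(dim=1).values              # distance to the closest real face
        # Penalise only faces that land closer than `margin` to a real identity.
        return weight * torch.relu(margin - nearest).mean()

    # The term would simply be added to the usual generator loss, e.g.:
    # g_loss = adversarial_loss + similarity_penalty(embed(fake_batch), embed(train_sample))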

  • by Kotukunui ( 410332 ) on Wednesday October 13, 2021 @05:33PM (#61889535)
    Interesting to generate a face at the site and then do a reverse image search to see who Google Image search thinks it is... Eyeballing it shows very few that I would say are actually the same person, but then Google mostly suggests "celebrities" first.
  • seriously? (Score:5, Insightful)

    by epine ( 68316 ) on Wednesday October 13, 2021 @05:36PM (#61889549)

    Such attacks can lead to serious security leaks. For example, finding out that someone's medical data was used to train a model associated with a disease might reveal that this person has that disease.

    For fifteen years I visited slashdot twice a day, sometimes more often. In recent years I slowed down to twice a month, as slashdot culture became more like reddit culture. It's hard to put my finger on precisely what this x-factor is, but I know it when I see it.

    Am I missing something here, or is the above passage total geek fail?

    I mean it's possible that I've reached my silver-sourcer best-before date, and a trillion brain cells died overnight, and I've crossed over the Dunning-Kruger horizon into the final sunset, which is quite possibly not so different from falling into a gentle black hole of galactic mass, with no real tides at the event horizon to tap you with a clue stick by tearing your body asunder, limb from limb.

    According to the brain cells I still retain, whatever their final number, it seems to me that to find out by this method that someone's medical data is inside the model, you need to test this by already having said medical data in hand. If so you can't by this method learn whether the person had the disease. What you can learn is whether they signed the consent form to participate in lending their personal data to the model builders.

    While we're stuck here at the level of pushing back against reddit-worthy brain farts, we're not discussing any of the deeper issues. I can get that from any politician anywhere — modern supply now being as near to infinite as to make no difference.

    Et tu, News for Nerds? This is so depressing.

    • Talk about the decline of News for Nerds. I just hit up the Firehose and the page had no up/down vote icons. F5 ... same again. Oh well.
    • I've just come back after a decade or more absence. (Life takes twists and turns ya know) Kind of shocked and disappointed in what's left of a once thriving, interesting community. The bones are still good but the flesh has wasted away somehow.
    • I was going to reply, but you're still manually typing those blockquote tags in, so it's too much of a PITA.
      <quote> was introduced in what, 2002? 2003? Stop typing blockquote; it makes it a bunch of extra work. It's a nerd fail, geek-boy.

    • According to the brain cells I still retain, whatever their final number, it seems to me that to find out by this method that someone's medical data is inside the model, you need to test this by already having said medical data in hand. If so you can't by this method learn whether the person had the disease. What you can learn is whether they signed the consent form to participate in lending their personal data to the model builders.

      I think you are mostly right. But who is to say that some of the data set isn't publicly available? Let's say I know some set of things about you that (almost) uniquely identify you (zip code, gender, doctor, genetic heritage, birthday, etc.) that was used in training the model. A lot of this stuff would be provided in an insurance application, job application, etc. And even more is scrape-able or inferable from the web. Maybe that data alone is enough to help me match. But if not, then I can fuzz over the remaining fields.

    • I mean it's possible that I've reached my silver-sourcer best-before date, and a trillion brain cells died overnight, and I've crossed over the Dunning-Kruger horizon into the final sunset, which is quite possibly not so different from falling into a gentle black hole of galactic mass, with no real tides at the event horizon to tap you with a clue stick by tearing your body asunder, limb from limb.

      First, that was beautiful.

      Second, I'm of the supposition that anyone with sufficient brain capacity to suggest this has happened to them is actually not suffering from the effects they fear. It's the rest of humanity that has crossed the Dunning-Kruger horizon. Those that aren't within it are struggling to figure out why the world has stopped making sense, and Occam's-razor their way into flipping the script.

    • You're not alone. I had exactly the same thought (but not as eloquently expressed). So you have some data, and can identify whether that data was part of the training set. But so what? Sure, you can identify if a person was part of that set... but this doesn't reveal anything about the person that you didn't already know. Trying to figure out if I have a disease? You could compare my medical history to a neural net to see if I'm part of the training set. OR, you could just look at my medical history itself.
  • Where is security, governed through scrutiny? Your privacy denied, organised and confined. No place to hide.

  • In 1973, Scientific American had an article by linguist Victoria Fromkin on using errors in language expression to gain insight into the mechanisms of language production, with Spoonerisms being the example that I recall. Hitting refresh on the face website a few times eventually brought up a couple of very interesting images. In cases where another person, probably a child, was next to the main face, the "child" sometimes suffered a facial tragedy recommending one of those reconstructive surgery charities.
  • Refresh and the neural network behind the site will generate another, and another, and another.

    That's not the way I remember it being explained here.
    When that site went up, we were told it was for viewing examples. That made a lot more sense than the idea that the image was being generated on the fly in the fraction of a second before it downloads.

  • by thegarbz ( 1787294 ) on Thursday October 14, 2021 @03:22AM (#61890613)

    If you can reverse a GAN to identify the underlying training data, there's potential to manually review and weed out data which is causing an error in the algorithm. Sure, the point of these training algorithms is to throw more data at it so the underlying problem gets massively reduced, but that only works if there isn't some common problem in the data you are feeding it.
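One speculative way to automate the "weed out" step this comment describes, reusing the identity-match output sketched earlier; the `matches` format and the `min_hits` cutoff are assumptions made purely for illustration:

    from collections import Counter

    def flag_for_review(matches, train_faces, min_hits=3):
        # matches: list of (generated_face, train_index) pairs from the matching step.
        # A training image that keeps being matched by different fakes is a
        # candidate for manual review, removal, or de-duplication.
        hits = Counter(idx for _, idx in matches)
        return [(idx, count, train_faces[idx])
                for idx, count in hits.most_common()
                if count >= min_hits]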

