
Amazon's Facial Recognition Misidentified 1 in 5 California Lawmakers as Criminals (vice.com) 79

The ACLU tested Rekognition, Amazon's facial recognition technology, on photographs of California lawmakers. It matched 26 of them to mugshots. From a report: In a recent test of Amazon's facial recognition software, the American Civil Liberties Union of Northern California revealed that the software mistook 26 California lawmakers for people arrested for crimes. The ACLU used Rekognition, Amazon's facial recognition software, to evaluate 120 photos of lawmakers against a database of 25,000 arrest photos, ACLU attorney Matt Cagle said at a press conference on Tuesday. One in five lawmaker photographs was falsely matched to a mugshot, exposing the frailties of an emerging technology widely adopted by law enforcement. The ACLU used the default Rekognition settings, which report matches at 80 percent confidence, Cagle said. Assembly member Phil Ting was among those whose picture was falsely matched to an arrest photo. He is also an active advocate for limiting facial recognition technology: in February, he introduced a bill, co-sponsored by the ACLU, that would ban the use of facial recognition and other biometric surveillance on police-worn body cameras.
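For illustration only, here is a minimal sketch of the kind of query that test describes, using the boto3 Rekognition client: each probe photo is searched against a pre-indexed collection of arrest photos at the default 80 percent similarity threshold, and we count how many probes return at least one (false) match. The collection name and photo directory are hypothetical; this is not the ACLU's actual code.

    from pathlib import Path

    import boto3

    rekognition = boto3.client("rekognition")

    def has_false_match(photo_path, collection_id="mugshots", threshold=80):
        """Return True if the probe photo matches anything in the collection."""
        response = rekognition.search_faces_by_image(
            CollectionId=collection_id,             # hypothetical, pre-built with index_faces
            Image={"Bytes": Path(photo_path).read_bytes()},
            FaceMatchThreshold=threshold,           # 80 is the service default
            MaxFaces=5,
        )
        return len(response["FaceMatches"]) > 0

    lawmaker_photos = sorted(Path("lawmaker_photos").glob("*.jpg"))  # hypothetical directory
    flagged = sum(has_false_match(photo) for photo in lawmaker_photos)
    print(f"{flagged} of {len(lawmaker_photos)} photos falsely matched an arrest photo")

With 120 probe photos and 26 of them flagged, that ratio is the one-in-five figure in the headline.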
Comments Filter:
  • Well (Score:5, Funny)

    by cyberchondriac ( 456626 ) on Tuesday August 13, 2019 @05:06PM (#59084260) Journal

    Sounds about right to me.

  • by nwaack ( 3482871 ) on Tuesday August 13, 2019 @05:07PM (#59084262)
    Then again, maybe not, since the real number is probably more like 4 in 5.
  • by ITRambo ( 1467509 ) on Tuesday August 13, 2019 @05:14PM (#59084278)
    I suspect just about all of them have abused the law at some time, not necessarily in an evil way. But, every corrupt politician needs to start somewhere. How many have helped out friends? How many have gotten out of a ticket? The list of questions is very, very long.
  • They either have or will commit crimes.
  • A criminal does not necessarily get arrested, and a person who gets arrested is not necessarily guilty of the crime.
    • Misidentification can lead to being treated with undue aggressiveness or fear. The cops might not do anything, but private security might tackle you immediately, lock you up for a while before calling the cops, or eject you from the premises with no compensation (think a cruise, ball game, or plane).
      • In this case, I think that if the misidentification rate is that high, it might actually help. As in, if the cameras are flagging 20% of the population as potential matches to known criminals, every cop is going to get tired of the false positives and stop treating them so seriously. If the cameras only misidentify 0.1% of the population, that's accurate enough that treating every flag as an alert is actually a practical measure, and the poor schmuck with a face almost identical to some felon's gets to deal

  • That seems low.

  • 5-1=? (Score:5, Insightful)

    by billybob2001 ( 234675 ) on Tuesday August 13, 2019 @05:26PM (#59084306)

    Misidentified 1 in 5 California Lawmakers as Criminals

    Does that mean it Correctly identified the other 4 as criminals?

    • Unfortunately, the actual results aren't so exciting. Facial recognition did not confirm any matches, it just spit out potential matches. But that doesn't make a good headline.
      • It's also interesting that they say the software was clear about having 80 percent confidence, and 4 of 5 (80%) correctly found no potential matches. So why is this news?
      • "Facial recognition did not confirm any matches, it just spit out potential matches. But that doesn't make a good headline."

        ... or a good facial recognition system for that matter.

        • "Facial recognition did not confirm any matches, it just spit out potential matches. But that doesn't make a good headline."

          ... or a good facial recognition system for that matter.

          This comment, and the headline of this discussion, demonstrate a lack of understanding of what facial recognition is intended to do.

          Facial recognition that does not "confirm any matches" is perfectly good facial recognition, because that is not what facial recognition is supposed to do. Facial recognition applies matching algorithms and reports a percentage match. That's all. It is not designed to prove the identity -- that's magic TV forensics. It winnows out the negatives so something else, usually a

          • > Facial recognition that does not "confirm any matches" is perfectly good facial recognition, because that is not what facial recognition is supposed to do.

            I'm afraid this is _precisely_ what facial recognition is supposed to do, especially in biometric uses. It's shown real limitations and has never been that accurate. But it's certainly the goal of nearly all research and development.

            • I'm afraid this is _precisely_ what facial recognition is supposed to do, especially in biometric uses.

              I'm afraid you are precisely wrong, precisely because this was not a biometric use. There was no authentication or access control, it was simple facial matching. That is easily proven by pointing to the 80% match criterion.

              • Sadly, you seem to have reversed the logic. You claimed:

                > Facial recognition that does not "confirm any matches" is perfectly good facial recognition, because that is not what facial recognition is supposed to do. [ snipped ] It is not designed to prove the identity

                The Wikipedia definition is:

                > A facial recognition system is a technology capable of identifying or verifying a person from a digital image or a video frame from a video source.

                You seem to also claim that, because biometric ID's verify ide

            • > Facial recognition that does not "confirm any matches" is perfectly good facial recognition, because that is not what facial recognition is supposed to do.

              I'm afraid this is _precisely_ what facial recognition is supposed to do, especially in biometric uses. It's shown real limitations and has never been that accurate. But it's certainly the goal of nearly all research and development.

              The goal is whatever the use case calls for. Maybe you get your perception of what facial recognition is from what you see in the movies and on TV shows, and maybe from stupid headlines, but in the real world, and in any practical sense, the implementation is to find potential matches that can then be validated by other means.

              Perfect accuracy may be a target for developers, but it is not required to be a useful tool. There are lots of tools in use every day that aren't perfectly accurate.

          • You literally just said 1 in 5000 is good and apparently don't know that 1 in 5 is not the same thing. In your 25000 person example the system gives 5000 pictures to sift through. Is it better than nothing? Possibly, if time isn't a factor and they have time to sift through 5000. Is it good? Of course not you fucking idiot.
            • You literally just said 1 in 5000 is good and apparently don't know that 1 in 5 is not the same thing.

              You apparently do not know what facial recognition, especially in this context, is meant to do, so you pretend to know what I don't know. I said nothing about 1 in five. "One in five" was irrelevant to my comment.

              In your 25000 person example the system gives 5000 pictures to sift through.

              No, as I said pretty explicitly, my example gives five pictures to sift through. Let's see if we can find the words I actually wrote: "A facial recognition system that throws out 24,995 faces and reports the top five for an input image is doing a very good job." Yes, "throws out 24,995" means there

              • I have taken what you originally wrote and modified it to be accurate rather than absurd as it was in the form you wrote it:

                "The above comment, and the inability of the poster to remember the headline of this discussion, demonstrate the lack of understanding on how to have an intelligent thought. "

                Let's see how this plays out and makes your comments phenomenally stupid, shall we?

                "I said nothing about 1 in five."

                The subject, as is clearly stated in the title, is that this system has a 20% false positive rate

                • Reminder: here is what you wrote and what I replied to:

                  "Facial recognition did not confirm any matches, it just spit out potential matches. But that doesn't make a good headline."

                  ... or a good facial recognition system for that matter.

                  There is no mention of "one in five" there. There is a generic statement that a facial recognition system that doesn't confirm any matches isn't a good facial recognition system. Now, please proceed.

                  I have taken what you originally wrote and modified it

                  Yes, I know you modified it. That means it isn't what I wrote.

                  The subject, as is clearly stated in the title, is that this system has a 20% false positive rate. It actually says 1 in 5.

                  Actually, the subject of this thread says "5-1=?" It is not "1 in five". Second, as I pointed out, the entire comment you made, and that I replied to, had morphed from a discussion o

                  • I figured I'd let you bury yourself as deeply as possible and show how you like to build strawmen and generally try to worm your way out of your idiocy by misquoting, pretending things were never said, and trying to claim you said something different than you actually have. You have done an excellent job at playing into your own stupidity. You should really create a new ID "The Transparent One", because any moron can see through your elementary school debating techniques.

                    Of course your first ridiculous att
  • Are we sure they MISidentified them?

  • I don't see any problems with the software.

    • Say, if they were matching against CCTV from a certain island... Could easily be one in five.

    • I see a huge usability problem here -- Amazon recommends setting the threshold to 99% but has a default setting of 80%.
      Doesn't seem like it would take much looking at their analytics to realize that people aren't overriding the default. If they were concerned about false positives they'd have upped the default to match the recommendation by now.
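      For what it's worth, the override in question is a single parameter on each call; a minimal boto3 sketch, with a hypothetical collection name and image path:

        import boto3

        rekognition = boto3.client("rekognition")
        with open("probe.jpg", "rb") as f:           # hypothetical image path
            image_bytes = f.read()

        matches = rekognition.search_faces_by_image(
            CollectionId="mugshots",                 # hypothetical, pre-indexed collection
            Image={"Bytes": image_bytes},
            FaceMatchThreshold=99,                   # must be passed explicitly; omit it and you get 80
        )["FaceMatches"]
        print(f"{len(matches)} candidate matches at or above 99% similarity")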
  • Without something to compare that result with, it is meaningless.

    How many of those politicians would have been matched against a mugshot if people had been doing the job?

  • instead of five of five?
  • The results should be amusing -- and enlightening.
  • Comment removed based on user account deletion
  • For my variation on a "but they are all criminals" joke???
  • In statistics "confidence" has a very specific meaning related to populations and samples. It does not apply to *tests* and *methods*. It can't.

    It is simply impossible to say of a test that it "matches identity with 80% confidence". The very statement itself is mathematically speaking nonsense. You *can't* know how much to trust any test without characterizing the samples it is being applied to. If you ran this test on photos from a hundred years ago, how much confidence should you have in any matches

    • by Entrope ( 68843 )

      There's usually some goodness-of-fit metric as the output of these algorithms. The usual way to run such a system at an "80% confidence" level is to set a threshold on the goodness-of-fit metric such that it is right 80% of the time on the evaluation data set.
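      One way to read that, sketched with made-up evaluation data (scores on a 1-100 scale, labels recording whether each pair really is the same person): sweep candidate thresholds and keep the lowest one whose accuracy on the evaluation set reaches 80 percent.

        def pick_threshold(eval_pairs, target_accuracy=0.80):
            """eval_pairs: (score, is_true_match) tuples from a labeled evaluation set."""
            for threshold in range(1, 101):
                correct = sum((score >= threshold) == truth for score, truth in eval_pairs)
                if correct / len(eval_pairs) >= target_accuracy:
                    return threshold
            return None

        # Made-up scores for five genuine pairs and five impostor pairs.
        eval_pairs = [(92, True), (85, True), (97, True), (88, True), (95, True),
                      (78, False), (83, False), (60, False), (74, False), (81, False)]
        print(pick_threshold(eval_pairs))  # 79 for this toy data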

      • by hey! ( 33014 )

        Right. But it tells you nothing about how much to trust a test result in a real world situation.

        I'm not saying tests like these don't have utility. But promoters of these tests overstate that utility. In fact it would be more accurate to say that the test is "no more than 80% accurate" rather than to say it's "80% accurate".

        • by Entrope ( 68843 )

          And you ridiculously understated the utility, by claiming that it "is mathematically speaking nonsense" (it isn't) and by comparing it to testing against photos from a hundred years ago. If one's purpose is to identify which people in 100-year-old photos were the same, the method and the confidence level would have meaning. If you (yes, you) ask it to match people in 100-year-old photos to recent mugshots, the problem is with the user rather than the test.

          You are still making bogus claims, because you don

          • by hey! ( 33014 )

            It is nonsense. It is *literally* nonsense. You cannot state the confidence you can have in a test result simply knowing the statistical properties of the test. There's even a name for it: the base rate fallacy.

            because you don't know whether the system will be more or less accurate than the estimate

            I'm not the one claiming anyone can.
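            A toy calculation of the base-rate point (the numbers are made up, not from Rekognition or the ACLU test): even a matcher that is right 99% of the time produces mostly false alerts when genuine matches are rare in the crowd being scanned.

              population = 100_000        # faces scanned at an event
              wanted = 10                 # people present who are actually on the watch list
              true_positive_rate = 0.99   # chance a wanted person is flagged
              false_positive_rate = 0.01  # chance anyone else is wrongly flagged

              true_alerts = wanted * true_positive_rate                   # ~9.9
              false_alerts = (population - wanted) * false_positive_rate  # ~999.9
              precision = true_alerts / (true_alerts + false_alerts)
              print(f"Share of alerts that are genuine: {precision:.1%}")  # about 1.0%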

            • by Entrope ( 68843 )

              It is nonsense. It is *literally* nonsense.

              You keep using that word. I do not think it means what you think it means. The 80% number may be wrong, but it is an entirely standard and accepted way to characterize a classifier's behavior.

              You cannot state the confidence you can have in a test result simply knowing the statistical properties of the test.

              Now that is nonsense. What statistical properties do you think a test has by itself? The 80% number is a claim about the statistics of the results of the tec

              • by hey! ( 33014 )

                The 80% number is a claim about the statistics of the results of the technique, given assumptions about the population it works on.

                Given what assumptions?

                • by Entrope ( 68843 )

                  Ask Amazon. I'm not the one who came up with the 80% claim. Usually it's that the test data population is a good model for the general population.

          • If you (yes, you) ask it to match people in 100-year-old photos to recent mugshots, the problem is with the user rather than the test.

            Unless you get a hit and out one of the immortals before the time of the Gathering.

  • How it really works (Score:5, Informative)

    by MikeWhoIsTall ( 5584316 ) on Tuesday August 13, 2019 @06:07PM (#59084408)
    Background: I do not work for Amazon and I have no knowledge of the internal workings of their specific system, but I've worked in the biometrics industry for quite some time. The headline and its implications sound very scintillating unless you know how real systems work and are used.

    With most algorithms, when you match two faces you get a "score". This score basically means "likelihood of the two faces being a match". When you run these through a system, you typically set a score threshold. Let's say the match confidence score is on a 1-100 scale, 1 being very unlikely to be a match and 100 being highly likely to be a match. You might configure the system to say "give me all results where the score is 75 or above". Now, you are probably going to look similar to a fair number of people in the rest of the population, at least according to the algorithms. So they don't take the person with the highest score and put him or her in prison immediately. Human operators and law-enforcement agents take the matches with the highest scores and 1) visually inspect them to see if they really are that close or not, 2) do a quick background check to see if the person has been charged with related crimes previously, and 3) at the next level, do some research to see if the matched person could even reasonably be said to have been in the area of a suspected crime at the time it occurred. Only then would the matched person even be brought in for questioning, much less charged with anything.

    So the fact that some subset of politicians had a roughly 20% match rate at some threshold score against a database of gosh-knows how many known criminals is not that far-fetched, and it does not indicate that a suspect who matched the database at the same rate would be cuffed and brought immediately to prison. Now, I'm not a fan of scanning entire crowds of people at an event or populated area hoping that you catch someone with an outstanding warrant, etc., which is a different issue. But the headlines and the implications thereof grossly overestimate the role of a raw match in the larger process of investigating a crime.
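    To make the triage step concrete, here is a minimal sketch using made-up scores, hypothetical subject IDs, and the 75 threshold from the example above; the scores themselves are assumed to come from whatever matching algorithm is in use.

        def candidates_for_review(matches, threshold=75, top_n=5):
            """matches: (subject_id, score) pairs on a 1-100 scale; keep the best few."""
            above = [(subject, score) for subject, score in matches if score >= threshold]
            return sorted(above, key=lambda pair: pair[1], reverse=True)[:top_n]

        matches = [("A-1041", 91), ("A-2207", 77), ("A-0013", 74), ("A-3350", 82)]
        print(candidates_for_review(matches))
        # [('A-1041', 91), ('A-3350', 82), ('A-2207', 77)]  and A-0013 falls below the cut

    Everything that survives the cut still goes to a human for visual inspection and background checks before anyone is even contacted.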
    • Don't sully this thread with logic and a decent explanation. That is so boring.

    • Cops can't be trusted to calibrate / confirm calibration of their DUI and speeding meters. I don't expect diligence and restraint from the barely-educated police force.
    • Only then would the matched person even be brought in for questioning, much less charged with anything.

      You read like a smart chap. Do you really believe the above statement? Would you bet your freedom on it?

      • With regard to facial recognition technology, I do believe it. Even people who are an 88% match according to the algorithm can often look so different that it is easy to rule them out via a quick visual assessment. Now, I am talking about U.S. law enforcement agencies; I'm not clear on other areas of the world. I think the biggest danger is surveillance: using your face to know where you are, to keep tabs on you. I think the potential there is way more sinister than in the criminal apprehension use ca
    • You are absolutely right, but I think you are overestimating the extent to which its "recommended" use will be honoured, and underestimating the extent to which it will be relied on as first-order evidence, in full ignorance of the prosecutor's fallacy.

      We have seen this happen before with DNA (have a look at the Innocence Project). It is not unreasonable to make the limitations of such a system widely known, given the potential for its use as an almighty tool by people who, in fact, quite reasonably, have no id
      • The thing about DNA testing is that its reliability is on a whole different level than any other tool available to crime investigators. While it is possible for two samples from different people to come out as the same person, this is so unlikely that we're literally talking odds lower than one-in-a-million. Misuse of DNA evidence is mostly just planted and tampered evidence, which are basically their own issues.

        No, most people who get sent to prison wrongfully do so based on things like false confessions, fa
        • this is so unlikely we're literally talking odds lower than one-in-a-million.

          I remember reading an article that mentioned that in the early days of DNA testing it was actually closer to 1:10k or so.

          This had to do with the fact that DNA samples from crime scenes tend to be tiny, contaminated, and degraded.

          Over the last 20 years or so, though, we've improved DNA testing almost as much as computers. The cost has dropped a couple of orders of magnitude, and the size of the sample needed has dropped a similar amount, because we've developed tests that require smaller samples to begin with and also develo

        • I'm not implying that DNA is not reliable. Quite the contrary. The more reliable and familiar a method, the more likely it is to be misused.
          I appreciate what you're saying, that the prosecutor's fallacy, planting, tampering, bad handling, etc., are issues in themselves, but that's just splitting hairs (pun not intended). False DNA evidence suffering from these issues is still more likely to be relied on than, say, witness testimony, because it's considered more reliable.
          Therefore in the context of the or
  • That's not very good.
  • by h33t l4x0r ( 4107715 ) on Tuesday August 13, 2019 @07:23PM (#59084570)
    It correctly identified Mitch McConnell as a turtle.
  • It's a travesty that they would miss 4 out of 5 criminals.

  • Are they sure?

  • If California had elected more Republicans, the accuracy would trend upward almost in direct proportion to the number of right wing legislators. Right wing Democrats could pump the percentage even higher.

  • Maybe they weren't misidentified. Think of that?
