Amazon's Facial Recognition Misidentified 1 in 5 California Lawmakers as Criminals (vice.com) 79
The ACLU tested Rekognition, Amazon's facial recognition technology, on photographs of California lawmakers. It matched 26 of them to mugshots. From a report: In a recent test of Amazon's facial recognition software, the American Civil Liberties Union of Northern California revealed that it mistook 26 California lawmakers for people arrested for crimes. The ACLU used Rekognition, Amazon's facial recognition software, to evaluate 120 photos of lawmakers against a database of 25,000 arrest photos, ACLU attorney Matt Cagle said at a press conference on Tuesday. One in five lawmaker photographs was falsely matched to a mugshot, exposing the frailties of an emerging technology widely adopted by law enforcement. The ACLU used the default Rekognition settings, which match identity at 80 percent confidence, Cagle said. Assembly member Phil Ting was among those whose picture was falsely matched to an arrest photo. He's also an active advocate for limiting facial recognition technology: in February, he introduced a bill, co-sponsored by the ACLU, that would ban the use of facial recognition and other biometric surveillance on police-worn body cameras.
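For readers curious what "default Rekognition settings" means in practice, here is a minimal sketch (not the ACLU's actual test harness) of how such a run could be scripted against the AWS Rekognition API. The collection name, file path, and helper function are hypothetical; the FaceMatchThreshold default of 80 is the setting the summary refers to, and Amazon has said it recommends a much higher threshold (99) for law-enforcement use.

    import boto3

    rekognition = boto3.client("rekognition")

    def match_against_mugshots(photo_path, threshold=80):
        # The 25,000 arrest photos are assumed to have been indexed beforehand
        # into a Rekognition collection (the collection name below is made up).
        with open(photo_path, "rb") as f:
            response = rekognition.search_faces_by_image(
                CollectionId="arrest-photos",
                Image={"Bytes": f.read()},
                FaceMatchThreshold=threshold,  # 80 is also the API default
                MaxFaces=5,
            )
        # ExternalImageId is whatever label was attached to a mugshot when it
        # was indexed; any face returned at this threshold counts as a "match".
        return [(m["Face"]["ExternalImageId"], m["Similarity"])
                for m in response["FaceMatches"]]

The point of the sketch is just that the 80 percent figure is a tunable request parameter, not a measured accuracy for any particular gallery.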
Well (Score:5, Funny)
Sounds about right to me.
Re:Well (Score:5, Funny)
Yeah, I'm not sure misidentified is the right word unless they are talking about the 4/5 it got wrong.
Re: (Score:3)
Re: (Score:3)
So the documentation says it has an 80% confidence and when you try it, it wrongly matches 20% of the folks you test it with. Okay. That math seems remarkably... accurate?
Re: (Score:1)
Re: (Score:2)
Re: (Score:3)
Q: How do you know a politician is lying?
A: His lips are moving!
Re: (Score:1)
Re: (Score:3)
80% failure rate is pretty egregious
Re: (Score:3)
80% failure rate is pretty egregious
Not for the government in CA. Citation: Train to nowhere
Re: (Score:2)
All one needs to know is who actually profited from this train. She's still in the Senate. Nothing to see here, move along.
Re: (Score:2)
This ^ comment is clearly a joke, but the article never states that they actually checked. There have been Democrats who were sworn into office while in jail for murder and drug trafficking. I am sure at least a few Californian lawmakers have mugshots on file.
Very smart facial recognition (Score:4, Funny)
Re: (Score:2)
That seems low (Score:3)
Future Crime (Score:1)
Headline and body contradict (Score:2)
Re: Headline and body contradict (Score:2)
Misidentification rates (Score:1)
In this case, I think that if the misidentification rate is that high, it might actually help. As in, if the cameras are identifying 20% of the population as potentially a flagged criminal, every cop is going to get tired of the false positives and stop treating them as seriously. If the cameras only misidentify 0.1% of the population, that's accurate enough that treating every flag as an alert is actually a practical measure, and the poor schmuck with a face almost identical to some felon's gets to deal
That is a problem (Score:2)
That seems low.
5-1=? (Score:5, Insightful)
Misidentified 1 in 5 California Lawmakers as Criminals
Does that mean it Correctly identified the other 4 as criminals?
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
... or a good facial recognition system for that matter.
Re: (Score:3)
"Facial recognition did not confirm any matches, it just spit out potential matches. But that doesn't make a good headline."
... or a good facial recognition system for that matter.
This comment, and the headline of this discussion, demonstrate a lack of understanding of what facial recognition is intended to do.
Facial recognition that does not "confirm any matches" is perfectly good facial recognition, because that is not what facial recognition is supposed to do. Facial recognition applies matching algorithms and reports a percentage match. That's all. It is not designed to prove the identity -- that's magic TV forensics. It winnows out the negatives so something else, usually a
Re: (Score:2)
> Facial recognition that does not "confirm any matches" is perfectly good facial recognition, because that is not what facial recognition is supposed to do.
I'm afraid this is _precisely_ what facial recognition is supposed to do, especially in biometric uses. It's shown real limitations and has never been that accurate. But it's certainly the goal of nearly all research and development.
Re: (Score:2)
I'm afraid this is _precisely_ what facial recognition is supposed to do, especially in biometric uses.
I'm afraid you are precisely wrong, precisely because this was not a biometric use. There was no authentication or access control, it was simple facial matching. That is easily proven by pointing to the 80% match criterion.
Re: (Score:2)
Sadly, you seem to have reversed the logic. You claimed:
> Facial recognition that does not "confirm any matches" is perfectly good facial recognition, because that is not what facial recognition is supposed to do. [ snipped ] It is not designed to prove the identity
The Wikipedia definition is:
> A facial recognition system is a technology capable of identifying or verifying a person from a digital image or a video frame from a video source.
You seem to also claim that, because biometric ID's verify ide
Re: (Score:2)
> Facial recognition that does not "confirm any matches" is perfectly good facial recognition, because that is not what facial recognition is supposed to do.
I'm afraid this is _precisely_ what facial recognition is supposed to do, especially in biometric uses. It's shown real limitations and has never been that accurate. But it's certainly the goal of nearly all research and development.
The goal is whatever the use case demands. Maybe you get your perception of what facial recognition is from what you see in the movies and on TV shows, and maybe from stupid headlines, but in the real world, and in any practical sense, the implementation is to find potential matches that can then be validated by other means.
Perfect accuracy may be a target for developers, but it is not required for a tool to be useful. There are lots of tools in use every day that aren't perfectly accurate.
Re: 5-1=? (Score:2)
Re: (Score:2)
You literally just said 1 in 5000 is good and apparently don't know that 1 in 5 is not the same thing.
You apparently do not know what facial recognition, especially in this context, is meant to do, so you pretend to know what I don't know. I said nothing about 1 in five. "One in five" was irrelevant to my comment.
In your 25000 person example the system gives 5000 pictures to sift through.
No, as I said pretty explicitly, my example gives five pictures to sift through. Let's see if we can find the words I actually wrote: "A facial recognition system that throws out 24,995 faces and reports the top five for an input image is doing a very good job." Yes, "throws out 24,995" means there
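For what it's worth, a toy version of the "throw out 24,995, report the top five" behavior described above might look like the following, assuming face embeddings already exist for the probe photo and the gallery (this is illustrative only, not Rekognition's internals):

    import numpy as np

    def top_candidates(probe_vec, gallery_vecs, gallery_ids, k=5):
        # Cosine similarity between the probe face embedding and every
        # gallery embedding, keeping only the k best for human review.
        gallery = gallery_vecs / np.linalg.norm(gallery_vecs, axis=1, keepdims=True)
        probe = probe_vec / np.linalg.norm(probe_vec)
        scores = gallery @ probe
        best = np.argsort(scores)[::-1][:k]   # indices of the k highest scores
        return [(gallery_ids[i], float(scores[i])) for i in best]

Whether those five candidates are any good is then a question for whoever reviews them, which is the parent's point.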
Re: (Score:2)
Let's see how this plays out and makes your comments phenomenally stupid, shall we?
The subject, as is clearly stated in the title, is that this system has a 20% false positive rate
Re: (Score:2)
"Facial recognition did not confirm any matches, it just spit out potential matches. But that doesn't make a good headline."
... or a good facial recognition system for that matter.
There is no mention of "one in five" there. There is a generic statement about a facial recognition system that doesn't confirm any matches and that it isn't a good facial recognition system. Now, please proceed.
I have taken what you originally wrote and modified it
Yes, I know you modified it. That means it isn't what I wrote.
The subject, as is clearly stated in the title, is that this system has a 20% false positive rate. It actually says 1 in 5.
Actually, the subject of this thread says "5-1=?", not "1 in five". Second, as I pointed out, the entire comment you made, and that I replied to, had morphed from a discussion o
Re: (Score:2)
Of course your first ridiculous att
Really? lawmakers? (Score:2)
Are we sure they MISidentified them?
say what (Score:2)
I don't see any problems with the software.
Re: (Score:2)
Say, if they were matching against cctv from a certain island... Could easily be one in five.
Huge usability problem (Score:2)
Doesn't seem like it would take much looking at their analytics to realize that people aren't overriding the default. If they were concerned about false positives they'd have upped the default to match the recommendation by now.
More accurate than people? (Score:2)
How many of those politicians would have been matched against a mugshot if people had been doing the job?
Comparing with the alternatives (Score:1)
I've actually seen this before. Lots of complaints about how inaccurate the computer system was, until it was brought up that the human operated system it was replacing had double the errors.
lol (Score:1)
Please test this at 1600 Pennsylvania Avenue (Score:1)
Re: (Score:2, Troll)
Re: (Score:3)
Re: (Score:2)
Conservatives are worried that if this software improves, all liberals will have to do to compromise a Trump supporter's facial recognition security is drop their pants and spread their bum cheeks.
Re: (Score:3)
That was done. Trump was identified as a traffic cone lost on the Jersey Turnpike in the 60s
Re: (Score:2)
Re: (Score:2)
Did I make it in time? (Score:2)
Re: (Score:2)
Nope. But feel free to pile on. (I did.)
What do they even mean by 80% confidence? (Score:2)
In statistics "confidence" has a very specific meaning related to populations and samples. It does not apply to *tests* and *methods*. It can't.
It is simply impossible to say of a test that it "matches identity with 80% confidence". The very statement itself is mathematically speaking nonsense. You *can't* know how much to trust any test without characterizing the samples it is being applied to. If you ran this test on photos from a hundred years ago, how much confidence should you have in any matches
Re: (Score:2)
There's usually some goodness-of-fit metric as the output of these algorithms. The usual way to run such a system at an "80% confidence" level is to set a threshold on the goodness-of-fit metric such that it is right 80% of the time on the evaluation data set.
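As a rough illustration of that calibration step (assumed inputs, not Amazon's documented procedure): sweep candidate thresholds over a labelled evaluation set and keep the lowest one at which accepted matches are right at least 80 percent of the time.

    import numpy as np

    def calibrate_threshold(scores, is_same_person, target=0.80):
        # scores: similarity score for each evaluation pair
        # is_same_person: ground-truth boolean array for each pair
        for t in np.sort(np.unique(scores)):
            accepted = scores >= t
            if accepted.any() and is_same_person[accepted].mean() >= target:
                return t          # lowest threshold meeting the target
        return None               # target unreachable on this evaluation data

The catch, as the grandparent notes, is that the 80 percent only describes behavior on that evaluation population.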
Re: (Score:2)
Right. But it tells you nothing about how much to trust a test result in a real world situation.
I'm not saying tests like these don't have utility. But promoters of these tests overstate that utility. In fact it would be more accurate to say that the test is "no more than 80% accurate" rather than to say it's "80% accurate".
Re: (Score:3)
And you ridiculously understated the utility, by claiming that it "is mathematically speaking nonsense" (it isn't) and by comparing it to testing against photos from a hundred years ago. If one's purpose is to identify which people in 100-year-old photos were the same, the method and the confidence level would have meaning. If you (yes, you) ask it to match people in 100-year-old photos to recent mugshots, the problem is with the user rather than the test.
You are still making bogus claims, because you don
Re: (Score:2)
It is nonsense. It is *literally* nonsense. You cannot state the confidence you can have in a test result simply by knowing the statistical properties of the test. There's even a name for it: the base rate fallacy.
because you don't know whether the system will be more or less accurate than the estimate
I'm not the one claiming anyone can.
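To put a number on the base rate problem (the 0.1 percent per-comparison false-match rate below is an assumption for illustration, not a measured Rekognition figure):

    gallery_size = 25_000      # arrest photos in the ACLU test
    false_match_rate = 0.001   # assumed chance of a spurious match per comparison

    # Expected false matches when the probe face is NOT in the gallery at all:
    print(gallery_size * false_match_rate)   # 25.0 -- every reported hit is wrong

So even a seemingly accurate per-comparison figure says almost nothing about how much to trust a single reported match against a large gallery; that depends on the prior odds the probe is actually in it.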
Re: (Score:2)
You keep using that word. I do not think it means what you think it means. The 80% number may be wrong, but it is an entirely standard and accepted way to characterize a classifier's behavior.
Now that is nonsense. What statistical properties do you think a test has by itself? The 80% number is a claim about the statistics of the results of the tec
Re: (Score:2)
The 80% number is a claim about the statistics of the results of the technique, given assumptions about the population it works on.
Given what assumptions?
Re: (Score:2)
Ask Amazon. I'm not the one who came up with the 80% claim. Usually it's that the test data population is a good model for the general population.
Re: (Score:2)
If you (yes, you) ask it to match people in 100-year-old photos to recent mugshots, the problem is with the user rather than the test.
Unless you get a hit and out one of the immortals before the time of the Gathering.
How it really works (Score:5, Informative)
Re: (Score:2)
Don't sully this thread with logic and a decent explanation. That is so boring.
Re: (Score:3)
Re: (Score:3)
Only then would the matched person even be brought in for questioning, much less charged with anything.
You read like a smart chap. Do you really believe the above statement? Would you bet your freedom on it?
Re: (Score:1)
Re: (Score:3)
We have seen this happen before with DNA (have a look at the Innocence Project). It is not unreasonable to make the limitations of such a system widely known, given the potential for its use as an almighty tool by people who in fact, quite reasonably, have no id
Re: (Score:2)
No, most people who get sent to prison wrongfully do so based on things like false confessions, fa
Re: (Score:1)
this is so unlikely we're literally talking odds lower than one-in-a-million.
I remember reading an article that mentioned that in the early days of DNA testing it was actually closer to 1:10k or so.
This had to do with that DNA samples from crime scenes tend to be tiny, contaminated, and degraded.
Over the last 20 years or so though, we've improved DNA testing almost as much as computers. The cost has dropped a couple orders of magnitude, the size of the sample needed has dropped a similar amount because we've developed tests that require smaller samples to begin with and also develo
Re: (Score:2)
Re: (Score:2)
I appreciate what you're saying, that the Prosecutor's fallacy, planting, tampering, bad handling, etc. are issues in themselves, but that's just splitting hairs (pun not intended). False DNA evidence suffering from these issues is still more likely to be relied on than, say, witness testimony, because it's considered more reliable.
Therefore in the context of the or
Wow only 20% accuracy? (Score:1)
On the plus side... (Score:3)
Re: (Score:2)
But as a demonstration that facial recognition still has a ways to go, it also identified Mitch as the muppet Beaker [google.com].
Good lord - an 80% error rate? (Score:2)
It's a travesty that they would miss 4 out of 5 criminals.
Misidentified? (Score:2)
Are they sure?
I think I see the problem... (Score:2)
If California had elected more Republicans, the accuracy would trend upward almost in direct proportion to the number of right wing legislators. Right wing Democrats could pump the percentage even higher.
Not the first to post this but (Score:2)
Maybe they weren't misidentified. Ever think of that?