London Police's Face Recognition System Gets It Wrong 81 Percent of the Time (technologyreview.com)
An anonymous reader quotes a report from MIT Technology Review: London's police force has conducted 10 trials of face recognition since 2016, using Japanese company NEC's Neoface technology. It commissioned academics from the University of Essex to independently assess the scheme, and they concluded that the system is 81% inaccurate (in other words, the vast majority of people it flags for the police are not on a wanted list). They found that of 42 matches, only eight were confirmed to be correct, Sky News reports. The Met police insists its technology makes an error in only one in 1,000 instances, but it hasn't shared its methodology for arriving at that statistic.
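The two figures aren't necessarily contradictory; they measure different things. A rough Python sketch of how both could be true at once; the number of faces scanned is an illustrative assumption, not a figure from the report:

```python
# Back-of-the-envelope reconciliation of the two statistics.
# Only the 42 flags and 8 confirmed hits come from the Essex assessment;
# the crowd size is a made-up assumption.

flags = 42        # total matches raised across the trials (from the report)
true_hits = 8     # matches confirmed correct (from the report)

# The 81% figure is a false discovery rate: wrong flags / all flags.
fdr = (flags - true_hits) / flags
print(f"False discovery rate: {fdr:.0%}")    # 81%

# The Met's "1 in 1,000" is plausibly a false positive rate: wrong flags
# as a share of everyone scanned. Assume (hypothetically) ~34,000 faces:
faces_scanned = 34_000                        # assumption, not from the report
fpr = (flags - true_hits) / faces_scanned
print(f"False positive rate: {fpr:.2%}")      # 0.10%, i.e. about 1 in 1,000
```

When the people actually on a watch list are a tiny fraction of a crowd, even a very low per-face error rate produces mostly-wrong flags. That's the base rate question the comment below is asking.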
Re: (Score:1)
Sure about that? What percent of the general population are on the wanted list to begin with?
Re: Flip a coin (Score:1)
Wrong.
If you run this against a database of millions and ask for one matching result, it's wrong 81% of the time. But if you ask for all matches that meet a minimum level of certainty, you can get a small list that is almost guaranteed to contain the right person, and a human can check that list by normal methods.
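A minimal sketch of that distinction, with a toy cosine-similarity score over face embeddings standing in for the real matcher (this is not NEC's Neoface API):

```python
import math

def cosine_similarity(a, b):
    """Toy similarity of two face-embedding vectors; higher = closer match."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_one(probe, gallery):
    # Forced to name a single best match: wrong whenever the true person
    # isn't the global maximum (the "81% wrong" failure mode).
    return max(gallery, key=lambda e: cosine_similarity(probe, e["embedding"]))

def shortlist(probe, gallery, threshold=0.9):
    # Everyone above the threshold: a short list very likely to contain
    # the right person, left for a human to review by normal means.
    return [e for e in gallery
            if cosine_similarity(probe, e["embedding"]) >= threshold]
```

The threshold value is a stand-in; in practice it's tuned to trade list length against the chance of missing the true match.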
If you flip a coin, you have a 50% chance that the real suspect is not even in your results, and for a large dataset you've only halved the database, so the results will still contain a huge number of entries.
Re: (Score:2)
Exactly - flipping a coin isn't the right analogy. If you've got a database with 1% of the population flagged for attention, then the correct analogy is rolling a 100-sided die and getting it right: 1%.
19% right on a large database is actually pretty impressive. The trick to making it useful is recognizing that any individual match is still probably wrong. Use it to flag *possible* known troublemakers for closer human attention, and you've got an excellent tool for directing attention where it's most useful. Assuming, of course...
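Working the parent's numbers through (the 1% prevalence is the parent's assumption; the 8-of-42 hit rate is from the Essex assessment):

```python
# Compare the trials' hit rate against pure chance at the assumed base rate.
prevalence = 0.01      # share of population flagged (parent's assumption)
hits, flags = 8, 42    # from the Essex assessment

chance = prevalence
observed = hits / flags
print(f"chance: {chance:.0%}, observed: {observed:.0%}, "
      f"lift over chance: {observed / chance:.0f}x")
# chance: 1%, observed: 19%, lift over chance: 19x
```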
Dupe (Score:3)
Dupe [slashdot.org]. Don't ever change /.
It's not a dupe (Score:2)
They're simply pointing out that the facial recognition system hasn't improved since this past Thursday.
Re: (Score:2)
What's most fucking hilarious:
Posted by BeauHD on Fri July 05, 2019 05:00 PM from the error-prone dept.
Indeed.
Re: (Score:2)
Seems like Slashdot could do with some AI powered dupe detection.
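For what it's worth, a toy version needs no AI at all; a simple word-overlap (Jaccard) check against recent headlines would catch this week's dupe. The threshold here is a guess:

```python
# Toy dupe detector: Jaccard similarity on headline words.
def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def is_dupe(new_title, recent_titles, threshold=0.5):
    return any(jaccard(new_title, t) >= threshold for t in recent_titles)

print(is_dupe(
    "London Police's Face Recognition System Gets It Wrong 81 Percent of the Time",
    ["London Police's Facial Recognition Gets It Wrong 81% of the Time"],
))  # True
```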
Good (Score:2)
I, for one, am rooting against the widespread dissemination of this surveillance-rich technology. Continued reports of its inaccuracies should at least delay its inevitable ubiquitous deployment.
Re: (Score:2)
Re: (Score:3)
If you're tracking someone's movements, false positives should generally be easy to detect. For example, someone logically can't get to the other side of London and back within five minutes, so that would be an obvious false positive.
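A minimal sketch of that kind of sanity check; the sighting format and the speed ceiling are illustrative assumptions:

```python
# Flag a match as an obvious false positive if it implies impossible travel
# between two sightings of the "same" person.
from math import radians, sin, cos, asin, sqrt

MAX_SPEED_KMH = 120.0  # assumed ceiling for plausible travel across a city

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def impossible_travel(prev, curr):
    """True if getting from prev to curr would exceed the speed cap.

    Each sighting is a (lat, lon, unix_seconds) tuple.
    """
    dist = haversine_km(prev[0], prev[1], curr[0], curr[1])
    hours = max((curr[2] - prev[2]) / 3600.0, 1e-9)  # avoid divide-by-zero
    return dist / hours > MAX_SPEED_KMH

# Someone "seen" ~15 km across London five minutes after the last sighting:
print(impossible_travel((51.515, -0.141, 0), (51.512, 0.071, 300)))  # True
```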
I always err on the side of considering that I might wind up in a bad mix of circumstance and inclination, like ole Andy Dufresne [fandom.com]: he of the Shawshank false positive, wearing the unfortunate scent of complicity, sprinkled with motive and intent.
Re: (Score:2)
For a while, I thought this would encourage creative make-up or masks... but you stand a good chance of being incorrectly identified as, say, the Meat-Safe Murderer.
("I never doon it!")
TSA is much worse (Score:2)
Year after year, the TSA's own tests show its screeners miss up to 95% of the fake bombs and other weapons that testers smuggle through.
And yet, the claim is that this agency is helping to protect us.
And Slashdot's dupe detection system (Score:2)
apparently gets it wrong close to 100% of the time.
Being wrong 81% of the time may not be that bad... (Score:2)