81% of 'Suspects' Flagged By Met's Police Facial Recognition Technology Innocent, Independent Report Says (sky.com)

Four out of five people identified by the Metropolitan Police's facial recognition technology as possible suspects are innocent, according to an independent report. From a report: Researchers found that the controversial system is 81% inaccurate -- meaning that, in the vast majority of cases, it flagged up faces to police when they were not on a wanted list. The force maintains its technology only makes a mistake in one in 1,000 cases -- but it uses a different measurement to arrive at this conclusion. The report, exclusively revealed by Sky News and The Guardian, raises "significant concerns" about Scotland Yard's use of the technology, and calls for the facial recognition programme to be halted. Citing a range of technical, operational and legal issues, the report concludes that it is "highly possible" the Met's usage of the system would be found unlawful if challenged in court.

Comments Filter:
  • It doesn't matter how low the false positive rate of the *test* is; if the true positive rate in the *population* you're screening is lower than that, your hits are going to have more false positives than true positives.

    So you can do a really impressive-looking demo for a system; in fact the technology may be legitimately impressive without it being good enough for screening *everybody*. That's why drug tests are a crock.
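A minimal sketch of that arithmetic, with invented numbers (assumptions for illustration, not figures from the Met trials):

    # Hypothetical screening run: an accurate test still yields mostly false
    # alarms when genuinely wanted people are rare in the crowd being scanned.
    population  = 1_000_000   # faces scanned (assumed)
    wanted      = 100         # watch-list people actually in that crowd (assumed)
    sensitivity = 0.99        # chance a wanted face is flagged (assumed)
    fp_rate     = 0.001       # chance an innocent face is flagged (assumed)

    true_hits  = wanted * sensitivity                  # ~99
    false_hits = (population - wanted) * fp_rate       # ~1,000
    precision  = true_hits / (true_hits + false_hits)  # ~0.09

    print(f"flagged: {true_hits + false_hits:.0f}, of which wanted: {precision:.0%}")

So roughly 91% of the flags would be innocent people, even though the test itself is wrong only once per thousand faces.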

    • by OzPeter ( 195038 )

      It doesn't matter how low the false positive rate of the *test* is; if the true positive rate in the *population* you're screening is lower than that, your hits are going to have more false positives than true positives.

Yep, and apparently no-one ever explained it to them. They should be forced to read Data Mining for Terrorists [schneier.com]

      • Re: (Score:2, Insightful)

        The alarmist headline is targeted at the statistically incompetent, which are 93.71% of the population.

        81% false positive is VERY GOOD. If this system narrows the suspect pool in a population of a million down to five people, one of whom is the perp, then that is good enough. Because four of those people will have no connection to the victim, while the fifth guy will be her ex-boyfriend with a restraining order. The police should be able to figure it out from there.

        Bayesian statistics [wikipedia.org] should be taught in

        • If only policing worked that way. But in practice the first suspect found is the only suspect searched for.

        • by Anonymous Coward

One thing I might caution is that a Chicken Little device placed in the hands of police can still be an effective way of gathering information. "Oh, our camera alerted on you... we'll need a DNA sample to clear this up. Spit here." In this case the cops wouldn't care if the device was 100% incorrect and alerted on everything (including water hydrants). The goal isn't accuracy, just an excuse to violate yet more civil liberties. It can be just as effective as the drug dog that conveniently alerts a

    • I concur (Score:5, Insightful)

      by goombah99 ( 560566 ) on Thursday July 04, 2019 @03:08PM (#58873964)

Came here to say this same thing. The results are phenomenally good. It means that instead of one in ten thousand of the people the cops look at being a criminal, now one in five is. Stunningly useful, and a boon to society, not an intrusion.

Ironically it's good that the false positive rate is higher than the true positive rate. Why? Well, imagine there was only one false positive in 100 true positives. The cops would assume you are guilty no matter how much you protested. That innocent person would have a hard time. But when it's the reverse, when 4/5ths of the people flagged are innocent, the cops know that it's more likely they are dealing with an innocent person than a guilty one. And because their workload is orders of magnitude less with such a good screening system, they might even have the time to suss out the guilty party.

        Not bad.

      • by hey! ( 33014 )

The cops would assume you are guilty no matter how much you protested.

        I think that's going to happen anyway. There's no way that cops or prosecutors will understand the base rate fallacy; even doctors seldom understand how truly unreliable even a very good test can be.

imagine there was only one false positive in 100 true positives. The cops would assume you are guilty no matter how much you protested.

        Good point.

      • by xlsior ( 524145 )
Sucks if you are among those 4-in-5 innocents flagged as a criminal though - expect to get hassled over and over and over again, instead of your 1-in-10,000 odds.
      • Re:I concur (Score:5, Insightful)

        by sjames ( 1099 ) on Thursday July 04, 2019 @09:03PM (#58875344) Homepage Journal

        IF the cops keep in mind that there is an 80% chance the flagged person is innocent, it can be quite helpful. But that's a BIG if. There is a long and sordid history of police deciding they 'like someone' and becoming blind to anything exculpatory.

Yep, but those sorts of cops are gonna do that with or without this, so it's completely irrelevant.
          • by sjames ( 1099 )

            The more excuse you give them to 'like' someone, the more they will do it.

        • by Agripa ( 139780 )

          IF the cops keep in mind that there is an 80% chance the flagged person is innocent, it can be quite helpful. But that's a BIG if. There is a long and sordid history of police deciding they 'like someone' and becoming blind to anything exculpatory.

80% of the flagged people being innocent is even more useful. These are the law-abiding, who will not violently resist violation of their rights, and who are prime targets for civil asset forfeiture.

      • by AmiMoJo ( 196126 )

        What actually happens is that the cops use "the computer says it was you" as an excuse to harass innocent people, maybe even pin crimes on them. The Metropolitan Police have form on that.

      • ...based on a false positive, [nytimes.com] will you still think the same? That woman nearly gave in and made a false confession to get out of lock up.

      • by Agripa ( 139780 )

Came here to say this same thing. The results are phenomenally good. It means that instead of one in ten thousand of the people the cops look at being a criminal, now one in five is. Stunningly useful, and a boon to society, not an intrusion.

It will sure be a boon to civil asset forfeiture.

Ironically it's good that the false positive rate is higher than the true positive rate. Why? Well, imagine there was only one false positive in 100 true positives. The cops would assume you are guilty no matter how much you protested. That innocent person would have a hard time. But when it's the reverse, when 4/5ths of the people flagged are innocent, the cops know that it's more likely they are dealing with an innocent person than a guilty one. And because their workload is orders of magnitude less with such a good screening system, they might even have the time to suss out the guilty party.

That does not stop civil asset forfeiture.

        Not bad.

        No, it will be all bad, at least in the US.

  • by SmaryJerry ( 2759091 ) on Thursday July 04, 2019 @02:21PM (#58873740)
I know how this works. The facial recognition identifies like a bunch of people, then someone manually reviews it to make sure it's a match. It's the same way automated red light tickets work where I live. The machine says "hey this might be right" and an actual person confirms that before any action is taken. So while I still don't like this for privacy, it's not as bad as the article's headline reports. Where I live, our city lets people view frames from cameras around the city on its website. Don't go to your city's website unless you want to see exactly how many cameras are all over the city, and at sooo many different intersections in particular; it will freak you out how many places you are being recorded from just public cameras, not to mention all the private companies' cameras that have angles on even more of the public roads and sidewalks.
    • That might be how it 'should' work, but come on, man. Haven't you lived in this world?

      The poorly trained and overly empowered jackboots responsible for running the system will not be as reasonable as you propose.
    • by AmiMoJo ( 196126 )

To understand the issue with this you have to understand a bit of the history of the Metropolitan Police. The short version is that an official investigation into their failure to properly investigate the murder of a black man described them as "institutionally racist".

What that means is that they have systemic issues that cause bias against non-white people. Not just individually racist officers, but problems with their procedures and the systems and techniques they have in place to investigate crim

  • by Sique ( 173459 ) on Thursday July 04, 2019 @02:21PM (#58873744) Homepage
    Normally, the question is framed within a medical setting:

    There is a new testing method X, which is able to find a certain illness Y with 99 percent confidence.

    The same testing method will yield a false positive result in one of 1000 cases.

    The illness Y you are screening for appears once within 1,000,000 people.

    If everyone gets screened and you are tested positive using X, how probable is it that you are really sick with Y?

No matter how impressively exact the face recognition method is for a single case, because of the sheer number of people screened, your false positive rate will always be large.
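Working that example through Bayes' theorem (a quick sketch using only the numbers stated above):

    # P(sick | positive) = P(positive | sick) * P(sick) / P(positive)
    sensitivity = 0.99   # "finds the illness with 99 percent confidence"
    fp_rate     = 0.001  # "false positive result in one of 1000 cases"
    prevalence  = 1e-6   # "appears once within 1,000,000 people"

    p_positive = sensitivity * prevalence + fp_rate * (1 - prevalence)
    p_sick_given_positive = sensitivity * prevalence / p_positive

    print(f"{p_sick_given_positive:.3%}")  # ~0.099%, i.e. roughly 1 in 1000

Despite the "99 percent" test, a positive result means you are actually sick with probability of only about one in a thousand.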

    • increasing the sample size doesn't necessarily increase the false positive rate. It increases the total number of false positives, but that is not the rate.

      If the system works at all the false positive rate would go down as more data is input into the system.

OTOH the system is just as likely to be one of these [wikipedia.org]. An excuse for cops to establish probable cause and search your belongings.

      Since drugs are illegal at some level in most jurisdictions and 1 in 5 smoke weed it's pretty easy to get to a 20
    • by Kjella ( 173770 )

Sadly I think the solution is going to be more data. The UK has a population of 66 million; if there's a 0.1% error rate, that's 66,000 people that look kinda like you, so even if you're looking for a local criminal in a small town those odds are just bad. However, through cell phone records etc. we've probably got a good idea of who is positively in the vicinity of this camera; if you're the police maybe you can set up an IMSI catcher for this specific purpose, otherwise 5G cells are real tiny by default. Paired wi

  • by petes_PoV ( 912422 ) on Thursday July 04, 2019 @02:44PM (#58873842)

    it flagged up faces to police when they were not on a wanted list.

    Presumably the false positives are people who "looked like" a criminal.
    Which seems to be quite a prevalent view for juries, when deciding the fate of an accused person.

But as a first-level filter, it seems to me that this thing is scanning thousands - possibly millions - of faces and just reporting the "probables". Since this is then fed to a second line of actual people, who validate or reject the computer's flagging, there are both the necessary safeguards and a huge saving in police time, from not having to scan all those millions of faces manually. I don't think anyone is suggesting the computer gets all these false positives arrested.

Indeed, consider a system that was 90% accurate. Which policeman would waste time checking to see if an identification was correct? No, it would be "You're nicked! The computer says so." A system that's wrong 80% of the time _requires_ manual intervention, and that makes it safer.

  • by rsilvergun ( 571051 ) on Thursday July 04, 2019 @02:51PM (#58873878)
    I pointed this out elsewhere on the thread, but there is a good chance this is just an excuse to establish probable cause for searches.

With marijuana still illegal at some level in most places, and with 1 in 5 using it, getting to a 20% "accuracy" rate isn't exactly hard.
    • by Anonymous Coward

This is why I've advocated that police must log their probable cause BEFORE inspection & that must be the ONLY thing they can detain/ticket/arrest a subject for. I.e. if it's a broken taillight, then that is the only infraction that can be ticketed, not a seatbelt violation nor a DUI. If they log a probable cause that cannot be seen from a distance, then it should get thrown out without the defendant having to raise the issue.

    • I pointed this out elsewhere on the thread, but there is a good chance this is just an excuse to establish probable cause for searches.

I hope you got modded down elsewhere on the internet for this as well, since a computer identifying your face when a police officer can see that you're a different person isn't probable cause. Not even in the USA, where laws about this are far laxer.

Oh, I guess you were too busy ranting to notice we're not talking about the USA, right?

  • by logicnazi ( 169418 ) <gerdes&invariant,org> on Thursday July 04, 2019 @03:28PM (#58874072) Homepage

    I'm sure most people that come to the police's attention the old fashioned way are also innocent.

After all, the vast majority of people are innocent of serious wrongdoing, so identifying the few bad eggs means sifting through a haystack for a few needles. Anyone who remembers their statistics should know that even super accurate tests produce mostly false positives if the base rate of true positives is low enough.

This isn't any reason to distrust the technology, merely a reason to understand that it isn't proof of guilt, just a good way to generate leads. The real danger here is misunderstanding by the public in criminal trials. People need to understand that being selected as a bad guy by a system, even one with very high accuracy, can still mean you're most likely innocent.

    • by Cederic ( 9623 )

      I'm sure most people that come to the police's attention the old fashioned way are also innocent.

      On Monday a man called 999 (the emergency services number in the UK) and asked for the police, requesting that they come and.. actually, I'm not sure. I don't know if he wanted them to arrest me or just stop me from doing what I was doing.

      I kept doing what I was doing. They must've told him to fuck off, no police came. But if they had shown up then I would indeed have come to their attention while innocent.

    • You seem to be urging calm when freaking out is actually a better reaction.

People need to understand that being selected as a bad guy by a system, even one with very high accuracy, can still mean you're most likely innocent.

And people need to understand that innocent people are crushed by the system every day, which means staying out of the system is a priority, which means opposing bad practices like faulty facial recognition and the pseudoscience of bite marks. [criminallegalnews.org] In 2004 Brandon Mayfield [wikipedia.org] had his fingerprints

The fact that the suspect turns out to be innocent doesn't prove the identification is in error. It seems to me that both the pro and anti camps are using misleading statistics here. I have concerns, and this technology should be subject to proper controls and scrutiny, but I don't agree with using dishonesty or misrepresentation to get to that position.

    The article is certainly wrong in its claim "A recent BBC documentary captured an incident where a man was fined after refusing to take part in a facial recognition

  • Haul them in anyway and we'll make something stick.

    "Show me the man and I'll find you the crime." - Lavrentiy Beria

These results could actually be quite impressive. The question should be: how many images / people were scanned, and what is the rate of genuine suspects in the general population that was scanned? If, for example, 1% of the people scanned are truly suspects, then the software is definitely doing much better than random.

    Basically, TFA doesn't say how much better the software does compared to random selection, so one can't really assess the effectiveness of the software based on the sensationalist article.
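One way to sketch that "better than random" comparison: the 8-of-42 figure is from the report, but the base rate of genuine watch-list matches among scanned faces is not published, so the value below is purely an assumption.

    hits_correct = 8                 # verified correct matches (from the report)
    hits_total   = 42                # total matches flagged (from the report)
    precision    = hits_correct / hits_total      # ~0.19

    assumed_base_rate = 1 / 10_000   # hypothetical share of scanned faces on the list
    lift = precision / assumed_base_rate

    print(f"precision {precision:.0%}, lift over random: {lift:,.0f}x")  # ~19%, ~1,905x

Under that made-up base rate the system would be picking out wanted faces roughly two thousand times better than chance, which is exactly the comparison TFA never makes.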

I'm reading this as a 99% failure rate at respecting 4th Amendment rights if the same thing were used in the USA.
      • Put another way, you've only got probable cause in 1% of your example case.
How is this a violation of the 4th Amendment? You have no right to privacy of identity, or for your photo not to be taken while in a public space; hell, in many parts of the US it is actually illegal to disguise your identity while in public.
  • Being questioned has a large cost - in time, in risk of exposing real or imagined crimes etc.

If there were a way to add this information to other information to reduce the number of innocent people who are questioned, it could be a win. OTOH if it becomes relied on, or leads to more biased questioning ("sure, we questioned every black guy in the shopping mall - we aren't biased, the computer just told us to"), then it could be very negative.

  • Comment removed based on user account deletion
Your honor, even though I am on camera in the area of the crime, have a history of committing crimes, was convicted of a similar crime, and they have ironclad proof I am guilty, I challenge their use of facial recognition to obtain suspects for the crime!

      Will you challenge human facial recognition using the same camera data?

If its false positive rate is 81%, it's rather likely that its false negative rate is about the same.

    Facial recognition is not, and won't be for some time yet to come, ready for real world conditions, especially ones that put people at risk, despite what the marketing weasels will say to try and sell stuff.
At one large company I worked at, we were always going over and yelling at them for lying to customers! It's not just that they didn't know, they could always call us on an inside line and get an im
    • by AHuxley ( 892839 )
      So police don't get new tools?
      Let the police have their new tools.
      Use the tech in areas with increasing crime rates.
      A nice map/GUI of the crimes and the faces detected.
      Are the people seen criminals? The "police" have humans to look over each "detection" of a face.
Slowly the area is made safe and fully gentrified as the criminals are detected.
Add in gait analysis and it gets more advanced. On a smartphone while doing crime? Get a voice print.
      Slowly every criminal is detected using human review and
    • Questions to ask:

      Why only 6 of the 10 trials?
Why trials instead of actual usage data? Why only 42 hits? How many possible people were scanned? In a group of 50, that would be bad. In a group of 50,000, you have 42 possible bad actors, of which 8 were correctly identified.
      What was the false negative rate?
    • by vakuona ( 788200 )

If its false positive rate is 81%, it's rather likely that its false negative rate is about the same.

That does not follow at all. In fact, there is an inverse relationship between false positive and false negative rates. Basically, without increasing the accuracy of the algorithm, you generally trade off between type 1 and type 2 errors.
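A toy illustration of that trade-off, using synthetic match scores rather than anything from the Met's system: raising the decision threshold cuts false positives but raises false negatives, and vice versa.

    import random
    random.seed(0)

    # Pretend similarity scores: non-matches cluster lower than true matches.
    impostors = [random.gauss(0.30, 0.10) for _ in range(100_000)]
    genuines  = [random.gauss(0.60, 0.10) for _ in range(100)]

    for threshold in (0.40, 0.50, 0.60):
        fpr = sum(s >= threshold for s in impostors) / len(impostors)  # type 1 errors
        fnr = sum(s <  threshold for s in genuines)  / len(genuines)   # type 2 errors
        print(f"threshold {threshold:.2f}: FPR {fpr:.3%}  FNR {fnr:.1%}")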

  • evaluated the technology's accuracy at six of the 10 police trials. They found that, of 42 matches, only eight were verified as correct

    Insignificant sample size during some of the trials rather than actual usage data. I smell cherry picking.
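To put a number on that sample-size worry (the 8 and 42 are the report's figures; the rest is a standard Wilson score interval):

    from math import sqrt

    correct, n = 8, 42
    p = correct / n                  # ~19% precision, hence the "81%" headline
    z = 1.96                         # ~95% confidence

    denom  = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half   = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))

    print(f"precision {p:.1%}, 95% CI roughly {center - half:.0%} to {center + half:.0%}")
    # => roughly 10% to 33% correct, i.e. anywhere from ~67% to ~90% false positives

With only 42 flags, the headline figure could plausibly sit anywhere in that range, which is the cherry-picking concern in a nutshell.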
