AI Crime Technology

Police To Test App That Assesses Suspects (bbc.com) 92

An anonymous reader writes: Police in Durham are preparing to go live with an artificial intelligence (AI) system designed to help officers decide whether or not a suspect should be kept in custody, the BBC reports. The system classifies suspects as low, medium or high risk of offending and has been tested by the force. It has been trained on five years of offending-history data. One expert said the tool could be useful, but that the risk of it skewing decisions should be carefully assessed. Data for the Harm Assessment Risk Tool (Hart) was taken from Durham police records between 2008 and 2012. The system was then tested during 2013, and the results -- showing whether suspects did in fact offend or not -- were monitored over the following two years. Forecasts that a suspect was low risk turned out to be accurate 98% of the time, while forecasts that they were high risk were accurate 88% of the time.
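
The article does not describe Hart's internals, but the workflow it sketches -- train a classifier on historical records, then check how often low- and high-risk forecasts hold up on later data -- can be illustrated with a minimal Python sketch. Everything below (the features, the random-forest model choice, the synthetic labels, the scikit-learn workflow) is an assumption for illustration, not a description of Hart.

    # Minimal sketch of a custody risk classifier of the kind described above.
    # Feature names, the model choice, and the train/test split are illustrative
    # assumptions -- the article does not describe Hart's internals.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)

    # Stand-in for five years of offending-history records (2008-2012):
    # each row is a suspect, each column a feature (age, prior arrests, etc.).
    n = 5000
    X = rng.normal(size=(n, 6))
    # Stand-in label: 0 = low, 1 = medium, 2 = high risk of reoffending,
    # derived here from a synthetic rule purely for illustration.
    y = np.digitize(X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n), [-0.5, 1.0])

    # Train on the "historical" portion, hold out the rest as the 2013-style test.
    split = int(0.8 * n)
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X[:split], y[:split])

    # Evaluate: of those forecast low risk, how many really were low risk, etc.
    pred = model.predict(X[split:])
    print(confusion_matrix(y[split:], pred))
    for label, name in enumerate(["low", "medium", "high"]):
        mask = pred == label
        if mask.any():
            precision = (y[split:][mask] == label).mean()
            print(f"forecast {name}: correct {precision:.0%} of the time")

The 98% and 88% figures quoted above correspond to the last loop: of the suspects forecast at a given risk level, what fraction actually turned out that way.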
This discussion has been archived. No new comments can be posted.

  • by freeze128 ( 544774 ) on Wednesday May 10, 2017 @09:50AM (#54392405)
    The cops aren't doing the profiling, the app is. Nice.
    • Done properly, this could be used as a way to prevent profiling. An algo can only make decisions based on the data provided to it. If race isn't provided as an input, it won't affect the decision. Humans can't make the same claim, as prejudices can sneak into our decisions unconsciously.
      • But the training data came from human judgment, in which case the algorithm has almost certainly inherited whatever biases were in that data.
        • by Shotgun ( 30919 )

          So, continue to train it. With proper feedback factors, the bias should lose influence on the outcome. If it doesn't, it isn't a very good AI.

            Any classification system requires unbiased ground-truth data to train against.

            Here's the model: Black people commit offenses and get arrested; white people are sometimes arrested but are much more likely to be let off with a warning. If the system has any proxy for 'black' in its inputs, it will train on that. And as we know, there can be MANY such proxies.

            Retraining doesn't help: it makes the same judgement call as the officers, and there's no unbiased sample to test against.

            Math can't get you away from bias

            • by gnick ( 1211984 )

              Black people commit offenses and get arrested; white people are sometimes arrested but are much more likely to be let off with a warning.

              If that's true, then the algorithm would correctly use race to determine whether an offender will be re-arrested. That doesn't make it "right," but it'll give the right answer to the question, "Will this person be arrested again?"

      • Re:So... (Score:5, Interesting)

        by speedplane ( 552872 ) on Wednesday May 10, 2017 @10:47AM (#54392919) Homepage

        Done properly, this could be used as a way to prevent profiling. An algo can only make decisions based on the data provided to it. If race isn't provided as an input, it won't affect the decision. Humans can't make the same claim, as prejudices can sneak into our decisions unconsciously.

        There are many ways the algorithm can introduce bias even when race isn't provided as an input, if other factors that are highly correlated with race are (e.g., home zip code, occupation, income, etc.).

      • The problem is that the ML was trained on existing data, which is itself based on human and systemic biases, which means it's going to reinforce those biases. If someone is more closely watched because of some characteristic (age, gender, race, zip code, online habits), they are more likely to be caught for the same crime as someone who is not watched as closely. It *might* be possible to control for this, but it should at least be identified as a risk.
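
The proxy concern raised in this thread can be demonstrated in a few lines. The sketch below uses entirely synthetic data and a made-up "postcode" feature; it only shows that dropping a protected attribute from the inputs does not stop a model from learning a bias that is baked into the labels via a correlated proxy. It is a toy demonstration, not a claim about Hart.

    # Illustrative sketch of the proxy problem: even when a protected attribute
    # is dropped from the inputs, a correlated proxy (a synthetic "postcode"
    # feature) lets a model learn the bias baked into the labels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 20000

    group = rng.integers(0, 2, size=n)                # protected attribute (not a model input)
    postcode = group + rng.normal(scale=0.3, size=n)  # proxy strongly correlated with group
    behaviour = rng.normal(size=n)                    # actual behaviour, independent of group

    # Biased historical labels: the same behaviour is more likely to be recorded
    # as an "offence" for group 1 than for group 0.
    arrest_rate = 1 / (1 + np.exp(-(behaviour + 1.5 * group - 1.0)))
    label = (rng.random(n) < arrest_rate).astype(int)

    # Train WITHOUT the protected attribute, using only behaviour and the proxy.
    X = np.column_stack([behaviour, postcode])
    model = LogisticRegression().fit(X, label)

    # The model still scores the two groups very differently for identical behaviour.
    probe = np.column_stack([np.zeros(2), [0.0, 1.0]])  # same behaviour, different postcode
    print(model.predict_proba(probe)[:, 1])  # low vs. high score -> bias reproduced via proxy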

  • Wonderful, more people get to find out that neural networks are a great way of coming to the same conclusion that any normal adult human could have come up with -- after being woken up in the middle of the night after an evening of hard drinking.
  • by the_skywise ( 189793 ) on Wednesday May 10, 2017 @09:51AM (#54392425)

    https://www.youtube.com/watch?... [youtube.com]

    (Okay, it's not quite AI assessment of a subject, but can this type of AI assist be far behind?)

    • This is more similar to the Sibyl System in Psycho-Pass. Well, what the Sibyl System is *supposed* to be, anyway...

  • Are we gonna replace Judge Judy with an app?

    This might be a non sequitur, but I'd love a GPS with Judge Judy's mildly irritated voice.

  • It almost sounds to me like it was 100% accurate - if 2% of those deemed low risk of offending again then went and broke the law again, isn't that ... well, the definition of low risk? Same for the high risk result - if I understand it correctly, only 12% of those didn't break the law again.
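
To make that point concrete, here is a small worked example with invented counts (the BBC piece does not publish the underlying figures), showing how the quoted per-forecast accuracies relate to the base rate of reoffending.

    # Hedged illustration with invented counts (the article does not publish the
    # raw figures): per-forecast accuracy depends heavily on the base rate.
    forecast_low      = 700   # suspects the tool called low risk
    low_but_reoffend  = 14    # of those, how many reoffended anyway (2%)
    forecast_high     = 200   # suspects the tool called high risk
    high_and_reoffend = 176   # of those, how many really did reoffend (88%)
    forecast_medium   = 100   # remainder, called medium risk
    med_and_reoffend  = 60    # invented figure for the medium group

    low_accuracy = (forecast_low - low_but_reoffend) / forecast_low
    high_accuracy = high_and_reoffend / forecast_high
    print(f"low-risk forecasts correct:  {low_accuracy:.0%}")   # 98%
    print(f"high-risk forecasts correct: {high_accuracy:.0%}")  # 88%

    # With these counts, 250 of 1000 suspects reoffend, so a trivial rule that
    # calls *everyone* low risk would already be "right" 75% of the time -- the
    # headline percentages only mean something relative to the base rate and to
    # what officers would have decided unaided.
    total = forecast_low + forecast_medium + forecast_high                  # 1000
    reoffenders = low_but_reoffend + med_and_reoffend + high_and_reoffend  # 250
    print(f"trivial 'all low' accuracy: {(total - reoffenders) / total:.0%}")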

  • by U8MyData ( 1281010 ) on Wednesday May 10, 2017 @10:02AM (#54392523)
    Here we go. Easy is a four-letter word, folks. Dollar signs once again outweigh society. I'm glad I am past my prime, and I worry for the kids left behind.
  • "Hi god. I'll be good, I promise."

    "I'll be the judge of that!"

    Meanwhile most of us are focused on the war with Oceania while more of this type of stuff comes into being.
  • by hey! ( 33014 ) on Wednesday May 10, 2017 @10:09AM (#54392563) Homepage Journal

    It just has to outperform cops.

    • by Anonymous Coward

      This. Even if the performance numbers look good, the cited statistics were for the edge cases, which I'm guessing are the easiest to classify. It only has value if it does better than the professional judgement of the officer who normally makes the decision. And of course, this doesn't even touch on the possibility of systematic discrimination, which would need its own careful study and monitoring.

    • We aren't setting the bar very high anymore for success, are we?
  • by petes_PoV ( 912422 ) on Wednesday May 10, 2017 @10:18AM (#54392619)

    So if (or when) this tool "decides" it is safe to release a suspect, who then goes on to commit another crime after release, who is reprimanded? Who carries the can? Who pays?

    Ultimately the responsibility still lies with the police force. It is their tool, and public safety is their responsibility. There needs to be reinforcement of this at every level, so that nobody can shrug their shoulders and say "the computer said it was OK".

    • by gweihir ( 88907 )

      Indeed. However, more often than not the police are not held responsible for their mistakes, so "the computer decided it" is exactly what is going to happen.

      • And what happens when the defendant requests the source code / logs for the system? They may need to hand that over even if their contract / EULA says no.

        Courts have ordered the release of DUI 'breathalyzer' source code.

    • by Ichijo ( 607641 )

      It might be better to let insurance companies decide whether to release a suspect, and to take the financial risk of doing so, and to get rewarded when the released suspect doesn't commit another crime.

    • by ghoul ( 157158 ) on Wednesday May 10, 2017 @01:15PM (#54394333)

      This would only be valid if police were actually punished for their mistakes. More often than not they are not, so any computer system that can reduce mistakes is an improvement, even if it is punished for its new mistakes.

  • Counter-app (Score:4, Insightful)

    by Errol backfiring ( 1280012 ) on Wednesday May 10, 2017 @10:22AM (#54392643) Journal
    Does this mean it would be possible to write a counter-app? I mean an app that tells you what to wear, what to say and how to behave such that the police app will judge you as "low risk"?
  • by kilodelta ( 843627 ) on Wednesday May 10, 2017 @10:50AM (#54392955) Homepage
    That's how UK police work, but in the U.S. all the cops have is charge data.
  • by PPH ( 736903 ) on Wednesday May 10, 2017 @11:05AM (#54393097)

    So 12% of those judged high risk were in fact not.

    "It is better 100 guilty Persons should escape than that one innocent Person should suffer." - Benjamin Franklin

    Of course, this is the British we are talking about now. Blackstone [wikipedia.org] put that error rate at around 1 in 10. So this app is statistically close enough for them.

  • Guilty in Advance (Score:4, Interesting)

    by Lucidus ( 681639 ) on Wednesday May 10, 2017 @11:07AM (#54393121)

    I thought the idea was to detain people if they had already committed a crime, so I'm a little disturbed at the idea of holding them because you think they are likely to offend in future. If we are going to change the way we do these things, we will need to revamp our entire legal system (which I think would be a terrible mistake).

    • Being held in custody on suspicion of a crime or when charged with a crime are both legal parts of due process already. This app is just meant to add an automated analysis of data to help decide whether to keep suspects in custody for the legally permitted time or release them early.
    • I thought the idea was to detain people if they had already committed a crime, so I'm a little disturbed at the idea of holding them because you think they are likely to offend in future. If we are going to change the way we do these things, we will need to revamp our entire legal system (which I think would be a terrible mistake).

      The terrible mistake would be to assume we still have a legal system.

      We don't. We have a justice system. BIG difference.

    • by Sigma 7 ( 266129 )

      I thought the idea was to detain people if they had already committed a crime, so I'm a little disturbed at the idea of holding them because you think they are likely to offend in future.

      The legal systems (at least the western ones) already try to determine whether or not someone is going to reoffend. There are basically two groups - those with a single infraction, and those who chronically violate the law.

      Since one of the objectives is to reduce recidivism (or at least the first offense), that's why there's ...

      • by Lucidus ( 681639 )
        The things you mention influence the duration of sentencing after a conviction, which is very different from what I understood this program to do; i.e., to determine whether (or how long) to detain suspects who have not yet been convicted of anything. I understand that the police have (and must have) some discretion in how they handle different individuals, but this still scares me.
  • This sounds like one of those bullshit psychometric job interview tests. It's funny how society offloads the burden of reasoning onto popcorn science (psychometric job interview tests and this app, for example) but ignores more rigorous science, like what is happening to our planet (not perfect, and somewhat still subjective, but the majority of non-shill scientists agree). We are doomed as a species, maybe not in our lifetime but somewhere down the line for sure.
  • by Rick Schumann ( 4662797 ) on Wednesday May 10, 2017 @12:03PM (#54393595) Journal
    Well, I see things are developing about how I expected: not only is so-called 'AI' making us lazy and less skilled, now it's going to make us dumber and less capable of thinking for ourselves. I see nothing good coming from this, it'll let dangerous people go and lock up people who don't need to be locked up. What's next, some hackneyed 'AI app' to decide whether to shoot someone or not? GTFO with this nonsense.
  • by XSportSeeker ( 4641865 ) on Wednesday May 10, 2017 @01:42PM (#54394547)

    Precog system came earlier than I expected...
