Police To Test App That Assesses Suspects (bbc.com) 92
An anonymous reader writes: Police in Durham are preparing to go live with an artificial intelligence (AI) system designed to help officers decide whether or not a suspect should be kept in custody, the BBC is reporting. The system classifies suspects as low, medium or high risk of offending and has been tested by the force. It has been trained on five years of offending-history data. One expert said the tool could be useful, but the risk that it could skew decisions should be carefully assessed. Data for the Harm Assessment Risk Tool (Hart) was taken from Durham police records between 2008 and 2012. The system was then tested during 2013, and the results -- showing whether suspects did in fact offend or not -- were monitored over the following two years. Forecasts that a suspect was low risk turned out to be accurate 98% of the time, while forecasts that they were high risk were accurate 88% of the time.
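For readers parsing those percentages: a minimal sketch, with counts invented purely to reproduce the reported 98%/88% figures (Hart's real data and model are not public here), showing what "a forecast was accurate X% of the time" means -- the share of each forecast band whose recorded outcome matched the forecast. It says nothing about how many suspects fell into each band, or into the medium band at all.

```python
# Hypothetical counts, chosen only to reproduce the reported 98% / 88% figures.
records = (
    [("low", False)] * 980     # forecast low risk, did not reoffend
    + [("low", True)] * 20     # forecast low risk, reoffended anyway
    + [("high", True)] * 880   # forecast high risk, reoffended
    + [("high", False)] * 120  # forecast high risk, did not reoffend
)

def forecast_accuracy(records, band, confirming_outcome):
    """Share of suspects given this forecast whose recorded outcome matched it."""
    outcomes = [reoffended for b, reoffended in records if b == band]
    return sum(o == confirming_outcome for o in outcomes) / len(outcomes)

print(forecast_accuracy(records, "low", False))   # 0.98
print(forecast_accuracy(records, "high", True))   # 0.88
```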
Re: (Score:3)
It's not Durham NC, it's Durham UK. So it's more likely to be the other way round.
Re: (Score:1)
How to avoid the grievance mongers; only apply it to whites.
Re: (Score:3, Insightful)
If the AI starts to evaluate, based SOLELY on data, that a particular racial group *does* tend to re-offend more often and is hence a higher risk....do we as the public start to believe it, or do we say that AI, even though purely scientific and logic based, is not Politically Correct and must have some artificial weights put into the algorithm to keep it from finding that some racial or economic strata of folks are more high risk and should be kept in jail?
Re:face recognition (Score:5, Insightful)
Existing police biases generate the data you're feeding into the system. Bias in, bias out. It's not like this is a new idea.
Re: (Score:2)
Re: (Score:2)
"But hard to bias against who is committing the most gun violence, home invasions, rapes, violent crimes that actually impact those of us in regular society."
Still biased. Your list avoids the violence and crimes committed by the higher classes.
Re: (Score:2)
See my reply here:
https://slashdot.org/comments.... [slashdot.org]
Re: (Score:2)
Err...exactly how is that?
Are you saying that in the higher classes, if dead bodies are found, police don't investigate and don't try to find and convict people of murder?
I would posit that higher-class violent crimes would be taken into consideration just like lower-class violent crimes. I would guess, however, that you'd see less of it in the higher classes, but that's purely my observation.
Re: (Score:2)
"gun violence, home invasions, rapes, violent crimes"
Home invasions, gun violence or violent crimes, and I'm guessing rapes, tend to be more common in the lower classes; higher-class crimes, on the other hand, tend to depend more on capital expenditures.
"If the AI starts to evaluate, based SOLEY on data"
And the legal system is more lenient on the higher classes, in general, even for non-victimless crimes. In other words, the data is systemically biased, so the evaluation will be too.
Re: (Score:2)
Not to throw your reasoning askew, but could this be because the drug use is ancillary? Consider that police may not get calls for fights, break-ins, etc. in higher-income neighborhoods, so they have less reason to be patrolling those areas. The residents of higher-income neighborhoods would be less likely to parade around the neighborhood with their drugs. That is, while they are using the drugs, they are not making themselves a public nuisance, let alone a public safety issue.
If the point of the police i
Re: (Score:2)
I think the point was that even when police catch members of the upper classes with illegal drugs, they tend to be much more lenient.
Also, of course, the bigger your property, the less likely the police are to become aware of a crime happening on it (and even if they do become aware, they will tend to be more lenient).
Re: (Score:3)
Actually that brings up a good point.
If the AI starts to evaluate, based SOLELY on data, that a particular racial group *does* tend to re-offend more often and is hence a higher risk....do we as the public start to believe it, or do we say that AI, even though purely scientific and logic based, is not Politically Correct and must have some artificial weights put into the algorithm to keep it from finding that some racial or economic strata of folks are more high risk and should be kept in jail?
Well, artificial weights have to be incorporated into the program. We're not talking about strong AI here (Artificial General Intelligence) that can determine things for itself; it has to be given parameters. Race should not be one of those parameters as it's not actually a determiner. Poor white neighbourhoods have the same crime problem as poor black ones.... there's just a greater volume of poor blacks (same for Asians and Hispanics). The RWNJ bogeyman of "Political Correctness" doesn't enter into it, par
Re: (Score:2)
With today's sensitivity towards political correctness, that would prove to be unacceptable, EVEN if the AI is examined and shown to be unbiased and simply comes to this conclusion from observation.
I'm guessing they'll be using parameters like sex... if that is the case, why not race?
Is there really a legitimate reason, if the playing field starts 100% fair and observation
Re: (Score:2)
Race should not be one of those parameters as it's not actually a determiner. Poor white neighbourhoods have the same crime problem as poor black ones.... there's just a greater volume of poor blacks (same for Asians and Hispanics).
If that's true, then the algorithm should assign race a weight of zero, assuming that the other relevant factors are used as parameters. Whether or not it should be included is a matter of political correctness, not racial bias.
...parameters should be selected based on science (criminology and psychology).
Why limit your parameter choice to what's suggested by criminology & psychology? Wouldn't science dictate that you use all useful data?
So... (Score:3)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
So, continue to train it. With proper feedback factors, the bias should lose influence on the outcome. If it doesn't, it isn't a very good AI.
No, that will only reinforce bias (Score:2)
Any classification system requires unbiased ground-truth input to train against.
Here's the model: Black people commit offenses and get arrested; white people are sometimes arrested but are much more likely to be let off with a warning. If the system has any proxy for 'black' in its inputs, it will train on that. And as we know, there can be MANY such proxies.
Retraining doesn't help: it makes the same judgement call as the officers, and there's no unbiased sample to test against.
Math can't get you away from bias.
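A toy illustration of that proxy problem (postcodes, rates and arrest probabilities all invented): two groups offend at the same true rate, officers arrest one group far more often and warn the other, and a near-perfect proxy for group membership sits in the inputs. Any model fit to the resulting re-arrest labels scores the over-policed group as riskier, and "retraining" on more data from the same pipeline just re-learns the same gap.

```python
import random
random.seed(0)

def simulate(n=10_000):
    """Generate (proxy_feature, arrested) training records from a biased process."""
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        postcode = "DH1" if group == "A" else "DH7"          # proxy for group
        offended = random.random() < 0.30                    # identical true rate
        p_arrest = 0.9 if group == "A" else 0.4              # biased enforcement
        arrested = offended and random.random() < p_arrest   # the label we train on
        data.append((postcode, arrested))
    return data

def learned_risk(data):
    """Naive frequency 'model': observed re-arrest rate per postcode."""
    return {pc: sum(a for p, a in data if p == pc) / sum(p == pc for p, _ in data)
            for pc in ("DH1", "DH7")}

print(learned_risk(simulate()))               # DH1 scored roughly twice as risky as DH7
print(learned_risk(simulate() + simulate()))  # more of the same data: same gap
```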
Re: (Score:2)
Black people commit offenses and get arrested; white people are sometimes arrested but are much more likely to be let off with a warning.
If that's true, then the algorithm would correctly use race to determine whether an offender will be re-arrested. That doesn't make it "right," but it'll give the right answer to the question, "Will this person be arrested again?"
Re:So... (Score:5, Interesting)
Done properly, this could be used as a way to prevent profiling. An algo can only make decisions based on the data provided to it. If race isn't provided as an input, it won't affect the decision. Humans can't make the same claim, as prejudices can sneak into our decisions unconsciously.
There are so many ways the algorithm can introduce bias, even if race isn't provided as an input, if other factors that are highly correlated with race are (e.g., home zip code, occupation, income, etc.).
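A quick sketch of that leakage, using made-up zip codes and a deliberately segregated toy population: if neighbourhoods are segregated, the omitted attribute can be recovered from zip code alone, so a model that never sees race can still behave as if it did. Omitting the sensitive attribute is not, by itself, a guarantee that decisions are uncorrelated with it.

```python
import random
random.seed(1)

# Invented toy data: each zip code is ~90% one group.
zip_majority = {"90001": "black", "10027": "black", "90210": "white", "06830": "white"}

people = []
for zip_code, majority in zip_majority.items():
    other = "white" if majority == "black" else "black"
    for _ in range(1000):
        race = majority if random.random() < 0.9 else other
        people.append((zip_code, race))

# "Model" that never sees race: just guess the majority group of the zip code.
hits = sum(zip_majority[z] == race for z, race in people)
print(hits / len(people))   # ~0.9 -- race recovered from zip code alone
```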
Re: (Score:2)
The problem is that the ML was trained on existing data, which itself reflects human and systemic biases, so it's going to reinforce those biases. If someone is more closely watched because of some characteristic (age, gender, race, zip code, online habits), they are more likely to be caught for the same crime than someone who is not watched as closely. It *might* be possible to control for this, but it should at least be identified as a risk.
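A back-of-the-envelope version of that feedback loop, with invented numbers: identical true offending rates, but one group is watched twice as closely, so its offences are twice as likely to end up in the records any model is trained on.

```python
# Both groups offend at the same true rate; only detection differs. (Numbers invented.)
true_offending_rate = 0.25
detection_prob = {"closely watched": 0.6, "lightly watched": 0.3}

recorded_rate = {g: true_offending_rate * p for g, p in detection_prob.items()}
print(recorded_rate)   # {'closely watched': 0.15, 'lightly watched': 0.075}
# A model trained on these records "learns" that the closely watched group is
# twice as risky, even though the underlying behaviour is identical.
```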
"Computer says no" (Score:2)
https://www.youtube.com/watch?... [youtube.com]
So will it be more accurate than a human? (Score:2)
And Demolition Man is prophetic again... (Score:5, Interesting)
https://www.youtube.com/watch?... [youtube.com]
(Okay, it's not quite AI assessment of a subject, but can this type of AI assist be far behind?)
Re: (Score:2)
This is more similar to the Sibyl System in Psycho-Pass. Well, what the Sibyl System is *supposed* to be, anyway...
Is he guilty? Let's find out (wait for ads first) (Score:2)
Are we gonna replace Judge Judy with an app?
This might be a non sequitur, but I'd love a GPS with Judge Judy's mildly irritated voice.
Re: (Score:2)
They've got Mr. T on Waze right now.
"Turn left FOO!"
(really)
I could see Judge Judy next.
98% accurate? (Score:2)
It almost sounds to me like it was 100% accurate - if 2% of those deemed low risk of offending again then went and broke the law again, isn't that ... well, the definition of low risk? Same for the high risk result - if I understand it correctly, only 12% of those didn't break the law again.
Minority Report (Score:3)
Re: (Score:2)
Hello World (Score:2)
"I'll be the judge of that!"
Meanwhile most of us are focused on the war with Oceania while more of this type of stuff comes into being.
Such a system doesn't have to perform well. (Score:3)
It just has to outperform cops.
Re: (Score:1)
This. Even if the performance numbers look good, the cited statistics were for the edge cases, which I'm guessing are the easiest to classify. It only has value if it does better than the professional judgement of the officer who normally makes the decision. And of course, this doesn't even touch on the possibility of systematic discrimination, which would need its own careful study and monitoring.
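A sketch of why the headline numbers for the edge bands can flatter the tool (all counts and the officer baseline invented): the low and high bands look excellent, the medium band, where the custody call is genuinely hard, sits near a coin flip, and the comparison that matters is against the officers' own hit rate on the same cases.

```python
# Invented per-band counts, for illustration only.
bands = {
    # band   : (correct forecasts, total forecasts)
    "low":     (980, 1000),
    "medium":  (330,  600),   # barely better than chance
    "high":    (880, 1000),
}

overall = sum(c for c, _ in bands.values()) / sum(t for _, t in bands.values())
print(f"overall accuracy: {overall:.2f}")   # looks respectable (~0.84)
for band, (c, t) in bands.items():
    print(f"{band:>7}: {c / t:.2f}")        # medium band only ~0.55

officer_baseline = 0.80   # hypothetical accuracy of custody officers on the same cases
print("adds value overall:", overall > officer_baseline)
```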
Re: (Score:1)
Who gets the blame? (Score:3)
So if (or when) this tool "decides" it is safe to release a suspect, and that suspect then goes on to commit another crime after release, who is reprimanded? Who carries the can? Who pays?
Ultimately the responsibility still lies with the police force. It is their tool, and public safety is their responsibility. There needs to be reinforcement of this at every level, so that nobody can shrug their shoulders and say "the computer said it was OK".
Re: (Score:2)
Indeed. However, more often than not the police are not held responsible for their mistakes, so "the computer decided it" is exactly what is going to happen.
Re: (Score:2)
And when the defendant requests the source code / logs for the system? They may need to give that out even if their contract / EULA says no.
Courts have ordered the release of DUI 'breathalyzer' source code.
Re: (Score:2)
Well, maybe. Still, it would not make much difference, as the source and parametrization are pretty useless without the training data set.
Re: (Score:2)
Just my thought.
Re: (Score:2)
It might be better to let insurance companies decide whether to release a suspect, and to take the financial risk of doing so, and to get rewarded when the released suspect doesn't commit another crime.
Re: Who gets the blame? (Score:4, Informative)
This would only be valid if the police were actually punished for their mistakes. More often than not they are not, so any computer system which can reduce mistakes, even if it is punished for its new mistakes, is an improvement.
Counter-app (Score:4, Insightful)
Re: (Score:3)
Unfortunately the top two things to avoid are being poor and being black.
Re: (Score:1)
Re: (Score:2)
Been mostly done already. Watch this Chris Rock video.
https://www.youtube.com/watch?v=uj0mtxXEGE8
Not sure (Score:3)
88% (Score:3)
So 12% that were judged high risk were in fact not.
"It is better 100 guilty Persons should escape than that one innocent Person should suffer." - Benjamin Franklin
Of course, this is the British we are talking about now. Blackstone [wikipedia.org] put that error rate at around 1 in 10. So this app is statistically close enough for them.
Guilty in Advance (Score:4, Interesting)
I thought the idea was to detain people if they had already committed a crime, so I'm a little disturbed at the idea of holding them because you think they are likely to offend in future. If we are going to change the way we do these things, we will need to revamp our entire legal system (which I think would be a terrible mistake).
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I thought the idea was to detain people if they had already committed a crime, so I'm a little disturbed at the idea of holding them because you think they are likely to offend in future. If we are going to change the way we do these things, we will need to revamp our entire legal system (which I think would be a terrible mistake).
The terrible mistake would be to assume we still have a legal system.
We don't. We have a justice system. BIG difference.
Re: (Score:2)
The legal systems (at least the western ones) already try to determine whether or not someone is going to reoffend. There are basically two groups - ones with a single infraction, and ones who chronically violate the law.
Since one of the objectives is to reduce recidivism (or at least the first offense), that's why there's s
Re: (Score:2)
Not cool (Score:1)
What could POSSIBLY go wrong? (Score:3)
Heh... (Score:3)
Precog system came earlier than I expected...