Crime Software United States Technology

Software 'No More Accurate Than Untrained Humans' At Predicting Recidivism (theguardian.com) 166

An anonymous reader quotes a report from The Guardian: The credibility of a computer program used for bail and sentencing decisions has been called into question after it was found to be no more accurate at predicting the risk of reoffending than people with no criminal justice experience provided with only the defendant's age, sex and criminal history. The algorithm, called Compas (Correctional Offender Management Profiling for Alternative Sanctions), is used throughout the U.S. to weigh up whether defendants awaiting trial or sentencing are at too much risk of reoffending to be released on bail. Since being developed in 1998, the tool is reported to have been used to assess more than one million defendants. But a new paper has cast doubt on whether the software's predictions are sufficiently accurate to justify its use in potentially life-changing decisions.

The academics used a database of more than 7,000 pretrial defendants from Broward County, Florida, which included individual demographic information, age, sex, criminal history and arrest record in the two-year period following the Compas scoring. Online workers recruited for the study were given short descriptions that included a defendant's sex, age and previous criminal history, and were asked whether they thought that person would reoffend. Despite using far less information than Compas (seven variables versus 137), the pooled human judgments were accurate in 67% of cases, compared with Compas's 65%. In a second analysis, the paper found that Compas's accuracy at predicting recidivism could also be matched by a simple calculation involving only an offender's age and number of prior convictions.
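
For a sense of how little machinery that second finding implies, here is a minimal sketch of a two-variable classifier of the sort the paper describes. The data below is synthetic and the model choice (logistic regression over age and number of priors) is an assumption for illustration only; this is not the Broward County dataset or the paper's exact method.

    # Illustrative sketch: a two-feature recidivism classifier of the kind the
    # paper says can match Compas. All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 7000

    # Hypothetical features: age in years and number of prior convictions.
    age = rng.integers(18, 70, size=n)
    priors = rng.poisson(2.0, size=n)

    # Assumed relationship, purely for demonstration: younger defendants with
    # more priors reoffend more often.
    logit = -0.5 - 0.04 * (age - 35) + 0.35 * priors
    reoffended = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([age, priors])
    X_train, X_test, y_train, y_test = train_test_split(
        X, reoffended, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("two-variable accuracy:", accuracy_score(y_test, model.predict(X_test)))

The printed number depends entirely on the made-up data; the point is only how small a model a two-variable baseline needs to be.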

This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward

    So what would you expect?
    Add some blockchain goodies and everything will work perfectly... or at least well enough to land the next round of funding...

  • No bad software (Score:4, Insightful)

    by hcs_$reboot ( 1536101 ) on Thursday January 18, 2018 @02:19AM (#55951569)
    Only bad programmers/designers.
    • by Togden ( 4914473 ) on Thursday January 18, 2018 @02:49AM (#55951623)
      Isn't this precisely what you would expect when the information gathered to make the decision isn't influential enough on the outcome? It says they have 137 variables, which were no more useful than 2. That suggests the additional variables are either unrelated to the outcome, or so strongly correlated with those 2 that either way they provide no additional accuracy.
      • by Anonymous Coward

        But 137 sounds better than 2 for marketing.

        • by sjames ( 1099 )

          It's a classic flim flam technique. Take a normally simple transaction and weave a complicated story around it with many steps and complicated procedures. Make sure the story involves many (fictional) people with their own idiosyncrasies. Keep them focused on the seeming logic of each individual step so they won't look at the overall situation and see the scam coming.

      • It suggests that the additional variables are either unrelated to the outcome

        Or their application was slapped together haphazardly in order to start bringing in those contract dollars as soon as possible. The other 135 variables could provide useful information, if their analysis of that information were better. Until this study, there was no incentive to perform well because the cost of poor performance is borne by convicts.

        I am far more willing to believe that crapware is being shoveled into government computers at taxpayer expense.

        Bear in mind that this system performs as well as untrained humans.

        • Bear in mind that this system performs as well as untrained humans. If humans with correctional or psychological expertise perform better than untrained humans (which I assume is true), then this system is embarrassingly bad. It's doing a worse job than the people it's supposed to help.

          With this software they'd need a handful of trained people to author the algorithm, and then many more less-trained people could apply it without introducing personal bias. It's also a kind of double-blind testing: the people writing the algorithm don't see the offenders, and the people applying the algorithm don't know the algorithm well enough to bias it. This is presumably more fair, as personal biases cannot be applied to individual offenders. If someone involved in creating the algorith

    • by hey! ( 33014 )

      Well, there are bad (i.e., stupid) clients too. They're responsible for a lot of bad software.

      If a customer wants to buy magic software without an understanding of what it does or proof that it even works, what are the programmers supposed to do about that? They just report to work and build what their boss tells them to build, and he tells them to build what the customer will buy.

    • by rhazz ( 2853871 )
      Deciding which data points are predictors of a certain outcome in regards to crime has nothing to do with software, programmers, or designers. That's the job of statisticians, who probably weren't involved in the creation of this software.
  • Question (Score:4, Insightful)

    by Kokuyo ( 549451 ) on Thursday January 18, 2018 @02:35AM (#55951605) Journal

    It seems obvious that someone with more relapses in the past will also be more likely to do it again. However, I will assume that at that point a judge won't allow bail anyway, so if this is about people with three or fewer offenses on their record, I'd imagine that ONLY going by the criminal history is going to be inaccurate no matter who or what is looking at it.

    Isn't this more a case of bad data as opposed to bad programming? Because "no more accurate than an untrained person" implies pure chance.

    • Re:Question (Score:4, Interesting)

      by Actually, I do RTFA ( 1058596 ) on Thursday January 18, 2018 @03:57AM (#55951753)

      An untrained person isn't pure chance. Pure chance is rolling a die. Untrained person is common sense.

      • by gweihir ( 88907 )

        Except that common sense is very rare, and an untrained person likely just goes by a feeling of being more or less threatened, with no connection to the actual threat.

        • Since common sense is very rare...

          Common sense itself is common by definition. You might not think so, given all the things that people claim are common sense. However it is rare that an appeal to common sense refers to something that is actually common sense, because if that thing actually had been common sense, it would have been unlikely for a disagreement to have arisen around it, so there would have been no cause for making an appeal to common sense.

          Obvious really. It's just common sense.

      • by mjwx ( 966435 )

        An untrained person isn't pure chance. Pure chance is rolling a die. Untrained person is common sense.

        That would be a good point if common sense were common.

        For every person who applies logic and reason to a case (i.e. shows remorse, now has a steady job, impetus for theft no longer present) there are 2 or 3 people who apply batshit insane rules (i.e. he's ginger, so he'll steal again).

        • People are actually quite good at making judgements of other people based on very little information (https://www.webmd.com/balance/... [webmd.com]). It makes sense really; in the real world we have very little actual information about someone.

        • by ceoyoyo ( 59147 )

          Yes. And that process does better than what's actually being used.

          Scary.

          It would be interesting to see how actual experts perform.

    • by AmiMoJo ( 196126 )

      It seems obvious that someone with more relapses in the past will also be more likely to do it again.

      Unless those relapses caused them to enter a programme or get additional support to avoid relapsing again. Or maybe their personal circumstances changed. It's very difficult to write software that can evaluate an individual's personal circumstances.

    • If humans are no better than a machine, then how can a machine programmed by humans do any better?
      • If humans are no better than a machine, then how can a machine programmed by humans do any better?

        This is a stupid, Luddite sentiment. Computers are already far better than humans at a wide range of tasks. This increases dramatically if your usage of "machine" includes robots performing physical tasks.

        Deep learning is opening the doors to new machine skills. Quantum computing will likely open a few more in the next few decades. It's anyone's guess what comes after that.

        Humans were smart enough to make a machine that could do some things better than they could. We used those new capabilities to make bette

        • If humans are no better than a machine, then how can a machine programmed by humans do any better?

          This is a stupid, Luddite sentiment.

          Only if you're the sort who immediately gets butt hurt and defensive when you see something you assume offends you.

          A person who isn't that sort might look at the question from a broader philosophical viewpoint, e.g. "if the man who wrote the program is flawed, wouldn't it follow that the program may be flawed as well, in the same fashion?"

          Gedankenexperiments tend to be wasted on plebs and the easily offended.

      • One measure of intelligence is the speed at which one can make a decision. A very basic intelligence test is the speed at which a person can match patterns. A test like this, run over perhaps 15 minutes, correlates very highly with a pen-and-paper test run over an hour or two in which one must solve mathematics problems or answer word-comprehension questions.

        So, if speed of making a decision is defined as intelligence then we can define the criteria for a decision, put that in a way a machine can compute quickly, the

  • by Anonymous Coward

    A big part of the risk of reoffending lies in a person's relationships after prison. If they are reconnected to normal society they will be less likely to reoffend than if they lack connections or connect with other criminals.

  • The programmers spent all their time on writing the acronym for the name of the software, and not enough time actually making the software work.
  • by locater16 ( 2326718 ) on Thursday January 18, 2018 @02:42AM (#55951617)
    TL;DR: A single old program, tested in a situation the vendor says is an inaccurate use of the software, doesn't work well. Thus all programs will forever be terrible at this task and these computer guys should give up and do something useful. Like writing headlines for news sites!
    • by hey! ( 33014 )

      Alternatively: vendor oversells effectiveness of its proprietary, secret-sauce methodology and doesn't like any independent evaluation of its products unless it's favorable. Customers, having a naive faith in technology, buy anyway, which produces exactly the results you mention: programs will be forever terrible at this task. Why should anyone bother to make a program good when customers will shell out good money for mediocre ones?

    • by gurps_npc ( 621217 ) on Thursday January 18, 2018 @09:11AM (#55952423) Homepage

      No. The problem is that people have realized the software is racist. What happens is this:

      Black citizens tend to get more minor criminal issues than white ones because of institutional racism. Then this software sees that a black man has two citations for, say, crossing the street away from a crosswalk, while a white man does not. So it gives him a higher risk of recidivism, which means more bail/longer jail time.

      Then the software guys complain and say they aren't racist, they are just applying the algorithm.

      This article is trying to shut them up by saying their algorithm, in addition to being racist, doesn't work any better than simple common sense.

      It is not an attack on the business model, just on the current state of the art.

      • Sorry, but criminal history is not a proxy for race*. Race predicts recidivism independently**. Community disadvantage is also an independent predictor**, so you can't just blame poverty either. There are deep problems that have led to this situation, and we're never going to fix them if we put on our social justice blinders and deny the reality that certain races commit more crimes than others in a lot of categories, especially violent crimes. It's critical to address the large-scale societal mechanisms behin
        • I disagree with what you say on multiple levels. I did NOT claim that criminal history is a proxy for race. Instead I claimed that blacks are disproportionately likely to have a criminal history. I also do not agree that race predicts recidivism independently, nor with your blatantly racist belief that certain races commit more crimes. One study (or two or three) does not confirm your racist beliefs.

          There are a multitude of other studies that contradict yours - and they have major holes in them. One of the big hol

          • Of course you disagree; disagreeing with basic facts about reality is all the rage these days, as is screaming "racist!" at everyone who disagrees with you (Asians commit fewer crimes than whites; does saying that fact make me a white Asian-supremacist?). Serious violent crimes are not ignored because the perpetrator is white, so what's the cause of the large disparity there? (In fact, the case clearance rate is *higher* with white people, making them less likely to get away with it.) Same goes for crime with a
  • by istartedi ( 132515 ) on Thursday January 18, 2018 @03:21AM (#55951687) Journal

    So. Is there any better algorithm? You'd think that if there were a consensus among people studying this, they'd code in the consensus. Maybe the interesting thing here is that age and priors are the only useful information for predicting recidivism. This doesn't seem like rocket science. We've got decades of data. We ought to be able to run some other algorithms over it--something that takes into account a 3rd variable, and see if it helps. Maybe it does. Maybe it doesn't.
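
    As a rough sketch of that kind of check, the snippet below compares a two-variable model (age, priors) against one that adds a hypothetical third variable, using cross-validation on synthetic data. The "employed" flag and the data-generating assumptions are invented purely for illustration; real records would be needed to actually answer the question.

        # Sketch: does a hypothetical third variable add anything beyond age
        # and priors? Synthetic data only.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n = 7000
        age = rng.integers(18, 70, size=n)
        priors = rng.poisson(2.0, size=n)
        employed = (rng.random(n) < 0.6).astype(int)  # invented third variable

        # Assumed outcome model, for demonstration only.
        logit = -0.5 - 0.04 * (age - 35) + 0.35 * priors - 0.3 * employed
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        feature_sets = {
            "age + priors": np.column_stack([age, priors]),
            "age + priors + employed": np.column_stack([age, priors, employed]),
        }
        for name, X in feature_sets.items():
            score = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
            print(f"{name}: mean CV accuracy = {score:.3f}")

    Whether the extra column moves the needle here is an artifact of how the fake data was generated; on real data the same comparison would show whether a candidate third variable helps at all.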

    • by AmiMoJo ( 196126 ) on Thursday January 18, 2018 @05:17AM (#55951899) Homepage Journal

      They are trying to solve the wrong problem. Rather than trying to quantify people, the solution to people reoffending is to provide better support to everyone. Stop wasting money on software and start investing in programmes that help reform offenders.

      Reform programmes are also a much better way to evaluate people, because their progress in the programme is much easier to measure and requires them to meet goals that change their behaviour and future life chances. That's why sensible systems hand out a sentence which can then be reduced through participation and good behaviour.

        They are trying to solve the wrong problem. Rather than trying to quantify people, the solution to people reoffending is to provide better support to everyone. Stop wasting money on software and start investing in programmes that help reform offenders.

        Support does work. California's Folsom Prison is an example. At one point it had low recidivism numbers and a lot of prisoner support. Then we got "tough on crime", overcrowded the place, eliminated help for the prisoners, and now the place is a holding area between crimes.

        Reform programmes are also a much better way to evaluate people, because their progress in the programme is much easier to measure and requires them to meet goals that change their behaviour and future life chances.

        There are some more physical factors that are obvious. Age is one. Given that most prisoners are men, testosterone levels will fall as they age. But I think there is one issue that is really important - impulse control.

        Impulse control

    • It's relatively easy to compare the algorithm to untrained people and confirm it isn't any better. It is harder to make a better algorithm, because the database may not even contain all the variables that really are important to gather, nor may gathering the best ones even be possible. (e.g. if the truly best variable was "positive relationship with grandfather", this information has likely not been collected nor collated; variables that seem important may merely be correlated with actual variables, e.g.
    • by AHuxley ( 892839 )
      Does the citizen have a high school education?
      Is the citizen married?
      Did the citizen have a job?
      The decades of data show who is doing crime, in what part of the USA, and who the victims are.
      The data sets exist, but the virtue-signalling politics and optics around them are not good for inner-city party politics.
      Political correctness is holding back the correct use of the inner-city crime data sets.
      Decades of huge new amounts of education support from the government and private sector did not change parts of th
  • by nospam007 ( 722110 ) * on Thursday January 18, 2018 @04:16AM (#55951789)

    The software works for free 24 hours a day, 7 days a week, doesn't need sick days, vacation or maternity leave, nor does it want a pension when it will be replaced by a much better AI version.

    • The software works for free 24 hours a day, 7 days a week, doesn't need sick days, vacation or maternity leave, nor does it want a pension when it will be replaced by a much better AI version.

      An algorithm with 137 variables has been running untouched for twenty years now. Had this study not taken place, it would have gone untouched for another twenty years, regardless of AI advancement, because everyone would be sitting back just assuming that it's doing a "good" job.

      The most valuable question in the world is Why. Repeat it as many times as necessary until you get stupid people to stop saying "Because we've always done it this way."

      • An algorithm with 137 variables has been running untouched for twenty years now. Had this study not taken place, it would have gone untouched for another twenty years, regardless of AI advancement, because everyone would be sitting back just assuming that it's doing a "good" job.

        Even setting aside advances in machine learning, the assumption that the data sample used to train the algorithm would be equally valid 20 years later is stunningly foolish.

        Any such tool should regularly be updated and revalidated to establish its performance.

        I wonder how well this program was ever vetted and validated. Like the Diebold voting machines sold at about this same time that were proprietary pieces of security-hole-ridden garbage, I suspect this product was sold simply through connections and the

  • by fatp ( 1171151 ) on Thursday January 18, 2018 @05:27AM (#55951927) Journal
    Are trained humans much more accurate? By how much?
    • Comment removed based on user account deletion
    • I was thinking the same thing. I remember an episode of This American Life, or maybe it was Radiolab, that talked about using software to predict whether somebody was likely to re-offend. It seemed pretty clear that "trained" humans were about as good as a coin flip (ugh, I wish I could find the study, now all the Google hits are for this latest news). I suspect it's because some of the training overrules some common sense things that the unwashed masses might be subconsciously considering. We might be in t
    • Came here to ask the same question: Yeah, what about comparing untrained humans vs trained humans? /It's a fair question, seeing that what put them in jail in the first place was a jury of untrained humans, no?
  • ... the problem with ALL "artificial" intelligence systems (and it is pure arrogance on the part of humans to declare intelligence to be "artificial" in the first place) is that you *cannot ask them how they arrived at a decision*.

    only when humanity is ready to create *conscious* computers (and not torture them so that they are perfectly justified to start the "Skynet" scenario), will it be possible to actually ask them, "so what's the logic behind that decision, please can you explain it to me, computer-to

    • I've seen neuroscience articles (and I will admit I don't know how solid they are) that indicate that via MRI studies, people make decisions before they are conscious of this, and appear to post hoc build a rationale to agree with what their subconscious decided. So, you may actually have an easier time at getting the AI to spit out its parameters than a human.
      • I've seen neuroscience articles (and I will admit I don't know how solid they are) that indicate that via MRI studies, people make decisions before they are conscious of this, and appear to post hoc build a rationale to agree with what their subconscious decided. So, you may actually have an easier time at getting the AI to spit out its parameters than a human.

        I've listened to a radio program on the same subject. This jibes with the concept of impulse control. I suppose a good example might be heterosexual men. If a man sees a woman he finds attractive, in most cases he will think about having sexual intercourse with her. A fleeting thought. But sensible impulse control will have him not do that. In relation to other crimes, a person might see a laptop sitting unattended and yes, there might be a thought of "Hey, I could grab this and walk off. Again, a thought

        • I would imagine that it seems like we have more free will than we do, but agree that we have at least some.
          • I would imagine that it seems like we have more free will than we do, but agree that we have at least some.

            Humans will certainly come with a preset number of likely responses. I look at psychotic breaks and schizophrenia as what happens when expected actions and reactions don't happen. And heaven help us if we were to all act on our impulses. I know I would have been killed by hundreds of people at this point (I'm what they call an irritating bastard)

      • I've seen neuroscience articles (and I will admit I don't know how solid they are) that indicate that via MRI studies, people make decisions before they are conscious of this, and appear to post hoc build a rationale to agree with what their subconscious decided. So, you may actually have an easier time at getting the AI to spit out its parameters than a human.

        That is a very questionable interpretation of the experiments. Is there actual data showing that the "post hoc rationale" is not actually retrieving the information used to make the decision "subconsciously"? Or is this just a guess that it does not?

    • Comment removed based on user account deletion
  • I'd like to see more effort into prevention of recidivism than prediction of it. What is the most cost effective way to encourage people to avoid a life of crime? I bet locking them up isn't it.
  • The accuracy of Compas is what it is. That it can be matched by untrained humans, or by a simpler calculation, does not make it any less accurate. If age and number of priors between them cover all the predictive factors among the 137 variables, no algorithm can do better than using just them.
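
    As a rough illustration of that sufficiency argument, the sketch below (synthetic data, assumptions noted in the comments) shows that once the genuinely informative features are in the model, piling on features that are merely correlated with them does not raise cross-validated accuracy.

        # Sketch: 2 informative features vs. the same 2 plus 20 redundant ones.
        # All data is synthetic; only age and priors drive the outcome here.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        n = 7000
        age = rng.integers(18, 70, size=n).astype(float)
        priors = rng.poisson(2.0, size=n).astype(float)

        # Assumed outcome model: only age and priors matter.
        logit = -0.5 - 0.04 * (age - 35) + 0.35 * priors
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        base = np.column_stack([age, priors])
        # Twenty extra columns that are just noisy mixtures of age and priors,
        # standing in for variables correlated with the informative ones.
        redundant = base @ rng.normal(size=(2, 20)) + rng.normal(scale=5.0, size=(n, 20))
        wide = np.column_stack([base, redundant])

        model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        for name, X in [("age + priors", base), ("age + priors + 20 redundant", wide)]:
            score = cross_val_score(model, X, y, cv=5).mean()
            print(f"{name}: mean CV accuracy = {score:.3f}")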

  • Most of the comments here are studiously examining the tree bark with a microscope while not noticing they're in a forest.

    The courts are supposed to be filled with wisdom and thoughtfulness. The popularity of this software and the courts' failure to notice that it's nearly useless is more indicative of a bunch of people thoughtlessly going through the motions.

    Keeping in mind that anyone can be suspected if they're in the wrong place at the wrong time, is this the system you want deciding your fate?

  • Someone obviously did not understand Multivariate Analysis.

  • When the Bible says God made man in his image, it doesn't draw special attention to how God configured the more primitive elements of the human brain such that man was predestined to make our punitive system Old Testament primitive.

    There's a memoryless version of justice: do the crime, do the time, get kicked back out into society with no lingering black marks, to sin again or not. This is a nice version of justice because no process treads overtly on free will.

    Of course, when a previously convicted sex o
