Algorithm Predicts US Supreme Court Decisions 70% of Time 177

stephendavion writes: A legal scholar says he and colleagues have developed an algorithm that can predict, with 70 percent accuracy, whether the US Supreme Court will uphold or reverse the lower-court decision before it. "Using only data available prior to the date of decision, our model correctly identifies 69.7 percent of the Court's overall affirm and reverse decisions and correctly forecasts 70.9% of the votes of individual justices across 7,700 cases and more than 68,000 justice votes," Josh Blackman, a South Texas College of Law scholar, wrote on his blog Tuesday.
  • Re:biased algorith (Score:4, Informative)

    by Chatterton ( 228704 ) on Thursday August 07, 2014 @05:47AM (#47621153) Homepage

That's why you train your algorithm on all available cases except those from the most recent year, then test it on that final year of cases. To the system, that last year is the "future" against which you do your testing.
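The temporal holdout described above can be sketched like this (the case records and field names are hypothetical, purely for illustration):

```python
# A minimal sketch of a temporal holdout: train on every case except
# the most recent term, then evaluate on that held-out term.
cases = [
    {"term": 2010, "outcome": "reverse"},
    {"term": 2011, "outcome": "affirm"},
    {"term": 2012, "outcome": "reverse"},
    {"term": 2013, "outcome": "affirm"},  # most recent term: the held-out "future"
]

last_term = max(c["term"] for c in cases)
train = [c for c in cases if c["term"] < last_term]
test = [c for c in cases if c["term"] == last_term]

print(len(train), len(test))  # 3 1
```

The point is that the split is by time, not at random: no case from the test term ever informs training, which is what makes the evaluation an honest stand-in for prediction.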

  • Re:Missing info (Score:4, Informative)

    by mwvdlee ( 775178 ) on Thursday August 07, 2014 @06:07AM (#47621189) Homepage

It would be useful to know how many of the court's decisions are affirm vs reverse.

I did some tallying on table 3 and found the following numbers on total decisions:
    Reversed: 58.48%
    Vacated: 12.58%
    Affirmed: 28.94%

    The article doesn't mention whether "vacated" is counted separately or as a reversal.
    The graph shows only reversed and affirmed, so I'm assuming vacated counts as a reversal.
    If this is the case, reversed and vacated together is 71.06%.
So if you guessed "Reversed" every time, you'd be slightly more accurate than the algorithm.
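The arithmetic behind that comparison, using the tallied shares above (a sketch; grouping "vacated" with "reversed" is the commenter's assumption, not something the article confirms):

```python
# Outcome shares tallied from table 3 (percent of total decisions).
reversed_pct = 58.48
vacated_pct = 12.58
affirmed_pct = 28.94

# If "vacated" is counted as a reversal, always guessing "reverse"
# is correct this often -- the baseline the algorithm has to beat.
baseline = reversed_pct + vacated_pct
print(f"{baseline:.2f}%")  # 71.06%
```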

  • Re:biased algorith (Score:5, Informative)

    by Anonymous Coward on Thursday August 07, 2014 @06:11AM (#47621203)

Yes, and then when the algorithm doesn't work, you fine-tune it a bit and test again, and suddenly you end up with an algorithm that has effectively been trained on all the data without formally being trained on all the data.

One should be very skeptical of future-predicting algorithms. Until one has been released into the wild for a while without the developer tampering with it, it is pretty safe to guess that it is more or less another version of the Turk, even if its inventor doesn't realize it.

The same principle applies to market research or climate studies: if the algorithm is tampered with to produce more accurate results, one can assume it is useless.

  • by mrvan ( 973822 ) on Thursday August 07, 2014 @07:12AM (#47621371)

That is correct, but not what the GP meant. If you can model the distribution (e.g. you 'know' that B occurs 90% of the time), then you can weight your random guessing so that it is correct in >50% of cases, even without looking at the case itself (it is still 'random' in that sense).

Extreme case: I can predict whether someone has Ebola, without even looking at them, with >99.99% accuracy by just guessing "no" every time, since the prevalence of Ebola is below 0.01%.

Suppose the supreme court has a 70% chance of overturning (e.g. because they choose to hear cases that have 'merit'). Then an algorithm that guesses 'overturn' 100% of the time will have 70% accuracy. A random guess that follows the marginal of the target distribution (i.e. guess 'overturn' 70% of the time) also scores >50% (58% to be precise: 0.7² + 0.3²).
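The two guessing strategies in that paragraph can be checked directly (a sketch with an assumed 70% reversal rate):

```python
# Assumed base rate: the Court reverses 70% of the cases it hears.
p = 0.70

# Strategy 1: always guess "overturn" -- correct whenever the Court reverses.
always_overturn = p

# Strategy 2: guess at random, matching the marginal distribution
# (guess "overturn" 70% of the time). P(correct) = p^2 + (1-p)^2.
match_marginal = p**2 + (1 - p)**2

print(always_overturn)            # 0.7
print(round(match_marginal, 2))   # 0.58
```

This is why raw accuracy alone says little: any classifier on an imbalanced binary outcome inherits a floor from the majority class.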

  • Re:Useless (Score:5, Informative)

    by AthanasiusKircher ( 1333179 ) on Thursday August 07, 2014 @07:22AM (#47621395)

    70% accuracy is ridiculously low if you can get 73% accuracy *without* taking into consideration the records of each justice or any other kind of details.

    First, your link only deals with the past court term. TFA deals with predicting cases back to 1953. Is your 73% stat valid for the entire past half century?

    And even if it were, the algorithm is much more granular than that, predicting the way individual justices will vote. From TFA:

69.7% of the Court's overall affirm and reverse decisions and correctly forecasts 70.9% of the votes of individual justices across 7,700 cases and more than 68,000 justice votes. Also, before someone objects, please note that (contrary to popular belief) SCOTUS does not always vote 5-4 along party lines. For instance, your own link notes that 2/3 of last year's opinions were UNANIMOUS. 5-4 decisions usually account for only about 25% of cases in recent years, and of those, usually a third or so don't divide along supposed "party line" votes.

So, I agree with you that simply predicting reverse/affirm at 70% accuracy may be easy, but predicting 68,000 individual justice votes with similar accuracy might be a significantly greater challenge.

  • Re:Useless (Score:4, Informative)

    by fph il quozientatore ( 971015 ) on Thursday August 07, 2014 @07:56AM (#47621519)
Yeah, that's why we have better error measures than "70% accuracy", and competent people should use them.
  • Re:Useless (Score:4, Informative)

    by fph il quozientatore ( 971015 ) on Thursday August 07, 2014 @08:02AM (#47621551)
To be fair, I am not bashing the original paper here -- that one reports the full confusion matrix.
  • by AthanasiusKircher ( 1333179 ) on Thursday August 07, 2014 @09:12AM (#47621913)

    I wouldn't be surprised if the primary predictive trait used is simply to check the biases of each judge and then assume they will vote along those biases. Assuming conservative judges will vote conservative and liberal judges will vote liberal should give you a pretty good score right off the bat.

    Only in a small minority of cases. Contrary to popular belief, most SCOTUS cases aren't highly politicized cases with a clear conservative/liberal divide. Most cases deal with rather technical issues of law which are much less susceptible to this sort of political analysis.

    The Roberts Court, for example, has averaged 40-50% unanimous rulings in recent years (last year about 2/3 of rulings were unanimous). So, your idea of "assume conservative vote conservative, liberal vote liberal" would tell you nothing about maybe half of the cases that have come before the court in recent years. (Historically, I believe about 1/3 or so of rulings tend to be unanimous.)

    And even with the closely divided cases, you have a problem. Of the 5-4 rulings (which in recent years have been only about 20-30% of the total rulings), about 1/4 to 1/3 of them don't divide up according to supposed "party lines."

    In sum, I don't know what factors this model ends up using, but "conservative vs. liberal" is way too simplistic to predict the vast majority of SCOTUS rulings. If you could factor in detailed perspectives on law (which often have little to do with the stereotyped political spectrum), you might have something... but that would require a lot more work, particularly over the 50 years of rulings TFA deals with.
