from the month-here-a-month-there dept.
shrik writes "Slate has a look at the efforts of Emily Owens, in 2005 a Ph.D. student in economics at the University of Maryland, who 'came across thousands of inconsistencies and errors in the sentencing recommendations provided to judges' by the Maryland State Commission on Criminal Sentencing Policy. Quoting: 'The sentencing guidelines for judges were based on a work-sheet [PDF] that "graded the severity of a convict's crime and his risk to society", ostensibly to make the rulings meted out more objective in nature. But on carefully studying her data, Owens noticed something wasn't adding up — the system seemed to be producing one error in every ten trials. She also realized that this "recommendation system" actually mattered: crimes and criminals analyzed to be quite similar were resulting in systematically different punishments correlated with the work-sheet.' The source of these discrepancies was ultimately found to be a simple, but very significant, PEBKAC: 'More than 90 percent of errors resulted from the person completing the work sheet [usually the DA, but signed off by the defense attorney] entering the figure from a cell next to the correct one. ... The remaining errors came mostly from incorrect choice of criminal statute in calculating the offense score and from a handful of math errors (in operations that were literally as simple as adding two plus two).' Timo Elliott's BI Questions Blog lists the morals of the story."
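The dominant failure mode described above — copying the figure from a cell adjacent to the correct one — is easy to picture as a grid lookup gone wrong. Here is a minimal sketch, assuming a hypothetical sentencing grid indexed by offense score and offender score (the grid values and scoring are invented for illustration, not taken from the actual Maryland work-sheet):

```python
# Hypothetical sentencing grid: rows are offense scores (1-3),
# columns are offender scores (0-3). Values are recommended ranges.
SENTENCING_GRID = [
    # offender:  0            1            2         3
    ["probation", "probation", "0-6m",     "3-12m"],  # offense score 1
    ["0-6m",      "3-12m",     "6-18m",    "1-3y"],   # offense score 2
    ["3-12m",     "6-18m",     "1-3y",     "2-5y"],   # offense score 3
]

def recommendation(offense_score, offender_score):
    """Look up the guideline range for the given scores (offense score is 1-based)."""
    return SENTENCING_GRID[offense_score - 1][offender_score]

# Intended cell vs. the adjacent cell entered by mistake: one column over,
# and the recommendation silently changes.
intended = recommendation(2, 1)     # "3-12m"
off_by_one = recommendation(2, 2)   # "6-18m"
print(intended, off_by_one)
```

Because the mistaken value is still a plausible-looking entry from the same table, nothing about the result flags it as wrong — which is why these errors could persist through sign-off by both attorneys.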