The Courts | Crime | Math

How Statistics Can Foul the Meaning of DNA Evidence

azoblue writes with a piece in New Scientist that might make you rethink the concept of "statistical certainty." As the article puts it, "even when analysts agree that someone could be a match for a piece of DNA evidence, the statistical weight assigned to that match can vary enormously, even by orders of magnitude." Azoblue adds: "For instance, in one man's trial the DNA evidence statistic ranged from 1/95,000 to 1/13, depending on the different weighting methods used by the defense and the prosecution."

  • Isn't it the case that we are more often in the way of our own discovery and explanatory power
    • You can prove anything with statistics.

      Also 99.9% of all statistics are made up.

      • by Kitten Killer ( 766858 ) on Thursday August 19, 2010 @04:19PM (#33307522)

        You can prove anything with statistics.

        No. You can prove anything with BAD statistics. Unfortunately, most statistics are bad.

        -Scientist Statistician (enough to know that I don't know statistics)

        • by pjt33 ( 739471 ) on Friday August 20, 2010 @01:35AM (#33311018)

          And most people don't understand statistics, however good they are, and draw wrong conclusions. Case in point: the author of TFA doesn't seem to have a clue what a likelihood ratio (LR) is. In the article it comes across as a type of comparison - contrasted with RMP (random match probability) and RMNE (random man not excluded), which are different tests to apply to the data. But actually LR is a way of presenting a probability which is used by forensic scientists because it's supposed to be easier for juries to understand - so you could present an RMP result as an LR, or an RMNE result as an LR.

          FWIW, I'm not defending any of the statistics in TFA as good. I notice a complete absence of any error estimates. And I distrust forensic match probabilities in general because I've seen forensic fingerprint analysis software which uses pseudorandom numbers in the computation of the match probability and can vary the LRs presented (to 16s.f., believe it or not) by an order of magnitude if you recalculate.
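
          A minimal numerical sketch of the point about likelihood ratios, using an invented RMP rather than anything from TFA: under the idealised assumptions of a single-contributor sample and a defence hypothesis of "an unrelated person left the DNA", an RMP can be re-expressed directly as an LR, so the two are different presentations of the same evidence rather than different tests.

```python
# Hypothetical illustration: restating a random match probability (RMP) as a
# likelihood ratio (LR). Assumes a single-contributor sample and a defence
# hypothesis of "an unrelated random person is the source". The RMP value is invented.

rmp = 1 / 95_000                 # illustrative random match probability
p_match_if_defendant = 1.0       # probability of seeing the match if the DNA is the defendant's

# LR = P(evidence | prosecution hypothesis) / P(evidence | defence hypothesis)
lr = p_match_if_defendant / rmp
print(f"An RMP of 1 in 95,000, presented as an LR: the match is {lr:,.0f} times "
      "more likely if the DNA came from the defendant than from an unrelated person.")
```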

      • You can prove anything with statistics.

        Also 99.9% of all statistics are made up.

        Tongue-in-cheek aside, the real problem with statistics is that it only deals in probabilities. As such, it can never describe something conclusively. It can only tell you that something is probable (or very, very probable). It can never identify an answer/solution/event as certain. Thus, it can never tell you the absolute truth. It can only give you the most probable occurrence.

        Therefore, somewhere in the evaluation of court evidence, one must decide what probability constitutes "beyond a reasonable doubt".

        • Tongue-in-cheek aside, the real problem with statistics is that it only deals in probabilities. As such, it can never describe something conclusively. It can only tell you that something is probable (or very, very probable). It can never identify an answer/solution/event as certain. Thus, it can never tell you the absolute truth. It can only give you the most probable occurrence.

          In my opinion, that is not the problem with statistics; that is the problem with reality. Statistics are just a way to quantify that, so that we may still make somewhat informed decisions. So statistics (meaning math, not numbers) are actually a solution to your problem.

          • by gd2shoe ( 747932 )

            In my opinion, that is not the problem with statistics; that is the problem with reality. Statistics are just a way to quantify that, so that we may still make somewhat informed decisions.

            ???

            Are you trying to reference quantum mechanics or transcendentalism (or other philosophy)? I must admit, you have me a bit confused.

            Either a particular genetic sample came from a specific suspect, or it did not. The actual truth is rather binary in this regard. It is all or nothing. It is merely unknown, not part one and part the other. This is where statistics struggles, or rather, where we struggle to appropriately understand and apply statistics.

            So statistics (meaning math, not numbers) are actually a solution to your problem.

            Oh, and what problem would that be? I thought tha

    • Re: (Score:3, Interesting)

      by Jurily ( 900488 )

      DNA evidence is the new fingerprint. News at 11.

      • by HangingChad ( 677530 ) on Thursday August 19, 2010 @04:47PM (#33307858) Homepage

        Sadly true, but there's so much about DNA analysis that you don't get on an episode of CSI. On TV DNA analysis only takes a few minutes and matches are proudly announced by flashing messages on the DNA machine.

        In real life, good DNA matching takes days, costs a lot of money and, as the article points out, matching can be in the eye of the beholder. DNA samples are incredibly easy to contaminate, and whole labs can become contaminated over time if they don't have, and follow, strict contamination protocols. And there has been more than one reported case of harried techs gun-decking DNA analysis when police and prosecutors were certain they had the right guy.

        Well-done DNA analysis can be an amazing crime-fighting tool, but the science is not perfect and it's okay to be skeptical. There is no magic identification test that's completely foolproof. And DNA tests are only as good as the fool running the test.

    • by finarfinjge ( 612748 ) on Thursday August 19, 2010 @04:06PM (#33307348)
      The trouble that this paper and many others illustrate is the HUGE ignorance of proper statistical methods in the scientific community. Things like a Student's t-test are - statistically speaking - simple. Yet they are often beyond many in the science community. Thus, there is a tendency to misuse techniques, which in turn leads to divergent interpretations of what a data set means. The legal profession is even worse, as they don't care about the laws of mathematics. In a court, you are not required to answer to a professor of mathematics, hence you can assert anything. If your opponent doesn't have the necessary skill or knowledge to call BS on what you say, you can win an argument with a completely baseless assertion. Take an example. A man is fired for missing work on a Monday. The company's lawyer states "Fully 40% of this employee's absenteeism occurs on Mondays and Fridays. It is appalling that this weekend-extending behaviour continues, and we must do something about it". The mathematically challenged lawyer for the poor sap can't see the issue with this and lets it stand.

      JE (always wanted to use that example. May have the justification a bit!)
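
      (The punchline of the example is purely arithmetical: Monday and Friday are 2 of the 5 working days, so 40% is exactly the share you would expect if absences were spread uniformly over the week. A throwaway check, assuming nothing beyond a five-day week:)

```python
# Sanity check on the "40% of absences fall on Mondays and Fridays" claim:
# under uniform absences across a 5-day week, Monday + Friday account for
# 2/5 = 40% anyway, so the statistic carries no information by itself.

workdays = ["Mon", "Tue", "Wed", "Thu", "Fri"]
expected_share = len(["Mon", "Fri"]) / len(workdays)
print(f"Expected Mon+Fri share under uniform absences: {expected_share:.0%}")  # 40%
```
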
      • It's always more fun to write a random generator to compare real results to, and it always winds up taking more factors into account than I would otherwise.
      • The legal profession is even worse, as they don't care about the laws of mathematics. In a court, you are not required to answer to a professor of mathematics, hence you can assert anything. If your opponent doesn't have the necessary skill or knowledge to call BS on what you say, you can win an argument with a completely baseless assertion

        Well, then, if you expect your opponent to pull something like that, bring in a statistician, qualify him as an expert witness and let him rip the assertion to shreds.

        • by tirefire ( 724526 ) on Thursday August 19, 2010 @10:53PM (#33310354)

          Well, then, if you expect your opponent to pull something like that, bring in a statistician, qualify him as an expert witness and let him rip the assertion to shreds.

          That doesn't always work so well. Read about John Puckett sometime...

          Rather than try to sort out the disparities between its numbers and database findings, the FBI has fought to keep this information under wraps. After Barlow subpoenaed the Arizona database searches, the agency sent the state's Department of Public Safety a cease-and-desist letter. Eventually, the Arizona attorney general obtained a court order to block Barlow's distribution of the findings. In other instances, the FBI has threatened to revoke access to the bureau's master DNA database if states make the contents of their systems available to defense teams or academics. Agency officials argue they have done so because granting access would violate the privacy of the offenders (although researchers generally request anonymous DNA profiles with no names attached) and tie up the FBI's computers, impeding investigations. These justifications baffle researchers.

          Source: DNA's Dirty Little Secret [washingtonmonthly.com]

  • 1/13 (Score:5, Funny)

    by nacturation ( 646836 ) * <nacturation&gmail,com> on Thursday August 19, 2010 @03:53PM (#33307160) Journal

    "Members of the jury, there's only a 1 in 13 chance that the defendant is actually the killer based on the DNA evidence. If the defendant were sitting in the jury with you, then there's an equal chance that it was any one of you. And since we can eliminate all 12 of you, that leaves only the defendant left over. So you must find the defendant guilty of all charges since he's the only one left out of 13 people. The prosecution rests."

  • Numbers don't lie. (Score:3, Insightful)

    by Beelzebud ( 1361137 ) on Thursday August 19, 2010 @03:58PM (#33307254)
    But liars love to use numbers!
    • So anybody who uses a number must be a liar?

      • That's a fallacy of composition. What he's saying is that enough liars use numbers that numbers aren't terribly informative without knowing how they're derived and what they refer to.
    • No, no, no, that's not how it goes. It's "Numbers don't lie, but liars sure number [in the billions]!"

    • I wish I could mod you up for that. Statistics don't mean anything in particular until you interpret them. While it's legitimate to advertise a group of job openings with a mean salary figure, it's not a terribly informative number to use if the positions aren't close to each other. Likewise, the median is oftentimes a great choice for survival rates, if not perfect in terms of indicating what sort of a tail the distribution has. None of those uses is wrong per se, but they are somewhat less informative tha
  • Whaa? (Score:5, Insightful)

    by esocid ( 946821 ) on Thursday August 19, 2010 @04:00PM (#33307282) Journal

    In the Smith case, the sample containing another person's DNA showed alleles at seven out of a possible 15 loci, but at four of these loci, the alleles matched those of both the victim and the defendant. "The 1 in 95,000 figure in effect treated these alleles as full-weight evidence that the DNA came from the victim, ignoring the alternative possibility that the allele we saw could have been from the defendant," says Balding. If the opposite position is taken, and these alleles are ignored, you come up with a figure closer to 1 in 13. "It's a question of which loci you consider," he says.

    Since when in the hell do you count common matches as proof that it comes from one person? Some of these labs are doing something very wrong, and I hate to think of both the false positives, and negatives, that came from their "expert" opinions.
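
    (A rough illustration of how much the choice of loci can matter. The per-locus frequencies below are invented, not the figures from the Smith case: match statistics are typically products of per-locus frequencies, so treating a few ambiguous loci as full-weight evidence, versus dropping them, moves the headline number by their combined factor.)

```python
from math import prod

# Invented per-locus random-match frequencies for seven loci, four of which are
# "ambiguous" (alleles shared with the victim, so they may not belong to the
# unknown contributor at all). Numbers are illustrative only.
unambiguous = [0.1, 0.05, 0.2]
ambiguous = [0.2, 0.3, 0.25, 0.15]

counting_everything = prod(unambiguous + ambiguous)  # treat every locus as full-weight evidence
dropping_ambiguous = prod(unambiguous)               # ignore the ambiguous loci entirely

print(f"Counting all loci:       1 in {1 / counting_everything:,.0f}")
print(f"Dropping ambiguous loci: 1 in {1 / dropping_ambiguous:,.0f}")
# Same sample, same lab work -- orders of magnitude apart.
```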

    • Re:Whaa? (Score:4, Insightful)

      by sjames ( 1099 ) on Thursday August 19, 2010 @04:49PM (#33307878) Homepage Journal

      Since when you're a prosecutor that doesn't really care if the defendant is guilty or not as long as he gets locked up and you get credit.

      The way the "justice system" currently works, if crime magically stopped right now, prosecutions and convictions would continue unabated.

    • What the fuck are you talking about? Matching DNA is kind of exactly the sort of thing that's evidence that a sample came from the same person.
    • Re: (Score:3, Interesting)

      by hedwards ( 940851 )
      At present, DNA is really meant to rule people out. It's not perfect, but it's more likely that a lab will be able to rule somebody out as a suspect than to demonstrate that it was them, for the main reason that we can't yet decode the entire genome efficiently enough to do it each time, and there's frequently an imperfect sample at the crime scene to compare it to in the first place.
  • Juries (Score:2, Interesting)

    by gibson123 ( 1740752 )
    And add to that the fact that, as anyone who has served on a jury knows, this information is often highly technical and is very easily misrepresented by the lawyers to jury members from all walks of life.
    • Add to that the fact that most people are stupid and it gets really ugly. I'd never be able to get a jury of my peers because my peers would be smart enough to get out of jury duty. Unless they were bored and just wanted something to do for a day or three.

  • It seems to me that the simple defense to such a charge would be to go get the person who recently sucked the defendant's dick, test that person's DNA, and compare it to the other DNA samples. Presumably, however, even a bad lawyer would have thought of this, so I must not really understand the case.

    Three cheers, for sure, for an opportunity to topically discuss fellatio on Slashdot.

    • I was thinking the exact same thing!

      It's not like they randomly selected this guy off the street, swabbed his johnson, and said "This might be the rapist!"

      They must have already had pretty substantial evidence to be able to get him, so really the low probability doesn't matter, because 1/13 means there was a 12/13 chance it would have cleared him by finding someone else's spit on his stick. The DNA evidence still makes the case against him stronger.

      GUILTY!

      • by winwar ( 114053 )

        "They must have already had pretty substantial evidence to be able to get him, so really the low probability doesn't matter, because 1/13 means that 12/13 chance it would have cleared him by finding someone else's spit on his stick."

        It didn't matter in this case because of the witness (and of course witnesses never make mistakes....) But it matters in other cases where the only lab evidence may be DNA. In that case, the difference is reasonable doubt.

        I'd say there is a serious problem with DNA evidence.

      • It's not like they randomly selected this guy off the street, swabbed his johnson, and said "This might be the rapist!" They must have already had pretty substantial evidence to be able to get him,

        Oh, really?

        TRURO, Mass., Jan. 7 - In an unusual last-ditch move to find clues to the three-year-old killing of a freelance fashion writer, police investigators are trying to get DNA samples from every man in this Cape Cod hamlet, all 790 or so, or as many as will agree.
        Sgt. David Perry of the Truro Police Department and other law enforcement authorities here say that the program is voluntary but that they will pay close attention to those who refuse to provide DNA. "We're trying to find that person who ha

  • by hessian ( 467078 ) on Thursday August 19, 2010 @04:16PM (#33307488) Homepage Journal

    Genetics means "out of your control" and touches on some raw nerve issues, so there's a lot of throwing around of "statistical" information and unrealistic mental models.

    For example of statistical confusion:

    New research shows that at least 10 percent of genes in the human population can vary in the number of copies of DNA sequences they contain--a finding that alters current thinking that the DNA of any two humans is 99.9 percent similar in content and identity.

    http://www.hhmi.org/news/scherer20061123.html [hhmi.org]

    And broken mental models:

    http://en.wikipedia.org/wiki/Lewontin's_Fallacy [wikipedia.org]

    Until our knowledge improves, you're going to see more "politicization" of DNA-related science.

  • uncertainty.

    Each time the answer comes up the same, even in a test with much less accuracy, you improve the chances it's the right answer.

    Throw some statistical analysis at it and come up with a way to combine the tests you have into one probability, which will be higher than all the probabilities you got from the measurements.

    Unless the tests disagree, and then you're talking a homework problem.
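
    (A sketch of that kind of combination, under the strong assumption that the tests err independently, which is exactly the assumption challenged in the reply below. The numbers are made up: independent likelihood ratios multiply, so several individually weak tests can combine into a much stronger one.)

```python
from math import prod

# Combining several tests, assuming their errors are independent -- the
# assumption questioned in the replies. All numbers are hypothetical.

likelihood_ratios = [20, 15, 30]          # three individually weak, independent tests
combined_lr = prod(likelihood_ratios)     # independent LRs multiply
print(f"Combined likelihood ratio: {combined_lr:,}")  # 9,000

# Equivalently, the chance that an innocent person happens to pass all three:
false_positive_rates = [1 / 20, 1 / 15, 1 / 30]
p_all_by_chance = prod(false_positive_rates)
print(f"Chance all three match by coincidence: 1 in {1 / p_all_by_chance:,.0f}")
```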

    • by sjames ( 1099 )

      We see how well that worked out on Wall Street!

      The problem with low-accuracy tests is that they may not be RANDOMLY inaccurate. If some common factor causes each of the tests to read wrong, then the combined result of all of the tests remains no more accurate than the least of them.

      • by blair1q ( 305137 )

        There isn't much in common between different DNA tests. Temperature effects, maybe. And the fact that the DNA is the same. Which is the thing you're trying to show is the same. Which is why you chose a different test instead of running the same test twice.

        I'm not sure how Wall Street fits in. The Stock Market is a random walk, not a sequence of nucleotides.

        • Re: (Score:3, Informative)

          by sjames ( 1099 )

          Temperature effects, the same properties of the nucleotides (even if from distinct DNA), the same crime scene contamination effects, the same statistical laws (if it turns out not to be as unique as we think, all tests necessarily fail), and at least some of the chemical reactions are in common.

          In one documented case, a surprise common factor was a quality control person who didn't know the non-sterile swabs still couldn't be touched before packaging. (The CSI:NY episode borrowed that from real life).

          Wall s

    • True, but this only works if you disclose the results of every test that you used to try to demonstrate that you have the correct identity. After all, if you ran 13 independent tests that each had a 1 in 13 false-positive ratio, you're virtually guaranteed that at least one of them will come back positive. If you then only show the jury the results of that test, even the 1 in 13 ratio is false since you selected that test *after* you knew the results.

      Combining clean science with an adversarial prosecution
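
      (A quick check of that selection effect, using the 1-in-13 figure from the summary purely for illustration: run 13 independent tests with that false-positive rate each and report only the one that "hit", and a spurious hit is more likely than not.)

```python
# Selection effect: run several independent tests and report only the one that
# came back positive. Uses the summary's 1-in-13 figure for illustration.

n_tests = 13
p_false_positive = 1 / 13

p_at_least_one_hit = 1 - (1 - p_false_positive) ** n_tests
print(f"P(at least one spurious 'match' across {n_tests} tests): {p_at_least_one_hit:.0%}")
# Roughly 65% -- so quoting "1 in 13" for the one reported test badly overstates the evidence.
```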

    • uncertainty.

      Each time the answer comes up the same, even in a test with much less accuracy, you improve the chances it's the right answer.

      Throw some statistical analysis at it and come up with a way to combine the tests you have into one probability, which will be higher than all the probabilities you got from the measurements.

      Unless the tests disagree, and then you're talking a homework problem.

      So basically, you know absolutely nothing about DNA analysis.

      • by tibit ( 1762298 )

        Heck, GP doesn't know much about basic data analysis. Let me cite from my hero Feynman -- he was on a commission that was to recommend textbooks to a school district.

        This question of trying to figure out whether a book is good or bad by looking at it carefully or by taking the reports of a lot of people who looked at it carelessly is like this famous old problem: Nobody was permitted to see the Emperor of China, and the question was, What is the length of the Emperor of China's nose? To find out, you go all over the country asking people what they think the length of the Emperor of China's nose is, and you average it. And that would be very "accurate" because you averaged so many people. But it's no way to find anything out; when you have a very wide range of people who contribute without looking carefully at it, you don't improve your knowledge of the situation by averaging.

    • Example: We know the killer has red hair. Testing the defendant for red hair turns up positive. Since there are many others with red hair, we should test the defendant many times to be REALLY SURE that he has red hair. Lock'em up.
    • by tibit ( 1762298 )

      Be very, very careful with that way of thinking. Suppose you asked 1000 randomly selected people about the height of the Empire State Building. You average their answers, and you get a mean and its standard deviation. So you think if you ask more people, you will get a better result? No. When you ask 10,000 people instead, or even a million people, your mean is not going to be any better. The truth is that most people have no fucking clue about the number in question, so your mean is at best a result belong
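
      (A tiny simulation of that point, with entirely made-up numbers for the crowd's bias and spread: averaging more guesses shrinks the scatter of the mean but never touches the systematic error, so the crowd just converges more confidently on the wrong answer.)

```python
import random

# Averaging many systematically biased guesses. The building's height is taken
# as roughly 443 m to the tip; the exact figure doesn't matter for the point.
# The bias and spread of the guessers are hypothetical.

random.seed(0)
TRUE_HEIGHT_M = 443
BIAS_M = -150        # assumed systematic bias of the crowd (purely illustrative)
NOISE_SD_M = 100     # assumed spread of individual guesses

for n in (1_000, 100_000):
    guesses = [TRUE_HEIGHT_M + BIAS_M + random.gauss(0, NOISE_SD_M) for _ in range(n)]
    mean = sum(guesses) / n
    print(f"n = {n:>7,}: mean guess = {mean:6.1f} m (true height: {TRUE_HEIGHT_M} m)")
# The mean settles near 293 m no matter how many people you ask.
```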

  • by 91degrees ( 207121 ) on Thursday August 19, 2010 @04:20PM (#33307538) Journal
    I speak from personal experience. I use them all the time and still don't really understand them. Not how they apply in criminal investigations, anyway.

    Let's say you have evidence that matches 1 in a thousand people. You search through your database of all 1000 suspects and you get a single match. Did he do it? Logically you'd expect this to mean you can be 99.9% sure. You then search through a database of a million random people. You get 1000 matches. Does this mean there's only a 0.1% chance that your original suspect was guilty? Well, maybe there's some other compelling evidence that makes it most likely that one of those 1000 people was the culprit. But you also have 10,000 outliers, each a tenth as likely to have committed the crime. You'd expect about 10 matches among them. So, once again, we're at a 50% probability of guilt, or something in that ballpark.

    I'm sure this is a somewhat different example than that given in the article but that's not the point. The point is that is there a 99.9% probability, a 0.1% probability, a 50% probability or some other probability of guilt? Or am I just trying to confuse you by throwing numbers at you?
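
    (One way to pin those numbers down is Bayes' theorem: the answer depends entirely on the prior probability you give each person before looking at the DNA, which is why the same 1-in-1000 test can be spun as 99.9% or 0.1% "certainty". A sketch using only the comment's own figures:)

```python
# Posterior probability that a matching person is the true source, for a test
# with a 1-in-1000 random match probability, under different priors.
# All numbers come from the parent comment's hypothetical, not from TFA.

def posterior(prior, match_prob=1 / 1000):
    """P(source | match), assuming the test always matches the true source."""
    return prior / (prior + (1 - prior) * match_prob)

# Strong prior suspicion of one person (say 50/50 before the DNA):
print(f"prior 0.5         -> posterior {posterior(0.5):.3f}")            # ~0.999

# He was just one of a million people trawled, with no other evidence:
print(f"prior 1/1,000,000 -> posterior {posterior(1 / 1_000_000):.4f}")  # ~0.001
```
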
    • Re: (Score:3, Funny)

      Or am I just trying to confuse you by throwing numbers at you?

      Bingo! You now know how Stats work in the court room!

    • by terrymr ( 316118 ) <terrymr@@@gmail...com> on Thursday August 19, 2010 @04:58PM (#33307974)

      DNA testing probabilities go something like this:
      We found, say, 5 markers that match between the defendant and the sample. (I picked a small number to make the example shorter.)
      Each of those has the following probability of occurring in a random person:

      1) 1 in 1000
      2) 1 in 10
      3) 1 in 10000
      4) 1 in 7
      5) 1 in 100

      So we multiply all those together and get a random-match probability of 1 in 70,000,000,000.

      I even told a guy at the state crime lab that this was stupid - not that he cared.
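
      (That multiplication is the standard "product rule", and the headline number is only as good as the assumption that the markers occur independently in the relevant population. A sketch of the naive calculation:)

```python
from math import prod

# Product-rule calculation for the five hypothetical markers in the parent
# comment. The result quietly assumes the markers are statistically independent
# of one another in the relevant population -- the assumption being criticised.

per_marker_odds = [1000, 10, 10000, 7, 100]   # "1 in N" for each marker
combined = prod(per_marker_odds)
print(f"Naive product rule: 1 in {combined:,}")   # 1 in 70,000,000,000
```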

    • by HungryHobo ( 1314109 ) on Thursday August 19, 2010 @05:25PM (#33308246)

      If the only thing that pointed you at him was that search of the database, then it tells you almost nothing about how likely he is to be guilty on its own.

      If you find a suspect by searching through a database of a million people with a test that has a 1 in a million chance of giving a false positive, and no other evidence exists, then that match should not be used in any way to establish guilt in court.
      But then lawyers don't care about using stats correctly.

      If, however, you find someone who has a knife with the victim's blood on it and a motive, and you compare their DNA to the DNA found at the scene, then that same test with a 1 in a million chance of a false positive is a perfectly valid piece of data to submit in court.
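
      (Putting that contrast into numbers, with both scenarios hypothetical: trawling a million-person database with a 1-in-a-million test is expected to throw up about one match purely by chance, whereas the same test applied to a suspect who was already implicated by other evidence is genuinely informative.)

```python
# Database trawl vs. targeted test, for a test with a 1-in-a-million random
# match probability. The database size and match probability are the parent
# comment's hypothetical figures.

match_prob = 1 / 1_000_000
database_size = 1_000_000

# Trawl: expected number of innocent people in the database who match by chance.
expected_chance_matches = database_size * match_prob
print(f"Expected coincidental matches in the trawl: {expected_chance_matches:.1f}")
# ~1.0 -- a lone database hit, with nothing else, proves very little by itself.

# Targeted test: against a suspect already singled out by other evidence, the
# same match multiplies the prior odds of guilt by roughly a million.
likelihood_ratio = 1 / match_prob
print(f"Likelihood ratio against a pre-identified suspect: {likelihood_ratio:,.0f}")
```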

    • One problem is that DNA variation isn't uniformly distributed across all population samples. Suppose a defendant has a 1 in a million match, relative to the 'general' population. Usually, the population that had access to the victim and motive is much more closely related to the defendant than the general population. Suppose for example that both defendant and victim are members of a small ethnic minority. Do prosecutors and defense attorneys take this into account correctly?

      In other arenas, the same is

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Your first example is known as the prosecutor's fallacy, and your second example is known as the defense attorney's fallacy. It's distressing that people get convicted on statistical evidence when the evidence really doesn't support it. For example, in Britain, someone named Sally Clark was convicted of killing her children after two apparent SIDS cases. The prosecution had someone testify that the chance of two children in the same family dying of SIDS was 1 in 73 million (erroneously assuming that the
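
      (The Sally Clark figure is the classic independence error: the 1-in-73-million number was obtained by squaring a roughly 1-in-8,500 per-child SIDS estimate, as though two cot deaths in one family were independent events, ignoring shared genetic and environmental risk factors. The arithmetic:)

```python
# The flawed calculation behind the "1 in 73 million" figure in the Sally Clark
# case: a per-child SIDS estimate of roughly 1 in 8,543 was squared, treating
# two deaths in the same family as independent events (they are not -- families
# share genetic and environmental risk factors).

p_one_sids = 1 / 8_543
p_two_assuming_independence = p_one_sids ** 2
print(f"Assuming independence: 1 in {1 / p_two_assuming_independence:,.0f}")  # ~1 in 73 million
# Even setting aside the dependence problem, this is P(two deaths | innocent),
# not P(innocent | two deaths) -- the prosecutor's fallacy again.
```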

  • by burtosis ( 1124179 ) on Thursday August 19, 2010 @04:23PM (#33307570)
    This should only surprise people who think court cases are about facts and justice. It is well known that facts just get in the way of what's true and real.
  • When expert A says he's certain of a match to "1 in a billion" he's really saying he's certain to 0.999999999. When expert B says he's certain to 1 in a million, that's certain to 0.999999.

    Compare this to the "not so far apart" difference between expert A saying "he's 1 in 10" and expert B saying "he's 1 in 5." The difference between 0.9 and 0.8 certainty is a lot greater than the difference in certainty in the first example.

    By the way, if I'm on a jury, I'm interested in "who else could've done it", not raw numbers.

    • by BobMcD ( 601576 )

      By the way, if I'm on a jury, I'm interested in "who else could've done it" not raw numbers. If two people leave the crime scene and blood is a "certain to 1 in 5" match to the defendant, that is, there's a 20% chance of a mistake, and the only other person who was at the crime scene has been ruled out, the only way I'll acquit is if the defendant either makes a very very strong case he didn't do it or provides some explanation for the evidence that doesn't require either of the initial suspects to be guilty.

      This. From TFA it seems that not only had this defendant been suspected to the point of being lawfully arrested, but he had also received oral sex from someone with genetics similar to the victim's. If this were the ONLY evidence in the case, it might not meet the 'reasonable' standard. But we're told it wasn't. I'd like to see the evidence before making up my mind completely, but the point seems pretty moot. Even the low-certainty figures indicate that he could have done it. All he'd have to do to obfusca

  • by udin ( 30514 ) on Thursday August 19, 2010 @05:15PM (#33308130)
    I served on a jury in which DNA evidence was presented, along with the expert witness' estimation of probability that two random people would have the same number of matching points of comparison (DNA is only matched at a relatively small number of points in the strand).

    In this case, however, there were many people present at the discovery of the object from which the DNA was taken for analysis. As it happens, several of these people were relatives (brother, mother) of the person the prosecution were trying to persuade us was the person that possessed (in legal terms) the object.

    The question that I kept hoping the defense attorney would ask was "what are the probabilities of an erroneous match if the people are relatives, not just two random people off the street"? Unfortunately, he didn't.

    As it happened, there were so many other peculiarities in this case as well as some pretty bizarre testimony from prosecution witnesses that we voted to acquit without making much of the DNA evidence.

    • I served on a jury in which DNA evidence was presented, along with the expert witness' estimation of probability that two random people would have the same number of matching points of comparison

      Me too, and I have to say that I was highly unimpressed because the context was whether a particular blood sample was either a father's or his son's. In the end, it wasn't particularly relevant to the verdict though as the son had admitted to the crime anyway.

  • by fermion ( 181285 ) on Thursday August 19, 2010 @05:20PM (#33308200) Homepage Journal
    Lack of a match to DNA found at a crime scene, like a fingerprint, provides reasonable doubt, so that suspect tends to go free. Note that it does not provide certainty of innocence, merely reasonable doubt. Since, in the US, we are required to prove guilt beyond a reasonable doubt, it is often sufficient to kill a case.

    DNA, like a fingerprint, should not be enough to convict. Many articles have been written on the faulty statistics that are used by prosecutors to posit odds like 1 in a million, when in fact the reality is more like there are many possible people who could have done this, and we have randomly chosen one. The job is then to prove that this is not just a random choice from a database but, based on other evidence, the person who actually committed the crime.

    This is going to become more of an issue as we get more DNA in databases and solve crimes by matching DNA to the database. In this case, the match will be a random choice between several people, and it will be a mistake to convict based primarily on DNA evidence.

  • I'm being purely hypothetical, but let's work out a word problem.

    So let's assume that there's some DNA in question, and that DNA, according to all 10 other calculations (assumed accurate), has a unique match number of 1 in 100,000.

    Nobody ever asked if the defendant has a twin. (Again, assuming that an identical twin would have matching DNA which I don't actually know for certain).

    Let's assume, for simple math, that in the real world twins occur at a rate of 1 in 100.

    Is the statistical uniqueness now 1/1000, 1/2, 1/50,00
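
    (One way to work the hypothetical through, using only the comment's own invented figures -- a 1-in-100,000 match statistic, a 1-in-100 chance of the defendant having an identical twin, and the assumption that an identical twin would match exactly: the twin possibility swamps the headline statistic.)

```python
# Working the twin hypothetical with the parent comment's invented numbers.
# Assumptions (all from the comment, none of them real-world figures):
#   - an unrelated person matches with probability 1/100,000
#   - the defendant has an identical twin with probability 1/100
#   - an identical twin is treated as a guaranteed match

p_random_match = 1 / 100_000
p_identical_twin = 1 / 100

p_match_via_twin = p_identical_twin * 1.0
print(f"Headline statistic (one specific stranger): {p_random_match:.6f}")
print(f"Chance a perfectly matching twin exists:    {p_match_via_twin:.2f}")
print(f"The twin route is {p_match_via_twin / p_random_match:,.0f}x more likely than a "
      "coincidental match by any one particular stranger")
# With these numbers the effective uniqueness is closer to 1 in 100 than 1 in 100,000.
```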

  • by goodmanj ( 234846 ) on Thursday August 19, 2010 @05:36PM (#33308384)

    ... in a case I was on the jury for. (Sorry for the bait-and-switch title, couldn't resist.)

    This was a case of armed home invasion. The victim was a big bruiser of a man, a multiple convicted drug addict. The defendant was a scrawny young Cape Verdean guy. (Cape Verdean drug gangs are common in the area: this is important later.) The victim testified that, after buying drugs from the defendant, he got a series of enraged voicemails demanding the return of the defendant's cell phone. A few hours later, the defendant allegedly shows up at the victim's house with a gun and barges in yelling. A struggle ensued, a shot was fired into the floor, and the guy with the gun fled.

    Evidence against the defendant included eyewitness testimony from the victim, matching ammunition found at the defendant's house, and crucially a do-rag found at the scene of the scuffle. DNA tests matched the do-rag to a mixture of at least 3 people, including the defendant. The DNA mixing was probably due to really awful police work: a paper bag borrowed from the defendant's cupboard is not a proper evidence collection container.

    As in TFA, mixed DNA dramatically affected the "probability of exclusion" statistics: the state's expert testified there was a 1 in 50 chance that a random man on the street would match the DNA on the do-rag. The odds that a random *black* man on the street would match were much higher, like 1 in 20; the defense pointed out that the odds that a random *Cape Verdean* would match would be much higher.

    We've grown used to DNA evidence saying things like "not one other person on the planet could match this DNA", but in this case, the odds were good that the DNA evidence would match at least one other person sitting in the *courtroom*. The defense also took the unusual tactic of introducing the defendant's sister, who testified that her *other* brother looks very much like the defendant, and she said it was *his* voice on the enraged voicemails. What are the odds that the DNA matches the *brother* instead? Damned good.

    Between the fact that the eyewitness seemed shifty and unreliable and was probably on crack at the time of the incident, and the fact that all the physical evidence could just as well implicate the brother as the defendant, we couldn't rule out the possibility that the cops got the wrong guy, so we found him not guilty. If I had to take a bet, I'd say he did it, but I wouldn't bet his life on it.

    Anyway. Moral of the story is: on cop shows and in the public awareness, DNA evidence is rock solid and incontrovertible. But in the real world, the statistics of DNA mixtures make things a whole lot less cut-and-dried.
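
    (That "someone else in the courtroom" intuition is easy to check. Using the 1-in-50 and 1-in-20 figures from the testimony and some illustrative headcounts, a room of even modest size very likely contains another match.)

```python
# Chance that at least one other person present matches a profile with a given
# random match probability. The match probabilities are the figures quoted in
# the parent comment; the headcounts are illustrative.

def p_at_least_one_match(match_prob, n_people):
    return 1 - (1 - match_prob) ** n_people

for match_prob in (1 / 50, 1 / 20):
    for n_people in (20, 60):
        p = p_at_least_one_match(match_prob, n_people)
        print(f"match probability 1 in {round(1 / match_prob)}, {n_people} people: "
              f"{p:.0%} chance of another match")
# With 1-in-20 and 60 people in the room, another match is all but certain (~95%).
```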

    • by mhajicek ( 1582795 ) on Thursday August 19, 2010 @06:58PM (#33309004)

      ...we couldn't rule out the possibility that the cops got the wrong guy, so we found him not guilty. If I had to take a bet, I'd say he did it, but I wouldn't bet his life on it.

      I'm glad you think the way that you do. Too many people would call him guilty if they figured there was a 55% chance of it being him. Hmm, someday someone will have to draw a line in the sand as to what odds constitute reasonable doubt. If a trial could be conducted in a completely unbiased, Bayesian way and a probability of guilt were established, what number would be needed to convict?

      • Re: (Score:3, Interesting)

        by goodmanj ( 234846 )

        I'm glad you think the way that you do.

        What I was surprised at was the unanimity of the jury in the case: *everyone* thought that while the defendant was probably a gangster, the prosecution didn't meet the burden of proof. The jury had everything from suburban housewives to college professors (me) to retired black civil servants to a young hispanic man with gold chains and obvious 'hood experience, and everybody came to the same conclusion with the same rationale. Deliberation took about half an hour, m

  • The right way to use DNA is this. First, you find some suspects using traditional methods. When you have a set of suspects, you then do a DNA test against DNA from the crime scene that is known not to be from the victims or witnesses. It likely only matches one of your suspects. That's very likely the criminal.

    The wrong way to use DNA is this. First, you find some DNA at the crime scene that doesn't belong to anyone there who is known not to be the criminal (e.g., it doesn't belong to the victims or witness
