The Courts Government Privacy Software News

Court Orders Breathalyzer Code Opened, Reveals Mess 707

Posted by timothy
from the take-a-sober-look-at-this dept.
Death Metal writes with an excerpt from the website of defense attorney Evan Levow: "After two years of attempting to get the computer-based source code for the Alcotest 7110 MKIII-C, defense counsel in State v. Chun were successful in obtaining the code, and had it analyzed by Base One Technologies, Inc. By making itself a party to the litigation after the oral arguments in April, Draeger subjected itself to the Supreme Court's directive that Draeger ultimately provide the source code to the defendants' software analysis house, Base One. ... Draeger reviewed the code, as well, through its software house, SysTest Labs, which agreed with Base One, that the patchwork code that makes up the 7110 is not written well, nor is it written to any defined coding standard. SysTest said, 'The Alcotest NJ3.11 source code appears to have evolved over numerous transitions and versioning, which is responsible for cyclomatic complexity.'" Bruce Schneier comments on the same report and neatly summarizes the take-away lesson: "'You can't look at our code because we don't want you to' simply isn't good enough."
  • But does it work? (Score:4, Insightful)

    by will this name work (1548057) on Thursday May 14, 2009 @03:35PM (#27955325)
    Poorly written code is one thing, but does it ultimately work?
    • by Jason1729 (561790) on Thursday May 14, 2009 @03:37PM (#27955359)
      Does it matter? The real question is "Can a prosecutor convince a computer illiterate judge beyond reasonable doubt that it does ultimately work?".
      • by Yold (473518) on Thursday May 14, 2009 @03:42PM (#27955433)

        I read the report earlier, and there are some very valid issues with the source. The first is that it incorrectly averages the readings taken, assigning more weight to the first reading than the subsequent ones. It also has a buffer overflow issue, where an array is being written past its end, and even if this results in an error, it goes unreported.

        You would have to be a fricken moron not to have a problem with mis-averaging; however, in my experience with law-people, they can be even worse than PHBs.

        • by internerdj (1319281) on Thursday May 14, 2009 @04:00PM (#27955779)
          Also, it looks like their out-of-range error scheme was to set the reading to the closest legal value and only report an error if the condition was recurring and continuous. Assume for a moment you took a test of 32 samples right after the last good reading. It would only report an error if all 32 samples failed; otherwise 31 of the 32 will report the maximum legal extreme closest to that reading. Couple that with the fact that the averages were taken incorrectly, and this isn't just reasonable doubt, it's worse than using a RNG to find out if they are drunk.
          • by JCSoRocks (1142053) on Thursday May 14, 2009 @04:18PM (#27956143)
            I'm not generally someone that insists everything needs to be open source. However, in a situation like this, where this device makes the difference between a life changing conviction and exoneration, it's pretty obvious that people should have the right to examine it. The court was able to order it opened here, but it makes you wonder how many people have been screwed by this.

            Sadly in the majority of cases where evidence based on something like this (DNA, hair analysis, etc) is shown to be based on someone or something that's not good - nothing comes of it. I saw a blurb about a "forensic expert" that would give the prosecution any testimony they wanted. The state he was based in refused to reexamine the cases he was involved in even after he was shown to be a liar.

            It's depressing but it's one reason I steer clear of the law as much as I can. As much as we Americans like to think of our legal system as dispensing justice, the sad fact is that it frequently doesn't.
        • Re:But does it work? (Score:5, Informative)

          by Anonymous Coward on Thursday May 14, 2009 @04:07PM (#27955909)

          >> assigning more weight to the first reading than the subsequent ones.

          It seems to apply more weight to later readings:

          where A1=1, B1=2, C1=3, D1=4
          (A1+B1+C1+D1)/4 = 2.5 (the correct average)
              and
          (((((A1+B1)/2)+C1)/2)+D1)/2 = 3.125
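The arithmetic above can be checked with a short sketch (the readings 1, 2, 3, 4 are the example values from that comment, not actual device data):

```python
# Compare the device's repeated-halving "average" against a true mean.

def alcotest_style_average(readings):
    """Repeatedly average the running result with the next reading."""
    avg = readings[0]
    for r in readings[1:]:
        avg = (avg + r) / 2
    return avg

def true_average(readings):
    return sum(readings) / len(readings)

readings = [1, 2, 3, 4]
print(true_average(readings))            # 2.5
print(alcotest_style_average(readings))  # 3.125
```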

          • Re:But does it work? (Score:5, Interesting)

            by TheEldest (913804) <theeldest.gmail@com> on Thursday May 14, 2009 @04:47PM (#27956737)

            This seems to make sense to me. The breathalyzer is supposed to measure the blood alcohol content, and this is done by measuring the alcohol content in air expelled by the *lungs* (with a knowledge of partial pressures).

            But if you equally weight beginning readings with ending readings, then you can be skewed by the first reading, which comes from the air in the mouth instead of the lungs (giving low scores to people with some time since their last drink, and high scores to people with a recent last drink).

            I would think that this method would give a more accurate reading by filtering out the readings from 'mouth air' and giving preference to 'lung air'.

            But regardless, tests should have been done using both methods and compared against blood tests to see which returns more consistently accurate results. I wonder if those tests need to be made public as well.

        • by fracai (796392) on Thursday May 14, 2009 @04:10PM (#27955993)

          Presuming it's the same summary that I read, it contained a mistake.

          Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed. Then the fourth reading is averaged with the new average, and so on. There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings.

          This actually places more weight on the final reading, not the first.

          • by DeadCatX2 (950953) on Thursday May 14, 2009 @04:31PM (#27956395) Journal

            You are correct. In the biz, we refer to this as an exponentially weighted moving average filter. Recent samples are weighted more heavily than older samples.

            y(n) = alpha*x(n) + (1 - alpha)*y(n-1)

            The alpha value controls how much of the current input makes it to the output and how much of the old output stays. i.e. with an alpha value of 0.5, half of the new value is added to half of the old value. With an alpha of 0.1, 10% of the new value gets added to 90% of the old value.

            This filter is nice because it doesn't require you to remember all the values that you want to average together, but it's a horrible way to get over the inherent noisiness in sensors.
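The filter in that formula can be sketched directly; with alpha = 0.5 it reproduces the repeated-halving scheme described earlier in the thread:

```python
# Exponentially weighted moving average: y(n) = alpha*x(n) + (1-alpha)*y(n-1).
# alpha controls how much of each new sample reaches the output.

def ewma(samples, alpha, y0=None):
    """Return the filter output after each sample, seeding with the first."""
    y = samples[0] if y0 is None else y0
    out = [y]
    for x in samples[1:]:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

print(ewma([1, 2, 3, 4], alpha=0.5))  # [1, 1.5, 2.25, 3.125]
```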

            • by pjt33 (739471) on Thursday May 14, 2009 @04:49PM (#27956763)

              This filter is nice because it doesn't require you to remember all the values that you want to average together

              Why would you need to remember all the values? As long as you remember the number of values and their total you're fine.

              • Re:But does it work? (Score:5, Informative)

                by evanbd (210358) on Thursday May 14, 2009 @05:06PM (#27957047)

                If you have a noisy sensor and are trying to keep a low-noise estimate of the input, while that input is changing, you do some sort of filtering on the data. The weighted rolling average described above is nice for a number of reasons, mainly it's simple to implement and simple to analyze. In some cases, other filters are better.

                If you have a noisy sensor and want to measure a single unchanging input, you would want a different sort of filter. In this case, the simple arithmetic average works quite well.

                As you correctly observe, the two filters are of similar complexity. Which one you use should depend on the sort of input you're trying to measure. In this case, they used the former type of filter on the latter type of data, which is a definite no-no. This will result in data that is far noisier than you would otherwise expect from the raw sensor noise and the number of samples taken. When that noise could be the difference between a DUI conviction and the cop telling you to drive home carefully, I'd say that's worth worrying about.
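The gap between the two filters on an unchanging input can be sketched with a quick simulation (the BAC value, noise level, and sample count below are illustrative assumptions, not figures from the report):

```python
# Compare estimate spread: arithmetic mean vs. EWMA on a constant noisy input.
import random
import statistics

random.seed(42)

def ewma_final(samples, alpha=0.5):
    y = samples[0]
    for x in samples[1:]:
        y = alpha * x + (1 - alpha) * y
    return y

TRUE_VALUE, NOISE, N_SAMPLES, N_TRIALS = 0.08, 0.01, 32, 2000

mean_estimates, ewma_estimates = [], []
for _ in range(N_TRIALS):
    samples = [TRUE_VALUE + random.gauss(0, NOISE) for _ in range(N_SAMPLES)]
    mean_estimates.append(sum(samples) / len(samples))
    ewma_estimates.append(ewma_final(samples))

# The mean averages the noise down; the EWMA mostly tracks the last few
# samples, so its spread stays several times larger.
print(statistics.stdev(mean_estimates))
print(statistics.stdev(ewma_estimates))
```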

              • Re: (Score:3, Informative)

                by Chirs (87576)

                A moving average is useful if you don't have a large enough data type to store the sum of all the values.

                • Re: (Score:3, Insightful)

                  by Timmmm (636430)

                  You know you don't actually need to do that.

                  average = (new_value-average)/++n + average;

                  I think that should work.
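That one-liner does work; spelled out as a sketch, each sample ends up with equal weight and no running sum or full history is needed:

```python
# Incremental arithmetic mean: avg_n = avg_{n-1} + (x_n - avg_{n-1}) / n.

def running_mean(samples):
    avg, n = 0.0, 0
    for x in samples:
        n += 1
        avg += (x - avg) / n
    return avg

print(running_mean([1, 2, 3, 4]))  # 2.5
```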

          • Re: (Score:3, Interesting)

            by Anonymous Coward

            To get more convictions... this makes sense now.

            Common wisdom holds that the end of a breath from the "bottom" of the lungs contains a higher percentage of alcohol than the main body of the breath, this is held to be why the officer will tend to tell you to push harder to get that last higher sample into the device. If anything sets off the machine, it'll be that last bit with a more concentrated sample.

            Whether that reflects the *actual* blood alcohol level in any well defined and useful fashion needs to be

      • by MozeeToby (1163751) on Thursday May 14, 2009 @03:55PM (#27955675)

        I'd be more interested in their test plan and test results than their source code if I were trying to convince a computer illiterate judge of something. Find a missing test case or an uncovered corner condition and you might have a decent case, code that doesn't pass static analysis and is ugly... well that pretty much defines 99% of the code out there.

        • by plague3106 (71849) on Thursday May 14, 2009 @03:59PM (#27955763)

          code that doesn't pass static analysis and is ugly... well that pretty much defines 99% of the code out there.

          It's more than ugly, it's difficult to maintain. Also, this point is largely irrelevant; 99% of the code out there isn't spitting out a number that says you're guilty of a serious offense.

          • Re:But does it work? (Score:5, Interesting)

            by MozeeToby (1163751) on Thursday May 14, 2009 @04:13PM (#27956039)

            No, but some non-trivial amount of code is running the x-ray machine at the dentist, processing my credit card, managing my fuel injection, saving my thesis paper, and timing stoplights throughout my city.

            We trust our lives and livelihoods to shitty code every day, and the plain fact of the matter is that shitty code usually works. As programmers we like to think of ourselves as artists, creating a masterpiece of perfectly engineered code. In reality, all projects face budget and time constraints, most projects have legacy code which is hard to maintain, and most teams have at least one guy who just doesn't get it.

            If the code works, and you can show empirically that the code works, that is proven beyond a reasonable doubt in my opinion. Not beyond any doubt, but that isn't the standard our justice system is based upon.

            • by sexconker (1179573) on Thursday May 14, 2009 @04:20PM (#27956185)

              Show me a programmer creating "perfectly engineered code", and I'll show you a programmer padding their resume.

            • by Darkness404 (1287218) on Thursday May 14, 2009 @04:29PM (#27956357)
              The problem isn't that it can break, the problem is it can return bad readings. For example, a dentist's X-ray machine isn't suddenly going to show cavities everywhere, because there is no code in the X-ray. The worst thing with a credit card machine is that it doesn't work; most of the time it doesn't overcharge you or something like that, or if it does, a few phone calls will sort it out. Again, the worst thing that happens with fuel injectors is they break, your car doesn't run, you pay a few hundred and get it fixed. The worst thing with stoplights is they break; there is always a human driver who can figure out if all the lights are on red or green and call the police to manage traffic.

              Breathalyzers are basically black boxes; there is no human to really check them out. With the code more apt to return false readings than simply break, it is dangerous code, when those readings can be the difference between a crime and a non-crime.
            • by camperdave (969942) on Thursday May 14, 2009 @04:59PM (#27956925) Journal
              We trust our lives and livelihoods to shitty code every day

              Well, like the saying goes: If builders built buildings the way that programmers write programs, the first woodpecker to come along would destroy civilization.
            • Re:But does it work? (Score:5, Interesting)

              by Grishnakh (216268) on Thursday May 14, 2009 @05:02PM (#27956973)

              I disagree. Anything upon which guilt or innocence rests needs to be held to a higher standard.

              For many other applications, especially non-government ones, if the code doesn't work well, then customers probably aren't going to buy it, and changes will be made. For instance, your example of fuel-injection code. If you don't do that correctly, you're going to have an engine that runs like crap and gets poor economy. Cars that run poorly generally don't sell well. They might sell some, but as we see with GM and Chrysler, you have to do better than that to avoid bankruptcy.

              Saving your thesis paper? The code in TeX is probably some of the most bug-free code around. At least I hope you're using TeX and not something crappy like MS Word for a thesis. But even MS Word isn't that bad, since so many businesses rely on it and don't have problems with random data corruption to my knowledge.

              Timing stoplights is a good counterpoint to your example. In my experience, stop lights have horrible timing most places I go. It's almost like they're intentionally designed to make you stop at every single light, unless you drive at > 80mph on surface streets. Why is such poor performance accepted from our traffic lights? Because they're run by the government, and we the people don't have a choice. That's exactly the same as this breathalyzer crap: if you're accused, you don't get a choice about which breathalyzer they use on you. It's decided by the government (probably with help from bribes), and that's what they use, whether it works well or not.

          • Re: (Score:3, Insightful)

            by mea37 (1201159)

            Yes, but to GP's point - if the code had been subjected to proper tests, then it wouldn't matter how hard it was to maintain. Either the maintainers overcame that difficulty and it passed the test, or they didn't and it failed.

      • Re: (Score:3, Insightful)

        by wiredlogic (135348)

        Regardless of the state of the code no breathalyzer truly "works". None of them can directly detect blood alcohol content. All they do is use a proxy to estimate using the reaction products from your breath. These devices are wholly unscientific. There is no possible way they can derive a credible estimate with a precision of 0.001% or even 0.01%. There is no accounting for body size, type, or metabolic rate. Furthermore these devices can be triggered by more than just ethanol. Chocolate is reported to caus

        • Re: (Score:3, Insightful)

          by Yold (473518)

          In Minnesota (and other states), it is a crime to refuse a roadside breathalyzer test due to "implied consent" laws.

          • Re:But does it work? (Score:5, Informative)

            by Yold (473518) on Thursday May 14, 2009 @04:08PM (#27955949)

            correction, you may refuse a roadside breath test, but not one at the police station.

        • Re:But does it work? (Score:5, Interesting)

          by SoupGuru (723634) on Thursday May 14, 2009 @04:48PM (#27956749)

          Remember when it used to be you couldn't drunk drive?
          Then it was you couldn't be behind the wheel while drunk?
          Then it became you couldn't even be in the driver's seat with the car off while drunk?
          Then it became you couldn't drive if you couldn't get out and walk in a straight line?
          Then it became reciting your alphabet backwards...
          Then suddenly, you couldn't have an arbitrary percentage of alcohol in your blood to do all those things.
          Then it became whatever the machine said your blood alcohol might be.

          There are no laws against drunk driving anymore. There are laws about not being able to potentially operate a vehicle if a machine determines you have enough alcohol on your breath.

      • Re: (Score:3, Insightful)

        by DnemoniX (31461)

        Well, the problem with calculating the averages should honestly be enough to get this tossed. The defense can put up an exhibit with a set of numbers using the flawed methodology which shows a person to be over the limit. Then call an expert witness with a math degree, or an accountant for that matter. Show that the average when calculated normally is below the legal limit. Even better is if you can show that the machine has calculated an average that falls below the legal limit but should have been abov

      • Re: (Score:3, Funny)

        by Mateo13 (1250522)

        I wonder if they're hiring QA testers...

    • by gd2shoe (747932) on Thursday May 14, 2009 @03:42PM (#27955437) Journal

      Good question, but it needs to be reworded. Does it always work for all inputs?

      Also important, if it's a poorly written mess, why is the company claiming that it works? I see no indication that they've done due diligence for a device used to convict people. Just because they've never observed it to fail, doesn't mean a thing.

      • by Volante3192 (953645) on Thursday May 14, 2009 @04:00PM (#27955789)

        If I read the report right, they coded the thing to never actually fail in the first place. It'll always return what can be passed off as a legitimate answer.

      • by digitalunity (19107) <digitalunity.yahoo@com> on Thursday May 14, 2009 @04:01PM (#27955809) Homepage

        Looks like the answer is no. It's a black box that doesn't report internal errors except when it can't ultimately decide on an answer.

        The source code is useful only for showing the machines can be unreliable in certain circumstances, but unless he has substantiating evidence to show it gave an incorrect result he is unlikely to prevail.

        Example: Guy blows .09 after drinking 2 beers. He might have a case that the machine was wrong. Example 2: Guy drinks 8 beers and blows .18. Machine might be wrong, but even if it was off by a bit due to rounding averages, he's still guilty as sin.

        Sucks, but that's just the way the law looks at it.

        Someone mentioned earlier that the weighting of samples under repeat tests gives weight to the first blow, which is a big red flag. The initial blow is probably the sample most likely to be contaminated by liquid from the mouth, which will skew the reading dramatically, leading to higher BACs than actual. If someone blew a .12 and then a .07 on the same machine, he could be found guilty, but it's possible the second sample is more accurate.

      • Re:But does it work? (Score:5, Informative)

        by wfstanle (1188751) on Thursday May 14, 2009 @04:20PM (#27956187)

        "Just because they've never observed it to fail, doesn't mean a thing."

        Correct! This is a point that many people fail to understand. Testing can't prove that there aren't bugs; all it proves is that a bug did not occur. Failing a test proves that a bug exists, while passing all tests just proves that you failed to find one. Passing many tests can boost your confidence that there are no bugs. Formal verification can prove that your code is correct, but for most programs it is infeasible.

    • Re:But does it work? (Score:5, Informative)

      by geekgirlandrea (1148779) <andrea+slashdot@persephoneslair.org> on Thursday May 14, 2009 @03:43PM (#27955447) Homepage
      Read the article. The code in question, among other things, calculates an arithmetic mean of a sequence of values by successively averaging each value with the mean of all the previous ones, and reduces 12 bits of precision coming from the hardware sensor to 4 for some unspecified but undoubtedly stupid reason.
      • Re: (Score:3, Insightful)

        The code in question, among other things, calculates an arithmetic mean of a sequence of values by successively averaging each value with the mean of all the previous ones, and reduces 12 bits of precision coming from the hardware sensor to 4 for some unspecified but undoubtedly stupid reason.

        Well, it's not hard to imagine why they throw away all those bits. Prospective LEO customer: "Wow, this thing never gives the same reading twice. How am I supposed to secure convictions with numbers this flaky?"

        The a

        • Re:But does it work? (Score:5, Interesting)

          by geekgirlandrea (1148779) <andrea+slashdot@persephoneslair.org> on Thursday May 14, 2009 @04:28PM (#27956345) Homepage

          Well, if we assume the machine was sensitive up to the LD50 for ethanol of 0.5% BAC, then with only 4 bits of precision the uncertainty just from the rounding error is comparable to the difference between being over the limit and being completely sober. This was covered in the comments on Bruce Schneier's blog [schneier.com]. That one's probably wrecked a few peoples' lives too.
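The rounding arithmetic behind that claim can be sketched; the 0.5% BAC full-scale figure is the assumption made in the comment above, not a documented device spec:

```python
# If 4 bits must span a 0-0.5% BAC range, each quantization step is huge
# relative to the 0.08% legal limit.

FULL_SCALE_BAC = 0.5   # assumed full-scale reading, % BAC
LEVELS = 2 ** 4        # 4 bits of precision

step = FULL_SCALE_BAC / LEVELS
print(step)        # 0.03125  (one quantization step, % BAC)
print(step / 2)    # 0.015625 (worst-case rounding error is half a step)
```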

          • Re: (Score:3, Informative)

            Very true. To some extent, it's reasonable to truncate a few bits of precision if the noise floor of the BAC sensor is substantially higher than the dynamic range of a 12-bit ADC. No reason to display a bunch of meaningless flickering digits extending far to the right of the decimal point.

            But when you're displaying a decimal value, every place value with full 0-9 range takes about 3.3 bits of precision. If you're going to return numbers like "0.18" from a device with a range of 0.00 to 0.99, you need to

    • by mea37 (1201159) on Thursday May 14, 2009 @03:44PM (#27955459)

      My first thought as well.

      Of course, with poorly written code, it's hard to show whether or not the code ultimately works by examination of the code.

      Then again, proving that the code works (which should be the standard when the code is analyzed in court) by code examination is very difficult even for well-written code.

      Perhaps a better approach would be documented, repeatable testing of the device. When I challenge a radar gun, I get to ask about its calibration documents, but I don't think I get to debate the blueprints from which it was built.

      My personal opinion - and before getting on an "innocent until proven guilty" kick bear in mind that I'm not a part of the court system in this case - is that the defense realizes that almost all software systems look awful and are trying to game their way out of a conviction they've probably earned.

      That said, if for no other reason than to eliminate such gaming, there should be standards for testing and documenting the proper function of these devices. Any device that can't be calibrated and tested with sufficient certainty should be banned from use as evidence in court. If the device passes the test, then exactly how it does it shouldn't really matter.

      • by vertinox (846076) on Thursday May 14, 2009 @03:58PM (#27955731)

        Of course, with poorly written code, it's hard to show whether or not the code ultimately works by examination of the code.

        Of course it works because it gives an end result instead of an error message.

        The question everyone should ask is "Does it work accurately?" or "Does poorly written code skew the results?"

        Can the defense prove that the code was written so haphazardly that it ignores some data or does it round incorrectly like Excel does? These things do and can happen with sloppy code.

        That said, if the code is just poorly commented and oddly indented (*wink*) but does the math right and makes sure there isn't a sampling or rounding problem, then it isn't a problem.

      • by Carnildo (712617) on Thursday May 14, 2009 @04:04PM (#27955871) Homepage Journal

        Perhaps a better approach would be documented, repeatable testing of the device. When I challenge a radar gun, I get to ask about its calibration documents, but I don't think I get to debate the blueprints from which it was built.

        Calibration and testing won't reveal all the edge cases that might cause errors. Consider a radar gun designed to take the average of five samples. You've got a car moving away from you at 70 MPH, and a duck flies into the beam for one sample, moving towards you at 5 MPH. This gives the following five samples:

        70 70 70 -5 70

        I can see a way that badly-written code would turn that into an average speed of 106 MPH (storing a signed char as an unsigned char, which would turn the -5 into a 251), and yet it would pass calibration and every test someone's likely to perform.
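That failure mode can be sketched by reinterpreting the two's-complement byte as unsigned (the radar-gun scenario and its sample values are the comment's hypothetical, not a real device trace):

```python
# A signed value stored into an unsigned 8-bit slot: -5 becomes 251,
# and the five-sample "average" jumps from 55 to 106.

def as_unsigned_char(value):
    """Reinterpret a two's-complement byte as an unsigned 0-255 value."""
    return value & 0xFF

samples = [70, 70, 70, -5, 70]
stored = [as_unsigned_char(s) for s in samples]
print(stored)                      # [70, 70, 70, 251, 70]
print(sum(stored) // len(stored))  # 106
```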

        • Re: (Score:3, Insightful)

          by mea37 (1201159)

          "Calibration and testing won't reveal all the edge cases that might cause errors"

          Then you aren't testing correctly.

          "I can see a way that badly-written code would turn that into an average speed of 106 MPH "

          You may need to revisit the legal definition of "reasonable doubt". Being able to contemplate a scenario where the evidence could be wrong is not sufficient to overturn the evidence.

          Regardless, if the testers don't know that "an input suffered overflow or underflow" is an edge case they need to test, the

    • No. (Score:5, Informative)

      by SanityInAnarchy (655584) <ninja@slaphack.com> on Thursday May 14, 2009 @03:47PM (#27955525) Journal

      Just read Schneier's comments. He cites some of the more important things:

      Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed... There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings.

      That alone should be enough -- the readings are not averaged correctly. But it goes on:

      The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256, meaning the final result can only have 16 values to represent the five-volt range (or less), or, represent the range of alcohol readings possible. This is a loss of precision in the data; of a possible twelve bits of information, only four bits are used. Further, because of an attribute in the IR calculations, the result value is further divided in half. This means that only 8 values are possible for the IR detection, and this is compared against the 16 values of the fuel cell.
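The precision loss described in that passage can be sketched by exhaustively checking the 12-bit ADC range (a minimal illustration of the divisions the report describes, not the actual device code):

```python
# A 12-bit ADC count (0-4095) integer-divided by 256 leaves only 16
# possible output values; halving the IR result leaves only 8.

adc_counts = range(4096)

final_values = {c // 256 for c in adc_counts}
ir_values = {(c // 256) // 2 for c in adc_counts}

print(len(final_values))  # 16
print(len(ir_values))     # 8
```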

      So we know it's buggy and inaccurate, to a moronic degree. If that wasn't enough:

      Catastrophic Error Detection Is Disabled: An interrupt that detects that the microprocessor is trying to execute an illegal instruction is disabled, meaning that the Alcotest software could appear to run correctly while executing wild branches or invalid code for a period of time. Other interrupts ignored are the Computer Operating Property (a watchdog timer), and the Software Interrupt.

      So, basically, it's designed to always return some value, even if it's wildly inaccurate, and even if the software is executing garbage at the time.

      In other words: It appears to be a very low-level equivalent of Visual Basic's "on error resume next".

      Whiskey. Tango. Foxtrot.

      So to answer your question: No, it does not work. Even if it did somehow work, there's obviously an unacceptably poor level of quality control here.

      • Re:No. (Score:5, Insightful)

        by Ohio Calvinist (895750) on Thursday May 14, 2009 @04:07PM (#27955911)
        The problem in a lot of states is that .01 can make a huge difference between a DUI, a DUI with a "high BAC kicker", a wet-reckless, or nothing at all. It has to be accurate to at least a few 9's, or those "on the bubble" cases have a severe level of doubt. Driving with a .07 is not illegal (for the most part), but .08 is. The question in court is not "were you drinking tonight" but "how much did you drink", which is a very specific, very objective, very determinable piece of information.

        As states lower their legal limits to the point where they intersect with non-impaired drinking drivers, especially with a .01 or more margin of error, you're going to get a lot of overzealous cops in cities with revenue shortfalls taking innocent people in for DUIs. Hopefully more and more of these "border cases" will bring these devices into question, more than the over-the-top, blacking-out, pissing-his-pants multiple offender does in court.
      • Re:No. (Score:4, Insightful)

        by Lord Ender (156273) on Thursday May 14, 2009 @04:14PM (#27956053) Homepage

        In embedded systems programming, it is common practice to disable interrupts if they are not used. It is certainly possible that this app simply does not need to handle these interrupts, whether they are enabled or not.

        It is also possible that the other flaws mentioned, which clearly reduce accuracy, do not do so sufficiently to change the outcome in a meaningful way.

        The problem with drunk driving law is not primarily one of testing. It is that it presumes someone is incapable of driving with even trace amounts of alcohol, while treating other forms of more dangerous driving (such as driving while texting or on the phone) as being OK or far far less severe.

        The way the laws themselves are written is a horrible miscarriage of justice. This is the result of the perverse and hypocritical views of MADD and its ilk, the bastard children of the prohibition movement.

    • by erroneus (253617) on Thursday May 14, 2009 @04:18PM (#27956125) Homepage

      Whether or not it "works" isn't quite enough in my opinion. It needs to be clearly written in such a way that the purpose and methods used in sampling input from hardware and the making of calculations are verifiably accurate and true in all cases. This is an instrument that measures whether or not someone is within a prescribed legal limit and needs to be as provably clear and accurate as possible. We are talking about taking away freedoms from people as a result of this test machine and there should be as little room for error as possible.

      If I were to prescribe a system for analyzing breath for alcohol content, I would require that a single test unit be comprised of two machines from two different manufacturers and that any single sample be split equally between the two machines for measurement such that when both machines return results and are both in agreement within a prescribed "reasonable" difference from one another, then we might begin to say we have a reasonably accurate measure from which judgements can be reasonably made.

      In the meantime, software architecture needs to be held to the same legal standards as ACTUAL architecture and engineering. I recall being involved in a cabling project where all the terminations were reading perfectly, but when I inspected the raceways, the bend radius of the cabling was way too tight, and much of the cable was tied to various pipes and conduits rather than fixed to the hardware intended for handling it. The cabling was not installed according to the clear and complete specification, and I was furious at what I found. The first answer offered to me was "but it all works, right?"

      If you took your car in for repair, were charged the full price of the repair with parts, and then found it had been fixed with duct tape and baling wire, would you accept "but it works!" as a reasonable answer to your complaint? I think not!

      Back to this situation: "Does it work?" The real answer: if you cannot read the code and make clear sense of it, you cannot prove that it works, only that it has worked under the practical conditions of testing. That is simply NOT good enough for any scientific measurement, and especially not good enough for a measurement that may determine whether or not a person is sent to prison.

  • Code (Score:5, Insightful)

    by Quiet_Desperation (858215) on Thursday May 14, 2009 @03:39PM (#27955389)

    not written well, nor is it written to any defined coding standard

    Ah, so it's like most of the code in the world.

  • Good! (Score:5, Insightful)

    by SanityInAnarchy (655584) <ninja@slaphack.com> on Thursday May 14, 2009 @03:41PM (#27955409) Journal

    Ok, I'm not happy that some people almost certainly were measured inaccurately by these things. I'm not happy that this company was allowed to pull this kind of shit -- when you do government contracting, the government should own what you do.

    However, I am very glad that the precedent has been set.

    And I am especially glad that not only is there precedent, but there's a real live example of why we need this stuff to be open.

    • Re:Good! (Score:5, Insightful)

      by Red Flayer (890720) on Thursday May 14, 2009 @03:44PM (#27955455) Journal

      when you do government contracting, the government should own what you do

      But they weren't doing government contracting. They produced a good that was purchased by the government. There's a very big difference.

      The key here is not that the government, or anyone, should own what they produced -- it's that when what they produced is used to convict someone, that person has the right to examine the methods used.

      It's not about openness, at all. It's about the right to a fair trial; openness is just a side effect.

      • Re:Good! (Score:4, Insightful)

        by Midnight Thunder (17205) on Thursday May 14, 2009 @04:43PM (#27956655) Homepage Journal

        The key here is not that the government, or anyone, should own what they produced -- it's that when what they produced is used to convict someone, that person has the right to examine the methods used.

        I will call out the company for doing shoddy work. The question is whether the device was ever certified for this purpose, and if it was, who certified it and what process was used. If you are going to use something to prosecute, then there needs to be evidence that the device was tested and certified using a publicly documented process. This is black-box testing, and if the government never did it, then why is the device allowed in court?

  • by tcopeland (32225) <tom&thomasleecopeland,com> on Thursday May 14, 2009 @03:41PM (#27955423) Homepage

    ...from the article:

    Several sections are marked as "temporary, for now".

    So, make sure to strip out those TODOs before checking in the code. Bah!

  • No surprise (Score:5, Insightful)

    by infinite9 (319274) on Thursday May 14, 2009 @03:42PM (#27955425)

    80% of the code in business fits this description: 20-year-old legacy code written by 50 consultants, then upgraded in India, then ported from one platform to another to another, with a database engine switch or two along the way. Code gets senile. What do they expect? Good thing we're all just commodities... human Lego bricks, easily replaced with cheaper plastic.

  • by AliasMarlowe (1042386) on Thursday May 14, 2009 @03:44PM (#27955471) Journal
    Just because code is not written to some official standard does not mean it is guaranteed to be buggy. Undisciplined coding is as bad as undisciplined specification - the results can indeed be ugly. It is preferable if the coders follow good practices, and ideally there would be a clear system for specifying program behaviour in testable ways. It is easier to produce good code with robust behaviour if good practices are followed from design through coding to testing and documentation, but it is not impossible to achieve good results in other ways.
    Did they find any coding bugs, or did they just criticize the approach to coding?
    • by SanityInAnarchy (655584) <ninja@slaphack.com> on Thursday May 14, 2009 @03:57PM (#27955729) Journal

      Did they find any coding bugs,

      Yes. RTFA.

      2. Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed.

      There you go. It's also inaccurate:

      The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256... Further, because of an attribute in the IR calculations, the result value is further divided in half. This means that only 8 values are possible for the IR detection...

      And, if there were a catastrophic bug, you wouldn't know it, you'd just keep getting readings:

      An interrupt that detects that the microprocessor is trying to execute an illegal instruction is disabled, meaning that the Alcotest software could appear to run correctly while executing wild branches or invalid code for a period of time. Other interrupts ignored are the Computer Operating Property (a watchdog timer), and the Software Interrupt.

      This belongs on The Daily WTF.
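      The weighted-averaging flaw quoted above is easy to reproduce. A minimal Python sketch (the readings are made-up numbers, not actual device data):

```python
def flawed_average(readings):
    """Average the way the report says the Alcotest does: fold each new
    reading into the running result, halving the weight of all prior ones."""
    avg = readings[0]
    for r in readings[1:]:
        avg = (avg + r) / 2.0
    return avg

def true_average(readings):
    return sum(readings) / len(readings)

# Three consistent readings followed by one low outlier:
readings = [0.10, 0.10, 0.10, 0.02]
print(round(flawed_average(readings), 4))  # 0.06 -- the last reading alone carries half the weight
print(round(true_average(readings), 4))    # 0.08
```

      With four readings, the weights come out as 1/8, 1/8, 1/4, 1/2 instead of 1/4 each, so whichever reading happens to come last dominates the result.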

  • Just remember (Score:5, Insightful)

    by captnbmoore (911895) on Thursday May 14, 2009 @03:47PM (#27955543)
    This will not stop the state from using this to make a felon of you.
  • DUH.... (Score:5, Interesting)

    by Lumpy (12016) on Thursday May 14, 2009 @03:48PM (#27955549) Homepage

    If you got your hands on and analyzed the source code of most DVD players, TVs (Panasonic runs Linux!) and other complex devices, you would discover that, in order to ship earlier, the code is an utter mess.

    Programmers are not joking when we complain about the "It compiles? Ship it!" statement.

    The fault lies with the executive staff who refuse to listen to their experts (the programmers) and do what they recommend. Instead we get morons who know nothing about programming setting unrealistic deadlines and forcing death-march coding marathons, giving us the mess we have today.

    • Re: (Score:3, Insightful)

      The fault is the State using output of a device which is an undocumented, unverified black box in legal proceedings.

      Yes, of course, most of the code out there is a similar mess. But when it fails, the worst that can happen is that your desktop crashes, or your iPod hangs... which is bad, of course, but not as bad as getting a criminal conviction for drunk driving.

      These things should be held to the same standards as code in military equipment or nuclear reactors - mistakes are inexcusable.

    • Re: (Score:3, Insightful)

      by D Ninja (825055)

      the fault is the Executive staff that refuse to listen to their experts (programmers) and do what they recommend. Instead we get morons that know nothing about programming making unrealistic deadlines and forcing death march coding marathons to give up the mess we have today.

      To some extent, you are correct. However, I also blame the developers. There are many "software engineers" and "computer scientists" I have worked with who didn't understand the basics of algorithms, design, testing, and other topics that are necessary to our field.

  • It appears that the NJ Supreme Court wasn't swayed too much [thenewspaper.com] by the source code evaluation. They're planning on reinstating the device with only minor modifications.
  • by Linker3000 (626634) on Thursday May 14, 2009 @04:03PM (#27955857) Journal

    10 REM Alky 0.1 A. Coder 2001
    20 REM Turn off lights and buzzer
    24 POKE 201,0
    26 POKE 202,0
    28 POKE 53280,0
    29 REM Any Breath?
    30 IF PEEK(200) = 0 THEN GOTO 30
    32 REM Buzzer
    33 POKE 53280,1
    34 PAUSE(2)
    35 POKE 53280,0
    36 REM Lights...
    40 A = 10 * RND(1)
    50 IF A > 5 GOTO 80
    60 REM Red light
    70 POKE 201,1
    75 GOTO 100
    76 REM Green Light
    80 POKE 202,1
    100 PAUSE(3)
    120 GOTO 20

  • by bcrowell (177657) on Thursday May 14, 2009 @04:05PM (#27955883) Homepage

    If I were the manufacturer, at this point I'd say: (1) lawyers are expensive; (2) competent programmers are expensive, but less expensive than lawyers; (3) our business is selling the breathalyzer, not the software, so we gain nothing by keeping the source secret; (4) this publicity is hurting us; (5) let's hire some more competent programmers to clean up the code, and then we can make it public; (6) profit!

    This is different from the case of the voting machines. In the case of a voting machine, there are lots of people who might be motivated to hack it, lots of people have access to the machines, and it only takes one compromised machine to throw a close election. If you believe in security by obscurity, then there is at least some logical argument for keeping the voting machine code secret. In the case of the breathalyzer, there's not even that lame argument.

  • The good: This particular breathalyzer has been proven to be the unreliable POS that it apparently is. This unit, and others like it, will finally start being held to a stronger coding standard.

    The bad: every sleazeball, ambulance-chasing, "call lee free", douchebag of a lawyer will use this case to attack the credibility of any and all breathalyzers made in the past, present, or future, spreading enough FUD to juries everywhere that an unacceptable number of drunken idiots get the God-given right to keep their licenses until they finally end up killing someone.

    As a person, I think groups like MADD spend most of their time trying to scare-monger politicians into pushing us as close to prohibition as possible. I believe that alcohol can be used responsibly. But I also know that this case is going to result in DUIs getting overturned for people that damn sure don't deserve it. Borderline cases will get knocked down, cases will get thrown out, and the people that broke the law, that did something wrong, will walk out of a courtroom 'vindicated.' They didn't do anything wrong when they had six beers and drove home; it was that confounded *machine* that *said* they broke the law. The *machine* was busted, ergo they didn't break the law. In short, this case is going to make a lot of O.J. Simpsons. The jury said they didn't commit a crime, so they didn't. No harm, no foul. Technicality? Bah! They're as innocent as the sweet baby Jesus.

    I'd like to think things will wash out in the end. This case will probably end up making it harder to get off on this particular technicality in the long term. In the short term? Here come the appeals. Maybe the state is partially at fault for buying shoddy equipment. (Or maybe not. Did they do a code review? Do they have the resources to do one? Probably not. Did you do a code review of the 3Com switch in your server room? Their selection criteria can certainly be questioned, but it probably doesn't change the fact that someone drank enough to blow a .22 and then decided to drive home.)

    But in the end, the drunks are still going to be drunks. And tomorrow some of them will probably get to file appeals, and some of the ones that shouldn't be on the road, or even in public, will get to slip out of this brand new loophole. I'm not sure that that deserves a cork-popping celebration.

    (and yes: We all handle our booze differently. Arbitrary limits that determine "drunk" may or may not be the answer. Hardcore drunks will keep driving even after losing their license. DUI's are as much moneymakers for the States as speeding tickets. Yadda yadda yadda.)

  • by tjonnyc999 (1423763) <tjonnyc@gmaLIONil.com minus cat> on Thursday May 14, 2009 @04:42PM (#27956649)
    "DUI defendant finally gets access to breathalyzer code, ironically finds developers were probably drunk when they wrote it". http://www.fark.com/cgi/comments.pl?IDLink=4387892 [fark.com]
  • by swordgeek (112599) on Thursday May 14, 2009 @05:39PM (#27957677) Journal

    OK, LOTS of strange posts from people who claim to have read the article but conclude only that it's bad code, not that it's actually broken.

    Read it again. It's broken from a legal-liability and trustworthiness standpoint. It's broken from a precision standpoint. It's broken from an algorithm standpoint. It is not trustworthy, precise, accurate, or correct.

    "It is clear that, as submitted, the Alcotest software would not pass development standards and testing for the U.S. Government or Military. It would fail software standards for the Federal Aviation Administration (FAA) and Federal Drug Administration (FDA), as well as commercial standards used in devices for public safety. This means the Alcotest would not be considered for military applications such as analyzing breath alcohol for fighter pilots. If the FAA imposed mandatory alcohol testing for all commercial pilots, the Alcotest would be rejected based upon the FAA safety and software standards."

    Nobody in the government or military would be allowed to trust this, if it weren't already in use.

    "Results Limited to Small, Discrete Values"

    Sixteen values is all it displays! It throws away almost all of the precision of the 12-bit ADC, reducing it to 4 bits! This is NOT precise enough!

    "Catastrophic Error Detection Is Disabled"
    "Diagnostics Adjust/Substitute Data Readings"
    "Range Limits Are Substituted for Incorrect Average Measurements"
    "The software design detects measurement errors, but ignores these errors unless they occur a consecutive total number of times."

    It's not correct. It's not accurate. It's not good enough. The odds are VERY good that some people over the limit have gotten off lucky, and also that some people below the limit now have criminal records.
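    The precision loss is easy to check with a bit of arithmetic. A quick Python sketch (the divide-by-256 and the further halving are the scale factors the report describes; everything else here is illustrative):

```python
# Every reading a 12-bit A/D converter can produce:
adc_values = range(4096)

# What survives the integer division by 256:
coarse = {v // 256 for v in adc_values}
print(len(coarse))  # 16 distinct values remain out of 4096

# The further halving applied in the IR calculation:
coarser = {v // 256 // 2 for v in adc_values}
print(len(coarser))  # 8 distinct values
```

    A 12-bit converter can distinguish 4096 levels; after these divisions the software can only ever report a handful of coarse buckets, which is why the report speaks of results limited to small, discrete values.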
