Encryption / Open Source / Privacy / Software

OpenSSL Cleanup: Hundreds of Commits In a Week 379

A new submitter writes with this news snipped from BSD news stalwart undeadly.org: "After the news of heartbleed broke early last week, the OpenBSD team dove in and started axing it into shape. Leading this effort are Ted Unangst (tedu@) and Miod Vallat (miod@), who are neck-and-neck on a pure commit-count basis, each with around 50 commits in this part of the tree in the week since Ted's first commit in this area. They are followed closely by Joel Sing (jsing@), who is systematically going through every nook and cranny and applying some basic KNF. Next in line are Theo de Raadt (deraadt@) and Bob Beck (beck@), who have both been doing a lot of cleanup, ripping out weird layers of abstraction for standard system or library calls. ... All combined, there've been over 250 commits cleaning up OpenSSL. In one week." You can check out the stats, in progress.
  • I would think (Score:2, Insightful)

    by Pikoro ( 844299 )

Well, I would think that this mostly has to do with publicity. Once someone calls your software into question in a very public light, you will be more willing to go through your project with a fine-toothed comb and clean up all that old cruft you've been meaning to clear out.

    This is not a sign of inherent insecurity, but one of obvious house cleaning.

    • Re:I would think (Score:5, Informative)

      by Anonymous Coward on Sunday April 20, 2014 @06:51AM (#46798619)

      Note that OpenSSL isn't part of the OpenBSD project. It's a separate project.

    • Re:I would think (Score:5, Informative)

      by badger.foo ( 447981 ) <peter@bsdly.net> on Sunday April 20, 2014 @07:01AM (#46798661) Homepage
This is actually the OpenBSD developers diving in because the upstream (OpenSSL) was unresponsive. If you look at the actual commits, you will see removal of dead code such as VMS-specific hacks, but also weeding out of a lot of fairly obvious bugs and unsafe practices: trying to work around the mythical slow malloc, feeding your private key to the randomness engine, use-after-free, and so on.

It looks like it's been a while since anybody did much of anything besides half-hearted scratching in very limited parts of the code. This is a much-needed effort which is likely to end up much like OpenSSH: maintained mainly as part of OpenBSD, but available to any takers. We should expect to see a lot more activity before the code base is declared stable, but by now it's clear that the burden of main source maintainership has moved to a more responsive and responsible team.
      • by smash ( 1351 )
Yup. I can't believe that there were such dodgy trade-offs made for SPEED (at the expense of code readability and simplicity) in OpenSSL. SSL/TLS is one of the things I don't care about speed on. I want a secure connection - I'll take my ADSL running like a 28.8k modem if that's what it takes.
        • Re:I would think (Score:5, Insightful)

          by arth1 ( 260657 ) on Sunday April 20, 2014 @11:36AM (#46799539) Homepage Journal

Yup. I can't believe that there were such dodgy trade-offs made for SPEED (at the expense of code readability and simplicity) in OpenSSL.

          At least a couple of reasons:
- First of all, OpenSSL was designed with much slower hardware in mind. MUCH slower. And much of it is still in use - embedded devices that last for 15+ years, for example.
          - Then there's the problem that while you can dedicate your PC to SSL, the other end seldom can. A single server may serve hundreds or thousands of requests, and doesn't have enough CPUs to dedicate one to each client. Being frugal with resources is very important when it comes to client/server communications, both on the network side and the server side.

Certain critical functionality should be written highly optimized in low-level languages, with out-of-the-box solutions for cutting Gordian knots and reducing delays.
A problem arises when you get code contributors who think high-level but write low-level, like in this case. Keeping unerring mental track of what's data, pointers, pointers to pointers, and pointers to array elements isn't just a good idea in C - it's a must.
But doing it correctly does pay off. The often-repeated mantra that high-level language compilers do a better job than humans isn't true, and doesn't become true through repetition. The compilers can do no better than the people programming them, and for a finite-size compiler, the optimizations are generic, not specific. A good low-level programmer can apply knowledge that the compiler doesn't have.
The downside is higher risk - the programmer has to be truly good, and has to understand the complete impact of any code change. And the APIs have to be written in stone, so an optimization doesn't break something when an API changes.

          • Re:I would think (Score:4, Insightful)

            by am 2k ( 217885 ) on Sunday April 20, 2014 @02:17PM (#46800473) Homepage

The often-repeated mantra that high-level language compilers do a better job than humans isn't true, and doesn't become true through repetition. The compilers can do no better than the people programming them, and for a finite-size compiler, the optimizations are generic, not specific. A good low-level programmer can apply knowledge that the compiler doesn't have.

While I agree, there are also specific cases where a human cannot reasonably do an optimisation that a compiler has no trouble with. For example, a CPU can parallelize subsequent operations that are independent - ones that talk to different units (floating-point math, integer math, memory access) and don't use the same registers. A human usually thinks in sequences, which means using the result of one operation as the input to the next. Everything else would be unreadable source.

Finding independent operations and re-ordering instructions is easy for compilers, and the optimized result is under no obligation to be readable.

            C tries to find a middle ground there, where the user still has a lot of possibilities to control the outcome, but the compiler knows enough about the code to do some optimizations. The question is where the correct balance lies.
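
As a rough illustration (a hypothetical example, not OpenSSL code), the two accumulations below form independent dependency chains, so a compiler or out-of-order CPU can interleave them even though the source reads as a plain sequence:

    #include <stddef.h>

    /* The two running totals use disjoint data, so the compiler can
     * schedule the multiply/add of one chain while the other is still
     * in flight. */
    double dot_and_sum(const double *a, const double *b, size_t n,
                       double *sum_out)
    {
        double dot = 0.0, sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            dot += a[i] * b[i]; /* chain 1: depends only on dot */
            sum += b[i];        /* chain 2: independent of chain 1 */
        }
        *sum_out = sum;
        return dot;
    }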

        • Re:I would think (Score:4, Interesting)

          by beelsebob ( 529313 ) on Sunday April 20, 2014 @02:37PM (#46800575)

          Actually, you (oddly) do very much care about speed in OpenSSL. One of the most successful types of attack against security algorithms involves measuring how long it takes to do things, and inferring the complexity of the calculation from that. Taking any significant amount of time makes measurement easier, and errors smaller, and hence this type of attack easier.
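
The standard countermeasure is to make timing independent of the data. A minimal sketch of the idiom (a hypothetical helper, not OpenSSL's actual routine):

    #include <stddef.h>

    /* Constant-time comparison: always touches every byte, so the
     * running time leaks nothing about where the first mismatch
     * occurs.  Returns 0 if equal, nonzero otherwise. */
    int ct_memcmp(const unsigned char *a, const unsigned char *b, size_t len)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];
        return diff;
    }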

      • Re:I would think (Score:5, Informative)

        by Halo1 ( 136547 ) on Sunday April 20, 2014 @08:47AM (#46798911)

        This is actually the OpenBSD developers diving in because the upstream (OpenSSL) was unresponsive. If you look at the actual commits, you will see removal of dead code such as VMS-specific hacks

        That code is not dead, there are actually still people using OpenSSL on OpenVMS and actively providing patches for it: https://www.mail-archive.com/o... [mail-archive.com]

        • Does OpenVMS still require the byzantine workarounds that were in OpenSSL, or can it compile modern software without substantial changes?

          I think part of the problem is that the OpenSSL developers are publishing code paths that they never test; this was tedu@'s original frustration when trying to disable the OpenSSL internal memory management; there was a knob to turn that nobody had tested, and the code was too hard to read to make the bug obvious.

          If there's a demand for OpenVMS SSL libraries, they obviousl

          • by Halo1 ( 136547 )

            Does OpenVMS still require the byzantine workarounds that were in OpenSSL, or can it compile modern software without substantial changes?

The message I linked to at least adds several lines to a file called "symhacks.h" to deal with limits on the length of symbol names (probably required due to a limitation of the object file format used on that platform, and hence not easily resolvable by changing the compiler or linker).
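
The pattern in symhacks.h looks roughly like this (the symbol names below are made up for illustration): map an over-long public name onto a shorter internal one at the preprocessor level, so object formats with short symbol-name limits can still link the library.

    /* Hypothetical example of the symhacks.h pattern, not a real entry: */
    #undef  CRYPTO_set_dynamic_locking_create_callback
    #define CRYPTO_set_dynamic_locking_create_callback CRYPTO_set_dynlock_create_cb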

            I think part of the problem is that the OpenSSL developers are publishing code paths that they never test;

            Conversely, I think part of the current cleanup is that it's not just a cleanup of bad/dangerous code, but also throwing away functionality/support that the people performing said cleanup personally don't con

        • That code is not dead, there are actually still people using OpenSSL on OpenVMS and actively providing patches for it

          Then maybe they should be maintaining their own version of OpenSSL, rather than inflicting their unnecessary complexity on the other 99.9% of users and dragging down everyone else's code quality along with them.

      • Re:I would think (Score:5, Insightful)

        by Enigma2175 ( 179646 ) on Sunday April 20, 2014 @09:57AM (#46799145) Homepage Journal

This is actually the OpenBSD developers diving in because the upstream (OpenSSL) was unresponsive. If you look at the actual commits, you will see removal of dead code such as VMS-specific hacks, but also weeding out of a lot of fairly obvious bugs and unsafe practices: trying to work around the mythical slow malloc, feeding your private key to the randomness engine, use-after-free, and so on.

It looks like it's been a while since anybody did much of anything besides half-hearted scratching in very limited parts of the code. This is a much-needed effort which is likely to end up much like OpenSSH: maintained mainly as part of OpenBSD, but available to any takers. We should expect to see a lot more activity before the code base is declared stable, but by now it's clear that the burden of main source maintainership has moved to a more responsive and responsible team.

But the whole heartbleed issue was caused by someone who was doing more than "half-hearted scratching": he was adding an entirely new feature (heartbeats). Does anyone else think that hundreds of commits in a week is a BAD thing? It seems to me that committing that much code means each change doesn't get as much review as it would if the changes were committed gradually. Poor review is what caused this problem in the first place; they run the risk of adding another critical vulnerability.

        • Re:I would think (Score:4, Insightful)

          by Chaos Incarnate ( 772793 ) on Sunday April 20, 2014 @10:45AM (#46799291) Homepage
          The goal here isn't to review individual commits; it's to get the code to a more maintainable place so that it can be audited as a whole for the vulnerabilities that have cropped up over the years.
        • by sjames ( 1099 )

          This is being done in a fork of the code. Step one is to chainsaw out the pile of deadwood and obvious errors as well as a particularly nasty 'feature' that made heartbleed more than a curiosity (that would be the replacement malloc/free).

Looking at the commits, in some cases they're zeroing pointers after free to make the code fail hard on use-after-free bugs, rather than usually getting away with them.
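
A minimal sketch of that idiom (the struct and field names here are hypothetical):

    #include <stdlib.h>

    struct conn { unsigned char *session_buf; }; /* hypothetical */

    /* Fail-hard idiom: NULL the pointer at the point of free, so a
     * later use-after-free dereferences NULL and crashes immediately,
     * instead of silently reading memory the allocator has already
     * recycled. */
    static void conn_clear_session(struct conn *c)
    {
        free(c->session_buf);
        c->session_buf = NULL;
    }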

    • Re:I would think (Score:5, Interesting)

      by khchung ( 462899 ) on Sunday April 20, 2014 @07:33AM (#46798757) Journal

Well, I would think that this mostly has to do with publicity. Once someone calls your software into question in a very public light, you will be more willing to go through your project with a fine-toothed comb and clean up all that old cruft you've been meaning to clear out.

      This is not a sign of inherent insecurity, but one of obvious house cleaning.

And how many bugs and vulnerabilities will they put in with such a high volume of commits in such a short time?

      - If a change is only "house cleaning" which is unrelated to security, why do it in such a rush?

      - If a change is security related, and obviously needed, then why wasn't it made earlier? Didn't that make a mockery of all the "many eyes" arguments oft touted in favor of Open Source?

      - If a change is security related and non-obvious, then won't doing it in such a rush probably introduce new bugs/vulnerability into the code?

      No matter how you look at it, making so many changes in a rush is not a good idea.

      • Re:I would think (Score:5, Informative)

        by serviscope_minor ( 664417 ) on Sunday April 20, 2014 @08:02AM (#46798815) Journal

And how many bugs and vulnerabilities will they put in with such a high volume of commits in such a short time?

        You mean how many will they remove? They're largely replacing nasty portability hacks and unsafe constructs with safe, clean code. The chances are they'll be removing more bugs than they are adding.

        Secondly, this is phase 1: massive cleanup. Once they've done that they are then planning on auditing the code.

If a change is only "house cleaning" which is unrelated to security, why do it in such a rush?

It is security related: they can't audit the code (an audit is very overdue) until it's clean.

        If a change is security related, and obviously needed, then why wasn't it made earlier?

Good question. Security people have been complaining about the state of OpenSSL for years. But they've always had other day jobs. I guess now there's the incentive.

        Didn't that make a mockery of all the "many eyes" arguments oft touted in favor of Open Source?

Nope. Once the bug was noticed, it was fixed very quickly: i.e., it was a shallow bug. If you think that phrase means OSS is bug-free, you have misunderstood it.

        - If a change is security related and non-obvious, then won't doing it in such a rush probably introduce new bugs/vulnerability into the code?

        No, the code is too unclean for a lot of stuff to be obvious. They can't audit it until it is clean. There is a chance they will introduce some problems, but given the nature of the changes and the people involved it's likely they'll fix more than they introduce.

        No matter how you look at it, making so many changes in a rush is not a good idea.

        Seems like a fine idea to me.

Also, this does not seem like a rush. Competent people who can write good code quickly, committing changes as they are found, can produce huge numbers of commits.

          They just happen to have a reason to make all of the changes people have been suggesting for years to make it clean.

          "rush" was a false premise. Mentioned twice, probably for maximum troll, because rush code is bad. Experienced coders writing in familiar, known patterns, is not rushed.

      • Re:I would think (Score:5, Insightful)

        by InvalidError ( 771317 ) on Sunday April 20, 2014 @08:12AM (#46798843)

From the looks of it, many of the (potential) bugs in OpenSSL are caused by the use of a custom memory-allocation scheme instead of a standard C allocator. Removing the custom memory management in favor of standard memory management alone implies dozens if not hundreds of relatively trivial code changes at all the places where the custom (de-)allocator gets used. In the process of tracking all of these down, they come across stuff that does not look right and fix it while they are already in there.
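
The mechanical shape of many of those changes, shown as a simplified before/after sketch (real call sites vary; OPENSSL_malloc/OPENSSL_free are the project's actual wrapper names):

    #include <stdlib.h>
    #include <openssl/crypto.h>

    void before_and_after(size_t len)
    {
        unsigned char *buf;

        /* Before: the project-private wrapper, backed by its own
         * freelist, hides allocations from the platform allocator's
         * mitigations. */
        buf = OPENSSL_malloc(len);
        OPENSSL_free(buf);

        /* After: the standard C allocator, so tools and hardened
         * mallocs (e.g. OpenBSD's) see every allocation and free. */
        buf = malloc(len);
        free(buf);
    }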

        As for why so many bugs, "so many eyes" only works if you still have tons of people actively participating in the project's development. At a glance, it seems like the OpenBSD guys are saying the OpenSSL project was getting stale. Stale projects do not have anywhere near as many eyes going through their code nor as many people actively looking for potential bugs to fix before they get reported in the wild.

        In short: OpenSSL was long overdue for a make-over.

        • As for why so many bugs, "so many eyes" only works if you still have tons of people actively participating in the project's development. At a glance, it seems like the OpenBSD guys are saying the OpenSSL project was getting stale. Stale projects do not have anywhere near as many eyes going through their code nor as many people actively looking for potential bugs to fix before they get reported in the wild.

          Yes the "logic" used by many in this thread is specious at best.

          Premise: when there are many eyes looking at open source, it leads to more bugs getting fixed.

          Faulty reasoning (of too many people): this project didn't have many eyes, therefore the premise is false. Herp derp.

          Correct reasoning: when the condition of "many eyes" was met, the premise is shown to be true.

          Conclusion: some people dislike Open Source for ideological reasons and saw this as a chance to take cheap shots at it and sho

          • Premise: when there are many eyes looking at open source, it leads to more bugs getting fixed.
            Faulty reasoning (of too many people): this project didn't have many eyes, therefore the premise is false.
            Correct reasoning: when the condition of "many eyes" was met, the premise is shown to be true.

            Faulty common inference from the true premise: When source is open, many eyes will be looking at it.

            It seems the reverse occurs: People start to trust the *small* group actually doing work on any particular project, and since it is nobody's assigned responsibility to review it, nobody does. Yes, when a fire occurs there are many volunteer firemen; but that's a little late.

        • From the looks of it, many of the (potential) bugs in OpenSSL are caused by the use of a custom memory allocation scheme instead of a standard C allocator

Not necessarily - when I saw a commit that said "removed use after free" (i.e. still using a structure after it had been freed), I had to think the code is just generally sloppy.

I saw that too. It does seem sloppy, but of course the memory was not actually "free" - it was sitting in the LIFO freelist.
It was an un-dead chunk of memory :)
The irony behind the argument that "many eyes" didn't work here is that the code was only tested so thoroughly [readwrite.com] because it was open source. We have no idea how many bugs like heartbleed there might be in proprietary libraries that simply haven't been found yet (outside the NSA).
From a brief glance, it seems the freelist was barely useful: if the chunk size you wanted was not the chunk size of the elements in the list, you had to resort to a normal malloc anyway. It seems the size of the first entry in the freelist LIFO dictated the size of all the chunks in the list - a lot of effort for little return.
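
A simplified sketch of how such a LIFO freelist behaves (not the actual OpenSSL data structure): the first buffer cached fixes the chunk size, and a request for any other size falls straight through to malloc.

    #include <stdlib.h>

    struct fl_entry { struct fl_entry *next; };

    struct freelist {
        struct fl_entry *head;
        size_t chunklen;        /* fixed by the first buffer ever cached */
    };

    static void *fl_get(struct freelist *fl, size_t len)
    {
        if (fl->head != NULL && len == fl->chunklen) {
            void *p = fl->head;
            fl->head = fl->head->next;
            return p;           /* recycled chunk, never unmapped or junked */
        }
        return malloc(len);     /* any other size: no reuse at all */
    }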
      • - If a change is security related, and obviously needed, then why wasn't it made earlier? Didn't that make a mockery of all the "many eyes" arguments oft touted in favor of Open Source?

        "Many eyes" rarely helps; you need to get the right eyes to look at a bug. If you follow vulnerabilities, you'll notice a handful of people find most of the bugs. The main advantage of open source is that the code is available for those eyes to view.

    • Re:I would think (Score:5, Interesting)

      by Richard_at_work ( 517087 ) on Sunday April 20, 2014 @07:34AM (#46798763)

As the other poster says, OpenSSL isn't an OpenBSD project - what is going on here is a full-blown fork of OpenSSL by the OpenBSD team, who are putting their money where their mouths are, because when the heartbleed bug came out it was noted that the issue could have been mitigated on OpenBSD if the OpenSSL team had used the system-provided memory allocation facilities.

      So this is less OpenSSL and much more OpenBSD SSL being created.

    • Re:I would think (Score:5, Insightful)

      by stor ( 146442 ) on Sunday April 20, 2014 @08:23AM (#46798861)

      IMNSHO this is more about the OpenBSD folks doing what they do best.

      I sincerely respect their approach to programming, especially with regards to security: rather than just introduce security layers and mechanisms they program defensively and try to avoid or catch fundamental, exploitable mistakes.

      Hats off to the OpenBSD folks.

Lotta work, good reason, good job guys.

  • by Anonymous Coward on Sunday April 20, 2014 @06:50AM (#46798615)

    Because "over 250 commits" in a week by a third party to a mature security product suggests either a superhuman level of understanding or that they're going to miss a lot of the implications of their changes.

  • by dutchwhizzman ( 817898 ) on Sunday April 20, 2014 @06:54AM (#46798633)
From what I understood earlier, this will be a fork of the official OpenSSL release - or will all these patches be incorporated into "generic" OpenSSL, and not just the OpenBSD implementation?
    • by badger.foo ( 447981 ) <peter@bsdly.net> on Sunday April 20, 2014 @07:06AM (#46798679) Homepage
The work by the OpenBSD developers happens in the OpenBSD tree. Whether or not the OpenSSL project chooses to merge the changes back into their tree remains to be seen. Given the activity level in the OpenSSL tree lately, I find it more likely that the primary source of a maintained open-source SSL library will shift to the OpenBSD project. To the extent that portability goo is needed, it will likely be introduced after the developers consider the code base stable enough.
      • by toonces33 ( 841696 ) on Sunday April 20, 2014 @07:18AM (#46798715)

Well, they seem to be ripping out a lot of things related to portability, so my guess is that this new effort is a dead end that the rest of us will never see. All the OpenBSD developers care about is that the thing works on OpenBSD.

        • by petermgreen ( 876956 ) <plugwash.p10link@net> on Sunday April 20, 2014 @07:24AM (#46798735) Homepage

We may see a model similar to OpenSSH, where the core code in OpenBSD is kept free of "portability goop" and a separate group maintains a "portable" version based on the OpenBSD version.

        • by smash ( 1351 ) on Sunday April 20, 2014 @07:33AM (#46798751) Homepage Journal
Not necessarily. They are ripping out a lot of crap, much of which is portability done badly. The priority, it appears, is to get back to a minimalist, secure code base, and then re-port it (to selected, actually used architectures - not big-endian x86, for example, which was some of the code removed) as time permits.
It's good that they are ripping out some of the portability code. For most people, x86-64 and Linux/BSD/Windows support should be enough these days. Many OSS projects are overly portable and unnecessarily brag about how they support all the crusty HP-UX and SGI workstations with ages-old libraries and special legacy implementations. By cutting that support out, problems are avoided and the code is made cleaner.
            • by laffer1 ( 701823 )

              Ironically, OpenBSD is one of the projects that supports a lot of crusty old architectures using the logic that it helps them find bugs.

This logic is often used by Linux people to justify not supporting *BSD. I don't agree with the crusty old platform argument.

        • by serviscope_minor ( 664417 ) on Sunday April 20, 2014 @08:08AM (#46798825) Journal

          Well they seem to be ripping out a lot of things related to portability, so my guess is that this new effort is a dead end that the rest of us will never see.

          No: OpenBSD is a straightforward, clean, modern unix.

They are ripping out all the stuff for portability to ancient unix and even long-obsolete non-unix platforms.

Much software compiles cleanly on OpenBSD, FreeBSD and Linux. If they do it well - and every interaction I've had with OpenBSD code indicates that they will - it will be very easy to port it to Linux (and other modern operating systems).

          I expect what will happen is they will get it working on OpenBSD with enough API compatibility to compile the ports tree. Once it begins to stabilise, I expect people will maintain branches with patches to allow portability to other operating systems.

Historical portability causes hideous rot. I know: I've had it happen to me. There are old systems out there so perverse, they poison almost every part of your code. I think a semi-clean break like this (keep the good core, mercilessly rip out historically accumulated evil) is necessary.

          • Re: (Score:3, Insightful)

            by cold fjord ( 826450 )

            There are old systems out there so perverse, they poison almost every part of your code

            There are people out there deeply attached to their 6, 9, or 12 bit bytes and 36 or 60 bit words, you insensitive clod! ;)

          • by laffer1 ( 701823 )

            I would argue that things don't just compile on FreeBSD. In fact, if you change the uname to something else, many things don't work. There are a lot of FreeBSD specific hacks in source code from many popular projects.

            I would know since I have to deal with it all the time in my BSD project.

            • I would argue that things don't just compile on FreeBSD. In fact, if you change the uname to something else, many things don't work. There are a lot of FreeBSD specific hacks in source code from many popular projects.

To some extent, yes. However, much of encryption is heavily algorithmic code, and that should be portable. When I say old, perverse systems, I'm referring to really old, perverse systems. Like ones so bad they couldn't even rely on something as deeply embedded in the C standard as malloc() to wo

  • it's a good effort (Score:5, Interesting)

    by tero ( 39203 ) on Sunday April 20, 2014 @06:59AM (#46798647)

    Right now, I think the team is mostly focused on having "something usable" in OpenBSD and I doubt they care too much about anything else outside their scope.

Having said that - forking OpenSSL into something usable and burning the remains with fire is a great idea; however, there is considerable risk that the rush will cause new bugs, even though right now those commits have mostly been pulling out old crap.

    Fixing the beast is going to take a long while and several things will need to happen:
- The upstream hurry to put more crap into the RFCs needs to cease for a while. We don't need more features at the moment; we need stability and security.
- Funding. The project needs to be funded somehow. I think a model similar to the Linux Foundation might work - as long as they find suitable project leads. But the major players need to agree on this - and that's easier said than done (who will even pull them to the table?).
- Project team. Together with funding, we need a stable project team. Writing good crypto code in C is bloody hard, so the team needs to be on the ball - all the time. And the modus operandi should be "refuse features, increase quality". That requires a strong project lead.
- Patience. Fixing it is a long process, so you can't go into it hastily. You need to start somewhere (and here I applaud the OpenBSD team), but to get it done, assuming the above is in place, expect 1-3 years of effort.

    • by aliquis ( 678370 )

      refuse features, increase quality

Excuse me while I smile and laugh inside as I think about the different projects and their views or behavior in that regard.

  • by beezly ( 197427 ) on Sunday April 20, 2014 @07:03AM (#46798671)

The article doesn't make it completely clear that this doesn't have much to do with fixing problems in OpenSSL itself.

Commits to the true OpenSSL source can be seen through the web interface at https://github.com/openssl/ope... [github.com]. What the article is talking about is tidying up the version that is built into OpenBSD. Not that that isn't worthwhile work, but it's unlikely to fix many hidden problems in OpenSSL itself, unless the OpenBSD devs find something and hand it back upstream.

    • by beezly ( 197427 )

      By "fixing SSL" I meant "fixing OpenSSL". Duh! :(

Take a look at the actual commits. Quite a bit of it is 'KNF', but far from all of it. There's a lot of bug removal that will benefit everyone.
  • Comment removed (Score:4, Interesting)

    by account_deleted ( 4530225 ) on Sunday April 20, 2014 @07:04AM (#46798677)
    Comment removed based on user account deletion
    • by Anonymous Coward on Sunday April 20, 2014 @07:55AM (#46798793)

      You don't have to wonder why. A quick search shows that they've already blogged about why Coverity didn't detect heartbleed.
      http://security.coverity.com/blog/2014/Apr/on-detecting-heartbleed-with-static-analysis.html

    • by ledow ( 319597 ) on Sunday April 20, 2014 @08:29AM (#46798875) Homepage

      Because static analysis cannot catch all problems.

      It's as simple as that.

      Their "fix" is to mark all byte-swapping as "tainted" data... basically it's a heuristic they've decided on, not proof of foul play (which is almost impossible to get on static analysis of someone else's code).

      Relying on things like Coverity to find everything will always end in disappointment. What you do is fix what it finds, when and where it's a problem. The fact is, they simply had no way to detect this issue whatsoever, but fudged one for now. The next "big hole" will be just the same.

All due credit to them: Coverity is a very powerful and helpful tool. But you can't just give the impression that because something has been "scanned by Coverity" (or Symantec Antivirus, or Malwarebytes, or ANYTHING AT ALL WHATSOEVER) it's "safe". That's just spreading false confidence in duff code.
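
For context, the shape of the defect (heavily simplified here, not the verbatim OpenSSL code) is why the byte-swap heuristic was chosen: the length arrives byte-swapped off the wire and is then trusted without being checked against the record size.

    #include <string.h>

    void heartbeat_sketch(const unsigned char *p, unsigned char *out,
                          size_t record_len)
    {
        size_t payload = ((size_t)p[0] << 8) | p[1]; /* attacker-supplied length */

        (void)record_len; /* the missing check was, in essence:
                             if (payload + overhead > record_len) drop; */
        memcpy(out, p + 2, payload); /* overreads the record if the length lies */
    }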

How could a handful of developers just "run through the code" and fix everything that easily? And do it quickly, without introducing other bugs?

I am not a developer, but I can remember writing software - whether in BASIC, Pascal or Perl - and going back to fix or extend something, seeing stuff and saying "Why did I do it that way?", and making changes that I'm honestly not sure were "improvements" except that they seemed like improvements at the time, even though they may have created new bugs.

    I don't know anyth

The main part of this is to tidy things up. One commit removes a load of custom functions and replaces them with a single include of unistd.h - really removing stuff that was put in way, way back because some platform didn't have unistd.h at the time. Similarly, they get rid of weird stuff that is more standardised today.

      I think the real code auditing and fixing will happen later.

Code fixes are all fine and well, but where the real thought needs to go is into how to verify these protocols. The basic problem with security is that "working" doesn't mean "secure". Most people focus on testing for "working", and given the bugs that have shown up in OpenSSL and its cousin in the last month or so, the problem is not that they don't work (that is, interoperate and transmit data) but that they have corner cases and API holes that are major security concerns. Some real thought needs to be

  • What's the tag for the NSA guy charged with putting holes back in? I'd like to follow how he's doing it.

    Seriously, it took 2 years to find the big one after it was committed. How much vetting have each of these 250 commits undergone? Who's watching the watchers?
      1. clean up
      2. tighten up
      3. inspect
      4. test
      5. field test

In a clean-up operation, you don't vet each change, especially when the change is reformatting rather than a real code change. It's clear from the commits I've looked at that the people doing this are working to eliminate the cruft that inevitably builds up in any project as it matures. See http://en.wikipedia.org/wiki/C... [wikipedia.org] -- you take baby steps, and check your work as you go.

      In the process of clean-up, of re-factoring, one may find and fix subtle bugs

  • by strredwolf ( 532 ) on Sunday April 20, 2014 @07:40AM (#46798777) Homepage Journal

A Tumblr site popped up a few days ago called OpenSSL Valhalla Rampage [opensslrampage.org]. The blogger there is going through all the commits and posting the juicy, funny comments. This includes killing... and re-killing... VMS support (which reminds me of Maxim 37: there is no such thing as overkill... [ovalkwiki.com]), stripping out now-pointless abstractions and optimizations of the unoptimizable, and more.

    • I love this :)

      My favorite comment:

now that knf carpet bombing is finished, switch to hand to hand combat. still not sure what to make of mysteries like this: for (i = 7; i >= 0; i--) { /* increment */

I'm wondering what is supposed to be mysterious about that code. The "/* increment */" comment seems to apply to the code inside the loop, not what is being done to the i variable, so I don't think that's it. Is it because the loop goes from 7 down to 0 instead of the other way around? I remember reading a programming book back in the 80's that advocated doing that for better speed, since the assembly code generated to compare to 0 was faster than comparing to some other integer (which seems to no longer be the case, and I suspect could even cause cache misses for a bigger loop, although I don't know enough about how CPUs fill the cache to know for sure).

        • by Megol ( 3135005 ) on Sunday April 20, 2014 @09:45AM (#46799091)

          I'm wondering what is supposed to be mysterious about that code. The "/* increment */" comment seems to apply to the code inside the loop, not what is being done to the i variable, so I don't think that's it. Is it because the loop goes from 7 down to 0 instead of the other way around? I remember reading a programming book back in the 80's that advocated doing that for better speed since the assembly code generated to compare to 0 was faster than comparing to some other integer (which seems to no longer be the case, and I suspect could even cause cache misses for a bigger loop, although I don't know enough about how CPUs fill the cache to know for sure).

Comparing to zero is faster on most architectures and is still a valid optimization. There shouldn't be any problems with cache misses either: if the architecture does stream detection, it should detect reversed streams too, and if it doesn't (only detecting actual misses), iterating in reverse isn't a problem.
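
For illustration (a hypothetical example, not the OpenSSL loop in question), the two functions below are equivalent; the count-down form lets the decrement itself set the processor flags, so no separate compare against a constant is needed:

    void copy_up(unsigned char *dst, const unsigned char *src)
    {
        for (int i = 0; i < 8; i++)   /* needs an explicit compare with 8 */
            dst[i] = src[i];
    }

    void copy_down(unsigned char *dst, const unsigned char *src)
    {
        for (int i = 7; i >= 0; i--)  /* decrement sets the zero/sign flags */
            dst[i] = src[i];
    }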

  • by Lehk228 ( 705449 )
    which one of the patches is the new NSA subversion?
  • Thank You (Score:5, Insightful)

    by Anonymous Coward on Sunday April 20, 2014 @08:08AM (#46798827)

Just a simple thank you to all the coders out there who donate their skills and time to produce this and other very important software - for free, folks! Thank you for making the world a better place.

For free? This January, the OpenBSD project itself almost had to halt operations because they didn't have the money to pay the power bill for the build servers. Thankfully, they did get a donation package to get over it.
  • Quantity doesn't equal quality
  • KNF can wait (Score:2, Insightful)

    by gatkinso ( 15975 )

    It is most annoying trying to hunt bugs while wading thru massive diffs caused by formatting changes.

    Deal with that later.

It's most annoying, and counter-productive, to audit code when the lack of formatting gets in the way. The first thing I do when I get a piece of messy code is run it through a beautifier. In one case, that one action made the bug shine like the sun on a clear day. And who audits using diffs? The audit needs to cover ALL the code.
  • list of changes (Score:5, Informative)

    by monkey999 ( 3597657 ) on Sunday April 20, 2014 @09:17AM (#46799001) Journal
    A summary of the changes is here [undeadly.org] :

    Changes so far to OpenSSL 1.0.1g since the 11th include:

    • Splitting up libcrypto and libssl build directories
    • Fixing a use-after-free bug
    • Removal of ancient MacOS, Netware, OS/2, VMS and Windows build junk
    • Removal of “bugs” directory, benchmarks, INSTALL files, and shared library goo for lame platforms
    • Removal of most (all?) backend engines, some of which didn’t even have appropriate licensing
    • Ripping out some windows-specific cruft
• Removal of various wrappers for things like sockets, snprintf, opendir, etc. to actually expose real return values (see the sketch after this list)
    • KNF of most C files
    • Removal of weak entropy additions
    • Removal of all heartbeat functionality which resulted in Heartbleed
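
The wrapper removal above is the kind of thing sketched below (a hypothetical wrapper, not the actual OpenSSL code): a variadic shim that discards the real return value, so callers cannot detect truncation. After the cleanup, call sites use snprintf(3) directly and check its result.

    #include <stdarg.h>
    #include <stdio.h>

    /* Hypothetical shape of the wrappers being removed: */
    static void wrapped_snprintf(char *buf, size_t len, const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        (void)vsnprintf(buf, len, fmt, ap); /* real return value thrown away */
        va_end(ap);
    }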

    See [twitter.com] also [freshbsd.org]:

    Do not feed RSA private key information to the random subsystem as entropy. It might be fed to a pluggable random subsystem.... What were they thinking?!
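
A simplified illustration of the anti-pattern that commit removes (the buffer here is hypothetical, not the actual OpenSSL code; RAND_add(3) is the real API): secret key bytes are stirred into the PRNG as "entropy", and because the RAND_METHOD is pluggable, that hands key material to whatever engine happens to be installed.

    #include <openssl/rand.h>

    void bad_seed(const unsigned char *privkey_der, int len)
    {
        /* Anti-pattern: secret material leaves our control if a
         * pluggable random subsystem (engine) is in use. */
        RAND_add(privkey_der, len, 0.0);
    }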

    So far as all the "won't this introduce more bugs than it fixes" comments go, this is a recurring argument I have at work. I am of the "clean as you go", "refactor now" school.
Everyone else says "if it works, don't fix it" (IIWDFI), "don't rock the boat", etc.
Heartbleed is what happens when the IIWDFI attitude wins. Bugs lurk under layers of cruft; simple changes become nightmares of wading through a lava flow of wrappers around hacks around bodges.
Whenever anyone says IIWDFI, remind them that testing can only find a small proportion of possible bugs, so if you can't see whether it has bugs by reading the code, then no matter how many test cases it passes, it DOESN'T WORK.

  • by bokmann ( 323771 ) on Sunday April 20, 2014 @09:33AM (#46799053) Homepage

    With all the other tripe on this thread, I thought it necessary to say this loud and clear:

    Hey OpenSSL Contributors - thanks for your hard work on OpenSSL, and thanks for the hard work under this spotlight cleaning this up.

    Any serious software engineer with a career behind them has worked on projects with great source code, bad source code, and everything in between. It sounds like OpenSSL is a typical project with tons of legacy code where dealing with legacy is lower priority than dealing with the future. Subtracting out all the ideological debate and conspiracy theories, please realize there are plenty of 'less noisy' people out there who appreciate everything you're doing. And even more who would appreciate it if they understood the situation.

It's now time for companies who depend on OpenSSL (and other projects) to realize that open source software can lower their development costs, but some of that savings needs to be put back into the process, or we will all suffer from the "tragedy of the commons".

  • Now do the same to all the other important components of the OSS server software stack.
