Encryption Open Source Privacy Software

OpenSSL Cleanup: Hundreds of Commits In a Week

A new submitter writes with this news snipped from BSD news stalwart undeadly.org: "After the news of heartbleed broke early last week, the OpenBSD team dove in and started axing it into shape. Leading this effort are Ted Unangst (tedu@) and Miod Vallat (miod@), who are head-to-head on a pure commit count basis, each with around 50 commits in this part of the tree in the week since Ted's first commit in this area. They are followed closely by Joel Sing (jsing@), who is systematically going through every nook and cranny and applying some basic KNF. Next in line are Theo de Raadt (deraadt@) and Bob Beck (beck@), who have both been doing a lot of cleanup, ripping out weird layers of abstraction for standard system or library calls. ... All combined, there've been over 250 commits cleaning up OpenSSL. In one week." You can check out the stats, which are still in progress.
  • I would think (Score:2, Insightful)

    by Pikoro ( 844299 ) <init&init,sh> on Sunday April 20, 2014 @06:43AM (#46798591) Homepage Journal

    Well, I would think that this is mostly to do with publicity. Once someone calls your software into question in a very public light, you will be more willing to go through your project with a fine-toothed comb and clean up all that old cruft you've been meaning to clear out.

    This is not a sign of inherent insecurity, but one of obvious house cleaning.

  • by Anonymous Coward on Sunday April 20, 2014 @06:50AM (#46798615)

    Because "over 250 commits" in a week by a third party to a mature security product suggests either a superhuman level of understanding or that they're going to miss a lot of the implications of their changes.

  • by Anonymous Coward on Sunday April 20, 2014 @07:02AM (#46798665)

    The fact that these 250 commits are mostly coding-style changes was conveniently hidden behind the acronym "KNF". Honi soit qui mal y pense! ("Shame on him who thinks evil of it.")

  • Re:I would think (Score:2, Insightful)

    by Anonymous Coward on Sunday April 20, 2014 @07:03AM (#46798669)

    Kinda like closing the barn door after all the horses are out.

    The comparison is not even remotely valid. Even if it were, what would your suggestion be? That they do nothing about it? That they proceed at the pace they already were? That they go on like nothing happened? They are taking this seriously and acting accordingly.

    You do not need to mock them for a mistake. Or are you so perfect that you have never made a coding mistake? ROFL

    Better late than never I guess.

    Yes, better late than never. If they had known about the bug earlier, I'd bet money they would have fixed it earlier. Bad press or not.

  • by drolli ( 522659 ) on Sunday April 20, 2014 @07:16AM (#46798711) Journal

    I can't speak for C, but in Java the tools which warn you about potentially dangerous constructs are great (e.g. Sonar). You can easily identify many *suspicious* constructs and change them to something safer. 250 commits in a week by 4 devs on a moderately sized project doesn't seem like much to me, and much of it is on the "quality" rather than the "quantity" side.

    What annoys me is that - with all due respect - the companies which embed OpenSSL in their products could have done a quality review of the code. It is, after all, a fundamental library.
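
    For illustration, here is the kind of construct such analyzers flag on sight. This is a hypothetical C sketch, not code from the OpenSSL tree; the function names are made up:

        /* Hypothetical examples of constructs static analyzers flag.
         * Not from the OpenSSL tree. */
        #include <stdlib.h>
        #include <string.h>

        char *dup_buffer(const char *src, size_t len)
        {
            char *dst = malloc(len);
            /* Warning 1: malloc() return value not checked, so the
             * memcpy() below may dereference NULL. */
            memcpy(dst, src, len);
            return dst;
        }

        int is_root(int uid)
        {
            /* Warning 2: assignment in a condition; '==' was almost
             * certainly intended instead of '='. */
            if (uid = 0)
                return 1;
            return 0;
        }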

  • by toonces33 ( 841696 ) on Sunday April 20, 2014 @07:18AM (#46798715)

    Well, they seem to be ripping out a lot of things related to portability, so my guess is that this new effort is a dead end that the rest of us will never see. All the OpenBSD developers care about is that the thing works on OpenBSD.

  • by aliquis ( 678370 ) on Sunday April 20, 2014 @07:19AM (#46798719)

    And if there are that many, would a new start not be better?

    How about no?

    Also, I don't see why lots of fixes would necessarily mean poor fixes. They are likely making what they consider obvious fixes to stuff they consider wrong. Or something like that. What do I know, really.

    Possibly they know what they are doing.

  • Thank You (Score:5, Insightful)

    by Anonymous Coward on Sunday April 20, 2014 @08:08AM (#46798827)

    Just a simple thank you to all the coders out there who donate their skills and time to produce this and other very important software, for free, folks! Thank you for making the world a better place.

  • Re:I would think (Score:5, Insightful)

    by InvalidError ( 771317 ) on Sunday April 20, 2014 @08:12AM (#46798843)

    From the looks of it, many of the (potential) bugs in OpenSSL are caused by the use of a custom memory allocation scheme instead of a standard C allocator. Removing the custom memory management in favor of standard memory management alone implies dozens if not hundreds of relatively trivial code changes in all the places where the custom (de-)allocator gets used (see the sketch after this comment). In the process of tracking all of these down, they come across stuff that does not look right and fix it while they are already in there.

    As for why so many bugs, "so many eyes" only works if you still have tons of people actively participating in the project's development. At a glance, it seems like the OpenBSD guys are saying the OpenSSL project was getting stale. Stale projects do not have anywhere near as many eyes going through their code nor as many people actively looking for potential bugs to fix before they get reported in the wild.

    In short: OpenSSL was long overdue for a make-over.
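
    A minimal sketch of the kind of change described above, assuming a freelist-style wrapper (the names are illustrative, not OpenSSL's actual code). The point is that a recycling allocator hides use-after-free from the platform allocator and from tools like valgrind:

        #include <stdlib.h>

        #define CHUNK 4096              /* every cached block is at least this size */

        struct chunk { struct chunk *next; };
        static struct chunk *freelist;

        /* Before: custom allocator that recycles freed blocks. */
        void *my_malloc(size_t n)
        {
            if (n <= CHUNK) {
                if (freelist) {         /* hand back a cached block */
                    void *p = freelist;
                    freelist = freelist->next;
                    return p;
                }
                return malloc(CHUNK);
            }
            return malloc(n);
        }

        void my_free(void *p)
        {
            /* Cache instead of releasing, so a use-after-free reads
             * stale data silently instead of crashing. (A real wrapper
             * would track block sizes; this is a sketch.) */
            struct chunk *c = p;
            c->next = freelist;
            freelist = c;
        }

        /* After the cleanup: call malloc() and free() directly, so the
         * system allocator and tools like valgrind can catch misuse. */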

  • Re:I would think (Score:5, Insightful)

    by stor ( 146442 ) on Sunday April 20, 2014 @08:23AM (#46798861)

    IMNSHO this is more about the OpenBSD folks doing what they do best.

    I sincerely respect their approach to programming, especially with regards to security: rather than just introduce security layers and mechanisms they program defensively and try to avoid or catch fundamental, exploitable mistakes.

    Hats off to the OpenBSD folks.

  • by ledow ( 319597 ) on Sunday April 20, 2014 @08:29AM (#46798875) Homepage

    Because static analysis cannot catch all problems.

    It's as simple as that.

    Their "fix" is to mark all byte-swapping as "tainted" data... basically it's a heuristic they've decided on, not proof of foul play (which is almost impossible to get on static analysis of someone else's code).

    Relying on things like Coverity to find everything will always end in disappointment. What you do is fix what it finds, when and where it's a problem. The fact is, they simply had no way to detect this issue whatsoever, but fudged one for now. The next "big hole" will be just the same.

    With all due credit to them, Coverity is a very powerful and helpful tool. But you can't just give the impression that because something has been "scanned by Coverity" (or Symantec Antivirus, or Malwarebytes, or ANYTHING AT ALL WHATSOEVER) it's "safe". That's just spreading false confidence in duff code.
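
    For the curious, this is roughly the bug shape the byte-swap heuristic is aimed at: a length that was byte-swapped out of a network packet (and is therefore attacker-controlled) flows into a copy with no bounds check. A simplified sketch, not the actual OpenSSL code:

        #include <string.h>
        #include <arpa/inet.h>

        void handle_heartbeat(const unsigned char *pkt, size_t pkt_len,
                              unsigned char *reply)
        {
            unsigned short claimed;

            memcpy(&claimed, pkt, 2);
            claimed = ntohs(claimed);   /* byte swap: treat result as tainted */

            /* BUG: 'claimed' may exceed the bytes actually present in pkt,
             * so this reads past the end of the buffer. */
            memcpy(reply, pkt + 2, claimed);

            /* FIX: check the tainted length against reality first, e.g.
             *   if ((size_t)claimed + 2 > pkt_len) return;  */
            (void)pkt_len;
        }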

  • KNF can wait (Score:2, Insightful)

    by gatkinso ( 15975 ) on Sunday April 20, 2014 @08:38AM (#46798899)

    It is most annoying trying to hunt bugs while wading through massive diffs caused by formatting changes.

    Deal with that later.

  • Re:I would think (Score:5, Insightful)

    by Kythe ( 4779 ) on Sunday April 20, 2014 @09:04AM (#46798967)
    All of what you say may be correct. But one major bug doesn't prove it. I do recall seeing quite a few life-or-death bugs in closed source projects over the years -- including stuff that most people use. So it's unclear to me whether you have additional evidence to support your statement, or are simply saying something self-serving.

    There's a reason why the plural of anecdote isn't evidence.

    On a related note, there's a reason why most of the most prominent security researchers seem to prefer open access to source code. I have yet to hear any of them change their mind over this bug.
  • Re:I would think (Score:5, Insightful)

    by Antique Geekmeister ( 740220 ) on Sunday April 20, 2014 @09:05AM (#46798971)

    > Multiple eyes on code, security, these are things that are great about open source, except they aren't. This is a prime example of how bugs get through anyhow, major bugs. So it is now shown beyond a shadow of anyones doubt, open source is NOT superior in these respects.

    Really, no. The horses are still pulling plows, and carts, and carriages, every day. The library is still in use in operating systems world wide.

    This is more like visiting the barn that had horses stolen and making sure the locks and doors actually work the way they should before it's trusted at all again.

  • by bokmann ( 323771 ) on Sunday April 20, 2014 @09:33AM (#46799053) Homepage

    With all the other tripe on this thread, I thought it necessary to say this loud and clear:

    Hey OpenSSL Contributors - thanks for your hard work on OpenSSL, and thanks for the hard work under this spotlight cleaning this up.

    Any serious software engineer with a career behind them has worked on projects with great source code, bad source code, and everything in between. It sounds like OpenSSL is a typical project with tons of legacy code where dealing with legacy is lower priority than dealing with the future. Subtracting out all the ideological debate and conspiracy theories, please realize there are plenty of 'less noisy' people out there who appreciate everything you're doing. And even more who would appreciate it if they understood the situation.

    It's now time for companies that depend on OpenSSL (and other projects) to realize that open source software can lower their development costs, but some of those savings need to be put back into the process or we will all suffer from the "tragedy of the commons".

  • Re:I would think (Score:5, Insightful)

    by Enigma2175 ( 179646 ) on Sunday April 20, 2014 @09:57AM (#46799145) Homepage Journal

    This is actually the OpenBSD developers diving in because upstream (OpenSSL) was unresponsive. If you look at the actual commits, you will see removal of dead code such as VMS-specific hacks, but also the weeding out of a lot of fairly obvious bugs and unsafe practices: working around the mythical slow malloc, feeding your private key to the randomness engine, use after free (sketched after this comment), and so on.

    It looks like it's been a while since anybody did much of anything besides half-hearted scratching in very limited parts of the code. This much-needed effort is likely to end up much like OpenSSH: maintained mainly as part of OpenBSD, but available to any takers. We should expect to see a lot more activity before the code base is declared stable, but by now it's clear that the burden of main source maintainership has moved to a more responsive and responsible team.

    But the whole heartbleed issue was caused by someone doing more than "half-hearted scratching": he was adding an entirely new feature (heartbeats). Does anyone else think that hundreds of commits in a week is a BAD thing? It seems to me that committing that much code means each change gets less review than it would if the changes were committed gradually. Poor review is what caused this problem in the first place; they run the risk of adding another critical vulnerability.
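
    Since "use after free" was mentioned above, here is a minimal illustration of that bug class (hypothetical code, not from OpenSSL). Under a recycling allocator the stale pointer quietly reads whatever now occupies the block:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            char *key = malloc(32);
            strcpy(key, "secret");
            free(key);                      /* 'key' is now dangling */

            char *other = malloc(32);       /* may reuse key's old block */
            strcpy(other, "public");

            /* BUG: depending on the allocator this prints "public",
             * garbage, or crashes -- undefined behavior either way. */
            printf("%s\n", key);

            free(other);
            return 0;
        }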

  • Re:I would think (Score:5, Insightful)

    by causality ( 777677 ) on Sunday April 20, 2014 @10:06AM (#46799169)

    disagree: mocking people for making mistakes they should have known better than to make is a way to help that person permanently try harder to avoid those mistakes in the future.

    with failure comes mockery, especially if you are skilled and it should never have happened.

    mistakes can't go unpunished, even if the person doing the punishing is yourself. you can't tell other people to back off; you deserve it. sit back, take it on the chin, and try harder next time, otherwise people won't have any reason to try, because the penalty for failure is barely noticeable.

    That's the old-school view, in which one's self-esteem is based on achievement of some kind. Those who achieved little or nothing had low self-esteem, and this was a principal incentive to identify one's own weaknesses and overcome them with directed effort. The extreme form is Japanese students throwing themselves off buildings (etc.) because their grades didn't quite measure up, making them nobodies.

    The newer view is that everyone is a special snowflake. No matter what. The extreme form is shown by the public schools that play soccer without keeping score, because scoring implies winners and losers and that might hurt someone's feelings.

    I mostly agree with you in that actions have consequences and you should accept the consequences of your own actions. Otherwise nothing really matters and there is no reason to improve yourself and you turn into one of these "perpetual victims" who never take responsibility for anything while simultaneously wondering why nothing ever changes. But that should be tempered with the fact that some mistakes are much more preventable (less understandable) than others, and as Orlando Battista once said, an error doesn't become a mistake until you refuse to correct it.

    There's no reason to metaphorically crucify someone for an honest mistake, but certainly there is going to be a reaction to it and people aren't going to like it. That's to be expected. It's reasonable to expect someone to accept that and yes, it is an incentive to learn something from the experience and be more careful in the future. If I were a programmer and found that completely unacceptable, I could always choose not to work on such an important project critical to the security of so many.

    As an aside: I think replying to you is much more edifying than being like the cowards who modded you down to -1 without once putting forth their own viewpoint which they clearly think is superior. There's too much of that going on at this site. There is no "-1 Disagree" mod for a reason.

  • Re:I would think (Score:4, Insightful)

    by Chaos Incarnate ( 772793 ) on Sunday April 20, 2014 @10:45AM (#46799291) Homepage
    The goal here isn't to review individual commits; it's to get the code to a more maintainable place so that it can be audited as a whole for the vulnerabilities that have cropped up over the years.
  • by cold fjord ( 826450 ) on Sunday April 20, 2014 @10:49AM (#46799311)

    There are old systems out there so perverse, they poison almost every part of your code

    There are people out there deeply attached to their 6, 9, or 12 bit bytes and 36 or 60 bit words, you insensitive clod! ;)

  • Re:I would think (Score:5, Insightful)

    by arth1 ( 260657 ) on Sunday April 20, 2014 @11:36AM (#46799539) Homepage Journal

    Yup. I can't believe that there were such dodgy trade-offs made for SPEED (at the expense of code readability and complexity) in OpenSSL.

    At least a couple of reasons:
    - First of all, OpenSSL was designed with much slower hardware in mind. MUCH slower. And much of it is still in use - embedded devices that last 15+ years, for example.
    - Then there's the problem that while you can dedicate your PC to SSL, the other end seldom can. A single server may serve hundreds or thousands of requests, and doesn't have enough CPUs to dedicate one to each client. Being frugal with resources is very important when it comes to client/server communications, both on the network side and the server side.

    Certain critical functionality should be written highly optimized in low-level languages, with out-of-the-box solutions for cutting Gordian knots and reducing delays.
    A problem is when you get code contributors who think high-level but write low-level, like in this case. Keeping unerring mental track of what's data, what's a pointer, what's a pointer to a pointer, and what's a pointer to an array element isn't just a good idea in C - it's a must.
    But doing it correctly does pay off. The often-repeated mantra that high-level language compilers do a better job than humans isn't true, and doesn't become true through repetition. The compilers can do no better than the people programming them, and for a finite-size compiler the optimizations are generic, not specific. A good low-level programmer can bring knowledge to bear that the compiler doesn't have (see the sketch after this comment).
    The downside is a higher risk: the programmer has to be truly good and understand the complete impact of any code change. And the APIs have to be written in stone, so an optimization doesn't break something when an API changes.
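
    One concrete example of knowledge the compiler doesn't have, sketched below (illustrative C, not OpenSSL code): without 'restrict', the compiler must assume the two pointers may overlap and generate conservative code; only the programmer knows they never do.

        #include <stddef.h>

        /* Compiler must assume dst may alias src, limiting vectorization. */
        void scale(float *dst, const float *src, size_t n, float k)
        {
            for (size_t i = 0; i < n; i++)
                dst[i] = src[i] * k;
        }

        /* The programmer promises the buffers never overlap, a fact the
         * compiler cannot prove across translation units on its own. */
        void scale_restrict(float *restrict dst, const float *restrict src,
                            size_t n, float k)
        {
            for (size_t i = 0; i < n; i++)
                dst[i] = src[i] * k;
        }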

  • Re:I would think (Score:4, Insightful)

    by am 2k ( 217885 ) on Sunday April 20, 2014 @02:17PM (#46800473) Homepage

    The often-repeated mantra that high-level language compilers do a better job than humans isn't true, and doesn't become true through repetition. The compilers can do no better than the people programming them, and for a finite-size compiler the optimizations are generic, not specific. A good low-level programmer can bring knowledge to bear that the compiler doesn't have.

    While I agree, there are also cases where a human cannot reasonably do an optimisation that a compiler has no trouble with. For example, a CPU can parallelize subsequent operations when they are independent: operations that use different units (floating point math, integer math, memory access) and don't touch the same registers. A human usually thinks in sequences, which means using the result of one operation as the input to the next. Anything else would be unreadable source.

    Finding independent operations and re-ordering instructions is easy for a compiler, and the optimized result has no requirement of being readable (see the sketch after this comment).

    C tries to find a middle ground there, where the user still has a lot of control over the outcome, but the compiler knows enough about the code to do some optimizations. The question is where the correct balance lies.
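
    As a small illustration of that point (hypothetical code): a human naturally writes the first loop, where every addition depends on the previous one; an optimizing compiler will happily transform an integer sum into something like the second, whose two independent chains the CPU can execute in parallel.

        #include <stddef.h>

        /* Readable and sequential: each add waits for the previous one. */
        long sum(const long *a, size_t n)
        {
            long s = 0;
            for (size_t i = 0; i < n; i++)
                s += a[i];
            return s;
        }

        /* What the optimizer effectively emits: two independent
         * accumulators let the CPU overlap the additions. */
        long sum_unrolled(const long *a, size_t n)
        {
            long s0 = 0, s1 = 0;
            size_t i;
            for (i = 0; i + 1 < n; i += 2) {
                s0 += a[i];
                s1 += a[i + 1];
            }
            if (i < n)
                s0 += a[i];
            return s0 + s1;
        }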
