OpenSSL Cleanup: Hundreds of Commits In a Week
A new submitter writes with this news snipped from BSD news stalwart undeadly.org: "After the news of Heartbleed broke early last week, the OpenBSD team dove in and started hacking it into shape. Leading this effort are Ted Unangst (tedu@) and Miod Vallat (miod@), who are head-to-head on a pure commit count basis, both having around 50 commits in this part of the tree in the week since Ted's first commit in this area. They are followed closely by Joel Sing (jsing@), who is systematically going through every nook and cranny and applying some basic KNF. Next in line are Theo de Raadt (deraadt@) and Bob Beck (beck@), who have both been doing a lot of cleanup, ripping out weird layers of abstraction for standard system or library calls. ... All combined, there have been over 250 commits cleaning up OpenSSL. In one week."
You can check out the stats, in progress.
I would think (Score:2, Insightful)
Well, I would think that this is mostly to do with publicity. Once someone calls your software into question in a very public light, you will be more willing to go through your project with a fine-toothed comb and clean up all that old cruft you've been meaning to clear out.
This is not a sign of inherent insecurity, but one of obvious house cleaning.
Re:I would think (Score:5, Informative)
Note that OpenSSL isn't part of the OpenBSD project. It's a separate project.
Re:I would think (Score:5, Informative)
It would look like it's been a while since anybody did much of anything besides half hearted scratching in very limited parts of the code. This is a very much needed effort which is likely to end up much like OpenSSH, maintained mainly as part of OpenBSD, but available to any takers. We should expect to see a lot more activity before the code base is declared stable, but by now it's clear that the burden of main source maintainership moved to a more responsive and responsible team.
Re: (Score:3)
Re:I would think (Score:5, Insightful)
Yup. I can't believe that there were such dodgy trade-offs made for SPEED (at the expense of code readability and simplicity) in OpenSSL.
At least a couple of reasons:
- First of all, OpenSSL was designed with much slower hardware in mind. MUCH slower. And much of it is still in use - embedded devices that last for 15+ years, for example.
- Then there's the problem that while you can dedicate your PC to SSL, the other end seldom can. A single server may serve hundreds or thousands of requests, and doesn't have enough CPUs to dedicate one to each client. Being frugal with resources is very important when it comes to client/server communications, both on the network side and the server side.
Certain critical functionality should be written highly optimized in low level languages, with out-of-the-box solutions for cutting Gordian knots and reducing delays.
A problem is when you get code contributors who think high level and write low level, like in this case. Keeping unerring mental track of what's data, what's a pointer, what's a pointer to a pointer, and what's a pointer to an array element isn't just a good idea in C - it's a must.
But doing it correctly does pay off. The often repeated mantra that high level language compilers do a better job than humans isn't true, and doesn't become true through repetition. The compilers can do no better than the people programming them, and for a finite size compiler, the optimizations are generic, not specific. And a good low level programmer can bring knowledge to bear that the compiler doesn't have.
The downside is a higher risk - the programmer has to be truly good, and understand the complete impact of any code change. And the APIs have to be written in stone, so an optimization doesn't break something when an API changes.
Re:I would think (Score:4, Insightful)
The often repeated mantra that high level language compilers do a better job than humans isn't true, and doesn't become true through repetition. The compilers can do no better than the people programming them, and for a finite size compiler, the optimizations are generic, not specific. And a good low level programmer can bring knowledge to bear that the compiler doesn't have.
While I agree, there are also specific cases where a human cannot reasonably do an optimisation a compiler has no troubles with. For example, a CPU can parallelize subsequent operations that are distinct, like talking to different units (floating point math, integer math, memory access) and not using the same registers. A human usually thinks in sequences, which require using the result of an operation as the input for the next operation. Everything else would be unreadable source.
Finding distinct operations and re-ordering commands is easy in compilers, and the optimized result has no requirement of being readable.
C tries to find a middle ground there, where the user still has a lot of possibilities to control the outcome, but the compiler knows enough about the code to do some optimizations. The question is where the correct balance lies.
Re:I would think (Score:4, Interesting)
Actually, you (oddly) do very much care about speed in OpenSSL. One of the most successful types of attack against security algorithms involves measuring how long it takes to do things, and inferring the complexity of the calculation from that. Taking any significant amount of time makes measurement easier, and errors smaller, and hence this type of attack easier.
Re:I would think (Score:5, Informative)
This is actually the OpenBSD developers diving in because the upstream (OpenSSL) was unresponsive. If you look at the actual commits, you will see removal of dead code such as VMS-specific hacks
That code is not dead, there are actually still people using OpenSSL on OpenVMS and actively providing patches for it: https://www.mail-archive.com/o... [mail-archive.com]
Re: (Score:3)
Does OpenVMS still require the byzantine workarounds that were in OpenSSL, or can it compile modern software without substantial changes?
I think part of the problem is that the OpenSSL developers are publishing code paths that they never test; this was tedu@'s original frustration when trying to disable the OpenSSL internal memory management; there was a knob to turn that nobody had tested, and the code was too hard to read to make the bug obvious.
If there's a demand for OpenVMS SSL libraries, they obviousl
Re: (Score:2)
Does OpenVMS still require the byzantine workarounds that were in OpenSSL, or can it compile modern software without substantial changes?
The message I linked to at least adds several lines to a file called "symhacks.h" to deal with limits regarding the length of symbol names (which is probably required due to a limitation of the used object file format on that platform, and hence not easily resolvable by changing the compiler or linker).
I think part of the problem is that the OpenSSL developers are publishing code paths that they never test;
Conversely, I think part of the current cleanup is that it's not just a cleanup of bad/dangerous code, but also throwing away functionality/support that the people performing said cleanup personally don't con
Re: (Score:3)
That code is not dead, there are actually still people using OpenSSL on OpenVMS and actively providing patches for it
Then maybe they should be maintaining their own version of OpenSSL, rather than inflicting their unnecessary complexity on the other 99.9% of users and dragging down everyone else's code quality along with them.
Re:I would think (Score:5, Insightful)
This is actually the OpenBSD developers diving in because the upstream (OpenSSL) was unresponsive. If you look at the actual commits, you will see removal of dead code such as VMS-specific hacks, but also weeding out a lot of fairly obvious bugs, unsafe practices such as trying to work around the mythical slow malloc, feeding your private key to the randomness engine, use after free, and so on.
It would look like it's been a while since anybody did much of anything besides half hearted scratching in very limited parts of the code. This is a very much needed effort which is likely to end up much like OpenSSH, maintained mainly as part of OpenBSD, but available to any takers. We should expect to see a lot more activity before the code base is declared stable, but by now it's clear that the burden of main source maintainership moved to a more responsive and responsible team.
But the whole heartbleed issue was caused by someone who was doing more than "half hearted scratching", he was adding an entirely new feature (heartbeats). Does anyone else think that hundreds of commits in a week is a BAD thing? It seems to me like committing that much code would make it so each change doesn't get as much of a review as it would if the changes were committed gradually. Poor review is what caused this problem in the first place, they run the risk of adding another critical vulnerability.
Re:I would think (Score:4, Insightful)
Re: (Score:3)
This is being done in a fork of the code. Step one is to chainsaw out the pile of deadwood and obvious errors as well as a particularly nasty 'feature' that made heartbleed more than a curiosity (that would be the replacement malloc/free).
Looking at the commits, in some cases they're zeroing pointers after free to make it fail hard on use after free bugs rather than usually getting away with it.
Heartbleed != malloc (Score:3)
heartbeats just exposed their faulty malloc replacement.
Heartbleed had nothing to do with their malloc replacement (at least not directly [*] ).
Heartbleed is just a very basic case of missing nested bounds checking. (They check bounds for the heartbeat request itself, but fail to check whether the super-structure containing the heartbeat - i.e., the packet - passes the bounds checks too. XKCD's explanation [xkcd.com] is actually spot-on: it's more or less equivalent to forgetting to check whether the requested number of characters exceeds the size of the speech bubble.)
This is n
Re:I would think (Score:5, Interesting)
Well, I would think that this is mostly to do with publicity. Once someone calls your software into question in a very public light, you will be more willing to go through your project with a fine toothed comb and clean up all that old cruft you've been meaning to clear out.
This is not a sign of inherent insecurity, but one of obvious house cleaning.
And how many bugs and vulnerabilities will they put in with such high volume of commits in such short time?
- If a change is only "house cleaning" which is unrelated to security, why do it in such a rush?
- If a change is security related, and obviously needed, then why wasn't it made earlier? Didn't that make a mockery of all the "many eyes" arguments oft touted in favor of Open Source?
- If a change is security related and non-obvious, then won't doing it in such a rush probably introduce new bugs/vulnerability into the code?
No matter how you look at it, making so many changes in a rush is not a good idea.
Re:I would think (Score:5, Informative)
And how many bugs and vulnerabilities will they put in with such high volume of commits in such short time?
You mean how many will they remove? They're largely replacing nasty portability hacks and unsafe constructs with safe, clean code. The chances are they'll be removing more bugs than they are adding.
Secondly, this is phase 1: massive cleanup. Once they've done that they are then planning on auditing the code.
If a change is only "house cleaning" which is unrelated to security, why do it in such a rush?
It is security related: they can't audit the code (an audit is very overdue) until it's clean.
If a change is security related, and obviously needed, then why wasn't it made earlier?
Good question. Security people have been complaining about the state of OpenSSL for years. But they've always had other day-jobs. I guess now there is the incentive.
Didn't that make a mockery of all the "many eyes" arguments oft touted in favor of Open Source?
Nope. Once the bug was noticed it was fixed very quickly: i.e., it was a shallow bug. If you think that phrase means OSS is bug free, you have misunderstood it.
- If a change is security related and non-obvious, then won't doing it in such a rush probably introduce new bugs/vulnerability into the code?
No, the code is too unclean for a lot of stuff to be obvious. They can't audit it until it is clean. There is a chance they will introduce some problems, but given the nature of the changes and the people involved it's likely they'll fix more than they introduce.
No matter how you look at it, making so many changes in a rush is not a good idea.
Seems like a fine idea to me.
Re: (Score:3)
Also, this does not seem like a rush. Competent people who can write good code quickly, committing changes as they are found, can result in huge numbers of commits.
They just happen to have a reason to make all of the changes people have been suggesting for years to make it clean.
"rush" was a false premise. Mentioned twice, probably for maximum troll, because rush code is bad. Experienced coders writing in familiar, known patterns, is not rushed.
Re:I would think (Score:5, Insightful)
From the looks of it, many of the (potential) bugs in OpenSSL are caused by the use of a custom memory allocation scheme instead of a standard C allocator. Removing the custom memory management in favor of standard memory management alone implies dozens if not hundreds of relatively trivial code changes to all the places where the custom (de-)allocator get used. In the process of tracking down all of these, they come across stuff that does not look right and fix it while they are already in there.
As for why so many bugs, "so many eyes" only works if you still have tons of people actively participating in the project's development. At a glance, it seems like the OpenBSD guys are saying the OpenSSL project was getting stale. Stale projects do not have anywhere near as many eyes going through their code nor as many people actively looking for potential bugs to fix before they get reported in the wild.
In short: OpenSSL was long overdue for a make-over.
Re: (Score:3)
As for why so many bugs, "so many eyes" only works if you still have tons of people actively participating in the project's development. At a glance, it seems like the OpenBSD guys are saying the OpenSSL project was getting stale. Stale projects do not have anywhere near as many eyes going through their code nor as many people actively looking for potential bugs to fix before they get reported in the wild.
Yes the "logic" used by many in this thread is specious at best.
Premise: when there are many eyes looking at open source, it leads to more bugs getting fixed.
Faulty reasoning (of too many people): this project didn't have many eyes, therefore the premise is false. Herp derp.
Correct reasoning: when the condition of "many eyes" was met, the premise is shown to be true.
Conclusion: some people dislike Open Source for ideological reasons and saw this as a chance to take cheap shots at it and sho
Re: (Score:3)
Premise: when there are many eyes looking at open source, it leads to more bugs getting fixed.
Faulty reasoning (of too many people): this project didn't have many eyes, therefore the premise is false.
Correct reasoning: when the condition of "many eyes" was met, the premise is shown to be true.
Faulty common inference from the true premise: When source is open, many eyes will be looking at it.
It seems the reverse occurs: People start to trust the *small* group actually doing work on any particular project, and since it is nobody's assigned responsibility to review it, nobody does. Yes, when a fire occurs there are many volunteer firemen; but that's a little late.
Re: (Score:3)
From the looks of it, many of the (potential) bugs in OpenSSL are caused by the use of a custom memory allocation scheme instead of a standard C allocator
not necessarily - when I saw a commit that said "removed use after free" (ie still using a structure after it had been freed) then you've got to think the code is just generally sloppy.
Re: (Score:2)
It was an un-dead chunk of memory
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
- If a change is security related, and obviously needed, then why wasn't it made earlier? Didn't that make a mockery of all the "many eyes" arguments oft touted in favor of Open Source?
"Many eyes" rarely helps; you need to get the right eyes to look at a bug. If you follow vulnerabilities, you'll notice a handful of people find most of the bugs. The main advantage of open source is that the code is available for those eyes to view.
Re:I would think (Score:5, Interesting)
As the other poster says, OpenSSL isn't an OpenBSD project - what is going on here is a full blown fork of OpenSSL by the OpenBSD team, who are putting their money where their mouths are because when the heartbleed bug came out it was noted that the issue could have been mitigated on OpenBSD if the OpenSSL team had used the system provided memory allocation resources.
So this is less OpenSSL and much more OpenBSD SSL being created.
Re:I would think (Score:5, Insightful)
IMNSHO this is more about the OpenBSD folks doing what they do best.
I sincerely respect their approach to programming, especially with regards to security: rather than just introduce security layers and mechanisms they program defensively and try to avoid or catch fundamental, exploitable mistakes.
Hats off to the OpenBSD folks.
Re: (Score:2)
Both hats off and donations to OpenBSD are in order.
Re: (Score:2, Insightful)
Kinda like closing the barn door after all the horses are out.
The comparison is not even remotely valid. Even if it were, what would your suggestion be? That they do nothing about it? That they proceed at the pace they have already? That they go on like nothing has happened? They take this seriously and act accordingly.
You do not need to mock them for a mistake. Or are you so perfect that you never have made a coding mistake? ROFL
Better late than never I guess.
Yes, better late than never. If they had known about the bug earlier, I'd bet money they would have fixed it earlier. Bad press or not.
Re:I would think (Score:5, Insightful)
There's a reason why the plural of anecdote isn't evidence.
On a related note, there's a reason why most of the most prominent security researchers seem to prefer open access to source code. I have yet to hear any of them change their mind over this bug.
Re:I would think (Score:5, Insightful)
> Multiple eyes on code, security, these are things that are great about open source, except they aren't. This is a prime example of how bugs get through anyhow, major bugs. So it is now shown beyond a shadow of anyone's doubt, open source is NOT superior in these respects.
Really, no. The horses are still pulling plows, and carts, and carriages, every day. The library is still in use in operating systems world wide.
This is more visiting the barn that had horses stolen and making sure the locks and doors actually work the way they should before it's trusted at all again.
Open Source is still software development (Score:3, Interesting)
Our modern malady is to look at methods, not histories.
Great software comes from great leadership and good-to-great talent. But mostly, it involves someone having a good idea and following it through.
Sometimes, that's a single programmer (Bi
Re: (Score:2)
I disagree. Most open source projects have high code quality. For one, once you have your code available for everyone to see, you make a greater effort to make something decent. For another, most developers I know do not like having bugs in their code. I have known a couple of occasional open source projects with poor code quality which were led by artists or other non-programmers who do not know how to do better, but the number of bugs is usually less than in equivalent closed source software I know of. Docume
Re:I would think (Score:5, Insightful)
disagree: mocking people for making mistakes they should have known better than to make is a way to help them permanently try harder to avoid those mistakes in the future.
with failure, comes mockery, especially if you are skilled and it should never have happened.
mistakes can't go unpunished, even if the person doing the punishing is yourself. you can't tell other people to back off; you deserve it. sit back, take it on the chin, and try harder next time. otherwise people won't have any reason to try, because the penalty for failure is barely noticeable.
That's the old-school view, in which one's self-esteem is based on achievement of some kind. Those who achieve little or nothing have low self-esteem, and this is a principal incentive to identify one's own weaknesses and overcome them with directed effort. The extreme form is Japanese students throwing themselves off buildings (etc.) because their grades didn't quite measure up, making them nobodies.
The newer view is that everyone is a special snowflake. No matter what. The extreme form is shown by the public schools that play soccer without keeping score, because scoring implies winners and losers and that might hurt someone's feelings.
I mostly agree with you in that actions have consequences and you should accept the consequences of your own actions. Otherwise nothing really matters and there is no reason to improve yourself and you turn into one of these "perpetual victims" who never take responsibility for anything while simultaneously wondering why nothing ever changes. But that should be tempered with the fact that some mistakes are much more preventable (less understandable) than others, and as Orlando Battista once said, an error doesn't become a mistake until you refuse to correct it.
There's no reason to metaphorically crucify someone for an honest mistake, but certainly there is going to be a reaction to it and people aren't going to like it. That's to be expected. It's reasonable to expect someone to accept that and yes, it is an incentive to learn something from the experience and be more careful in the future. If I were a programmer and found that completely unacceptable, I could always choose not to work on such an important project critical to the security of so many.
As an aside: I think replying to you is much more edifying than being like the cowards who modded you down to -1 without once putting forth their own viewpoint which they clearly think is superior. There's too much of that going on at this site. There is no "-1 Disagree" mod for a reason.
Re: (Score:2)
This is one of the more insightful and reasoned comments I've read on /. in a long while. Sadly I have no mod points.
Re: (Score:3)
The people fixing it aren't the people who made the mistakes, is the thing. This is the clean-up crew, and entirely the wrong people to mock. And if you've any doubt at all about whether these guys are mocking the people who did make the mistake, well, this is Theo de Raadt we're talking about here, so yes, mocking happened, sure as the Sun rose.
Re: (Score:2)
It is now, considering that the most active development is going on there. Will probably have to change to another name though.
Good Job Guys (Score:2)
Lotta work, good reason, good job guys.
I assume this is an R&D exercise? (Score:3, Insightful)
Because "over 250 commits" in a week by a third party to a mature security product suggests either a superhuman level of understanding or that they're going to miss a lot of the implications of their changes.
Re:I assume this is an R&D exercise? (Score:5, Insightful)
The fact that these 250 commits are mostly coding-style changes was conveniently hidden behind the acronym "KNF". Honi soit qui mal y pense!
Merged back or fork? (Score:3)
Re:Merged back or fork? (Score:5, Informative)
Re:Merged back or fork? (Score:4, Insightful)
Well they seem to be ripping out a lot of things related to portability, so my guess is that this new effort is a dead end that the rest of us will never see. All the OpenBSD developers care about is that the thing works on OpenBSD.
Re:Merged back or fork? (Score:4, Interesting)
We may see a model similar to openssh, where the core code in openbsd is kept free of "portability goop" and then a separate group maintains a "portable" version based on the openbsd version.
Re:Merged back or fork? (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
Ironically, OpenBSD is one of the projects that supports a lot of crusty old architectures using the logic that it helps them find bugs.
This logic is often used by Linux people not to support *BSD. I don't agree with the crusty old platform argument.
Re:Merged back or fork? (Score:5, Informative)
Well they seem to be ripping out a lot of things related to portability, so my guess is that this new effort is a dead end that the rest of us will never see.
No: OpenBSD is a straightforward, clean, modern unix.
They are ripping out all the stuff for portability to ancient unix and even long-obsolete non-unix platforms.
Much software compiles cleanly on OpenBSD, FreeBSD and Linux. If they do it well---and every interaction I've had with OpenBSD code indicates that they will---it will be very easy to port it to Linux (and other modern operating systems).
I expect what will happen is they will get it working on OpenBSD with enough API compatibility to compile the ports tree. Once it begins to stabilise, I expect people will maintain branches with patches to allow portability to other operating systems.
Historical portability causes hideous rot. I know: I've had it happen to me. There are old systems out there so perverse, they poison almost every part of your code. I think a semi-clean break like this (keep the good core, mercilessly rip out historically accumulated evil) is necessary.
Re: (Score:3, Insightful)
There are old systems out there so perverse, they poison almost every part of your code
There are people out there deeply attached to their 6, 9, or 12 bit bytes and 36 or 60 bit words, you insensitive clod! ;)
Re: (Score:2)
I would argue that things don't just compile on FreeBSD. In fact, if you change the uname to something else, many things don't work. There are a lot of FreeBSD specific hacks in source code from many popular projects.
I would know since I have to deal with it all the time in my BSD project.
Re: (Score:2)
I would argue that things don't just compile on FreeBSD. In fact, if you change the uname to something else, many things don't work. There are a lot of FreeBSD specific hacks in source code from many popular projects.
To some extent yes. However, much of encryption is heavily algorithmic code, and that should be portable. When I say old, perverse systems, I'm referring to really old, perverse systems. Like ones so bad they couldn't even rely on something as deeply embedded in the C standard as malloc() to wo
it's a good effort (Score:5, Interesting)
Right now, I think the team is mostly focused on having "something usable" in OpenBSD and I doubt they care too much about anything else outside their scope.
Having said that - forking OpenSSL to something usable and burning the remains with fire is a great idea, however there is considerable risk that the rush will cause new bugs - even though right now those commits have been mostly pulling out old crap.
Fixing the beast is going to take a long while and several things will need to happen:
- Upstream hurry to put more crap into the RFC needs to cease for a while. We don't need more features at the moment, we need stability and security.
- Funding. The project needs to be funded somehow. I think a model similar to the Linux Foundation might work - as long as they find suitable project leads. But major players need to agree on this - and that's easier said than done (who will even pull them to the table?)
- Project team. Together with funding, we need a stable project team. Writing good crypto code in C is bloody hard, so the team needs to be on the ball - all the time. And the modus operandi should be "refuse features, increase quality". Requires a strong Project Lead.
- Patience. Fixing it is a long process, so you can't go into it hastily. You need to start somewhere (and here I applaud the OpenBSD team), but to get it done, assuming the above is in place, expect 1-3 years of effort.
Re: (Score:2)
refuse features, increase quality
Excuse me while I smile and laugh inside while I think about the different projects and their view or behavior in that regard.
Re: (Score:2)
There are plenty of alternatives that are present in many embedded devices?
This isn't fixing SSL (Score:5, Informative)
The article doesn't make it completely clear that this doesn't have much to do with fixing problems in OpenSSL.
Commits to the true OpenSSL source can be seen through the web interface at https://github.com/openssl/ope... [github.com]. What the article is talking about is tidying up the version that is built in to OpenBSD. Not that that isn't worthwhile work, but it's unlikely to fix many hidden problems in OpenSSL itself, unless the OpenBSD devs find something and hand it back to the upstream.
Re: (Score:2)
By "fixing SSL" I meant "fixing OpenSSL". Duh! :(
Re: (Score:2)
Comment removed (Score:4, Interesting)
Re:Are they still running it through Coverity ? (Score:5, Informative)
You don't have to wonder why. A quick search shows that they've already blogged about why Coverity didn't detect heartbleed.
http://security.coverity.com/blog/2014/Apr/on-detecting-heartbleed-with-static-analysis.html
Re: (Score:2)
Re:Are they still running it through Coverity ? (Score:4, Insightful)
Because static analysis cannot catch all problems.
It's as simple as that.
Their "fix" is to mark all byte-swapped data as "tainted"... basically it's a heuristic they've decided on, not proof of foul play (which is almost impossible to get via static analysis of someone else's code).
Relying on things like Coverity to find everything will always end in disappointment. What you do is fix what it finds, when and where it's a problem. The fact is, they simply had no way to detect this issue whatsoever, but fudged one for now. The next "big hole" will be just the same.
Credit where it's due: Coverity is a very powerful and helpful tool. But you can't just give the impression that because something's been "scanned by Coverity" (or Symantec Antivirus, or Malwarebytes, or ANYTHING AT ALL WHATSOEVER) it's "safe". That's just spreading false confidence in duff code.
Re: (Score:2)
Re: (Score:3)
But is it that easy? (Score:2)
For a handful of developers to just "run through the code" and fix everything that easily? And do it quickly, without introducing other bugs?
I am not a developer, but I can remember writing software, whether in BASIC, Pascal or Perl, and going back to fix or extend something, seeing stuff and saying "Why did I do it that way?", and making changes that I'm not honestly sure were "improvements" except that they seemed like improvements at the time, even though they may have created new bugs.
I don't know anyth
Re: (Score:2)
The main part of this is to tidy things up. One commit removes a load of custom functions and replaces them with a single include of unistd.h - which is really removing stuff put in way, way back because a platform didn't have unistd back then. Similarly, they get rid of weird stuff that is more standardised today.
I think the real code auditing and fixing will happen later.
Testing and validation is what's needed (Score:2)
Code fixes are all fine and well but where the real thought needs to be going is how to verify these protocols. The basic problem with security is that "working" doesn't mean "secure". Most people focus on testing for "working" and given the bugs that have shown up in OpenSSL and its cousin in the last month or so, the problem is not that they don't work (that is, interoperate and transmit data) but that they have corner cases and API holes that are major security concerns. Some real thought needs to be
Re: (Score:3)
I want to know... (Score:2)
Seriously, it took 2 years to find the big one after it was committed. How much vetting has each of these 250 commits undergone? Who's watching the watchers?
Re: (Score:3)
In a clean-up operation, you don't vet each change, especially when the change is reformatting instead of a real code change. It's clear from the commits I've looked at that the people doing this are working to eliminate the cruft that inevitably builds up in any project as it matures. See http://en.wikipedia.org/wiki/C... [wikipedia.org] -- you take baby steps, and check your work as you go.
In the process of clean-up, of re-factoring, one may find and fix subtle bugs
The commits are funny into themselves. (Score:5, Informative)
A Tumblr site popped up a few days ago called OpenSSL Valhalla Rampage [opensslrampage.org]. The blogger there is going through all the commits and posting the juicy funny comments there. This includes killing... and rekilling... VMS support (which reminds me of Maxim 37: there is no such thing as overkill... [ovalkwiki.com]), stripping out now-stupid abstractions and optimizations of the unoptimizables, and more.
Re: (Score:2)
I love this :)
My favorite comment:
now that knf carpet bombing is finished, switch to hand to hand combat. still not sure what to make of mysteries like this: for (i = 7; i >= 0; i--) { /* increment */
Re: (Score:3)
I'm wondering what is supposed to be mysterious about that code. The "/* increment */" comment seems to apply to the code inside the loop, not what is being done to the i variable, so I don't think that's it. Is it because the loop goes from 7 down to 0 instead of the other way around? I remember reading a programming book back in the 80's that advocated doing that for better speed since the assembly code generated to compare to 0 was faster than comparing to some other integer (which seems to no longer
Re:The commits are funny into themselves. (Score:4, Informative)
I'm wondering what is supposed to be mysterious about that code. The "/* increment */" comment seems to apply to the code inside the loop, not what is being done to the i variable, so I don't think that's it. Is it because the loop goes from 7 down to 0 instead of the other way around? I remember reading a programming book back in the 80's that advocated doing that for better speed since the assembly code generated to compare to 0 was faster than comparing to some other integer (which seems to no longer be the case, and I suspect could even cause cache misses for a bigger loop, although I don't know enough about how CPUs fill the cache to know for sure).
Comparing to zero is faster on most architectures and is still a valid optimization. There shouldn't be any problems with cache misses either: if the architecture does stream detection, it should do it for reversed streams too, and if it doesn't (only detecting actual misses), iterating in reverse isn't a problem.
NSA (Score:2)
Thank You (Score:5, Insightful)
Just a simple thank you to all the coders out there who donate their skills and time to produce this and other very important software, for free, folks! Thank you for making the world a better place.
Re: (Score:2)
Number of commits is meaningless (Score:3)
Re: (Score:2)
it means something when coming from the people who made openssh
KNF can wait (Score:2, Insightful)
It is most annoying trying to hunt bugs while wading through massive diffs caused by formatting changes.
Deal with that later.
Re: (Score:2)
list of changes (Score:5, Informative)
See [twitter.com] also [freshbsd.org]:
So far as all the "won't this introduce more bugs than it fixes" comments go, this is a recurring argument I have at work. I am of the "clean as you go", "refactor now" school.
Everyone else says "If it works don't fix it"(IIWDFI), "don't rock the boat" etc.
Heartbleed is what happens when the IIWDFI attitude wins. Bugs lurk under layers of cruft, simple changes become nightmares of wading through a lava flow of wrappers around hacks around bodges.
Whenever anyone says IIWDFI, remind them that testing can only find a small proportion of possible bugs, so if you can't see whether it has bugs or not by reading the code, then no matter how many test cases it passes, it DOESN'T WORK.
Hey - Thanks OpenSSL Contributors (Score:5, Insightful)
With all the other tripe on this thread, I thought it necessary to say this loud and clear:
Hey OpenSSL Contributors - thanks for your hard work on OpenSSL, and thanks for the hard work under this spotlight cleaning this up.
Any serious software engineer with a career behind them has worked on projects with great source code, bad source code, and everything in between. It sounds like OpenSSL is a typical project with tons of legacy code where dealing with legacy is lower priority than dealing with the future. Subtracting out all the ideological debate and conspiracy theories, please realize there are plenty of 'less noisy' people out there who appreciate everything you're doing. And even more who would appreciate it if they understood the situation.
It's now time for companies who depend on OpenSSL (and other projects) to realize that Open Source software can lower their development costs, but some of that savings needs to be put back into the process or we will all suffer from "the tragedy of the Commons".
How about other OSS projects? (Score:2)
Re:Quantity is not quality (Score:5, Insightful)
I can't speak for C, but in Java the tools which warn you about potentially dangerous constructs are great (e.g. Sonar). You can easily identify many *suspicious* constructs and change them to something safer. 250 commits per week from 4 devs on a moderately sized project does not seem like much to me, and lands more on the "quality" than the "quantity" side.
What annoys me is that - with all due respect - the companies which embed openssl in their products could have done a review of the code for quality. To me it seems that it's a fundamental library.
Re:Quantity is not quality (Score:5, Interesting)
I can't speak for C, but in Java
Haha. Oh man. Java is a VM. Do you check for "dangerous constructs" like the Java VM Just-In-Time compiling data into machine code at runtime, marking that data executable, and then running it? Because that's how it operates. Even just having that capability makes Java less secure: you don't even have to get your exploit data marked as code and executed, you just have to get it into memory and then jump to the location of the VM code that does it for you with your registers set right. Do any of your Java code checking tools run against the entire API kitchen sink of that massive attack surface you bring with every Java program, called the JRE? Do they prevent users from having tens of old deprecated Java runtimes installed at 100MB a pop, since the upgrade feature just left them there, still able to be targeted explicitly by malicious code? No? Didn't think so.
Don't get me wrong, I get what you're saying: Java code can be secure, but you have to run tests against the VM and API code you'll be using too. Java-based checking tools produce programs that are just as vulnerable as C code, and actually demonstrably more so when you factor in the exploit area of their operating environment.

Put it this way: the C tools (valgrind) already told us that the memory manager was doing weird shit -- but it was expected weird shit. No dangerous-construct warning would have caught Heartbleed; it's a range check error coupled with the fact that they were using custom memory management. The mem-check warnings are there, but they have been explicitly ignored. It would be like the check engine light coming on when you know the oil pressure is fine and it's just the sensor that's bad... no matter how bright a big red warning light you install, it can't help you anymore; it's meaningless.

Actually, it's a bit worse than that. It would be like someone knowing your check engine light was on because of some kludge they added for SPEED, so they knew they could get away with pouring gasoline in your radiator, because you wouldn't notice anything wrong until it overheated and blew up -- AND you asked them about the check engine light a few times over the past two years, but they just shrugged and said, "Don't worry about it, I haven't looked under the hood lately, but here's a bit of electrical tape if the light annoys you."
I write provably secure code all the time in C, ASM (drivers mostly), even my own languages. CPUs are finite state machines, and program functions have finite state as well. It's entirely possible to write and test code for security that performs as it should for every possible input; for bigger word-size CPUs, instead of testing every input, one just needs to test a sufficiently large number of them to exercise all the bit ranges and edge cases. As you've noted, automation is key. If you want to write secure code you have to think like a cracker. My build scripts automatically generate missing unit test and fuzz testing stubs based on the function signatures. Input fuzzing is the first thing a security researcher or bug-exploiting cracker will run against any piece of code to probe for weakness, so if you're not using these tests, your code shouldn't touch crypto or security products; it simply hasn't been tested. Using a bit of doc-comments to add additional semantics, I can auto-generate the tests for ranges, and I don't commit code to the shared repos that doesn't have 100% test coverage in my profiler. If OpenSSL had been using even just a basic code coverage tool to ensure each branch of #ifdef was compilable, they'd have caught the Heartbleed bug. I recompiled OpenSSL without the heartbeat option as soon as my news crawler AI caught wind of it.
Code review, chode review. These dumbasses aren't using basic ESSENTIAL testing methodology you'd use for ANY product even mildly secure: Code Coverage + memory checking is the bare minimum for anything that has to do with "credentials". They apparently also have no fuck
Re: (Score:3, Insightful)
And if there are that many, would a new start not be better?
How about no?
Also, I don't see why lots of fixes would necessarily mean poor fixes. They're likely making what they feel are obvious fixes, or fixing stuff they consider wrong. Or something like that. What do I know, really.
Possibly they know what they are doing.
Re:Quantity is not quality (Score:4, Informative)
Re: (Score:2, Informative)
A new start would introduce far more issues, so a major cleanup can be preferable.
Some of this code is 18 years old. There are whole swathes of code that are deprecated. Cleanup and standardisation of layout helps, as does removal of things like the VAX, Windows, and obsolete compiler support... Already, with the KNF applied and the cruft being removed, things are being spotted and commits being made to remove scary code.
Seriously, read some of those commits and you will see that this /will/ help wrangle it into a far more secur
Re:Coding style fixes are "news" for nerds? (Score:5, Informative)
Re: (Score:2)
Re:code review time! (Score:5, Funny)
The NSA, obviously.
Next question please.
Re: (Score:2)
Pin one down, patch it around, 128 little bugs on the wall
Re: (Score:2)
Re: (Score:3)
Mr. Anonymous Coward:
What changes are you referring to? The changes I see are good re-factoring: clean up formatting, remove dead code, add missing bounds checks
Are you volunteering to do the code audit?
Massive rush? Evidence, please.
Security testing clearly hasn't been done before? Evidence, please. The counter-evidence is that the security testing tools were found not to work in this one particular case, and that problem has been patched. Security testing costs money; how much have you donated to th
Re: (Score:2)
Re: (Score:2)
You are ignoring WHO is making these changes: they prize correctness, robustness and security above all else. They know what they are doing.
Re: (Score:2)
This just tells me they are putting in hundreds of basically untested code changes, which is what got us into this mess in the first place.
And you don't think those changes are going to all be reviewed, then re-reviewed, then gone over with a dozen fine-toothed combs before anyone actually uses the new code in earnest?
It sounds like the OpenBSD people aren't the ones being stupid here.
Re: (Score:3)
Some day, you should learn about automated regression testing, which allows an entire regression suite of hundreds of tests to run after every commit.
Maybe then you'll have something sensible to say.
Not just that, but we're discussing changes at the tip of their source repo. Not "released" code by any stretch of the imagination. We're basically spectators, watching one of those Bob Ross "joy of painting" shows. The picture has yet to emerge. (hmmm.... accidental Gentoo pun...)
Re: (Score:2)