



OpenSSL Cleanup: Hundreds of Commits In a Week
A new submitter writes with this news snipped from BSD news stalwart undeadly.org: "After the news of Heartbleed broke early last week, the OpenBSD team dove in and started axing it up into shape. Leading this effort are Ted Unangst (tedu@) and Miod Vallat (miod@), who are head-to-head on a pure commit-count basis, both having around 50 commits in this part of the tree in the week since Ted's first commit in this area. They are followed closely by Joel Sing (jsing@), who is systematically going through every nook and cranny and applying some basic KNF. Next in line are Theo de Raadt (deraadt@) and Bob Beck (beck@), who've both been doing a lot of cleanup, ripping out weird layers of abstraction for standard system or library calls. ... All combined, there've been over 250 commits cleaning up OpenSSL. In one week."
You can check out the stats, in progress.
it's a good effort (Score:5, Interesting)
Right now, I think the team is mostly focused on having "something usable" in OpenBSD and I doubt they care too much about anything else outside their scope.
Having said that - forking OpenSSL to something usable and burning the remains with fire is a great idea, however there is considerable risk that the rush will cause new bugs - even though right now those commits have been mostly pulling out old crap.
Fixing the beast is going to take a long while and several things will need to happen:
- The upstream hurry to put more crap into the RFCs needs to cease for a while. We don't need more features at the moment, we need stability and security.
- Funding. The project needs to be funded somehow. I think a model similar to the Linux Foundation might work - as long as they find a suitable project lead. But major players need to agree on this - and that's easier said than done (who will even pull them to the table?)
- Project team. Together with funding, we need a stable project team. Writing good crypto code in C is bloody hard, so the team needs to be on the ball - all the time. And the modus operandi should be "refuse features, increase quality". Requires a strong Project Lead.
- Patience. Fixing it is a long process, so you can't go into it hastily. You need to start somewhere (and here I applaud the OpenBSD team), but to get it done, assuming the above is in place - expect 1-3 years of effort.
Re:Merged back or fork? (Score:4, Interesting)
We may see a model similar to OpenSSH, where the core code in OpenBSD is kept free of "portability goop" and a separate group maintains a "portable" version based on the OpenBSD one.
Re:I would think (Score:5, Interesting)
Well, I would think that this is mostly to do with publicity. Once someone calls your software into question in a very public light, you will be more willing to go through your project with a fine-toothed comb and clean up all that old cruft you've been meaning to clear out.
This is not a sign of inherent insecurity, but one of obvious house cleaning.
And how many bugs and vulnerabilities will they put in with such a high volume of commits in such a short time?
- If a change is only "house cleaning" which is unrelated to security, why do it in such a rush?
- If a change is security related, and obviously needed, then why wasn't it made earlier? Doesn't that make a mockery of all the "many eyes" arguments oft touted in favor of Open Source?
- If a change is security related and non-obvious, then won't doing it in such a rush probably introduce new bugs/vulnerabilities into the code?
No matter how you look at it, making so many changes in a rush is not a good idea.
Re:I would think (Score:5, Interesting)
As the other poster says, OpenSSL isn't an OpenBSD project - what is going on here is a full-blown fork of OpenSSL by the OpenBSD team, who are putting their money where their mouths are: when the Heartbleed bug came out, it was noted that the issue could have been mitigated on OpenBSD if the OpenSSL team had used the system-provided memory allocator.
So this is less OpenSSL and much more OpenBSD SSL being created.
Open Source is still software development (Score:3, Interesting)
Our modern malady is to look at methods, not histories.
Great software comes from great leadership and good-to-great talent. But mostly, it involves someone having a good idea and following it through.
Sometimes, that's a single programmer (Bill Atkinson). Most commonly, it's a group that needs a leader.
The quality of that leader then determines the quality of the product. But both industry and open source find this idea terrifying.
Industry would prefer to avoid this and promote exchangeable, replaceable cogs to the position of program/project manager. These people tend to be aggressive and thoughtless and produce gunk software.
Open source would prefer to avoid it because the big secret in open source is that people do what they want to do, not what needs doing. This is why products usually have the "fun, interesting parts" done but lag behind in the stuff no one finds thrilling, including finishing the boring parts of the code, debugging, documentation, etc.
Leadership is essential. The difference is that in open source, you can't fire people, so you can't tell them what to do.
Re:Quatity is not quality (Score:5, Interesting)
I can't speak for C, but in Java
Haha. Oh man. Java runs on a VM. Do you check for "dangerous constructs" like the Java VM's Just-In-Time compiling of data into machine code at runtime, marking that data executable, and then running it? Because that's how it operates. Even just having that capability makes Java less secure: I don't even have to get exploit data marked as code and executed, I just have to get it into memory and then jump to the location of the VM code that does it for me, with my registers set right. Do any of your Java code-checking tools run against the entire API kitchen sink of that massive attack surface you bring with every Java program, called the JRE? Do they prevent users from having tens of old deprecated Java runtimes installed at 100MB a pop, left behind by the upgrade feature and thus still able to be targeted explicitly by malicious code? No? Didn't think so.
Don't get me wrong, I get what you're saying: Java code can be secure, but you have to run tests against the VM and API code you'll be using too. Java-based checking tools produce programs that are just as vulnerable as C code, and actually demonstrably more so when you factor in the exploit area of their operating environment. Put it this way: the C tools (valgrind) already told us that the memory manager was doing weird shit -- it was expected weird shit. No "dangerous construct" warning would have caught Heartbleed; it's a range check error coupled with the fact that they were using custom memory management. The mem-check warnings were there, but they were explicitly ignored. It would be like the check engine light coming on, but you know the oil pressure is fine, just the sensor is bad... so no matter how bright a big red warning light you install, it can't help you anymore; it's meaningless. Actually, it's a bit worse than that: it would be like someone knew your check engine light was on because of some kludge they added for SPEED, and so they knew they could get away with pouring gasoline in your radiator because you wouldn't notice anything wrong until it overheated and blew up -- AND you asked them about the check engine light a few times over the past two years, but they just shrugged and said, "Don't worry about it, I haven't looked under the hood lately, but here's a bit of electrical tape if the light annoys you."
I write provably secure code all the time in C, ASM (drivers mostly), even my own languages. CPUs are finite state machines, and program functions have finite state as well. It's fully possible to write and test code for security that performs as it should for every possible input. For CPUs with bigger word sizes, instead of testing every input one just needs to test a sufficiently large number of them to exercise all the bit ranges and edge cases. As you've noted, automation is key. If you want to write secure code you have to think like a cracker. My build scripts automatically generate missing unit-test and fuzz-testing stubs based on the function signatures. Input fuzzing is what a security researcher or bug-exploiting cracker will try first on any piece of code to probe for potential weakness, so if you're not using these tests your code shouldn't touch crypto or security products; it simply hasn't been tested. Using a bit of doc-comments to add additional semantics, I can auto-generate the tests for ranges, and I don't commit code to the shared repos that doesn't have 100% test coverage in my profiler. If OpenSSL had been using even just a basic code coverage tool to ensure each branch of #ifdef was compilable, they'd have caught this Heartbleed bug. I recompiled OpenSSL without the heartbeat option as soon as my news crawler AI caught wind of it.
Code review, chode review. These dumbasses aren't using the basic, ESSENTIAL testing methodology you'd use for ANY even mildly secure product: code coverage + memory checking is the bare minimum for anything that has to do with "credentials". They apparently also have no fuck
Re:I would think (Score:4, Interesting)
Actually, you (oddly) do very much care about speed in OpenSSL. One of the most successful types of attack against security algorithms involves measuring how long it takes to do things, and inferring the complexity of the calculation from that. Taking any significant amount of time makes measurement easier, and errors smaller, and hence this type of attack easier.