NSA Says Its Secure Dev Methods Are Publicly Known
Trailrunner7 writes "Despite its reputation for secrecy and technical expertise, the National Security Agency doesn't have a set of secret coding practices or testing methods that magically make their applications and systems bulletproof. In fact, one of the agency's top technical experts said that virtually all of the methods the NSA uses for development and information assurance are publicly known. 'Most of what we do in terms of app development and assurance is in the open literature now. Those things are known publicly now,' Neil Ziring, technical director of the NSA's Information Assurance Directorate, said in his keynote at the OWASP AppSec conference in Washington Wednesday. 'It used to be that we had some methods and practices that weren't well-known, but over time that's changed as industry has focused more on application security.'"
Re:Doesn't make sense (Score:5, Interesting)
Re:Of course they say that (Score:2, Interesting)
It depends. The best place to hide something is in plain sight. And the best way to hide encrypted somethings is in a sea of equally encrypted somethings. If the NSA had some algorithm that they felt OK with others knowing and also using themselves, then any traffic of theirs using said algorithm would be indistinguishable from any other traffic. An attacker would need to decrypt everything in order to establish whether or not anything was being sent that was of interest. Even if there was a vulnerability in the encryption that reduced the search space to something theoretically manageable, having to break each and every single conversation on the Internet would push the search space back into the unmanageable region.
ObSidetrackingRant: This is why sites that use SSL should use SSL for everything - it adds noise which conceals the encrypted packets which would actually be of interest. Don't forget that the biggest weakness in secure systems is context. If you have enough context, you can bypass a lot of system security. A simple example would be the "secret question" systems that are popular. If you know enough about a person, the odds are high that you can guess what the answers are. Another example would be social engineering - if you have enough personal information, you could pretend to be that person to a system admin. Social engineering is really the sum total of all the new cracking/viral methods that are being used these days. Far from being new, it's merely better-automated and better-documented. Social engineering was standard back in the BBS days.
Re:And yet we live in the non-ideal real world (Score:3, Interesting)
Not starting from a clean slate is immaterial. A new component can be 100% self-contained (and therefore verifiably clean within itself), communicating via some intermediary layer that handles legacy APIs, network connections, pipes, shared memory, and so on.
The new component can therefore be as provably secure as you want. Security holes will then be contained (they must be in pre-existing code and cannot spread into new code).
This is not often done in the business world because they're stupid and prefer to burn huge amounts covering their backsides when the inevitable break-ins occur, rather than spend the relatively small extra needed to properly secure systems in the first place. (It's stupid because such an approach can never be cost-efficient in the long run and only looks very marginally better on the books in the short term.)
Re:Security is about preventing unintended outcome (Score:2, Interesting)
Initialize all variables to known values
And remember to reset them to known values as soon as they are no longer needed. Not only is it good practice regardless of what the compiler does for you, it also encourages the programmer to keep his variables in mind.
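The advice above can be sketched in C. Note that a plain memset() right before a buffer goes out of scope is a dead store that an optimizer may legally remove, so the sketch writes through a volatile pointer instead. The password value and the check function here are made up for illustration, not taken from any real system:

```c
#include <string.h>

/* Zero memory through a volatile pointer so the compiler cannot
   treat the writes as dead stores and elide them. */
static void secure_zero(void *p, size_t n)
{
    volatile unsigned char *vp = p;
    while (n--)
        *vp++ = 0;
}

/* Hypothetical check: copy a secret into a stack buffer, compare,
   then reset the buffer to a known value (all zeros) before returning. */
static int check_password(const char *attempt)
{
    char secret[64];
    strcpy(secret, "hunter2");          /* stand-in for a real key load */
    int ok = (strcmp(attempt, secret) == 0);
    secure_zero(secret, sizeof secret); /* known value before leaving scope */
    return ok;
}
```

C11's optional Annex K provides memset_s() for the same purpose, but it is not widely implemented, which is why the volatile-pointer idiom is common.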
I knew it! (Score:1, Interesting)
They're using Agile practices! They just developed them before anyone else, about twenty years ago!
Incidentally, this also explains why they haven't done any groundbreaking work in twenty years...
Re:Most... (Score:1, Interesting)
Unfortunately, "secure programming practices" often put the keys, including the master keys for replacing other keys, under NSA control. Take a good look at "Trusted Computing" and who would hold the keys. There was never a procedure enabled for keeping the keys entirely in private hands, only for keeping them in central repositories such as those owned by Microsoft. And never a procedure published for requiring a court order to obtain the keys: the entire thing was handwaved.
Looking back further, the "secure" Clipper Chip was discarded when it was discovered that people could generate their own private keys, without any central repository access. (http://en.wikipedia.org/wiki/Clipper_chip).
The NSA's mandate is not to provide security for US citizens. It is, in fact, to *break* security to monitor foreign communications. (Go read their original charter, available at http://www.austinlinks.com/Crypto/charter.html) One of their most effective techniques is to ensure that commercial encryption worldwide is entirely accessible to them, and their history of doing so by blocking encryption they don't "p0wn" is very clear.
Re:I see it more like a proof that (Score:3, Interesting)
Because the card lives on your bus, and the cable does not.
More specifically, most devices on the bus can do DMA to host memory, which enables them to read and write any byte of memory, completely bypassing OS memory protection.
In fact, FireWire ports are a favorite of the digital forensics guys for exactly this reason - they can come along, plug their doohickey into the FireWire port of almost any PC that has one, and do a complete memory dump of the system without the OS or any other program even noticing.
Re:Doesn't make sense (Score:3, Interesting)
In a corporation, you not only have accounting, HR, managers, VPs and such looking over your budget but you also have investors. If it costs you too much to produce something of "equal" quality to a competitor, they will start asking questions. A problem with insecure code probably won't cost the company the entire business.
The NSA, I think, mostly has a black budget. Only a few people know how much money there is, where it goes, and which employees it goes to. So there's not really a budget you have to account for. A problem (leak) caused by bad code or anything else could damage national security. It would also likely become a political embarrassment, and an embarrassment to the DoD and the NSA/CIA establishments. The people who approve the budgets will almost undoubtedly approve expenses for increased security in any area, including programming.
Re:Security is about preventing unintended outcome (Score:3, Interesting)
Without knowing it, many programmers are counting on memory being zero'ed as the program gets loaded
Any compliant C compiler will initialise all statics to 0 (stack values are different - they are not automatically initialised). From the C99 spec, 5.1.2:
All objects with static storage duration shall be initialized (set to their initial values) before program startup.
From 6.7.8.10:
If an object that has static storage duration is not initialized explicitly, then:
- if it has pointer type, it is initialized to a null pointer;
- if it has arithmetic type, it is initialized to (positive or unsigned) zero;
- if it is an aggregate, every member is initialized (recursively) according to these rules;
- if it is a union, the first named member is initialized (recursively) according to these rules.
You can guarantee that any standards-compliant C implementation will do this. You can't guarantee anything about an implementation that doesn't comply with the standard - it may deviate from it in other ways.
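The guarantee is easy to demonstrate. A minimal C sketch (the names here are made up for illustration) checks that file-scope objects of different types all get their zero-equivalent starting values, exactly as 6.7.8 requires, while automatic (stack) objects get no such treatment:

```c
#include <stddef.h>

static int file_scope_counter;   /* arithmetic static: guaranteed to start at 0 */
static char *static_ptr;         /* pointer static: guaranteed to start as NULL */
static double static_arr[4];     /* aggregate: every element zeroed recursively */

/* Returns 1 if the implementation honored C99 6.7.8p10 for these objects. */
int statics_are_zeroed(void)
{
    return file_scope_counter == 0
        && static_ptr == NULL
        && static_arr[3] == 0.0;
}

int local_needs_explicit_init(void)
{
    int local = 0;  /* automatic storage is indeterminate until assigned;
                       reading it uninitialized would be undefined behavior */
    return local;
}
```

The asymmetry is the point of the parent comments: relying on "memory is zeroed at load" is only valid for static storage, so locals must always be initialized explicitly.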