NSA Says Its Secure Dev Methods Are Publicly Known

Trailrunner7 writes "Despite its reputation for secrecy and technical expertise, the National Security Agency doesn't have a set of secret coding practices or testing methods that magically make their applications and systems bulletproof. In fact, one of the agency's top technical experts said that virtually all of the methods the NSA uses for development and information assurance are publicly known. 'Most of what we do in terms of app development and assurance is in the open literature now. Those things are known publicly now,' Neil Ziring, technical director of the NSA's Information Assurance Directorate, said in his keynote at the OWASP AppSec conference in Washington Wednesday. 'It used to be that we had some methods and practices that weren't well-known, but over time that's changed as industry has focused more on application security.'"
  • Re:Most... (Score:3, Informative)

    by hedwards ( 940851 ) on Wednesday November 10, 2010 @06:27PM (#34190982)
    But cryptographic techniques aren't where most vulnerabilities are found. Most vulnerabilities are ones which could be avoided using secure programming practices.

    In fact, the FBI failed to break into a set of hard disks encrypted with TrueCrypt and another program using 256-bit AES, which pretty clearly indicates that as long as you choose an appropriate encryption algorithm, the vulnerability will almost always be in the implementation, in user error, or in access to the machine.
  • by icebike ( 68054 ) on Wednesday November 10, 2010 @06:39PM (#34191076)

    Because the card has smarts, and the cable does not.

    Because the card lives on your bus, and the cable does not.

    But try not to belabor the point, as I said, it was just an example. Substitute any other device resident in your computer which you feel better demonstrates the point.

  • by LWATCDR ( 28044 ) on Wednesday November 10, 2010 @06:46PM (#34191156) Homepage Journal

    Security doesn't sell in the consumer market.
    Mainframe and minicomputer OSes and applications tended to be very secure. VMS and IBM's OSes were, and are, very secure. PCs come from the microcomputer world, where security was never an issue: they were single-user systems and almost never networked, and even when you had networks, they tended to be LANs.
    It is the mindset that security is an afterthought. Why should a picture-viewing program ever worry about an exploit?

    On the PC side it just was never a "feature" worth putting any effort into until recently.
     

  • by Anonymous Coward on Wednesday November 10, 2010 @06:56PM (#34191280)

    One cornerstone of secure software development is the application of formal methods. The NSA Tokeneer project has been made completely open-source, demonstrating the feasibility of applying formal methods to secure development problems.

  • by Jahava ( 946858 ) on Thursday November 11, 2010 @09:30AM (#34195402)

    Writing bulletproof code isn't really all that hard, but it does take discipline. Discipline to use only those constructs which have been verified with both the compiler and linker.

    Some simple things that coders can do: - avoid the use of pointers.

    Pointers aren't themselves bad; they just add some layers of complication to the otherwise stack-oriented game. The only reason stack variables are nicer than pointers is that they're implicitly managed for you.

    Rather than avoid pointers, what you need is good code structure. Design functions that either manage the lifecycle of a pointer or are explicitly clear about how and what the pointer is going to be used as. Use const aggressively, and avoid typecasting as much as possible. Using good pointer naming techniques and management functions also dissipates the burden. Pointers are too useful to avoid religiously ... rather, build pointer security and management techniques into your coding style from the ground up. Choose descriptive names and try to constrain each pointer to its specific type (this lets the compiler help you keep your pointers straight).
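    As a minimal sketch of what that structure can look like (the function names message_create/message_destroy are hypothetical, not from any real codebase): a creation function that owns allocation, a matching destroy function, and const on the input to tell both the compiler and the reader that the source won't be modified.

    ```c
    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    /* Owns the allocation; the caller owns the returned pointer and must
     * release it with the matching message_destroy(). The const qualifier
     * guarantees the source string is not modified. */
    static char *message_create(const char *text)
    {
        size_t len = strlen(text) + 1;
        char *copy = malloc(len);
        if (copy != NULL)
            memcpy(copy, text, len);
        return copy;
    }

    /* The paired lifecycle function: every create has exactly one destroy. */
    static void message_destroy(char *msg)
    {
        free(msg);
    }

    int main(void)
    {
        char *msg = message_create("hello");
        assert(msg != NULL);
        assert(strcmp(msg, "hello") == 0);
        message_destroy(msg);
        return 0;
    }
    ```

    The create/destroy pairing is the point: the pointer's lifecycle is managed in exactly two places, so ownership is never ambiguous at the call site.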

    Initialize all variables to known values.

    Meh, I'm divided on this one. It's one thing to explicitly initialize global variables to either zero (which costs nothing, since they just end up in BSS sections) or non-zero (which puts them statically in the data segment). Stack variables, on the other hand, only really need to be initialized before they're used the first time. Pre-initializing them could lead to wasted instructions initializing them multiple times, or cause them to be initialized in all code paths when they're only used in a few. My general rule of thumb is to be smart about it and, once again, rely on naming conventions.

    Perform comparisons with the constant on the LHS, so you don't accidentally get an assignment instead of a comparison

    Great tip; it's weird at first writing "if( NULL != p )", and you get a few funny stares, but after seeing enough "if( i = 10 )"s lying within seemingly-functional code, it's an easy selling point to make.
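    A quick illustrative sketch of the constant-on-the-left style (the function count_to_ten is made up for the example): if you accidentally type = instead of ==, the compiler rejects the assignment to a constant instead of silently accepting a bug.

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* With the constant on the left, a typo like "10 = i" is a compile
     * error, whereas "i = 10" would compile and loop forever. */
    static int count_to_ten(void)
    {
        int i = 0;
        while (10 != i)
            i++;
        return i;
    }

    int main(void)
    {
        assert(10 == count_to_ten());

        const char *p = "x";
        if (NULL != p)   /* "p = NULL" here would fail to compile */
            assert(p[0] == 'x');
        return 0;
    }
    ```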

    - When you are done with a value, reset it to a "known" value. Zero is usually good.

    Definitely do this with pointers, descriptors, and other handle types. It also makes cleanup and pointer management easier. Less important to do with things like iterators and intermediate variables.
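    One common way to make the "reset to a known value" habit mechanical, sketched here with a hypothetical helper named free_and_null: free through a pointer-to-pointer so the helper can NULL out the caller's copy, turning a use-after-free into an easily detected NULL dereference and making a double-free harmless (free(NULL) is a no-op by the C standard).

    ```c
    #include <assert.h>
    #include <stdlib.h>

    /* Frees *pp and resets it to the known value NULL, so the caller
     * cannot accidentally reuse a dangling pointer. */
    static void free_and_null(void **pp)
    {
        if (pp != NULL) {
            free(*pp);
            *pp = NULL;
        }
    }

    int main(void)
    {
        int *data = malloc(sizeof *data);
        assert(data != NULL);
        *data = 42;

        free_and_null((void **)&data);
        assert(NULL == data);           /* the dangling pointer is gone  */
        free_and_null((void **)&data);  /* safe: free(NULL) does nothing */
        return 0;
    }
    ```

    The cast through void ** is a pragmatic shortcut here; stricter codebases often use a type-specific macro instead to keep the compiler's type checking intact.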

    - Keep functions less than 1 page long. If you can't see the entire function on a single editor page, it is too long.

    It's a good rule of thumb. I would like to add "any time you can't do this, make absolutely certain that you're not doing it for a good reason."

    Good tips, though. One thing I'd like to add: -Wall -Wextra -Werror (or your language's equivalent). If your code can't compile without a single warning, then you need to rewrite your code and either manually disarm situations (e.g., override the compiler's common sense with an assurance that you know what you're doing) or fix the warnings, which are actually bugs and errors. It's always fun to take someone's "bulletproof" code, turn on these flags, and watch the crap spill out. Warnings are amazing, and they are absolutely your friend when it comes to writing bug-free and secure code.
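    A sketch of what that looks like in practice, assuming a gcc toolchain (the file name demo.c is arbitrary): with -Werror, any warning stops the build entirely, so only warning-free code ever produces a binary.

    ```shell
    # Write a small warning-free program and build it with all warnings
    # promoted to errors; the binary is produced only if nothing warns.
    cat > demo.c <<'EOF'
    #include <stdio.h>
    int main(void)
    {
        int used = 1;
        printf("%d\n", used);
        return 0;
    }
    EOF
    gcc -Wall -Wextra -Werror -o demo demo.c && ./demo
    ```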

  • by TheRaven64 ( 641858 ) on Thursday November 11, 2010 @10:54AM (#34196234) Journal

    Stack variables, on the other hand, only really need to be initialized before they're used the first time. Pre-initializing them could lead to wasted instructions initializing them multiple times or cause them to be initialized in all code paths when they're only used in a few.

    Unless your compiler really sucks, it will perform some trivial dataflow analysis and not generate code for the initialisation if the value is never used. Even really simple C compilers do this. If the initial value is actually read on any code path, then the initialisation will be kept (although it may be moved onto those code paths), and you don't want the compiler to remove it.

    From the flags you recommend, I'm guessing that you use gcc, which not only does this analysis but will even tell you if the value is used uninitialised.
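    A quick demonstration of that warning, assuming gcc (the file name uninit.c and the function f are made up for the example); the maybe-uninitialized analysis runs as part of optimisation, so an -O level is passed:

    ```shell
    # A variable that is only assigned on one code path: gcc's dataflow
    # analysis reports it as possibly used uninitialized.
    cat > uninit.c <<'EOF'
    int f(int flag)
    {
        int x;        /* never assigned when flag == 0 */
        if (flag)
            x = 1;
        return x;
    }
    int main(void) { return f(1); }
    EOF
    gcc -Wall -O2 -c uninit.c 2> warn.txt
    cat warn.txt   # expect a "may be used uninitialized" diagnostic
    ```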
