
NSA Says Its Secure Dev Methods Are Publicly Known

Trailrunner7 writes "Despite its reputation for secrecy and technical expertise, the National Security Agency doesn't have a set of secret coding practices or testing methods that magically make their applications and systems bulletproof. In fact, one of the agency's top technical experts said that virtually all of the methods the NSA uses for development and information assurance are publicly known. 'Most of what we do in terms of app development and assurance is in the open literature now. Those things are known publicly now,' Neil Ziring, technical director of the NSA's Information Assurance Directorate, said in his keynote at the OWASP AppSec conference in Washington Wednesday. 'It used to be that we had some methods and practices that weren't well-known, but over time that's changed as industry has focused more on application security.'"
  • by Anonymous Coward on Wednesday November 10, 2010 @05:55PM (#34190728)

    ...it is definitely possible to write secure software if you simply follow sound, smart development methods and practices... and don't write half-assed, slipshod, thrown-together-in-a-hurry code.

  • by dkleinsc ( 563838 ) on Wednesday November 10, 2010 @05:57PM (#34190742) Homepage

    If the NSA had something that really was Schneier-proof, they wouldn't tell the public. And understandably so, since part of their job is to ensure signal security for US agencies that deal in classified information.

  • Most... (Score:3, Insightful)

    by mbone ( 558574 ) on Wednesday November 10, 2010 @05:59PM (#34190768)

    It's that word "most" in "Most of what we do..." that may be important here. Most doesn't mean all. Also note that he did not mention their cryptographic techniques, which is where I would expect them to be especially advanced.

  • by Tom ( 822 ) on Wednesday November 10, 2010 @06:00PM (#34190776) Homepage Journal

    Security, especially in software development, doesn't suffer from the "we don't know how to do it" problem. It suffers from the "we don't have time/budget/patience/interest in doing what we know we should be doing" issue.

  • by Applekid ( 993327 ) on Wednesday November 10, 2010 @06:05PM (#34190828)

    That's a closed source/open source distinction. It has nothing to do with development methodology... except that there are more eyes when it's open.

    For closed source it depends on whose eyes, and I'm pretty sure the NSA has plenty of great eyes looking over its code.

  • by hedwards ( 940851 ) on Wednesday November 10, 2010 @06:23PM (#34190946)
    But it's almost certainly true. Just look at OpenBSD's record. They went for a full decade without any vulnerabilities in the base system before one was eventually found. And that's from a group of mostly volunteers. Just imagine what you could get from programmers that are both paid and required to use secure coding practices.

    What's really embarrassing is that most of it has been known about for quite some time, but for one reason or another the organization funding the programming doesn't feel like paying for it to be done securely. It's a similar problem to programming style.
  • by icebike ( 68054 ) on Wednesday November 10, 2010 @06:26PM (#34190968)

    security doesn't come from obscurity

    Exactly right.

    The best security is the kind where everyone knows how it works, but even given the source code you can't beat it - or at least not in any useful length of time.

    That being said, the automated code-inspection packages you can buy these days look only for the obvious newbie programmer mistakes.

    SELinux, originally from the NSA, solves many of the problems of running untrusted code on your box, but even that is not 100% secure, and the maintenance problems it introduces mean that it is seldom used in real life.

    The problem is not how this agency (the NSA) cleans up their code.

    The problem is that we don't know what backdoors exist in our hardware and our operating systems. Because so much code is embedded in silicon, and so few people actually look at that code, it's easy to imagine all sorts of pwnage living there.

    A compromised Ethernet card (just saying, by way of example) would be both obscure and hard to detect, and would have access to just about everything going in and out of your machine.

    Security does not come from obscurity, but insecurity often does.

  • by boristdog ( 133725 ) on Wednesday November 10, 2010 @06:34PM (#34191040)

    The Soviets almost never cracked codes. They just social-engineered (blackmail, sex, gifts, schmoozing, etc.) to get all the information they wanted.

    It's how Mitnick did most of his work as well.

  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Wednesday November 10, 2010 @06:43PM (#34191114) Homepage Journal

    If we start with the fact that the NSA is responsible for the Rainbow Series, partly responsible for the Common Criteria, wholly responsible for the NSA guidebook on securing Linux, and wholly responsible for the concepts in SELinux (remember, they talk about methods, not code), it follows that the NSA is implying that the processes used to develop this public information are rigorous, sound, and the same methods the NSA uses internally for projects it doesn't talk about. Note that they don't actually say that what the NSA publishes is what they use - only that the methods they use are public. The source is implied.

  • by Anonymous Coward on Wednesday November 10, 2010 @06:45PM (#34191150)

    Writing bulletproof code isn't really all that hard, but it does take discipline. Discipline to use only those constructs which have been verified with both the compiler and linker.

    Some simple things that coders can do (a short C sketch follows):
    - Avoid the use of pointers.
    - Initialize all variables to known values.
    - Perform comparisons with the constant on the LHS, so you don't accidentally get an assignment instead of a comparison.
    - When you are done with a value, reset it to a "known" value. Zero is usually good.
    - Keep functions less than one page long. If you can't see the entire function on a single editor page, it is too long.

    Simple.
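
    Not flight-control code, obviously - just a minimal C sketch of a few of these habits; the function, names, and buffer are invented for illustration:

        #include <stdio.h>

        /* Count newline characters in a bounded buffer, in the defensive
         * style from the list above: array indexing instead of pointer
         * arithmetic, every variable initialized, the constant on the
         * LHS of the comparison, and the whole thing well under a page. */
        static int count_newlines(const char buf[], int len)
        {
            int i = 0;      /* initialized to a known value */
            int count = 0;  /* initialized to a known value */

            for (i = 0; i < len; i++) {
                /* Constant on the LHS: mistyping == as =, as in
                 * ('\n' = buf[i]), fails to compile instead of
                 * silently becoming an assignment. */
                if ('\n' == buf[i]) {
                    count++;
                }
            }
            return count;
        }

        int main(void)
        {
            const char text[] = "one\ntwo\nthree\n";
            printf("%d\n", count_newlines(text, (int)sizeof text - 1));  /* prints 3 */
            return 0;
        }

    (Strictly, buf[] still decays to a pointer in C; the habit being sketched is indexed access with an explicit length rather than pointer arithmetic.)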

    BTW, I wrote code for real-time space vehicle flight control systems. When I look at OSS and see variables not set to initial values, I cringe. Sure, it is probably OK, but there isn't any cost to initializing the variables; it's a compile-time decision. Without knowing it, many programmers are counting on memory being zeroed as the program gets loaded. Not all OSes do this, so if you are writing cross-platform code, don't trust that it will happen. Do it yourself.

    Oh, and if you want secure programs, loosely typed languages are scary.

  • by K. S. Kyosuke ( 729550 ) on Wednesday November 10, 2010 @06:57PM (#34191290)
    Actually, programming is one of the few disciplines where practice can be exactly the same as the theory - the bits and bytes are all the same, they don't break from material fatigue; and if you write software for which you have a proof of correctness, it will simply work correctly. Few other branches of human endeavor are free from the evils of the material world to such a degree.
  • by Anonymous Coward on Wednesday November 10, 2010 @07:06PM (#34191374)

    Unless you've implemented every bit of the software stack, from the firmware to the OS to the compiler/runtime to the applications, you could potentially have issues.

    So I think the "theory and practice are different" might still apply, as nobody has the luxury of the time to formally prove all elements of all these things. It would be a Herculean undertaking.

  • by blair1q ( 305137 ) on Wednesday November 10, 2010 @07:58PM (#34191804) Journal

    But that's expensive, slow, and labor-intensive.

    Trojan bots are cheap, easy to distribute, and hard to double against you.

  • by Anonymous Coward on Wednesday November 10, 2010 @09:36PM (#34192530)

    Except that specific programs have a "finite" number of axioms.
    Here is a trivial example.
    You can write, and prove correct, a program that when fed a list of three integers of 32 bits or less will always return the largest. The range is limited, so it is provable. (A minimal sketch of such a function follows.)
    Almost all programs deal with finite datasets, and computers have a finite set of states; Gödel deals with infinite systems.
    Gödel states that not every possible program can be proven, not that no program can be proven.
    The key is to limit the input data. Once you do that, the system can become deterministic, and then provable.
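
    A minimal C sketch of such a function; the asserts state the correctness property (result >= each input, result equal to one of them) rather than formally prove it:

        #include <assert.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Largest of three 32-bit integers. The input domain is finite
         * (2^96 triples), so "returns the largest" is a complete,
         * checkable specification. */
        static int32_t max3(int32_t a, int32_t b, int32_t c)
        {
            int32_t m = a;
            if (b > m) m = b;
            if (c > m) m = c;
            return m;
        }

        int main(void)
        {
            int32_t m = max3(-5, 42, 7);
            assert(m >= -5 && m >= 42 && m >= 7);  /* >= each input */
            assert(m == -5 || m == 42 || m == 7);  /* equals one of them */
            printf("%d\n", m);  /* prints 42 */
            return 0;
        }

    A real proof would discharge those two properties for every possible triple with a verifier rather than an assert, but the specification itself is exactly that small.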

  • by drsmithy ( 35869 ) <drsmithy@nOSPAm.gmail.com> on Wednesday November 10, 2010 @10:00PM (#34192636)

    But it's almost certainly true. Just look at OpenBSD's record. They went for a full decade without any vulnerabilities in the base system before one was eventually found.

    Remote vulnerabilities. In the default install. Which isn't that hard to achieve when your default install doesn't really do much and hardly anyone uses your system.

    Just imagine what you could get from programmers that are both paid and required to use secure coding practices.

    Who didn't have to work towards any specific deadlines or goals? And had essentially nothing to lose if they didn't get there? I'd expect much the same.

    When you have nothing in particular to achieve, all the time in the world to achieve it, and no real consequences if you don't, then you'd expect anything that was done would be done well. However, the real world doesn't work like that.

  • by jvkjvk ( 102057 ) on Thursday November 11, 2010 @09:43AM (#34195466)

    Security does not come from obscurity, but insecurity often does.

    Security comes in many forms, and obscurity is actually quite a good form, as long as there are other layers.

    The "best" security comes from defense in depth and obscurity can certainly be part of that, and in fact probably should be. I will go through a few different layers where security by obscurity actually works quite well.

    Consider a random binary on Usenet. Even if I 'encrypt' the payload with ROT-13, I have achieved a decent amount of security simply by obscuring the target in a sea of ones and zeros.
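
    (For the record, ROT-13 just rotates each letter 13 places and is its own inverse - a toy, which is rather the point. A minimal C sketch:)

        #include <stdio.h>

        /* ROT-13: rotate ASCII letters 13 places; pass everything else
         * through. Applying it twice restores the input - obscurity,
         * not encryption. */
        static char rot13(char c)
        {
            if (c >= 'a' && c <= 'z') return (char)('a' + (c - 'a' + 13) % 26);
            if (c >= 'A' && c <= 'Z') return (char)('A' + (c - 'A' + 13) % 26);
            return c;
        }

        int main(void)
        {
            const char *msg = "Uryyb, Hfrarg!";  /* decodes to "Hello, Usenet!" */
            for (const char *p = msg; *p != '\0'; p++)
                putchar(rot13(*p));
            putchar('\n');
            return 0;
        }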

    Now, consider the challenge-response system. It used to be that some systems would tell you whether your username, your password, or both were bad. It turns out that this lack of obscurity gives attackers quicker access to systems, since they can discover valid usernames by letting the system tell them which ones exist. Simply obscuring the error response fixes this.

    And this brings up a good point about obscurity as a security practice: if you use it - don't tell anyone! You would have thought this was a "duh", but the previous example is a great one in that regard. Usernames are simply obscured data, but if the login system can be used as an oracle ... not so much.
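
    A sketch of the oracle-free version; user_exists, password_ok, and the hard-coded credentials are invented placeholders for whatever the real system uses:

        #include <stdbool.h>
        #include <stdio.h>
        #include <string.h>

        /* Hypothetical stand-ins for a real credential store. */
        static bool user_exists(const char *user)
        {
            return strcmp(user, "alice") == 0;
        }

        static bool password_ok(const char *user, const char *pass)
        {
            return user_exists(user) && strcmp(pass, "hunter2") == 0;
        }

        /* One generic message no matter which check failed, so a probe
         * can't learn whether the username alone was valid. */
        static const char *login(const char *user, const char *pass)
        {
            if (password_ok(user, pass))
                return "welcome";
            return "invalid username or password";  /* never "bad password" */
        }

        int main(void)
        {
            printf("%s\n", login("alice", "wrong"));  /* same message... */
            printf("%s\n", login("mallory", "x"));    /* ...either way */
            return 0;
        }

    (A real implementation would also want roughly constant-time behavior, since response timing can leak what the message no longer does.)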

    Now, on a systems level, network security is often predicated on obscurity - that's why you don't find many companies publishing their internal network maps! If those maps were published, attackers would have a much easier time penetrating the organization. Security by obscurity.

    Now, on a home level, if I am using port knocking (as one example) as one means of controlling access to ssh, then every attacker who does not know this will fail out of the box. Of course, it is better if I also have key-based authentication turned on, and better still with a password required on top of it. And better yet if I simply run it on a non-standard port - which is security through obscurity.

    So, while I wouldn't rely strictly on security through obscurity (except in cases where it makes sense), it is a valuable tool in the security toolbox, and it can generally be a show-stopper for an attacker who isn't able to obtain the knowledge. But again, security comes from defense in depth, and one layer of that depth should be obscurity.

    Regards.
