Government / Security / Your Rights Online

NSA Says Its Secure Dev Methods Are Publicly Known

Trailrunner7 writes "Despite its reputation for secrecy and technical expertise, the National Security Agency doesn't have a set of secret coding practices or testing methods that magically make their applications and systems bulletproof. In fact, one of the agency's top technical experts said that virtually all of the methods the NSA uses for development and information assurance are publicly known. 'Most of what we do in terms of app development and assurance is in the open literature now. Those things are known publicly now,' Neil Ziring, technical director of the NSA's Information Assurance Directorate, said in his keynote at the OWASP AppSec conference in Washington Wednesday. 'It used to be that we had some methods and practices that weren't well-known, but over time that's changed as industry has focused more on application security.'"
  • by Anonymous Coward on Wednesday November 10, 2010 @05:55PM (#34190728)

    ...it is definitely possible to write secure software if you just simply follow sound and smart development methods and practices... and don't write half-assed, slipshod, thrown-together-in-a-hurry code.

    • security doesn't come from obscurity
      • "Trust us. Honesty is our business."

        -- Sincerely, the 'Spooks'.

        • by LWATCDR ( 28044 )

          Programming is math. There really are no secrets in math. It is the same everywhere.

          • Theory and Practice.

            Different.

            • Re: (Score:3, Insightful)

              Actually, programming is one of the few disciplines where practice can be exactly the same as the theory - the bits and bytes are all the same, they don't break from material fatigue; and if you write software for which you have a proof of correctness, it will simply work correctly. Few other branches of human endeavor are free from the evils of the material world to such a degree.
              • Re: (Score:1, Insightful)

                by Anonymous Coward

                Unless you've implemented every bit of the software stack, from the firmware to the OS to the compiler/runtime to the applications, you could potentially have issues.

                So I think the "theory and practice are different" might still apply, as nobody has the luxury of the time to formally prove all elements of all these things. It would be a Herculean undertaking.

                • "So I think the "theory and practice are different" might still apply, as nobody has the luxury of the time to formally prove all elements of all these things. It would be a Herculean undertaking."

                  Even if you did have an infinite amount of time, there may also be errors in the proof. Even then, if all proofs are indeed 100% correct, one still runs into Godel Incompleteness [wikipedia.org] and what might be thought of as a variation of Heisenberg uncertainty [wikipedia.org] (i.e. the act of measuring changes the results), especially in ha

                  • by LWATCDR ( 28044 )

                    What?
                    You have taken two theories that you do not really understand and have mixed them up as badly as that stupid book Zen of Quantum Physics.
                    "Heisenberg uncertainty principle states by precise inequalities that certain pairs of physical properties, such as position and momentum, cannot be simultaneously known to arbitrarily high precision."
                    That has nothing to do with Godel's theorem except that both are taken as proof that the Universe is not deterministic. They are not in any other way related.

                    And Godel does

                    • First of all, I said that it could be thought of as a kind of Heisenberg uncertainty... and then qualified what I meant with a parenthetical, which alluded to the Observer effect [wikipedia.org]. This is known as desirable conflation [wikipedia.org]. Secondly, if you cannot figure out how Godel incompleteness comes into play in a discussion about using mathematical theory to prove an almost infinitely complex set of axioms, then trying to explain it to you is certainly a fool's errand, even if I cannot prove it ;-)

                      Hint: From the very
                    • Re: (Score:1, Insightful)

                      by Anonymous Coward

                      Except that specific programs have a "finite" number of axioms.
                      Here is a trivial example.
                      You can write and prove a program that when fed a list of three integers of 32 bits or less will always return the largest. The range is limited so it is provable.
                      Most all programs deal with finite datasets and computers have a finite set of states. Godel deals with infinite systems.
                      Godel states that not every possible program can be proven; it does not state that no program can be proven.
                      The key is to limit the input da
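
                      A minimal C sketch of the kind of bounded function being described (the function name and the assert-style postcondition are illustrative, not from the thread):

                        #include <assert.h>
                        #include <stdint.h>

                        /* Returns the largest of three 32-bit integers. The input domain is
                           finite, so the postcondition below can in principle be stated and
                           proven for the fixed-width type. */
                        static int32_t max3(int32_t a, int32_t b, int32_t c)
                        {
                            int32_t m = a;
                            if (b > m) m = b;
                            if (c > m) m = c;
                            /* m is one of the inputs and no input exceeds it. */
                            assert((m == a || m == b || m == c) && m >= a && m >= b && m >= c);
                            return m;
                        }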

                    • "You can write and prove a program that when fed a list of three integers of 32 bits or less will always return the largest. The range is limited so it is provable."

                      No, you cannot. You can prove that it should do so, but you cannot prove that it will always do so. For example, your assumption is that the interpreter or compiler behaves the way you think it does, when you in fact have operated on numerous unproved assumptions [bell-labs.com].

                    • "I think you don't know the meaning of the word 'proof'. From your perspective, nobody can ever prove anything at all."

                      I know the meaning of the word proof. You just don't know the meaning of the word Software, which is my whole point. Math and Software are not the same, and mathematicians should stick to Math, and Software Engineers should worry about Software rather than Mathematical theory, as the two are quite different indeed.

                    • Or hardware fails. When an instruction to add 1 to the PC register simply doesn't, it can cause everything to fall apart. That's bad, but it's obvious. Worse still is when data gets a random blip. Now the program continues to function, but the output is wrong. And not necessarily obviously wrong. Just wrong enough to propagate.

                      If you are operating in the real world you must work on a system of probabilities. What's the acceptable rate for this thing to fail? If it can get kicked over weekly without fuss,
                    • Excellent elaboration. Now all we have to do is sit back and wait for all the people who were loudly proclaiming that it's all just math to reply back and admit they are wrong. Should we take bets on the likelihood of that happening? ;-)

                      Cheers!
                    • "You can not have a proof without starting with "unproved assumptions", as you put it. Once again displaying that you don't know the meaning of the word."

                      Bullshit. The fact that you admit that you start with unproved assumptions is an admission that you cannot prove the correctness. The fact that you are posting as an AC, but still following up with what is tantamount to a troll would be proof that you are a troll, were I not operating on the unproven assumption that you are in fact the same poster as you

                • Unless you've implemented every bit of the software stack, from the firmware to the OS to the compiler/runtime to the applications, you could potentially have issues.

                  Actually, it's even better to use your own CPU - possibly something based on MachineForth. Mind you, modern CPUs have bugs as well.

              • In most cases, this is true. In paranoid cases, however, I suspect Ken Thompson would disagree with you [vxheavens.com].
              • Actually, programming is one of the few disciplines where practice can be exactly the same as the theory - the bits and bytes are all the same, they don't break from material fatigue; and if you write software for which you have a proof of correctness, it will simply work correctly. Few other branches of human endeavor are free from the evils of the material world to such a degree.

                I disagree. If you're programming the OS it might be true (with narrow hardware compatibilities). However, as soon as you write an application for a user, theory is useless. Users do the strangest things to their OS. One user might throw away all RST packets at the firewall because they read about Sandvine when Comcast was throttling. Another user tried to fix his own Windows box, deleting important Windows registry keys, so Explorer freezes randomly. Another user overmounted a directory over /etc, so

                • Thank you.

                  Someone who has written software outside of class.

                • by LWATCDR ( 28044 )

                  You are confusing a mathematically correct program with one that will do the "right thing" no matter what the input is.
                  What a correct program will do is only behave in a deterministic way.
                  An example would be: if Explorer was a "correct" program and you deleted some registry keys, it would exit with an error message. A program that finds any input state that is outside of a specified range and cannot be healed should terminate with an error condition. That is, if the OS was also "correct".
                  The real problem becom

              • "Beware of bugs in the above code; I have only proved it correct, not tried it." -- Donald Knuth [stanford.edu]

              • Except for hardware glitches.

            • by RMH101 ( 636144 )
              Or "In theory, theory and practice are the same. In practice, they are different"!
      • by Applekid ( 993327 ) on Wednesday November 10, 2010 @06:05PM (#34190828)

        That's a closed source/open source distinction. It has nothing to do with development methodology... except that there are more eyes when it's open.

        Depending on whose eyes for closed source, I'm pretty sure the NSA has plenty of great eyes looking over code.

        • by blair1q ( 305137 )

          Disagree. Having a correct methodology is more efficient than having many extra eyes that aren't following any particular methodology.

          The trick is having a correct methodology, and applying it correctly.

        • except that there are more eyes when it's open

          There are potentially more eyes. In practice, a lot of open source code is only ever read by its author, or occasionally by the person committing it if that's not the same person. In contrast, NSA code will go through a code review process that ensures that several people look at it.

          A significant part of the point of pair programming is that it ensures that at least two people have read the code, which is one more than a lot of code gets (open or closed).

      • by icebike ( 68054 ) on Wednesday November 10, 2010 @06:26PM (#34190968)

        security doesn't come from obscurity

        Exactly right.

        The best security is the kind where everyone knows how it works, but even given the source code, you can't beat it, or at least not in any useful length of time.

        That being said, the automated code inspection packages you can buy these days look only for the obvious noobie programmer mistakes.

        SELinux, originally from NSA, solves many of the problems of running untrusted code on your box, but even that is not 100% secure, and the maintenance problems it introduces mean that it is seldom used in real life.

        The problem is not how this agency (the NSA) cleans up their code.

        The problem is that we don't know what backdoors exist in our hardware and our operating systems. Because so much code is embedded in silicon, and so few people actually look at that code, it's easy to imagine all sorts of pwnage living there.

        A compromised Ethernet card (just saying, by way of example) would be both obscure and hard to detect, and would have access to just about everything going in and out of your machine.

        Security does not come from obscurity, but insecurity often does.

        • Why would a compromised Ethernet card be any more dangerous than a compromised Ethernet cable, which presumably their networks are designed to protect against? In other words, wouldn't all data that the Ethernet card sees already be 1024-bit encrypted?
          • Re: (Score:3, Informative)

            by icebike ( 68054 )

            Because the card has smarts, and the cable does not.

            Because the card lives on your bus, and the cable does not.

            But try not to belabor the point, as I said, it was just an example. Substitute any other device resident in your computer which you feel better demonstrates the point.

            • Re: (Score:3, Interesting)

              Because the card lives on your bus, and the cable does not.

              More specifically, most devices on the bus can do DMA to host memory, which enables them to read and write any byte of memory, completely bypassing OS memory protection.

              In fact, firewire ports are a favorite of the digital forensics guys for exactly this reason - they can come along, plug their dohickey into the firewire port of most any PC that has one and do a complete memory dump of the system without the OS or any other program even noticing.

              • by blueg3 ( 192743 )

                Except that that technique is not widely used, since it's extremely prone to failure (usually resulting in a blue-screen or such). As a fragile technique that requires a specialist on hand when you encounter a live machine, it doesn't see a whole lot of field use.

        • Re: (Score:3, Insightful)

          by jvkjvk ( 102057 )

          Security does not come from obscurity, but insecurity often does.

          Security comes in many forms, and obscurity is actually quite a good form, as long as there are other layers.

          The "best" security comes from defense in depth and obscurity can certainly be part of that, and in fact probably should be. I will go through a few different layers where security by obscurity actually works quite well.

          Consider a random binary on Usenet. Even if I 'encrypt' the payload with ROT-13 I have achieved a decent amount of security simply through obscuring the target in a sea of ones and

    • ...it is definitely possible to write secure software if you just simply follow sound and smart development methods and practices... and don't write half-assed, slipshod, thrown-together-in-a-hurry code.

      Proof? I don't see any proof in the article that the NSA produces secure software, or even a claim that they do. Instead, the NSA Technical Director quoted in the article said "even within the NSA, the problems of application security remain maddeningly difficult to solve." That doesn't sound like they have solved the problem, but that they, too, are grappling with a fundamental issue in software development.

    • It's not necessarily proof of anything. Are the NSA's applications actually bulletproof? They might distribute their coding practices but I'm pretty sure they don't distribute their source code or their applications. Therefore, no evaluation of their security can be made. Therefore, there's no evidence to show anything about the quality of their practices.

      I'm not saying they're wrong. In fact, evaluation of other, open, software indicates that security does stem from good coding practices. I'm just sa
      • by phek ( 791955 )

        actually you're wrong, they do distribute the source code to their applications whenever they can (their code is often just modifications to proprietary software at which point they can't redistribute it). SELinux is a good example of this, it was started and originally released/maintained by the NSA.

        There is absolutely no reason (other than copyright violations) for the NSA (or any other government agency) to not release more secure methods/code. Doing so will provide our nation with a more secure infras

  • by dkleinsc ( 563838 ) on Wednesday November 10, 2010 @05:57PM (#34190742) Homepage

    If the NSA has something that really is Schneier-proof, they wouldn't tell the public. And understandably so, since part of their job is to ensure signal security for US agencies that deal in classified information.

    • by hedwards ( 940851 ) on Wednesday November 10, 2010 @06:23PM (#34190946)
      But it's almost certainly true. Just look at OpenBSD's record. They went for a full decade without any vulnerabilities in the base system before one was eventually found. And that's from a group of mostly volunteers. Just imagine what you could get from programmers that are both paid and required to use secure coding practices.

      What's really embarrassing is that most of it has been known about for quite some time, but for one reason or another the organization funding the programming doesn't feel like paying for it to be done securely. It's a similar problem to programming style.
      • by lewiscr ( 3314 ) on Wednesday November 10, 2010 @06:29PM (#34190994) Homepage

        Just imagine what you could get from programmers that are both paid and required to use secure coding practices.

        Windows XP?

      • Re: (Score:3, Informative)

        by LWATCDR ( 28044 )

        Security doesn't sell in the consumer market.
        Mainframe and minicomputer OSs and applications tended to be very secure. VMS and IBM's OSes were and are very secure. PCs come from the microcomputer world. Security was never an issue with them. I mean they were single-user systems and almost never networked. Even when you had networks, they tended to be LANs.
        It is the mindset that security is an afterthought. Why should a picture-viewing program ever worry about an exploit?

        On the PC side it just was never a "

        • "PCs come from the microcomputer world. Security was never an issue with them. I mean they where single users systems and almost never networked. ... On the PC side it just was never a feature" worth putting any effort into until recently."

          Unless by "recently" you mean since 1993 [slackware.com], you are quite mistaken.

          • by LWATCDR ( 28044 )

            Please, there was Unix for the PC before Slackware. And no, security wasn't a feature that sold, or even much of a worry, even in '93.
            Security as a feature that sold in the consumer space? Windows 95, 98, and ME all show clearly that it wasn't. Windows 2000 and XP were big leaps forward. Vista is ME 2.0 and will soon be forgotten. Seven is much better.
            BTW, Unix in '93 also was not all that secure. No ACLs, and telnet and FTP were commonly used; SSH wasn't even released until 1995! And then it took a while to c

      • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Wednesday November 10, 2010 @10:00PM (#34192636)

        But it's almost certainly true. Just look at OpenBSD's record. They went for a full decade without any vulnerabilities in the base system before one was eventually found.

        Remote vulnerabilities. In the default install. Which isn't that hard to achieve when your default install doesn't really do much and hardly anyone uses your system.

        Just imagine what you could get from programmers that are both paid and required to use secure coding practices.

        Who didn't have to work towards any specific deadlines or goals? And had essentially nothing to lose if they didn't get there? I'd expect much the same.

        When you have nothing in particular to achieve, all the time in the world to achieve it, and no real consequences if you don't, then you'd expect anything that was done would be done well. However, the real world doesn't work like that.

    • It's just software, we're not talking about something that takes billions of dollars worth of resources to produce. If the couple hundred software guys that the NSA employs can think of something, you can put good money on at least one of the hundreds of thousands of software guys that don't work for the NSA coming up with a similar idea. Now, if we were talking about some novel decryption scheme sure, there aren't that many people working on that outside of intelligence circles. But we're talking about

    • Re: (Score:2, Interesting)

      by jd ( 1658 )

      It depends. The best place to hide something is in plain sight. And the best way to hide encrypted somethings is in a sea of equally encrypted somethings. If the NSA had some algorithm that they felt OK with others knowing and also using themselves, then any traffic of theirs using said algorithm would be indistinguishable from any other traffic. An attacker would need to decrypt everything in order to establish whether or not anything was being sent that was of interest. Even if there was a vulnerability i

      • by phek ( 791955 )

        It may just be me, but as someone who has been a sysadmin and developer for high-traffic sites, making everything on a site HTTPS isn't practical at all. HTTPS uses a LOT more resources than HTTP. You would roughly need 3 times the number of servers to hide something that's already encrypted. A MUCH better solution would be to use only strong, non-anonymous ciphers for your encrypted pages.

        • by jd ( 1658 )

          HTTPS uses more resources because people use software encryption (which is expensive). If you used hardware acceleration, you'd have no overheads at all at the machine level. You wouldn't need even one extra server. And hardware acceleration doesn't cost the same as 2 beefy servers.

          Secondly, you're not limited to HTTPS - most firewalls are quite capable of supporting IPSec in opportunistic mode (ie: keys are generated and exchanged at the time the tunnel is set up). It serves much the same purpose, in that

          • by phek ( 791955 )

            you've just made so many holes in the setup with what you've said and your DoD statement is incorrect. Per what you said, the DoD wants you to use encryption as soon as possible to avoid saying anything over unencrypted talk. That means they do trust encryption (obviously only to a point). There are plenty of ways to get information pre/post encryption which is what they're worried about and what you've presented with your other solutions.

            • by jd ( 1658 )

              They trust encryption combined with a lack of context. I specified that they do not trust encryption WITH context. Do try and read before you bitch.

              And, no, hardware encryption has no holes. Neither does encryption via an FPGA. Nor indeed does an IPSec tunnel. Not one of these offers context and all offer encryption as good, or better, than SSL.

              Using a DMZ is secure, since the unencrypted network is not publicly visible. It is an isolated network. Having a firewall that only permits traffic to/from the prox

              • by phek ( 791955 )

                Uhmm, actually I work in the security industry and your setup just failed a simple PCI audit.

                "A diskless, OS-less proxy is virtually impossible to compromise"
                If you think this is a valid statement you have no business maintaining any network.

                I also have no idea what point you're trying to argue. I simply said that providing encryption for all traffic of a high traffic site isn't practical. If you have a high traffic site, then most of your data sent doesn't need to be secure. Of course there are a few exc

                • by jd ( 1658 )

                  If there is no OS (ie: it's a bare-metal application), there's no user accounts, there's no shells, there's no login daemons (there's no daemons or services of any kind), there's no threads (threads need an OS), it's a pure round-robin (since anything else needs an OS).

                  Since there's no management (no OS, no threads), all packets are dumb. You have packets going in, getting decrypted and being passed to the internal network. Likewise, packets come in, get encrypted, get pushed out.

                  Because the maximum size of

                  • by phek ( 791955 )

                    If you have physical access to the network you can simply spoof the proxy (or a target on the network). This would be especially easy to do since that proxy is being used to encrypt traffic for the network and would therefore be sending plain text over the network.

                    I have no idea what you're trying to argue because on one hand you want everything encrypted but on the other hand you have no problem with everything being plain text over the dmz. I also just noticed you said "Using a DMZ is secure, since the u

                    • by jd ( 1658 )

                      My argument is that you're choosing the most expensive implementation of a specification and then blaming the specification for the price.

                      There are affordable, practical ways to achieve the same results and that totally eliminates the price side of your complaint (which, apparently, still has this fictional multiplication of servers which I've already demonstrated is not required).

                      If you want to argue against the idea, fine. But produce a VALID argument, not a bullshit one. I have no respect for bullshit ar

    • Be serious, the government would never lie to us.

  • ...that's what they WANT us to think...
  • Most... (Score:3, Insightful)

    by mbone ( 558574 ) on Wednesday November 10, 2010 @05:59PM (#34190768)

    It's that word most in "Most of what we do..." that may be important here. Most doesn't mean all. Also note he did not mention their cryptographic techniques, which is where I would expect them to be especially advanced.

    • Re: (Score:3, Informative)

      by hedwards ( 940851 )
      But cryptographic techniques aren't where most vulnerabilities are found. Most vulnerabilities are ones which could be avoided using secure programming practices.

      In fact, the FBI failed to break into a set of hard disks encrypted with Truecrypt and another program using 256-bit AES, which pretty clearly indicates that as long as you choose an appropriate encryption algorithm, the vulnerability is almost always going to be in the implementation, user error, or access to the machine.
      • Re: (Score:1, Interesting)

        by Anonymous Coward

        Unfortunately, "secure programming practices" often put the keys, including the master keys for replacing other keys, under NSA control. Take a good look at "Trusted Computing" and who would hold the keys. There was never a procedure enabled for keeping the keys entirely in private hands, only for keeping them in central repositories such as those owned by Microsoft. And never a procedure published for requiring a court order to obtain the keys: the entire thing was handwaved.

        Looking back further, the "secu

    • Also note he did not mention their cryptographic techniques, which is where I would expect them to be especially advanced.

      From a design standpoint, it's cheaper and more effective to leverage solutions that can be widely vetted and tested than it is to work strictly in a closed environment implementing your own solution and hoping you thought of everything. I'd frankly be *very* surprised if the NSA had anything more than potential (if promising) avenues of exploration with regards to "next-gen" encryption

  • by Tom ( 822 ) on Wednesday November 10, 2010 @06:00PM (#34190776) Homepage Journal

    Security, especially in software development, doesn't suffer from the "we don't know how to do it" problem. It suffers from the "we don't have time/budget/patience/interest in doing what we know we should be doing" issue.

    • by faichai ( 166763 )

      Of course. Though budget buys time, which buys patience, and 9/11 pretty much secured interest. Oh look: http://www.upi.com/Top_News/US/2010/10/28/US-intelligence-budget-tops-80-billion/UPI-37231288307113/ [upi.com]

      • Of course. Though budget buys time, which buys patience, and 9/11 pretty much secured interest.

        Perhaps in some parts of government, particularly security oriented agencies like military, CIA, FBI and NSA. But I'd bet that in the majority of government and business 9/11 had little to no impact on security considerations for IT projects. It has certainly not impacted my software projects, and they've been sold to a whole plethora of government agencies and fortune 500 companies.

    • Well, security also generally means a performance hit, so it's also a balance of performance and security. When you're wiretapping half the nation looking for those keywords that send up red flags, you gotta be lightning fast.
      • by Tom ( 822 )

        Mostly nonsense. Unless you are doing some really insane crypto, work with embedded systems or have unusually high requirements (some realtime applications), the performance impact of security largely doesn't matter.

  • Despite its reputation for secrecy and technical expertise, ... virtually all of the methods the NSA uses for development and information assurance are publicly known.
     
      Secrecy doesn't have to extend to every single thing. I'm sure the NSA uses regular toilets too, not the top secret kind. As for the reputation for technical expertise, how does using tried and tested development methods go against that?

    • by hedwards ( 940851 ) on Wednesday November 10, 2010 @06:30PM (#34191000)
      I suspect it's more along the lines of people expecting there to be something significant that they have for writing secure code. I'm willing to bet that the only thing they have that most other organizations don't have is a substantial budget for auditing the code for vulnerabilities. They probably wait longer before deploying code as well until it's been thoroughly vetted.
      • by blair1q ( 305137 )

        I'm willing to bet they have a code base that's been fully developed using secure methodologies.

        Most people don't.

      • Re: (Score:3, Interesting)

        by failedlogic ( 627314 )

        In a corporation, you not only have accounting, HR, managers, VPs and such looking over your budget but you also have investors. If it costs you too much to produce something of "equal" quality to a competitor, they will start asking questions. A problem with insecure code probably won't cost the company the entire business.

        The NSA, I think, mostly has a black budget. There's only a few people who know how much, where and to whom (employees) this money goes to. So there's not really a budget you have to acc

    • Re: (Score:3, Insightful)

      by jd ( 1658 )

      If we start with the fact that the NSA is responsible for the Rainbow Series, partly responsible for Common Criteria, totally responsible for the NSA guidebook on securing Linux, and also totally responsible for the concepts in SELinux (remember, they talk about methods, not code), it follows that the NSA is implying that the processes used to develop this public information are rigorous, sound, and the same methods the NSA uses internally for projects they don't talk about. It actually doesn't say that what the NS

      • by blair1q ( 305137 )

        IOW, if you go here [cert.org] you'll get what they have.

        Not their code. Just their style.

        • by jd ( 1658 )

          That's part of things, but not the whole of things. The CERT stuff eliminates a lot of the bugs that introduce security holes, but it doesn't really cover any of the access controls within the app or between the app and OS, nor does it really cover how to have provably secure software*.

          *If you can prove that your dynamic memory library, your I/O libraries and your access control library are correct, and that all dynamic memory and I/O accesses that have the potential to do Bad Things when used without autho

  • And thats exactly what they want you to think...

    • Indeed, because the new mind control devices are only blocked by those stupid cheese head hats.
  • What about all this "Setec Astronomy" business then?
  • by boristdog ( 133725 ) on Wednesday November 10, 2010 @06:34PM (#34191040)

    The Soviets almost never used to crack codes. They just social-engineered (blackmail, sex, gifts, schmoozing, etc.) to get all the information they wanted.

    It's how Mitnick did most of his work as well.

    • Re: (Score:3, Insightful)

      by blair1q ( 305137 )

      But that's expensive, slow, and labor-intensive.

      Trojan bots are cheap, easy to distribute, and hard to double against you.

  • by Anonymous Coward on Wednesday November 10, 2010 @06:45PM (#34191150)

    Writing bulletproof code isn't really all that hard, but it does take discipline. Discipline to use only those constructs which have been verified with both the compiler and linker.

    Some simple things that coders can do:
    - Avoid the use of pointers.
    - Initialize all variables to known values.
    - Perform comparisons with the LHS using a static variable, so you don't accidentally get an assignment instead of a comparison
    - When you are done with a value, reset it to a "known" value. Zero is usually good.
    - Keep functions less than 1 page long. If you can't see the entire function on a single editor page, it is too long.

    Simple.

    BTW, I wrote code for real-time space vehicle flight control systems. When I look at OSS and see variables not set to initial values, I cringe. Sure, it is probably ok, but there isn't any cost to initializing the variables. This is a compile-time decision. Without knowing it, many programmers are counting on memory being zero'ed as the program gets loaded. Not all OSes do this, so if you are writing cross platform code, don't trust it will happen. Do it yourself.

    Oh, and if you want secure programs, loosely typed languages are scary.
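
    A small C sketch pulling a few of these habits together (illustrative only, not the poster's flight-control code): explicit initialization, a constant on the left-hand side of a comparison, and resetting a value once it is no longer needed.

      #include <string.h>

      #define BUF_LEN 32

      int process_secret(const char *input)
      {
          char buf[BUF_LEN] = {0};    /* initialize to a known value */
          int rc = -1;                /* assume failure until proven otherwise */

          if (NULL == input)          /* constant on the LHS: the typo "if (NULL = input)"
                                         fails to compile, while "if (input = NULL)"
                                         silently assigns */
              return rc;

          strncpy(buf, input, BUF_LEN - 1);
          buf[BUF_LEN - 1] = '\0';

          /* ... work with buf ... */
          rc = 0;

          memset(buf, 0, sizeof buf); /* done with the value: reset it to a known state
                                         (an optimizer may elide a plain memset here) */
          return rc;
      }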

    • Re: (Score:2, Interesting)

      Initialize all variables to known values

      And remember to reset them to known values as soon as they are no longer necessary. Not only is it good practice, whether or not the compiler has a job, but it encourages the programmer to keep his variables in mind.

    • by Jahava ( 946858 ) on Thursday November 11, 2010 @09:30AM (#34195402)

      Writing bulletproof code isn't really all that hard, but it does take discipline. Discipline to use only those constructs which have been verified with both the compiler and linker.

      Some simple things that coders can do: - avoid the use of pointers.

      Pointers aren't themselves bad; they just add some layers of complication to the otherwise stack-oriented game. The only reason the stack is nicer than pointers is because they're implicitly managed for you.

      Rather than avoid pointers, what you need is good code structure. Design functions that either manage the lifecycle of a pointer or are explicitly clear about how and what the pointer is going to be used as. Use const aggressively, and avoid typecasting as much as possible. Using good pointer naming techniques and management functions also dissipates the burden. Pointers are too useful to avoid religiously ... rather, build pointer security and management techniques into your coding style from the ground up. Choose descriptive names and try to constrain each pointer to its specific type (this lets the compiler help you keep your pointers straight).
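
      A brief sketch of what that lifecycle-owning style can look like in C (hypothetical buffer_t names; the create/destroy pairing and the const-qualified reader are the point):

        #include <stdlib.h>

        typedef struct {
            char  *data;
            size_t len;
        } buffer_t;

        /* The _create/_destroy pair owns the pointer's whole lifecycle. */
        buffer_t *buffer_create(size_t len)
        {
            buffer_t *b = calloc(1, sizeof *b);
            if (NULL == b)
                return NULL;
            b->data = calloc(1, len);
            if (NULL == b->data) {
                free(b);
                return NULL;
            }
            b->len = len;
            return b;
        }

        void buffer_destroy(buffer_t **bp)
        {
            if (NULL == bp || NULL == *bp)
                return;
            free((*bp)->data);
            free(*bp);
            *bp = NULL;             /* caller's pointer is reset to a known value */
        }

        /* Readers take const: the signature documents that nothing is modified. */
        size_t buffer_length(const buffer_t *b)
        {
            return (NULL != b) ? b->len : 0;
        }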

      Initialize all variables to known values.

      Meh, I'm divided on this one. It's one thing to explicitly initialize global variables to either zero (which costs nothing, since they just end up in BSS sections) or non-zero (which puts them statically in the data segment). Stack variables, on the other hand, only really need to be initialized before they're used the first time. Pre-initializing them could lead to wasted instructions initializing them multiple times or cause them to be initialized in all code paths when they're only used in a few. My general rule of thumb is to be smart about it and, once again, to use naming conventions.
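
      A tiny illustration of the distinction being drawn here (the .bss/.data placement is the conventional behavior of C toolchains, stated as an assumption rather than something the poster specified):

        #include <stdio.h>

        int zeroed_counter;         /* static storage duration, no initializer:
                                       ends up in .bss, costs nothing in the binary */
        int tuned_limit = 4096;     /* non-zero initializer: stored in .data */

        int sum_first(int n)
        {
            int total = 0;          /* stack variable: initialized right before use */
            for (int i = 1; i <= n; i++)
                total += i;
            return total;
        }

        int main(void)
        {
            printf("%d %d %d\n", zeroed_counter, tuned_limit, sum_first(10));
            return 0;
        }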

      Perform comparisons with the LHS using a static variable, so you don't accidentally get an assignment instead of a comparison

      Great tip; it's weird at first writing "if( NULL != p )", and you get a few funny stares, but after seeing enough "if( i = 10 )"s lying within seemingly-functional code, it's an easy selling point to make.

      - When you are done with a value, reset it to a "known" value. Zero is usually good.

      Definitely do this with pointers, descriptors, and other handle types. It also makes cleanup and pointer management easier. Less important to do with things like iterators and intermediate variables.

      - Keep functions less than 1 page long. If you can't see the entire function on a single editor page, it is too long.

      It's a good rule of thumb. I would like to add "any time you can't do this, make absolutely certain that you're doing it for a good reason."

      Good tips, though. One thing I'd like to add: -Wall -Wextra -Werror (or your language's equivalent). If your code can't compile without a single warning, then you need to re-write your code and either manually disarm situations (e.g., override the compiler's common-sense with an assurance that you know what you're doing) or fix the warnings, which are actually bugs and errors. It's always fun to take someone's "bulletproof" code and turn on these flags and watch the crap spill out. Warnings are amazing, and they are absolutely your friend when it comes to writing bug-free and secure code.
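
      For a concrete illustration (not from the thread), gcc or clang with those flags turns the classic assignment-in-a-condition slip into a hard error:

        /* Compile with: cc -Wall -Wextra -Werror demo.c
           The condition below triggers -Wparentheses ("suggest parentheses around
           assignment used as truth value"), which -Werror promotes to an error. */
        #include <stdio.h>

        int main(void)
        {
            int i = 0;

            if (i = 10)             /* meant to be: if (i == 10) */
                printf("i is ten\n");

            return 0;
        }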

      • Re: (Score:3, Informative)

        by TheRaven64 ( 641858 )

        Stack variables, on the other hand, only really need to be initialized before they're used the first time. Pre-initializing them could lead to wasted instructions initializing them multiple times or cause them to be initialized in all code paths when they're only used in a few.

        Unless your compiler really sucks, it will perform some trivial dataflow analysis and not generate code for the initialisation if the value is never used. Even really simple C compilers do this. If the value is used uninitialised on any code paths, then the initialisation will be used (although it may be moved to those code paths), and you don't want the compiler to remove it.

        From the flags you recommend, I'm guessing that you use gcc, which not only does this analysis but will even tell you if the value

      • Great tip; it's weird at first writing "if( NULL != p )", and you get a few funny stares, but after seeing enough "if( i = 10 )"s lying within seemingly-functional code, it's an easy selling point to make.

        I'm strongly against this. I use a compiler so I can write clear code, rather than assembly language. A compiler can catch all unintended assignments like this, by giving a warning whenever the result of an assignment is used as the condition, without any further comparison.

        I find that it's much clearer

    • by Kjella ( 173770 )

      Some simple things that languages can do:

      - Have all variables initialize to known values. I mostly program in C++/Qt, and QString, QByteArray, etc. don't need initialization. All numbers should initialize to 0, all pointers to NULL.
      - Don't make the difference between assignment and comparison be a simple typo. If I were to design a language, "=" would not be a valid operator. ":=" for assignment, "==" for comparison. (You could keep all the "+=" etc. but not plain "=")
      - Smarter scoping hints, like letting you

    • Re: (Score:3, Interesting)

      by TheRaven64 ( 641858 )

      Without knowing it, many programmers are counting on memory being zero'ed as the program gets loaded

      Any compliant C compiler will initialise all statics to 0 (stack values are different - they are not automatically initialised). From the C99 spec, 5.1.2:

      All objects with static storage duration shall be initialized (set to their initial values) before program startup.

      From 6.7.8.10:

      If an object that has static storage duration is not initialized explicitly, then:

      • if it has pointer type, it is initialized to a null pointer;
      • if it has arithmetic type, it is initialized to (positive or unsigned) zero;
      • if it is an aggregate, every member is initialized (recursively) according to these rules;
      • if it is a union, the first named member is initialized (recursively) according to these rules.

      You can guarantee that any standards-compliant C implementation will do this. You can't guarantee anything about an implementation that doesn't comply with the standard - it may deviate from it in other ways.
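
      A small C program illustrating that guarantee (standard C99 behavior, so any conforming implementation should print the same values; the variable names are just for illustration):

        #include <stdio.h>

        /* No explicit initializers: per C99 6.7.8.10 these still have
           well-defined initial values before main() runs. */
        static int    counter;      /* arithmetic type -> 0         */
        static double ratio;        /* arithmetic type -> +0.0      */
        static char  *name;         /* pointer type    -> NULL      */
        static int    table[4];     /* aggregate       -> all zeros */

        int main(void)
        {
            printf("%d %f %p %d\n", counter, ratio, (void *)name, table[3]);
            /* prints: 0 0.000000 (nil or 0) 0 */
            return 0;
        }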

  • by Anonymous Coward

    One cornerstone of secure software development is the application of formal methods. The NSA Tokeneer project has been made completely open-source, demonstrating the feasibility of applying formal methods to secure development problems.

  • I knew it! (Score:1, Interesting)

    by Anonymous Coward

    They're using Agile practices! They just developed them before anyone else, about twenty years ago!

    Incidentally, this also explains why they haven't done any groundbreaking work in twenty years... ~~~~

  • This won't do anything to convince all the people who believe that the NSA can zoom in and enhance bad-quality photos 10,000 times. Despite it not being possible, the government probably has secret technology. Sigh.

  • For anyone who's read security posts on this site - all too often NSA folks pop up and respond :)

    (and are frequently very helpful)
