NSA Backdoors In Open Source and Open Standards: What Are the Odds? 407

New submitter quarrelinastraw writes "For years, users have conjectured that the NSA may have placed backdoors in security projects such as SELinux and in cryptography standards such as AES. However, I have yet to see a serious scientific analysis of this question, as discussions rarely get beyond general paranoia facing off against a general belief that government incompetence plus public scrutiny make backdoors unlikely. In light of the recent revelations about the NSA's PRISM surveillance program, and the report that Microsoft tells the NSA about bugs before fixing them, how concerned should we be? And if there is reason for concern, what steps should we take individually or as a community?" Read more below for some of the background that inspires these questions.
quarrelinastraw "History seems relevant here, so to seed the discussion I'll point out the following for those who may not be familiar. The NSA opposed giving the public access to strong cryptography in the '90s because it feared cryptography would interfere with wiretaps. They proposed a key escrow program so that they would have everybody's encryption keys. They developed a cryptography chipset called the "clipper chip" that gave a backdoor to law enforcement and which is still used in the US government. Prior to this, in the 1970s, NSA tried to change the cryptography standard DES (the precursor to AES) to reduce keylength effectively making the standard weaker against brute force attacks of the sort the NSA would have used.

Since the late '90s, the NSA appears to have stopped opposing public cryptography and instead (appears to be) actively encouraging its development and strengthening. The NSA released the first version of SELinux in 2000, four years after it canceled the Clipper chip program due to the public's lack of interest. It is possible that the NSA simply gave up its fight against public access to cryptography, but it is also possible that it moved its resources into social engineering — getting the public to voluntarily install backdoors that are inadvertently endorsed by security experts because they appear in GPLed code. Is this pure fantasy? Or is there something to worry about here?"

Comments Filter:
  • This is stupid (Score:5, Insightful)

    by Anonymous Coward on Tuesday July 02, 2013 @08:53AM (#44164243)

    This is fearmongering. Encryption standards that have been adopted are open, and mathematicians go over them with a fine-tooth comb before giving them their blessing. Yes, there is a worry among mathematicians about the NSA developing an algorithm that would permit a pre-computed set of numbers to decrypt all communication. Which is why they make sure it DOESN'T HAPPEN.

    See https://www.schneier.com/essay-198.html

    • Re: (Score:3, Informative)

      by F.Ultra ( 1673484 )
      Not to mention that what became AES was a Dutch(?) algorithm to begin with (Rijndael).
      • Re:This is stupid (Score:5, Insightful)

        by arnodf ( 1310501 ) on Tuesday July 02, 2013 @09:14AM (#44164461)

        Belgian ffs.
        Belgium, I hate it when people mistake us for Dutch!

        • by Anonymous Coward on Tuesday July 02, 2013 @09:19AM (#44164519)

          Belgium - The more awesomer part of the Spanish Netherlands!

        • by Alranor ( 472986 ) on Tuesday July 02, 2013 @09:24AM (#44164589)

          It's all Greek to me

        • by PopeRatzo ( 965947 ) on Tuesday July 02, 2013 @09:39AM (#44164763) Journal

          Belgian ffs.
          Belgium, I hate it when people mistake us for Dutch!

          Seriously, right? They probably don't even know you guys invented spaghetti and kung fu.

          I for one think the Belgs are awesome.

    • Re:This is stupid (Score:5, Insightful)

      by kwikrick ( 755625 ) on Tuesday July 02, 2013 @09:08AM (#44164413) Homepage Journal

      Fearmongering, yes.
      But not impossible.
      It's not so easy to make sure that a program is a correct implementation of a mathematical algorithm or of an open standard.
      A subtle bug (purposeful or not) in a cryptographic algorithm or protocol can be exploited.
      Writing a bug is much easier than spotting it.
      Many applications and OSes get security updates almost daily. They certainly haven't found them all yet.
      Perhaps the NSA has engineered backdoors into our free software at some point, but those vulnerabilities have been patched already.
      Mostly paranoia, then....
      Rick

    • Re: (Score:3, Informative)

      by Anonymous Coward

      Also left out of the summary is that the NSA worked to strengthen the S-boxes in DES against differential cryptanalysis attacks, even though the existence of such attacks was not publicly known at the time.

      http://en.wikipedia.org/wiki/National_Security_Agency#Data_Encryption_Standard

      • Re:This is stupid (Score:5, Informative)

        by Joce640k ( 829181 ) on Tuesday July 02, 2013 @09:55AM (#44164953) Homepage

        This is often quoted as an example of the NSA's supposed superiority in cryptography, but that happened back in the '70s, when there were hardly any cryptographers or computers in the world.

        The knowledge gap between the NSA and independent cryptographers has closed a lot since then.

    • Re:This is stupid (Score:5, Interesting)

      by Hatta ( 162192 ) on Tuesday July 02, 2013 @09:19AM (#44164525) Journal

      Encryption algorithms may be secure, but how sure are you that your implementation is? Debian was generating entirely insecure SSL keys for a couple of years before anyone noticed. Couldn't the NSA do something like that, but perhaps a bit more clever, and remain unnoticed?

      • Encryption algorithms may be secure, but how sure are you that your implementation is? Debian was generating entirely insecure SSL keys for a couple of years before anyone noticed. Couldn't the NSA do something like that, but perhaps a bit more clever, and remain unnoticed?

        But the point is that somebody did notice. Open source software enables a more thorough review (that doesn't mean it will happen, though), since the actual source code is available. Closed source software means you can only monitor inputs and outputs, making it much more likely that a problem goes unnoticed.
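
        The bug referenced here is CVE-2008-0166: a Debian patch left OpenSSL's PRNG seeded by little more than the process ID, so only about 32k distinct keys could ever be generated. A minimal Python sketch of that class of bug (an illustration, not Debian's actual code):

            import random

            def weak_keygen(pid):
                # Linux PIDs default to the range 1..32767, so there are only
                # ~32k possible seeds -- and therefore only ~32k possible "keys".
                rng = random.Random(pid)       # seeded solely by the PID
                return rng.getrandbits(128)    # a supposedly 128-bit key

            # An attacker just enumerates every possible seed:
            victim_key = weak_keygen(pid=4242)
            recovered = next(k for pid in range(1, 32768)
                             if (k := weak_keygen(pid)) == victim_key)
            assert recovered == victim_key     # full key recovery in ~32k guesses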

    • Yep (Score:5, Insightful)

      by Sycraft-fu ( 314770 ) on Tuesday July 02, 2013 @09:20AM (#44164537)

      AES was developed in Belgium by Joan Daemen and Vincent Rijmen. It was originally called Rijndael and was one of the AES candidates. What happened is that NIST put out a call for a replacement for the aging DES algorithm. Rijndael was one of a number of contenders and the one that won the vote. The only thing the NSA had to do with it is that they weighed in on it, and on all the other top contenders, before a standard was chosen, said they were all secure, and have since certified it for use in encrypting top secret data.

      It was analyzed, before its standardization and since, by the world community. The NSA was part of that, no surprise, but everyone looked at it. It is the single most attacked cipher in history, and remains secure.

      So to believe the NSA has a 'backdoor' in it (or, more correctly, that they can crack it) would imply that:

      1) The NSA is so far advanced in cryptography that they were able to discover this prior to 2001 (when it got approved) and nobody else has.

      2) That the NSA was so confident that they are the only group to be able to work this out that they'd give it their blessing, knowing that it would be used in critical US infrastructure (like banking) and that they have a mission to protect said infrastructure.

      3) So arrogant that they'd clear it to be used for top secret data, meaning that US government data could potentially be protected with a weak algorithm.

      Ya, I'm just not seeing that. That assumes a level of extreme mathematical brilliance (that they are basically better than the rest of the world combined) and a complete disregard for one of their missions.

      It seems far more likely that, yes, AES is secure. Nobody, not even the NSA, has a magic way to crack it.

      • Re:Yep (Score:4, Funny)

        by Joce640k ( 829181 ) on Tuesday July 02, 2013 @10:05AM (#44165051) Homepage

        AES ... is the single most attacked cipher in history, and remains secure.

        The 128-bit version remains secure. The 256- and 192-bit versions are believed secure but have shown cracks (they should really have had a couple more encryption rounds).

        The 256/192-bit versions are just re-fiddlings of the 128-bit version, made to fulfill the NIST requirements for key sizes. This was largely a waste of time, since 128 bits can't be brute-forced with any imaginable technology.

        (My advice to any potential cryptography coders out there is to stick with the 128-bit version.)
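
        For anyone taking that advice, here is a minimal AES-128 sketch using the third-party pyca/cryptography library (the library choice is an assumption; the comment above doesn't name one):

            import os
            from cryptography.hazmat.primitives.ciphers.aead import AESGCM

            key = AESGCM.generate_key(bit_length=128)  # the 128-bit variant, per the advice above
            nonce = os.urandom(12)                     # 96-bit nonce; never reuse with the same key
            aes = AESGCM(key)

            ciphertext = aes.encrypt(nonce, b"attack at dawn", None)
            assert aes.decrypt(nonce, ciphertext, None) == b"attack at dawn"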

      • Re:Yep (Score:5, Informative)

        by Noryungi ( 70322 ) on Tuesday July 02, 2013 @10:25AM (#44165331) Homepage Journal

        Let me add a few datapoints here, as a reminder...

        1) The AES competition was launched in part because DES had been cracked by the EFF using a purpose-built brute-force machine. Sources:
        https://en.wikipedia.org/wiki/EFF_DES_cracker [wikipedia.org]
        https://w2.eff.org/Privacy/Crypto/Crypto_misc/DESCracker/HTML/19980716_eff_des_faq.html [eff.org]

        As a reminder, DES was THE standard crypto algorithm, vetted and approved by the NSA. It could be cracked by the EFF only because of Moore's Law and some serious budget and effort.

        2) Public-key cryptography was invented separately at GCHQ (the UK's NSA) and at the NSA itself, several years *before* Diffie-Hellman. Source:
        https://en.wikipedia.org/wiki/Public-key_cryptography#History [wikipedia.org]

        So, yes, these people (NSA/GCHQ) are very good at what they do. They have had a head start of at least 10 years, since for many years cryptography was considered just a branch of mathematics in academic circles. These guys work on nothing but crypto and digital/analog communications, year in, year out. Do not underestimate them.

        3) One of the first electronic computers was delivered to the NSA in the 1950s, when most computer companies (IBM comes to mind) were still struggling to define what a computer was good for; the NSA later suggested improvements to the company that built it. The first Cray supercomputers were likewise delivered straight to the NSA. Sources:

        http://www.nsa.gov/public_info/_files/cryptologic_quarterly/digitalcomputer_industry.pdf [nsa.gov]
        http://www.physics.csbsju.edu/370/mathematica/m1_eniac.pdf [csbsju.edu]

        4) The NSA and GCHQ have a long history of backdoors. They love these things, as they make their life so much easier. Read up on Venona, Enigma, and Ivy Bells: all of these were made possible by intercepting/copying one-time pads, selling "unbreakable" German encryption machines, and tapping undersea Russian cables. And I am willing to bet these are just a small fraction of what these people have done over the years. Sources:

        https://en.wikipedia.org/wiki/Venona_project [wikipedia.org]
        https://en.wikipedia.org/wiki/Enigma_machine [wikipedia.org]
        https://en.wikipedia.org/wiki/Operation_Ivy_Bells [wikipedia.org]

        Again, this is just a small fraction of what NSA and GCHQ have done over the years. So, yes, suspecting backdoors in open-source software is... shall we say... only natural.

        If I were paid to be a professional paranoid, I would be taking a very long, hard look at my computers and telecom equipment right now.

    • Re: (Score:3, Interesting)

      by mitcheli ( 894743 )
      The bigger threat to encryption isn't the pre-computed magic numbers that the NSA may or may not have placed into key algorithms; it is the advance of technology and the resulting obsolescence of the models we use today [wikipedia.org].
    • At the same time, there are only a handful of people who know how to read it. Plus, reading source code is not as easy as writing it.

      My real question is just how much scrutiny has actually been poured over it, and by whom, instead of just making assumptions.
    • by c0lo ( 1497653 )
      Mmm? [marc.info] Are you sure that's enough? [computerworld.com] It wouldn't be quite the first time NSA would have "helped" [cryptome.org] someone.
    • Here's an article [wired.com] about the new encryption standard and its backdoor master key. The facts are as follows:
      1) There exists a set of numbers that could be used as a master key to the system that has since been published as a standard.
      2) NSA created the system.
      3) You can't prove they don't have this skeleton key.
      4) It's their job to do stuff like this.

      Now re-read #1. Mathematically there IS a back door. The question is whether anyone knows the key.
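
      The standard in question is NIST's Dual_EC_DRBG random-number generator. Below is a toy sketch of the *shape* of the suspected trapdoor, transplanted from elliptic-curve points to plain modular exponentiation for brevity; every constant here is made up, and this is not the actual spec. In the real design the published constants are two curve points P and Q, and anyone who knows the secret d relating them can recover the generator's state from its output:

          # Toy analogue of the suspected Dual_EC_DRBG trapdoor.
          p = 2**127 - 1             # a convenient prime modulus (toy parameter)
          Q = 7                      # public constant (toy value)
          d = 0xC0FFEE               # the designer's secret: P = Q^d mod p
          P = pow(Q, d, p)           # published alongside Q as an innocent constant

          def step(state):
              output = pow(Q, state, p)       # what the PRNG emits
              next_state = pow(P, state, p)   # internal state update
              return output, next_state

          # An honest user runs the generator:
          s0 = 123456789
          out1, s1 = step(s0)
          out2, _ = step(s1)

          # The designer, knowing d, recovers the next state from one output,
          # because out1^d = (Q^s0)^d = (Q^d)^s0 = P^s0 = s1:
          predicted_s1 = pow(out1, d, p)
          predicted_out2, _ = step(predicted_s1)
          assert predicted_out2 == out2       # all future output is now predictable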
    • Re:This is stupid (Score:5, Insightful)

      by ledow ( 319597 ) on Tuesday July 02, 2013 @10:20AM (#44165241) Homepage

      The question to ask is:

      What happens for the VERY VERY FIRST TIME this sort of tampering is spotted?

      What if we found something in Linux, or something in PKE, or something in anything we use?

      Would we just go "Oh, well, that's the NSA for you" and then carry on as normal? No. Likely there'd be a complete fork from a clean workbase to start over again, a complete distrust of code from day one, and a complete overhaul of all existing systems.

      It's just not something that, as a government agency, you'd want to get implicated in whatsoever. For a start, you have a backdoor into systems in the German government? Or the Koreans? Holy crap you're in trouble for it being found out.

      And what purpose would it serve, above and beyond traditional spying? Not a lot. The effort and secrecy required, and the implications if you're EVER found out, are far too large to reap any benefit from it.

      It's much more incredibly likely that they are using standard spying techniques (i.e. let's tap X's computer because we know he's involved) than planting things into major pieces of open source software. Closed commercial? That's a different matter but - again - compared to just issuing an order that they do it for you and never speak about it, it's too difficult. And, even then, we've found out that that eventually comes out and has diplomatic effects on entire nations (including allies).

      I don't believe they wouldn't try. I don't believe they wouldn't have some way into the system. I don't believe for a second, though, that they've backdoored something quite so open and visible, or that the people involved in reviewing it wouldn't EVENTUALLY spot it, with the outcry from that having a hundred times greater impact on the world than anything some twat leaks from diplomatic cables.

      I'd be so incredibly disappointed if that was the height of their capabilities, to do something so clumsy and ineffective, and if they couldn't choose their targets better.

      These people are spies. I expect them to perform all manner of dirty manoeuvres as a routine job. But the fact is that good, old-fashioned spying is a million times more effective.

      I would also have to say that an "enemy" of any description who has the capability to use only compiled-from-source software on regulated hardware, and who uses it exclusively in whatever activities might be of interest to the NSA or GCHQ, probably has the resources to verify that code or write it themselves.

      And, you have to remember the old "fake-Moon-landings-Russians" argument - if your enemy is capable of DETECTING that you've done that, and they announce it to the world and show it was you that did it, they'd do it. Just to discredit you. Just to make you forget about the guy in the airport. Just to make you look like fools. Just to prove that THEY know what's going on and it's not so easy to get into their systems.

      If you have a perfect government entity, then yes it's theoretically possible. But in real life, no, I'm sorry, it's just implausible on anything other than a trivial scale. They might get a "euid=root" type bug into the code if they try hard and find a weak target, but to be honest, it's not a concern.

      And if I were really worried, I'd use FreeDOS. Or Minix. Or FreeBSD. Or whatever. And for any "common point" like gcc, well, you can verify those kinds of things with the double-compilation tricks or just by using a different piece of software. Either they would have to have infected EVERYTHING or NOTHING. And I'll go with nothing.
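
      The "double-compilation tricks" referred to are David A. Wheeler's "diverse double-compiling" (DDC). A rough sketch of the idea, under two large assumptions (a reproducible build and a hypothetical single-file compiler source; paths and flags are illustrative):

          # Sketch of diverse double-compiling: build a compiler's source with
          # the suspect binary and with an independent compiler, then use each
          # result to rebuild the source again and compare the outputs.
          import hashlib, subprocess

          def build(compiler, source, out):
              subprocess.run([compiler, source, "-o", out], check=True)
              return out

          def sha256(path):
              with open(path, "rb") as f:
                  return hashlib.sha256(f.read()).hexdigest()

          SRC = "compiler.c"                    # hypothetical one-file compiler source
          build("./suspect-cc", SRC, "cc-a")    # stage 1, suspect bootstrap
          build("clang",        SRC, "cc-b")    # stage 1, independent bootstrap

          # Stage 2: if the suspect compiler is honest and the build is
          # reproducible, both stage-2 binaries must be bit-identical.
          build("./cc-a", SRC, "cc-a2")
          build("./cc-b", SRC, "cc-b2")
          print("clean" if sha256("cc-a2") == sha256("cc-b2")
                else "possible trusting-trust tampering")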

    • Re:This is stupid (Score:5, Insightful)

      by mlts ( 1038732 ) * on Tuesday July 02, 2013 @10:22AM (#44165279)

      We already had a closed algorithm pushed on us in the 1990s -- Skipjack. It was broken shortly after it was declassified.

      Weak algorithms will get torn apart quickly, because there are many people looking for weaknesses, both university researchers as well as criminal organizations.

      The best thing one can do if worried about one algorithm is to cascade. Realistically, three 256-bit algorithms won't give 768-bit security, but about 258 bits. However, if one algorithm gets broken, the data is still protected. This applies to public key crypto as well. The ideal would be RSA, ECC, and maybe one more that is resistant to Shor's algorithm, like Unbalanced Oil and Vinegar or something lattice-based.
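
      A minimal sketch of such a cascade with two layers and independent keys, using the pyca/cryptography AEAD primitives (the library and the particular cipher pairing are assumptions, not the parent's prescription):

          import os
          from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

          k1, k2 = AESGCM.generate_key(bit_length=256), ChaCha20Poly1305.generate_key()
          n1, n2 = os.urandom(12), os.urandom(12)   # independent nonces, one per layer

          def cascade_encrypt(plaintext):
              inner = AESGCM(k1).encrypt(n1, plaintext, None)       # layer 1
              return ChaCha20Poly1305(k2).encrypt(n2, inner, None)  # layer 2

          def cascade_decrypt(ciphertext):
              inner = ChaCha20Poly1305(k2).decrypt(n2, ciphertext, None)
              return AESGCM(k1).decrypt(n1, inner, None)

          ct = cascade_encrypt(b"layered secret")
          assert cascade_decrypt(ct) == b"layered secret"
          # Breaking one layer still leaves the other cipher intact, which is
          # exactly the failure mode the cascade is meant to survive.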

    • Not to mention that if the NSA put a back door in SELinux or other open source software, they would be exposing their "secret" methods to the public. How about this scenario:

      NSA: Let's put some backdoors into SELinux.
      BadGuys: Hey, the NSA helped develop SELinux, let's examine their code to figure out how their other algorithms work.

      One advantage to open source software is that the source code is available for both the good guys and the bad guys to look at. If somebody plants something in the code, somebod

    • Re:This is stupid (Score:4, Informative)

      by hawguy ( 1600213 ) on Tuesday July 02, 2013 @11:22AM (#44166063)

      This is fearmongering. Encryption standards that have been adopted are open, and mathematicians go over them with a fine-tooth comb before giving them their blessing. Yes, there is a worry among mathematicians about the NSA developing an algorithm that would permit a pre-computed set of numbers to decrypt all communication. Which is why they make sure it DOESN'T HAPPEN.

      See https://www.schneier.com/essay-198.html [schneier.com]

      And there's the fact that AES-192 and AES-256 are NSA approved [wikipedia.org] for protecting Top Secret classified documents.

      It seems unlikely that they would approve the use of an algorithm with a known vulnerability to protect classified information -- knowing that the vulnerability would likely eventually be discovered (or stolen) by an adversary, leaving classified documents at risk. It would be awfully embarrassing if, for example, someone stole secret documents and handed them over to a newspaper reporter and revealed some of the inner workings of the NSA.

  • by Viol8 ( 599362 ) on Tuesday July 02, 2013 @08:54AM (#44164255) Homepage

    .... but unless they have the world's top obfuscating coders working there (quite possible), how long do you think it would be until someone spots the suspect code, especially in something as well trodden as the Linux kernel or the GNU utilities? I would guess not too long.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      Nahhh, the backdoors are in the compilers. They've modified GCC to install them for you. Your code looks fine and the backdoor is there. Everyone wins!

      AC.

    • by cHiphead ( 17854 ) on Tuesday July 02, 2013 @09:00AM (#44164329)

      Why not just insert something at the compiler level? Perhaps they have compromised GCC itself, or something at a different, less obvious point in the development process?

      • by pegr ( 46683 ) on Tuesday July 02, 2013 @09:08AM (#44164407) Homepage Journal

        Reflections on Trusting Trust [cmu.edu] (PDF alert). Required reading for anyone with interest on that very topic. Written by Ken Thompson, in fact.

      • by zerro ( 1820876 ) on Tuesday July 02, 2013 @09:15AM (#44164465)

        Why backdoor just one brand of compiler (since there are several), when you could backdoor the architecture?
        I'm pretty sure there is a special sequence of Intel instructions which opens the unicorn gate and pipes a copy of all memory writes to the NSA's server.

        • by dkf ( 304284 )

          Pretty sure they don't bother with that. The difference between CPU-memory bandwidth and general network bandwidth is colossal, and it would be very easy to detect that something untoward was happening. One of the points of spying is to do it without being found out.

          Intercepting at telco/ISP level is much easier, much more practical.

      • by johnrpenner ( 40054 ) on Tuesday July 02, 2013 @11:37AM (#44166267) Homepage

        yeah — like ken thompson's C compiler exploit:

        http://scienceblogs.com/goodmath/2007/04/15/strange-loops-dennis-ritchie-a/ [scienceblogs.com]

        For debugging purposes, Thompson put a back-door into "login". The way he did it was by modifying the C compiler. He took the code pattern for password verification, and embedded it into the C compiler, so that when it saw that pattern, it would actually generate code that accepted either the correct password for the username, or Thompson's special debugging password. In pseudo-Python:

            def compile(code):
                if (looksLikeLoginCode(code)):
                    generateLoginWithBackDoor(code)
                else:
                    compileNormally(code)
        With that in the C compiler, any time that anyone compiles login, the code generated by the compiler will include Thompson's back door.

        Now comes the really clever part. Obviously, if anyone saw code like what's in that example, they'd throw a fit. That's insanely insecure, and any manager who saw it would immediately demand that it be removed. So, how can you keep the back door, but get rid of the danger of someone noticing it in the source code for the C compiler? You hack the C compiler itself:

            def compile(code):
                if (looksLikeLoginCode(code)):
                    generateLoginWithBackDoor(code)
                elif (looksLikeCompilerCode(code)):
                    generateCompilerWithBackDoorDetection(code)
                else:
                    compileNormally(code)
        What happens here is that you modify the C compiler code so that when it compiles itself, it inserts the back-door code. So now when the C compiler compiles login, it will insert the back door code; and when it compiles the C compiler, it will insert the code that inserts the code into both login and the C compiler.

        Now, you compile the C compiler with itself – getting a C compiler that includes the back-door generation code explicitly. Then you delete the back-door code from the C compiler source. But it's in the binary. So when you use that binary to produce a new version of the compiler from the source, it will insert the back-door code into the new version.

        So you’ve now got a C compiler that inserts back-door code when it compiles itself – and that code appears nowhere in the source code of the compiler.

    • Well, Ken Thompson's hack has been public knowledge since like 1984, so we have that much.
    • by gatkinso ( 15975 )

      "All" they need to do is insert a very very subtle bug, and as pointed out, that bug could be in the compiler.

    • Considering that security audits are actually quite a rarity it's not beyond reason to think that flaws and bugs can be introduced and go unnoticed. Just because in theory people can comb over OSS code doesn't mean that it actually happens with any regularity.

    • Depends (Score:5, Interesting)

      by Sycraft-fu ( 314770 ) on Tuesday July 02, 2013 @09:30AM (#44164653)

      Check out the Underhanded C Contest (http://underhanded.xcott.com/). There are great examples of code that looks innocuous but isn't. What's more, some of them look like legit mistakes that people might make while programming.

      So that is always a possibility. Evil_Programmer_A, who works for whatever Evil Group that wants to be able to hack things, introduces a patch for some OSS item. However, there's a security hole coded in purposely. It is hard to see, and if discovered will just look like a fuckup. Eventually it'll probably get found and patched, but nobody suspects Evil_Programmer_A of any malfeasance; I mean, shit, security issues happen all the time. People make mistakes.
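
      A Python-flavored illustration of the same idea: a tiny "mistake" that reads as sloppiness rather than malice (hypothetical code, not an actual contest entry). The buggy version compares a MAC with ==, which can bail out at the first differing byte and so leaks a timing side channel an attacker can exploit to forge tags:

          import hmac, hashlib

          def check_mac_buggy(key, msg, tag):
              expected = hmac.new(key, msg, hashlib.sha256).digest()
              return tag == expected   # looks fine; not constant-time

          def check_mac_fixed(key, msg, tag):
              expected = hmac.new(key, msg, hashlib.sha256).digest()
              return hmac.compare_digest(tag, expected)   # constant-time comparison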

      In terms of how long to spot? Depends on how subtle it is. If you think all bugs get found real fast in OSS, you've never kept up on security vulnerabilities. Sometimes they find one that's been around for a LONG time. I remember back in 2000 when there was a BIND vulnerability that applied to basically every version of BIND ever. It had been lurking for years and nobody had caught it. Worse, it was a "day-0" kind of thing and people were exploiting it already. Caused a lot of grief for my roommate. By the time he heard about it (which was pretty quick, he subscribed to that kind of thing), their server at work had already been owned.

      Don't think that just because the code is open that it means it gets heavily audited by experts. Also don't think that just because an expert looks at it they'll notice something. It turns out a lot of security issues are still found at runtime, not by a code audit. Everyone looks at the code and says "Ya, looks good to me," and only when later running it and testing how it reacts do they discover an unintended interaction.

      I'm not trying to claim this is common, or even happening at all, but it is certainly possible. I think people put WAY too much faith in the "many eyes" thing of OSS. They think that if the code is open, well then people MUST see the bugs! All one has to do is follow a bug tracker to see how false that is. Were it true, there'd be no bugs, ever, in released OSS code. Thing is, it is all written and audited by humans, and humans are fallible. Mistakes happen, shit slips through.

  • by CajunArson ( 465943 ) on Tuesday July 02, 2013 @08:54AM (#44164265) Journal

    Last year or early this year there was a fix for a Linux kernel bug that could provide root privilege escalation. Here's the kicker though: The bug had been fixed years earlier but had been reintroduced into the kernel and nobody caught it for a very long time. For some reason, OpenSuse's kernel patches still included the bug fix, so OpenSuse couldn't be exploited, but mainline didn't reintroduce the fix for a long time.

    Given the complexity of the kernel as just one example of a large open-source project, I don't really buy the "all bugs are shallow" argument from days past. That argument presumed that people *wanted* to fix the bugs, and as we all know there are large groups of people who don't want the bugs fixed. That's not to say that there is a magical NSA backdoor in Linux (and no, there isn't a magical NSA backdoor in Windows either, get over it conspiracy fanboys). That is to say that simply not running Windows isn't enough to give you real security, and yes, your Linux box can be attacked by a skilled and determined adversary.

    • by F.Ultra ( 1673484 ) on Tuesday July 02, 2013 @09:04AM (#44164355)
      If Microsoft is giving the NSA info on undisclosed vulnerabilities, the NSA in effect has a magic backdoor into Windows.
      • by FriendlyLurker ( 50431 ) on Tuesday July 02, 2013 @09:21AM (#44164541)

        GP wrote: and no, there isn't a magical NSA backdoor in Windows either, get over it conspiracy fanboys

        You are forgetting something [eideard.com]. A pretty BIG BACK DOOR into Windows that has been known and confirmed for some time now.

        "...the result of having the secret key inside your Windows operating system is that it is tremendously easier for the NSA to load unauthorized security services on all copies of Microsoft Windows, and once these security services are loaded, they can effectively compromise your entire operating system. The NSA key is contained inside all versions of Windows from Windows 95 OSR2 onwards."

        • Re: (Score:2, Insightful)

          by Anonymous Coward
          The only speculation is whether Microsoft has given the NSA et al. access to those keys so they can load what they like onto Windows (via "product updates" and whatnot) without needing UAC permission etc. Given Snowden's recent revelations/confirmations, we can pretty much conclude that that is very much the case...
        • So the NSA put in a magical untraceable backdoor that has never been found by the likes of Bruce Schneier or others in his field, but the NSA was also so stupid that they named the file "NSA_secret_evil_backdoor.dll" or something like that... yeah, whatever.

          • by gr8_phk ( 621180 )
            IIRC the NSA_KEY definition exists and has been seen in accidentally released header files. You are free to offer an alternative explanation for what it is, but instead you choose to misrepresent the implementation and give a "whatever".
          • by HiThere ( 15173 )

            FWIW, the backdoor would have been put in by Microsoft. Did they? I don't know. I have no reason to doubt it, given their general sleazy business ethics, but the only reason to believe it is that they titled a particular thing "NSAKey". (And the name was assigned by Microsoft, so the NSA's sneakiness about such things doesn't apply.)

            For all I know the name could have stood for "No Software Algorithm" and been documentation of something they needed to write. (And, no, I don't trust their public explanations.)

        • As a followup to my other response, if this magical backdoor into every Windows system on the planet is so great, then why was there a need for Stuxnet to ever come into existence?

          The NSA should have built-in access to every Iranian Windows computer without the need for a highly complex malware package!

          • by s.petry ( 762400 ) on Tuesday July 02, 2013 @10:06AM (#44165057)

            As a followup to my other response, if this magical backdoor into every Windows system on the planet is so great, then why was there a need for Stuxnet to ever come into existence?

            The NSA should have built-in access to every Iranian Windows computer without the need for a highly complex malware package!

            You fail to understand the difference between a back door and spyware. A back door would allow the installation of such a piece of software, but would not be the spyware itself. This code worked around Windows' normal security and privilege-escalation protections, as well as avoided malicious-software detection by AV software. In addition, leaked information has told you that there are back doors in Windows for the US government (and perhaps other governments). The part that is not clear is whether the NSA has people working at MS to ensure that they have and know about back doors, or whether MS employees facilitate their whims in creating these back doors.

      They aren't giving the NSA stuff that nobody else gets. The NSA is just on the early notification list. Various groups get told about vulnerabilities as soon as MS knows about them. The rest get told about them when there's a patch. So sure, I guess the NSA could quickly develop and exploit the vulnerability (if it is relevant; it's amazing how few no-user-interaction, remote-initiated exploits there are now that there's a default firewall) before MS patches it, but that is not really that likely a scenario, and

  • I hear the Vegas odds of NSA backdoors in encryption schemes are 1000:0. Meaning everyone who bets $0 on the NSA not having a backdoor will receive $1,000 if they do.

    • by PhilHibbs ( 4537 )

      I'll bet infinity dollars at those odds.

      • by tnk1 ( 899206 )

        You've fallen for it! By adding in an infinity, they can now simply renormalize their equation, and now you owe them approximately... one million dollars.

        Vinnie and Joey will be over to collect momentarily.

  • by Anonymous Coward on Tuesday July 02, 2013 @08:57AM (#44164287)

    DES was developed in the early 1970s, and has been proven to be quite resistant to differential cryptanalysis, which didn't appear in the public literature until the late 1980s.

    During the development of DES, IBM sent DES's S-boxes to the NSA, and when they came back, they had been modified. At the time there was suspicion that the modifications were a secret government back door; however, when differential cryptanalysis was publicly discovered in the 1980s, researchers found that DES was surprisingly hard to attack. It turned out that the modifications to the S-boxes had actually strengthened the cipher.

    • Intriguing ... citation please. "Strengthened the cipher" or "mucked it up with goal X and instead supported goal Y"?

      • by time961 ( 618278 ) on Tuesday July 02, 2013 @09:34AM (#44164691)
        Biham and Shamir, Differential Cryptanalysis of the Data Encryption Standard, at CRYPTO '92. They showed that the S-boxes were about as strong as possible given other design constraints.

        Subsequently, Don Coppersmith, who had discovered differential cryptanalysis while working (as a summer intern) at IBM during the development of DES in the early 1970's, published a brief paper (1994, IBM J. of R&D) saying "Yep, we figured out this technique for breaking our DES candidates, and strengthened them against it. We told the NSA, and they said 'we already know, and we're glad you've made these improvements, but we'd prefer you not say anything about this'." And he didn't, for twenty years.

        Interestingly, when Matsui published his (even more effective) DES Linear Cryptanalysis in 1994, he observed that DES was just average in resistance, and opined that linear cryptanalysis had not been considered in the design of DES.

        I think it's fair to say that NSA encouraged DES to be better. But how much they knew at the time, and whether they could have done better still, will likely remain a mystery for many years. They certainly didn't make it worse by any metric available today.
        • Yeah, the conventional wisdom is that the NSA improved the S-boxes in Lucifer, and at the time nobody quite understood why. Academic cryptographers later understood why, and this sort of led to a ghetto legend that NSA people were mentats who were far advanced from academia. The more likely explanation is that in the mid-'70s, when crypto CS was relatively new, the people who held such interests gravitated to the NSA because that's where the opportunities were. NSA likely had somebody on staff who had st

  • Is there a question in there about something specific, or are you throwing pasta against the wall to see what sticks? Take AES for example. A pretty open selection process evaluating a number of known ciphers among many smart eyes. Are you saying No Such Agency pulled a fast one in broad daylight in front of multitudes, or is your line of questioning non-specific and open-ended?
    • Is there a question in there about something specific, or are you throwing pasta against the wall to see what sticks? Take AES for example. A pretty open selection process evaluating a number of known ciphers among many smart eyes. Are you saying No Such Agency pulled a fast one in broad daylight in front of multitudes, or is your line of questioning non-specific and open-ended?

      It seems fair; can someone not related to the government attest to the viability of SELinux? Has anyone read/understood enough of it to know for sure? Is it right to presume that someone from the OSS community would certainly have caught on to a trick by the NSA, or is that hubris?

  • by feld ( 980784 ) on Tuesday July 02, 2013 @08:58AM (#44164301)

    http://marc.info/?l=openbsd-tech&m=129236621626462&w=2 [marc.info]

    Some guy claimed to have put backdoors in the OpenBSD IPSEC stack for the FBI, but a full audit found no evidence that any such thing ever happened.

    I seriously doubt this is happening in open source.

    • by eer ( 526805 )

      Ha Ha. Hahaha. I guess you missed the bit about how it is computationally infeasible (as in, halting problem) to definitively determine whether there are artifices in source or object code that deliberately mask and hide their behavior. See the Naval Postgraduate School thesis and papers on how few lines of code need to be introduced to turn IPSEC implementations into clear-text relays, turned on and off via encrypted key triggers.

      A few years back, it was discovered that virtually every one's - and I mean

  • by sjbe ( 173966 ) on Tuesday July 02, 2013 @08:59AM (#44164311)

    I have yet to have seen a serious scientific analysis of this question, as discussions rarely get beyond general paranoia facing off against a general belief that government incompetence plus public scrutiny make backdoors unlikely.

    Governments are not nearly as incompetent as many pundits would have you believe. We have some very seriously talented people doing some pretty amazing things in our government. Government isn't always a model of efficiency, but inefficient does not (always) equal incompetent. And in some cases inefficiency is actually a good thing. Sometimes you want the government to be slow and deliberative and to do it right instead of fast. Some of the most remarkable organizations and talented people I've met are in government. Sadly, some of the worst I've met are in government as well, but my point remains. Assuming government = incompetent is clearly wrong in the face of copious evidence to the contrary.

  • With the continuing audit process and complete transparency, I would trust OpenBSD along with OpenSSH, etc.
  • The Clipper chip (Score:5, Interesting)

    by Vintermann ( 400722 ) on Tuesday July 02, 2013 @09:01AM (#44164333) Homepage

    You mention the Clipper chip and its key escrow system guaranteeing government access, but what you should remember is that the cryptosystem that chip used was

    1. Foolishly kept secret by the NSA, although it has long been understood that academic scrutiny is far more important than security through obscurity, and

    2. Broken: the symmetric cipher the chip used, Skipjack, was subject to a devastating attack on its first day of declassification (breaking half the rounds) and by now is completely broken. That remains rare for any seriously proposed cipher...

    Since presumably the NSA did not try to make a broken cryptosystem (why would they? To help other spies? They themselves had the keys anyway!), this illustrates that yes, incompetence is a concern even at super-funded, super-powerful agencies like the NSA.

  • Bitcoin? (Score:5, Funny)

    by Fesh ( 112953 ) on Tuesday July 02, 2013 @09:09AM (#44164415) Homepage Journal

    Obviously I haven't read the literature enough to know how it works or why it's impossible... But it would be really funny if it turned out that Bitcoin mining was actually the NSA's attempt at crowdsourcing brute-force decryption...

  • Obviously the government has access to very fast computers beyond what the public has available. As computer power gets greater, it becomes easier for specialists to break into supposedly secure situations. We have also been in a war mode since 9/11, and all kinds of covert snooping are taking place. Deeply embedded agents do exist in this world. I have seen it first hand. Back in the 1960s, that fine young girl who spent a lot of nights in your bed, the one you thought was a hippie, was often some kin

    • by dclydew ( 14163 )

      Wait a second... a hippie from the 60's that's geeky enough to post on /.? Any girl in your bed should have been suspect!!!! ;-)

  • Close to Unity [ubuntu.com].
  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Tuesday July 02, 2013 @09:18AM (#44164493)
    Comment removed based on user account deletion
  • by time961 ( 618278 ) on Tuesday July 02, 2013 @09:24AM (#44164583)
    I can't find a good reference right now, but I recall reading a few years back the observation that one of the GSM stream ciphers (A5/1?) has a choice of implementation parameters (register sizes and clocking bits) that could "hardly be worse" with respect to making it easily breakable.

    This property wasn't discovered until it had been fielded for years, of course, because the ciphers were developed in the context of a closed standards process and not subjected to meaningful public scrutiny, even though they were nominally "open". The implication was that a mole in the standardizing organization(s) could have pushed for those parameters based on some specious analysis without anyone understanding just what was being proposed, because the (open) state of the art at the time the standard was being developed didn't include the techniques necessary to cryptanalyze the cipher effectively. Certainly the A5 family has proven to have more than its fair share of weaknesses, and it may be that the bad parameter choices were genuinely random, but it gives one pause.

    Perhaps some reader can supply the reference?

    The 802.11 ciphers are another great example of the risks of a quasi-open standardization process, but I've seen no suggestion that the process was manipulated to make WEP weak, just that the lack of thorough review by the creators led to significant flaws that then led to great new research for breaking RC4-like ciphers.
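
    For readers unfamiliar with A5/1's structure, here is a toy majority-clocked LFSR keystream generator. The register sizes, feedback taps, and clocking-bit positions below are illustrative stand-ins, not the real A5/1 parameters; the point is that these are exactly the kind of free parameters a standards body gets to pick well or badly:

        SIZES = (19, 22, 23)    # register lengths (A5/1 really does use 19/22/23)
        TAPS  = ((13, 16, 17, 18), (20, 21), (7, 20, 21, 22))  # feedback taps (illustrative)
        CLOCK = (8, 10, 10)     # clocking-bit index per register (illustrative)

        def majority(a, b, c):
            return (a & b) | (a & c) | (b & c)

        def keystream(regs, nbits):
            regs, out = list(regs), []
            for _ in range(nbits):
                m = majority(*((r >> c) & 1 for r, c in zip(regs, CLOCK)))
                for i, (size, taps, c) in enumerate(zip(SIZES, TAPS, CLOCK)):
                    if ((regs[i] >> c) & 1) == m:   # a register steps only when it agrees with the majority
                        fb = 0
                        for t in taps:
                            fb ^= (regs[i] >> t) & 1
                        regs[i] = ((regs[i] << 1) | fb) & ((1 << size) - 1)
                out.append((regs[0] >> 18 ^ regs[1] >> 21 ^ regs[2] >> 22) & 1)
            return out

        print(keystream([0x5A5A5, 0x2AAAAA, 0x3C3C3C], 16))   # 16 keystream bits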
  • Oh yeah, I'm so sure that after this many years and many people looking at the source code for AES, nobody happened to see a totally stand-out backdoor in it. And nobody noticed the resulting ease of cracking the encryption. That's completely ridiculous.
    • by Fuzzums ( 250400 )

      The thing with encryption (follow the Coursera course by Dan Boneh) is that the code doesn't have to be compromised for the encryption to be insecure.
      And showing whether the encryption is secure or not? Well, that is not so easy.
      Some smart ass thought doing DES twice was safer than just DES. Wrong. Meet-in-the-middle attacks (see the sketch below).
      Think of the scenario where random primes are picked every time directly after a device boots. The random generator didn't have enough time to get random. Those primes that are not random but in
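
      A toy demonstration of the meet-in-the-middle attack mentioned above, with a deliberately weak 16-bit cipher standing in for DES (the cipher itself is made up; only the attack structure matters). Two independent 16-bit keys fall in roughly 2 * 2^16 steps instead of the naive 2^32:

          M = 0x9E37                        # odd multiplier, so it is invertible mod 2^16
          M_INV = pow(M, -1, 0x10000)

          def E(k, b): return ((b ^ k) * M) % 0x10000      # toy block cipher, 16-bit key
          def D(k, c): return (c * M_INV % 0x10000) ^ k    # its inverse

          k1, k2 = 0xBEEF, 0x1337                          # the "unknown" keys
          pts = [0x1234, 0xABCD, 0x0F0F]                   # known plaintexts
          cts = [E(k2, E(k1, p)) for p in pts]             # their double encryptions

          # Tabulate every half-encryption of the first plaintext, then meet it
          # from the ciphertext side; the extra pairs weed out false matches.
          forward = {E(g, pts[0]): g for g in range(0x10000)}
          for g2 in range(0x10000):
              g1 = forward.get(D(g2, cts[0]))
              if g1 is not None and all(E(g2, E(g1, p)) == c for p, c in zip(pts[1:], cts[1:])):
                  print(f"recovered keys {g1:#06x}, {g2:#06x}")  # the true pair, with overwhelming probability
                  break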

  • by lkcl ( 517947 ) <lkcl@lkcl.net> on Tuesday July 02, 2013 @09:34AM (#44164707) Homepage

    there's a story i heard about the origins of linux, which was told to me a few years ago at a ukuug conference by a self-employed journalist called richard. he was present at a meeting in a secure facility where the effects of "The Unix Wars" were being exploited by Microsoft to good effect. the people at the meeting could clearly see the writing on the wall - that the approx-$10,000s cost of Unixen vs the approx-$100s of windows would be seriously, seriously hard to combat from a security perspective. their primary concern was that the [expensive] Unixen at least came with source: microsoft was utterly proprietary, uncontrolled, out of control, yet would obviously be extremely hard to justify *not* deploying in sensitive government departments based on cost alone. ... so the decision was made to *engineer* a free version of Unix. one of the people at the meeting was tasked with finding a suitable PhD student to "groom" and encourage. he found linus torvalds: the rest is history.

    now we have SE/Linux - designed and maintained primarily by the NSA.

    the bottom line is that the chances of this speculation being true - that the NSA has placed back-doors in GNU/Linux or its compiler toolchain - are extremely remote. you have to bear in mind that the NSA is indirectly responsible for securing its nation's infrastructure. adding in backdoors would be extremely foolish.

  • Not likely (Score:4, Interesting)

    by amck ( 34780 ) on Tuesday July 02, 2013 @10:09AM (#44165095) Homepage

    This misses the dual goals of the NSA:
    (1) Break other people's communications.
    (2) Protect US (govt?) ones.

    The trouble with backdoors is that they can be used by others to break US systems. So this is not the preferred solution from the NSA's perspective.

    A good lesson in this is the DES cipher. DES was a 56-bit cipher based on IBM's original 128-bit Lucifer algorithm. When it was released, everybody worried about the S-boxes and the design, and wondered if the NSA had created a backdoor for themselves. As attacks on Feistel-network ciphers (such as DES) were found, it became apparent that DES was already hardened against these: the NSA knew of the attacks and had produced the hardest 56-bit cipher possible. Their strategy became apparent: by setting the strength at 56 bits, they created a cipher they could break because they had the processing power, but no one else could (at the time).

    Similarly today: it's apparent that, 22 years after PGP was created, mail is still not encrypted by default. The NSA's strategy is to help push the design of open standards to suit their goals: small enough quantities of encryption that brute force or black-bag jobs can be used as required.
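
    Back-of-envelope arithmetic for that 56-bit point (the rates below are rough assumptions, not measured figures):

        # Average time to brute-force a 56-bit keyspace at two assumed rates.
        keyspace = 2 ** 56

        rates = {
            "Deep Crack-class machine (~9e10 keys/s, late-90s budget)": 9e10,
            "single late-90s desktop (~1e6 keys/s, generous)": 1e6,
        }
        for name, rate in rates.items():
            avg_days = keyspace / 2 / rate / 86400   # expect to hit the key halfway through
            print(f"{name}: ~{avg_days:,.0f} days on average")
        # ~5 days for the dedicated machine vs ~417,000 days (over a
        # millennium) for the lone desktop: exactly the gap the parent describes.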
