NSA Backdoors In Open Source and Open Standards: What Are the Odds?
New submitter quarrelinastraw writes "For years, users have conjectured that the NSA may have placed backdoors in security projects such as SELinux and in cryptography standards such as AES. However, I have yet to see a serious scientific analysis of this question, as discussions rarely get beyond general paranoia facing off against a general belief that government incompetence plus public scrutiny make backdoors unlikely. In light of the recent NSA revelations about the PRISM surveillance program, and reports that Microsoft tells the NSA about bugs before fixing them, how concerned should we be? And if there is reason for concern, what steps should we take individually or as a community?" Read more below for some of the background that inspires these questions.
quarrelinastraw "History seems relevant here, so to seed the discussion I'll point out the following for those who may not be familiar. The NSA opposed giving the public access to strong cryptography in the '90s because it feared cryptography would interfere with wiretaps. They proposed a key escrow program so that they would have everybody's encryption keys. They developed a cryptography chipset called the "clipper chip" that gave a backdoor to law enforcement and which is still used in the US government. Prior to this, in the 1970s, NSA tried to change the cryptography standard DES (the precursor to AES) to reduce keylength effectively making the standard weaker against brute force attacks of the sort the NSA would have used.Since the late '90s, the NSA appears to have stopped its opposition to public cryptography and instead (appears to be) actively encouraging its development and strengthening. The NSA released the first version of SELinux in 2000, 4 years after they canceled the clipper chip program due to the public's lack of interest. It is possible that the NSA simply gave up on their fight against public access to cryptography, but it is also possible that they simply moved their resources into social engineering — getting the public to voluntarily install backdoors that are inadvertently endorsed by security experts because they appear in GPLed code. Is this pure fantasy? Or is there something to worry about here?"
This is stupid (Score:5, Insightful)
This is fearmongering. Adopted encryption standards are open, and mathematicians go over them with a fine-tooth comb before giving them their blessing. Yes, there is a worry among mathematicians about the NSA developing an algorithm that would permit a pre-computed set of numbers to decrypt all communication. Which is why they make sure it DOESN'T HAPPEN.
See https://www.schneier.com/essay-198.html
Re:This is stupid (Score:5, Insightful)
Belgian ffs.
Belgium, I hate it when people mistake us for Dutch!
Re:This is stupid (Score:5, Funny)
Belgium - The more awesomer part of the Spanish Netherlands!
Re:This is stupid (Score:4, Funny)
It's all Greek to me
Re:This is stupid (Score:4, Funny)
Seriously, right? They probably don't even know you guys invented spaghetti and kung fu.
I for one think the Belgs are awesome.
Re:This is stupid (Score:5, Interesting)
But what bothers me the most about Belgian mistaken identity is that a lot of American companies and websites serve everything in French when they detect I'm from Belgium. It's as if the rest of the world detected that you are from the States and served everything in Spanish because there is a big Hispanic community.
It took Microsoft years to get it into their heads that most people here speak Flemish. For years, everything on Xbox Live (that had a French localization) was served in French.
Re:This is stupid (Score:4, Interesting)
Hell, the State of California practically does that now.
Practically? In some parts of S. California I could walk outside my front door and not be able to read the commercial signs. You'd never know the official language of the country was English.
Re:This is stupid (Score:5, Informative)
You'd never know the official language of the country was English.
That's probably because it's not....
The US, on principle, never adopted an official language in the way most other countries do.
Re:This is stupid (Score:4, Insightful)
Hell, the State of California practically does that now.
Practically? In some parts of S. California I could walk outside my front door and not be able to read the commercial signs. You'd never know the official language of the country was English.
Point of order... that's because the US has no official language. It is generally held that such would be a violation of the First Amendment. :)
Some States, California among them, have passed official language laws but as far as I know they all lack enforcement clauses.
Re:This is stupid (Score:4, Funny)
So, you're illiterate and proud of it. Cool.
So, you're a dick, and don't know it. Awesome.
I think he probably knows.
Re: (Score:3)
They don't speak Dutch, they speak Phlegmish.
Re:This is stupid (Score:4, Informative)
You're probably thinking of DES rather than AES with regard to the NSA-provided S-boxes. IIRC, said S-boxes in DES were changed by the NSA with no real explanation. Some years later, when differential cryptanalysis was discovered in the non-secret world, it turned out that the change actually hardened DES against such an attack, so in this case the NSA created a stronger algorithm. See wikipedia [wikipedia.org].
Re:This is stupid (Score:5, Insightful)
Fearmongering, yes.
But not impossible.
It's not so easy to make sure that a program is a correct implementation of a mathematical algorithm or of an open standard.
A subtle bug (purposeful or not) in a cryptographic algorithm or protocol can be exploited.
Writing a bug is much easier than spotting it.
Many applications and OSes get security updates almost daily. They certainly haven't found them all yet.
Perhaps the NSA has engineered backdoors into our free software at some point, but those vulnerabilities have been patched already.
Mostly paranoia then....
Rick
Re:This is stupid (Score:5, Informative)
It's always interesting to see what (some of the best attempts at) intentional code obfuscation can look like:
http://www.ioccc.org/ [ioccc.org]
Re:This is stupid (Score:5, Informative)
Re: (Score:3, Interesting)
Mostly paranoia then....
Misdirection rather than paranoia. They're trying to point the finger at Linux etc. when it's SecureBoot that's the vulnerability.
When you use a board with SecureBoot, you're using pre-compromised hardware. Even when you install a secure OS, the underlying hardware hides the backdoor.
Re:This is stupid (Score:4, Interesting)
It's not so easy to make sure that a program is a correct implementation of a mathematical algorithm or of an open standard.
There's a huge list of test vectors for AES published by NIST (among others): http://csrc.nist.gov/archive/aes/rijndael/wsdindex.html [nist.gov]
The chances of being able to write some code which reproduces those values but ISN'T AES are less than the reciprocal of the number of atoms in the universe.
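For illustration, here's a minimal known-answer test in Python against the FIPS-197 Appendix C.1 vector (the key/plaintext/ciphertext values come straight from the published standard; the third-party "cryptography" package is an assumption):

# Minimal sketch: check an AES implementation against the FIPS-197
# Appendix C.1 known-answer vector. Assumes "pip install cryptography".
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
plaintext = bytes.fromhex("00112233445566778899aabbccddeeff")
expected = bytes.fromhex("69c4e0d86a7b0430d8cdb78070b4c55a")

# Single-block ECB is fine for a known-answer test (never for real data).
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
assert encryptor.update(plaintext) + encryptor.finalize() == expected
print("FIPS-197 known-answer test passed")

Pass the whole published suite and you've shown your code computes the AES function. As noted below, though, that says nothing about whether your keys were generated securely.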
Re: (Score:3)
http://www.ubuntu.com/usn/usn-612-2/ [ubuntu.com]
Weaknesses in key generation will produce output that IS AES (or whatever), but it's not cryptographically secure. Huge difference.
Re: (Score:3, Informative)
Also, what is left out of the summary is that the NSA worked to strengthen the S-boxes in DES against differential cryptanalysis attacks, even though the existence of such attacks was not publicly known at the time.
http://en.wikipedia.org/wiki/National_Security_Agency#Data_Encryption_Standard
Re:This is stupid (Score:5, Informative)
This is often quoted as an example of NSA's supposed superiority in cryptography but that happened back in the '70s when there were hardly any cryptographers or computers in the world.
The knowledge gap between the NSA and independent cryptographers has closed a lot since then.
Re:This is stupid (Score:5, Interesting)
Encryption algorithms may be secure, but how sure are you that your implementation is? Debian was generating entirely insecure SSL keys for a couple years before anyone noticed. Couldn't the NSA do something like that, but perhaps a bit more clever, and remain unnoticed?
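To make the failure mode concrete, here's a toy sketch of why the Debian bug (CVE-2008-0166) was fatal. This is illustrative Python, not the actual OpenSSL code: with the process ID left as the only entropy source, the entire "random" keyspace collapses to at most 32,768 values an attacker can simply enumerate.

# Toy illustration of a Debian-style weak keygen, seeded only by the
# process ID. Not the real OpenSSL code.
import random

def weak_keygen(pid: int) -> int:
    rng = random.Random(pid)      # PID is the sole entropy input
    return rng.getrandbits(128)   # looks like a random 128-bit key

# An attacker regenerates every possible key in well under a second:
all_keys = {weak_keygen(pid) for pid in range(1, 32768)}
print(f"entire 'random' keyspace: {len(all_keys)} keys")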
Re: (Score:3, Funny)
http://www.gergely.risko.hu/debian-dsa1571/random4.jpg [risko.hu]
Re: (Score:3)
Encryption algorithms may be secure, but how sure are you that your implementation is? Debian was generating entirely insecure SSL keys for a couple years before anyone noticed. Couldn't the NSA do something like that, but perhaps a bit more clever, and remain unnoticed?
But the point is that somebody did notice. Open source software enables a more thorough review (that doesn't mean it will happen, though), since the actual source code is available. With closed source software you can only monitor inputs and outputs, making it much more likely that a problem goes unnoticed.
Yep (Score:5, Insightful)
AES was developed in Belgium by Joan Daemen and Vincent Rijmen, and was originally called Rijndael. NIST put out a call for a replacement for the aging DES algorithm; Rijndael was one of a number of contenders and was the one that won the vote. The only thing the NSA had to do with it is that they weighed in on it, and all the other top contenders, before a standard was chosen, said they were all secure, and have since certified it for use in encrypting top secret data.
It was analyzed, before its standardization and since, by the world community. The NSA was part of that, no surprise, but everyone looked at it. It is the single most attacked cipher in history, and remains secure.
So to believe the NSA has a 'backdoor' in it, or more correctly that they can crack it, would imply that:
1) The NSA is so far advanced in cryptography that they were able to discover this prior to 2001 (when it got approved) and nobody else has.
2) That the NSA was so confident that they are the only group to be able to work this out that they'd give it their blessing, knowing that it would be used in critical US infrastructure (like banking) and that they have a mission to protect said infrastructure.
3) So arrogant that they'd clear it to be used for top secret data, meaning that US government data could potentially be protected with a weak algorithm.
Ya, I'm just not seeing that. That assumes a level of extreme mathematical brilliance, that they are basically better than the rest of the world combined, and a complete disregard for one of their missions.
It seems far more likely that, yes, AES is secure. Nobody, not even the NSA, has a magic way to crack it.
Re:Yep (Score:4, Funny)
AES ... is the single most attacked cipher in history, and remains secure.
The 128-bit version remains secure. The 256- and 192-bit versions are believed secure but have shown cracks (they should really have had a couple more encryption rounds).
The 256/192-bit versions are just re-fiddlings of the 128-bit version, made to fulfill the NIST requirements for key sizes. This was largely a waste of time, since 128 bits can't be brute-forced with any imaginable technology.
(My advice to any potential cryptography coders out there is to stick with the 128-bit version.)
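A quick back-of-the-envelope calculation (my own numbers, deliberately generous to the attacker) shows why:

# Why 128 bits resists brute force: assume a billion machines each
# testing a trillion keys per second (wildly generous).
rate = 10**9 * 10**12                 # 1e21 keys/second in total
avg_keys = 2**128 // 2                # on average, half the keyspace
years = avg_keys / rate / (3600 * 24 * 365)
print(f"{years:.1e} years")           # ~5.4e9 years, roughly the age of the Earth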
Re:Yep (Score:5, Informative)
Let me add a few datapoints here, as a reminder...
1) The AES competition was launched in part because DES had been cracked by the EFF using a purpose-built brute-force machine. Source:
https://en.wikipedia.org/wiki/EFF_DES_cracker [wikipedia.org]
https://w2.eff.org/Privacy/Crypto/Crypto_misc/DESCracker/HTML/19980716_eff_des_faq.html [eff.org]
As a reminder, DES was THE standard crypto algorithm, vetted and approved by NSA. It could be cracked by EFF only because of Moore's Law and some serious budget and effort.
2) Public-key cryptography was invented separately at GCHQ (the UK's NSA) and at the NSA itself, several years *before* Diffie-Hellman. Source:
https://en.wikipedia.org/wiki/Public-key_cryptography#History [wikipedia.org]
So, yes, these people (NSA/GCHQ) are very good at what they do. They have had at least 10 years of head-start, since cryptography was considered for many years just a branch of mathematics in academic circles. These guys work on nothing but crypto and digital/analog communications, year in, year out. Do not underestimate them.
3) One of the first electronic computers was delivered to the NSA in the 1950s, and the NSA later suggested improvements to the company that built it; the first Cray supercomputers were likewise delivered straight to the NSA. That start was in the 1950s, when most computer companies (IBM comes to mind) were still struggling to define what a computer was good for. Source:
http://www.nsa.gov/public_info/_files/cryptologic_quarterly/digitalcomputer_industry.pdf [nsa.gov]
http://www.physics.csbsju.edu/370/mathematica/m1_eniac.pdf [csbsju.edu]
4) The NSA and GCHQ have a long history of backdoors. They love these things, as they make their lives so much easier. Read up on Venona, Enigma, and Ivy Bells: these were made possible by intercepting/copying one-time pads, selling "unbreakable" German encryption machines, and tapping undersea Russian cables. And I am willing to bet these are just a small fraction of what these people have done over the years. Source:
https://en.wikipedia.org/wiki/Venona_project [wikipedia.org]
https://en.wikipedia.org/wiki/Enigma_machine [wikipedia.org]
https://en.wikipedia.org/wiki/Operation_Ivy_Bells [wikipedia.org]
Again, this is just a small fraction of what NSA and GCHQ have done over the years. So, yes, suspecting backdoors in open-source software is... shall we say... only natural.
If I were paid to be a professional paranoid, I would be taking a very long, hard look at my computers and telecom equipment right now.
Re: (Score:2)
My real question is just how much scrutiny has been poured over it, and by whom, instead of just making assumptions.
But they did it (Score:3)
1) There exists a set of numbers that could be used as a master key to the system that has since been published as a standard.
2) NSA created the system.
3) You can't prove they don't have this skeleton key.
4) It's their job to do stuff like this.
Now re-read #1 again. Mathematically there IS a back door. The question is whether anyone knows the key.
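If this is about the Dual_EC_DRBG standard, which fits the description (researchers publicly flagged exactly this master-key property in 2007), the shape of the backdoor can be shown with a toy analogue. The sketch below swaps the standard's elliptic-curve points for ordinary modular exponentiation and skips the output truncation, so it illustrates the relationship only, not the real generator:

# Toy Dual_EC-style trapdoor RNG: the published constant Q hides a
# secret relation Q = P^d. Whoever knows d can recover the next state
# from one output and predict the stream forever.
p = (1 << 127) - 1        # a Mersenne prime
P = 3
d = 0xC0FFEE              # the designer's secret trapdoor
Q = pow(P, d, p)          # the innocent-looking published constant

def step(state):
    output = pow(P, state, p)          # what the RNG emits
    next_state = pow(Q, state, p)
    return output, next_state

out, next_state = step(123456789)
assert pow(out, d, p) == next_state    # trapdoor holder predicts the RNG
print("master key works")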
Re:This is stupid (Score:5, Insightful)
The question to ask is:
What happens for the VERY VERY FIRST TIME this sort of tampering is spotted?
What if we found something in Linux, or something in PKE, or something in anything we use?
Would we just go "Oh, well, that's the NSA for you" and then carry on as normal? No. Likely there'd be a complete fork from a clean workbase to start over again, a complete distrust of code from day one, and a complete overhaul of all existing systems.
It's just not something that, as a government agency, you'd want to get implicated in whatsoever. For a start, you have a backdoor into systems in the German government? Or the Koreans? Holy crap you're in trouble for it being found out.
And what purpose would it serve, above and beyond traditional spying? Not a lot. The effort and secrecy required, and the implications if you're EVER found out, are far too great to reap any benefit from it.
It's far more likely that they are using standard spying techniques (i.e., let's tap X's computer because we know he's involved) than planting things in major pieces of open source software. Closed commercial software? That's a different matter, but again, compared to just issuing an order that the vendor do it for you and never speak about it, it's too difficult. And even then, we've found out that that sort of thing eventually comes out and has diplomatic effects on entire nations (including allies).
I don't believe they wouldn't try. I don't believe they wouldn't have some way into the system. I don't believe for a second, though, that they've backdoored something quite so open and visible, or that the people involved in reviewing it wouldn't EVENTUALLY spot it, and the outcry from that would have a hundred times greater impact on the world than anything some twat leaks from diplomatic cables.
I'd be so incredibly disappointed if that was the height of their capabilities, to do something so clumsy and ineffective, and if they couldn't choose their targets better.
These people are spies. I expect them to perform all manner of dirty manoeuvres as a routine job. But the fact is that good, old-fashioned spying is a million times more effective.
I would also have to say that an "enemy" of any description who has the capability to use only compiled-from-source software on regulated hardware, and uses them exclusively in whatever activities might be of interest to the NSA or GCHQ probably has the resources to verify that code or write it themselves.
And, you have to remember the old "fake-Moon-landings-Russians" argument - if your enemy is capable of DETECTING that you've done that, and they announce it to the world and show it was you that did it, they'd do it. Just to discredit you. Just to make you forget about the guy in the airport. Just to make you look like fools. Just to prove that THEY know what's going on and it's not so easy to get into their systems.
If you have a perfect government entity, then yes it's theoretically possible. But in real life, no, I'm sorry, it's just implausible on anything other than a trivial scale. They might get a "euid=root" type bug into the code if they try hard and find a weak target, but to be honest, it's not a concern.
And if I was really worried, I'd use FreeDOS. Or Minix. Or FreeBSD. Or whatever. And any "common point" like gcc, well you can verify those kinds of things with the double-compilation tricks or just using a different piece of software. Either they would have to have infected EVERYTHING or NOTHING. And I'll go with nothing.
Re:This is stupid (Score:5, Insightful)
We already had a closed algorithm pushed on us in the 1990s -- Skipjack. It was broken shortly after it was declassified.
Weak algorithms will get torn apart quickly, because there are many people looking for weaknesses, both university researchers as well as criminal organizations.
Best thing one can do if worried about one algorithm: use cascades. Realistically, three 256-bit algorithms won't give 768-bit security, but about 258 bits. However, if one algorithm gets broken, the data is still protected. This applies to public-key crypto as well. The ideal would be RSA, ECC, and maybe one more that is resistant to Shor's algorithm, like Unbalanced Oil and Vinegar or something lattice-based.
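A minimal sketch of such a cascade, assuming the third-party "cryptography" package, with ChaCha20 and AES-CTR standing in for any two independently keyed 256-bit ciphers (no authentication shown; a real design would add a MAC):

# Cascade two independent ciphers so that breaking either one alone
# doesn't expose the data.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def cascade_encrypt(data: bytes):
    k1, n1 = os.urandom(32), os.urandom(16)   # independent key/nonce pairs
    k2, n2 = os.urandom(32), os.urandom(16)
    inner = Cipher(algorithms.ChaCha20(k1, n1), mode=None).encryptor()
    outer = Cipher(algorithms.AES(k2), modes.CTR(n2)).encryptor()
    ct = outer.update(inner.update(data) + inner.finalize()) + outer.finalize()
    return ct, (k1, n1), (k2, n2)

ct, _, _ = cascade_encrypt(b"attack at dawn")
print(ct.hex())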
Re: (Score:3)
Not to mention that if the NSA put a back door in SELinux or other open source software, they would be exposing their "secret" methods to the public. How about this scenario:
NSA: Let's put some backdoors into SELinux.
BadGuys: Hey, the NSA helped develop SELinux, let's examine their code to figure out how their other algorithms work.
One advantage to open source software is that the source code is available for both the good guys and the bad guys to look at. If somebody plants something in the code, somebody else can spot it.
Re:This is stupid (Score:4, Informative)
This is fearmongering. Adopted encryption standards are open, and mathematicians go over them with a fine-tooth comb before giving them their blessing. Yes, there is a worry among mathematicians about the NSA developing an algorithm that would permit a pre-computed set of numbers to decrypt all communication. Which is why they make sure it DOESN'T HAPPEN.
See https://www.schneier.com/essay-198.html [schneier.com]
And there's the fact that AES-192 and AES-256 are NSA approved [wikipedia.org] for protecting Top Secret classified documents.
It seems unlikely that they would approve the use of an algorithm with a known vulnerability to protect classified information -- knowing that the vulnerability would likely eventually be discovered (or stolen) by an adversary, leaving classified documents at risk. It would be awfully embarrassing if, for example, someone stole secret documents, handed them over to a newspaper reporter, and revealed some of the inner workings of the NSA.
Well they COULD put a backdoor in some OSS... (Score:5, Interesting)
.... but unless they have the world's top obfuscating coders working there (quite possible), how long do you think it would be until someone spots the suspect code, especially in something as well-trodden as the Linux kernel or the GNU utilities? I would guess not too long.
Re: (Score:3, Insightful)
Nahhh, the backdoors are in the compilers. They've modified GCC to install them for you. Your code looks fine, and the backdoor is there. Everyone wins!
AC.
Re:Well they COULD put a backdoor in some OSS... (Score:5, Interesting)
Why not just insert something at the compiler level? Perhaps they have compromised GCC itself, or something at a different, less obvious point in the development process.
Re:Well they COULD put a backdoor in some OSS... (Score:5, Informative)
Reflections on Trusting Trust [cmu.edu] (PDF alert). Required reading for anyone with interest on that very topic. Written by Ken Thompson, in fact.
Re:Well they COULD put a backdoor in some OSS... (Score:5, Interesting)
...as usual missing Fully Countering Trusting Trust through Diverse Double-Compiling [dwheeler.com]
Re: (Score:3)
Did anybody ever try that with a C compiler?
Re: (Score:3)
The idea is that you DON'T have a trusted C compiler. You have two apparently good C compilers that were developed independently of each other. So you use one to compile the other's source code, and then you use that second one to compile the first one's source code. Then you can probably trust the first one. (If two steps don't suffice, use a chain of three.)
Note that this only works if the compilers are developed independently of each other, and if they recognize particular chunks of code that the spe
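A toy sketch of the check, assuming a hypothetical one-command compiler build (real bootstraps go through make and multiple stages); "suspect-cc" and "trusted-cc" are stand-in names:

# Diverse double-compiling, toy version: build the compiler source with
# a suspect parent and an independent trusted parent, then rebuild with
# each stage-1 result. With deterministic compilation the two stage-2
# binaries must be bit-identical; a mismatch suggests trusting-trust
# style tampering somewhere.
import hashlib, subprocess

def build(parent: str, source: str, out: str) -> str:
    subprocess.run([parent, source, "-o", out], check=True)
    return out

def sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

a1 = build("suspect-cc", "cc.c", "./a1")   # stage 1, one per parent
b1 = build("trusted-cc", "cc.c", "./b1")
a2 = build(a1, "cc.c", "./a2")             # stage 2: self-rebuild
b2 = build(b1, "cc.c", "./b2")
assert sha256(a2) == sha256(b2), "stage-2 outputs differ: investigate"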
Re:Well they COULD put a backdoor in some OSS... (Score:5, Interesting)
Why backdoor just one brand of compiler (since there are several), when you could backdoor the architecture?
I'm pretty sure there is a special sequence of Intel instructions which opens the unicorn gate and pipes a copy of all memory writes to an NSA server.
Re: (Score:3)
Pretty sure they don't bother with that. The difference between CPU-memory bandwidth and general network bandwidth is colossal, and it would be very easy to detect that something untoward was happening. One of the points of spying is to do it without being found out.
Intercepting at telco/ISP level is much easier, much more practical.
Re:Well they COULD put a backdoor in some OSS... (Score:4, Interesting)
yeah — like Ken Thompson's C compiler exploit:
http://scienceblogs.com/goodmath/2007/04/15/strange-loops-dennis-ritchie-a/ [scienceblogs.com]
For debugging purposes, Thompson put a back-door into “login”. The way he did it was by modifying the C compiler. He took the code pattern for password verification, and embedded it into the C compiler, so that when it saw that pattern, it would actually generate code
that accepted either the correct password for the username, or Thompson’s special debugging password. In pseudo-Python:
def compile(code):
    if looksLikeLoginCode(code):
        generateLoginWithBackDoor()
    else:
        compileNormally(code)
With that in the C compiler, any time that anyone compiles login,
the code generated by the compiler will include Thompson's back door.
Now comes the really clever part. Obviously, if anyone saw code like what’s in that
example, they’d throw a fit. That’s insanely insecure, and any manager who saw that would immediately demand that it be removed. So, how can you keep the back door, but get rid of the danger of someone noticing it in the source code for the C compiler? You hack the C compiler itself:
def compile(code):
    if looksLikeLoginCode(code):
        generateLoginWithBackDoor(code)
    elif looksLikeCompilerCode(code):
        generateCompilerWithBackDoorDetection(code)
    else:
        compileNormally(code)
What happens here is that you modify the C compiler code so that when it compiles itself, it inserts the back-door code. So now when the C compiler compiles login, it will insert the back door code; and when it compiles
the C compiler, it will insert the code that inserts the code into both login and the C compiler.
Now, you compile the C compiler with itself – getting a C compiler that includes the back-door generation code explicitly. Then you delete the back-door code from the C compiler source. But it’s in the binary. So when you use that binary to produce a new version of the compiler from the source, it will insert the back-door code into
the new version.
So you’ve now got a C compiler that inserts back-door code when it compiles itself – and that code appears nowhere in the source code of the compiler.
Re: (Score:2)
Like the source isn't available AND it wasn't one of the pieces of code RMS himself actually works/worked on. I suppose it could happen, but if so, they did a very fine job of getting it in there.
Although, perhaps if they instead compromised the packages that are usually used to install gcc, that might work. The source code doesn't do shit for you if you're installing pre-compiled binaries...
Re: (Score:3)
You say that like the source to GCC isn't available....
Re: (Score:2)
You say that as if any one person understands the entirety of GCC's massive codebase.
yes, LITERALLY (Score:3)
Re: (Score:2)
"All" they need to do is insert a very very subtle bug, and as pointed out, that bug could be in the compiler.
Re: (Score:3)
Considering that security audits are actually quite a rarity it's not beyond reason to think that flaws and bugs can be introduced and go unnoticed. Just because in theory people can comb over OSS code doesn't mean that it actually happens with any regularity.
Depends (Score:5, Interesting)
Check out the Underhanded C Contest (http://underhanded.xcott.com/). There are great examples of code that looks innocuous but isn't. What's more, some of them look like legit mistakes that people might make while programming.
So that is always a possibility. Evil_Programmer_A, who works for whatever Evil Group wants to be able to hack things, introduces a patch for some OSS item. However, there's a security hole coded in on purpose. It is hard to see, and if discovered it will just look like a fuckup. Eventually it'll probably get found and patched, but nobody suspects Evil_Programmer_A of any malfeasance; I mean shit, security issues happen all the time. People make mistakes.
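In that spirit, here's a hypothetical Python example of the kind of hole that reads as an honest mistake in review; the slice bound quietly reduces a 32-byte MAC comparison to a single byte:

# Hypothetical "underhanded" bug: looks sane at a glance, but [:1]
# means only 1 of 32 MAC bytes is checked, so a forged tag succeeds
# with probability 1/256 per attempt.
import hashlib, hmac

SECRET = b"server-side-secret"

def verify(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(SECRET, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected[:1], tag[:1])  # "typo" for [:]

If it's ever caught, "[:1]" versus "[:]" looks exactly like the kind of fat-finger a tired committer might make.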
In terms of how long to spot? Depends on how subtle it is. If you think all bugs get found really fast in OSS, you've never kept up on security vulnerabilities. Sometimes they find one that's been around for a LONG time. I remember back in 2000 there was a BIND vulnerability that applied to basically every version of BIND ever. It had been lurking for years and nobody had caught it. Worse, it was a zero-day kind of thing and people were exploiting it already. Caused a lot of grief for my roommate. By the time he heard about it (which was pretty quick; he subscribed to that kind of thing), their server at work had already been owned.
Don't think that just because the code is open, it gets heavily audited by experts. Also don't think that just because an expert looks at it, they'll notice something. It turns out a lot of security issues are still found at runtime, not by code audit. Everyone looks at the code and says "Ya, looks good to me," and only when later running it and testing how it reacts do they discover an unintended interaction.
I'm not trying to claim this is common, or even happening at all, but it is certainly possible. I think people put WAY too much faith in the "many eyes" thing in OSS. They think that if the code is open, well then people MUST see the bugs! All one has to do is follow a bug-tracking site to see how false that is. Were it true, there'd be no bugs, ever, in released OSS code. Thing is, it is all written and audited by humans, and humans are fallible. Mistakes happen, shit slips through.
Linux Kernel has had bugs publicly reintroduced. (Score:5, Insightful)
Last year or early this year there was a fix for a Linux kernel bug that could provide root privilege escalation. Here's the kicker though: The bug had been fixed years earlier but had been reintroduced into the kernel and nobody caught it for a very long time. For some reason, OpenSuse's kernel patches still included the bug fix, so OpenSuse couldn't be exploited, but mainline didn't reintroduce the fix for a long time.
Given the complexity of the kernel as just one example of a large open-source project, I don't really buy the "all bugs are shallow" argument from days past. That argument presumed that people *wanted* to fix the bugs, and as we all know there are large groups of people who don't want the bugs fixed. That's not to say that there is a magical NSA backdoor in Linux (and no, there isn't a magical NSA backdoor in Windows either; get over it, conspiracy fanboys). It is to say that simply not running Windows isn't enough to give you real security, and yes, your Linux box can be attacked by a skilled and determined adversary.
Re:Linux Kernel has had bugs publicly reintroduced (Score:5, Insightful)
Windows does have a backdoor. (Score:5, Informative)
GP wrote: and no, there isn't a magical NSA backdoor in Windows either, get over it conspiracy fanboys
You are forgetting something [eideard.com]. A pretty BIG BACK DOOR into Windows that has been known and confirmed for some time now.
“...the result of having the secret key inside your Windows operating system “is that it is tremendously easier for the NSA to load unauthorized security services on all copies of Microsoft Windows, and once these security services are loaded, they can effectively compromise your entire operating system“. The NSA key is contained inside all versions of Windows from Windows 95 OSR2 onwards”
Re: (Score:2)
So the NSA put in a magical untraceable backdoor that has never been found by the likes of Bruce Schneier or others in his field, but the NSA was also so stupid that they named the file "NSA_secret_evil_backdoor.dll" or something like that... yeah, whatever.
Re: (Score:3)
FWIW, the backdoor would have been put in by Microsoft. Did they? I don't know. I have no reason to doubt it, given their generally sleazy business ethics, but the only reason to believe it is that they titled a particular thing "NSAKey". (And the name was assigned by Microsoft, so the NSA's sneakiness about such things doesn't apply.)
For all I know the name could have stood for "No Software Algorithm" and been documentation of something they needed to write. (And, no, I don't trust their public explanations.)
Re: (Score:3)
So basically the NSA has been granted the same level of access as every low-grade Taiwanese device manufacturer, the Mozilla Foundation that wrote the Firefox browser I'm using, and probably multiple front companies associated with the PLA. Check.
Still doesn't prove or even suggest there's a backdoor, and as far as I know, even the big-bad NSA would have to send traffic over a network to control my PC remotely. How come nobody has ever seen that traffic? In order for the traffic to be completely invisible
Re: (Score:2)
As a follow-up to my other response: if this magical backdoor into every Windows system on the planet is so great, then why did Stuxnet ever need to exist?
The NSA should have had built-in access to every Iranian Windows computer, without the need for a highly complex malware package!
Re:Windows does have a backdoor. (Score:5, Interesting)
As a follow-up to my other response: if this magical backdoor into every Windows system on the planet is so great, then why did Stuxnet ever need to exist?
The NSA should have had built-in access to every Iranian Windows computer, without the need for a highly complex malware package!
You fail to understand the difference between a back door and spyware. A back door would allow the installation of such a piece of software, but would not be the spyware itself. This code worked around Windows' normal security protections for privilege escalation, and avoided detection by AV software. In addition, there have been leaks indicating that there are back doors in Windows for the US government (and perhaps other governments). The part that is not clear is whether the NSA has people working at MS to ensure that they have and know about back doors, or whether MS employees facilitate their whims by creating these back doors.
No not really (Score:2)
They aren't giving the NSA stuff that nobody else gets. The NSA is just on the early-notification list. Various groups get told about vulnerabilities as soon as MS knows about them; the rest get told when there's a patch. So sure, I guess the NSA could quickly develop an exploit for the vulnerability (if it is relevant; amazing how few no-user-interaction, remotely initiated exploits there are now that there's a default firewall) before MS patches it, but that is not really that likely a scenario, and
Re: (Score:2)
Despite what you think, lots of people, including security researchers, have access to the Windows source code too.
What you are saying is that:
1. Without source code, people find security holes in Windows all the time... you do agree with that statement, right?
2. With source code, only the good guys find all the security bugs and fix them so fast that they never become an issue. Oh, and all existing Linux deployments, including the embedded Linux installs in your home router/cell phone/toaster/etc. get up to
Re: (Score:2)
You have a point, but at the same time, there are plenty of people who install pre-compiled binaries on their Linux systems too. Having the source code for what you are supposed to be running isn't the same thing as having the source code for what you *are* running.
Granted, that does make an open source application safer, if you do compile it from source, but how many people do that? And be aware that you need to make sure you're always getting the source itself from the right place, or that could be compromised
Vegas odds (Score:2)
I hear the Vegas odds of NSA backdoors into encryption schemes are 1000:0. Meaning everyone who bets $0 on the NSA not having a backdoor will receive $1,000 if they do.
Re: (Score:2)
I'll bet infinity dollars at those odds.
Re: (Score:3)
You've fallen for it! By adding in an infinity, they can now simply renormalize their equation, and now you owe them approximately... one million dollars.
Vinnie and Joey will be over to collect momentarily.
Re: (Score:2)
"Little: Interesting, if true. The Vegas odds tonight stand at an unprecedented 1000-0; a bet of $0 on Bender pays $1000 if he wins. Still, very few takers."
Futurama quote.
Historically, NSA have done the opposite. (Score:5, Insightful)
DES was developed in the early 1970s and has proven to be quite resistant to differential cryptanalysis, which didn't appear in the public literature until the late 1980s.
During the development of DES, IBM sent DES's S-boxes to the NSA, and when they came back, they had been modified. At the time there was suspicion that the modifications were a secret government back door; however, when differential cryptanalysis was discovered in the 1980s, researchers found that DES was surprisingly hard to attack. It turned out that the modifications to the S-boxes had actually strengthened the cipher.
Re: (Score:2)
Intriguing ... citation please. "Strengthened the cipher" or "mucked it up with goal X and instead supported goal Y"?
Re:Historically, NSA have done the opposite. (Score:5, Interesting)
Subsequently, Don Coppersmith, who had discovered differential cryptanalysis while working (as a summer intern) at IBM during the development of DES in the early 1970's, published a brief paper (1994, IBM J. of R&D) saying "Yep, we figured out this technique for breaking our DES candidates, and strengthened them against it. We told the NSA, and they said 'we already know, and we're glad you've made these improvements, but we'd prefer you not say anything about this'." And he didn't, for twenty years.
Interestingly, when Matsui published his (even more effective) DES Linear Cryptanalysis in 1994, he observed that DES was just average in resistance, and opined that linear cryptanalysis had not been considered in the design of DES.
I think it's fair to say that NSA encouraged DES to be better. But how much they knew at the time, and whether they could have done better still, will likely remain a mystery for many years. They certainly didn't make it worse by any metric available today.
Re: (Score:3)
yeah, the conventional wisdom is that the NSA improved the S-boxes in Lucifer, and at the time nobody quite understood why. Academic cryptographers later understood why, and this sort of led to a ghetto legend that NSA people were mentats far in advance of academia. The more likely explanation is that in the mid-'70s, when crypto CS was relatively new, the people who held such interests gravitated to the NSA, because that's where the opportunities were. NSA likely had somebody on staff who had st
Real threat or open question? (Score:2)
Re: (Score:2)
Is there a question in there about something specific, or are you throwing pasta against the wall to see what sticks? Take AES, for example: a pretty open selection process evaluating a number of known ciphers under many smart eyes. Are you saying No Such Agency pulled a fast one in broad daylight in front of multitudes, or is your line of questioning non-specific and open-ended?
It seems fair; can someone not connected to the government attest to the viability of SELinux? Has anyone read/understood enough of it to know for sure? Is it right to presume that someone from the OSS community would certainly have caught on to a trick by the NSA, or is that hubris?
Re:Real threat or open question? (Score:5, Informative)
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/MAINTAINERS#n7166 [kernel.org]
I work for Red Hat, not for the NSA. SELinux code does not go from me through the NSA; it actually goes the other way around. The NSA asks me to put code in the Linux kernel, and I pass it to Linus. I have reviewed each and every line at one point or another.
The NSA may have some magic backdoor somewhere in the Linux kernel, but I'll stake my name that it isn't in the SELinux code.
Re: (Score:2)
You can see, though, why in the presence of surveillance plus gag orders even such a personal assurance may be less than satisfactory. That's one problem with the scheme: even honest people and companies become suspect.
Who guards the guards? (Score:5, Insightful)
I can attest to the lack of backdoors in SELinux. I am the SELinux maintainer. I'm the guy responsible for it.
Then the only question remaining is whether we should trust you.
They tried scare tactics with OpenBSD (Score:3, Interesting)
http://marc.info/?l=openbsd-tech&m=129236621626462&w=2 [marc.info]
Some guy claimed to have put backdoors in the OpenBSD IPSEC stack for the FBI, but a full audit proved no such thing ever happened.
I seriously doubt this is happening in open source.
Re: (Score:2)
Ha ha. Hahaha. I guess you missed the bit about how it is computationally infeasible (as in, the halting problem) to definitively determine whether there are artifices in source or object code that deliberately mask and hide their behavior. See the Naval Postgraduate School thesis and papers on how few lines of code need to be introduced to turn IPSEC implementations into clear-text relays, turned on and off via encrypted key triggers.
A few years back, it was discovered that virtually every one's - and I mean
Re: (Score:2)
Seems the "private key" is the key in many ways too
Inefficient != Incompetent (Score:5, Insightful)
I have yet to have seen a serious scientific analysis of this question, as discussions rarely get beyond general paranoia facing off against a general belief that government incompetence plus public scrutiny make backdoors unlikely.
Governments are not nearly as incompetent as many pundits would have you believe. We have some very seriously talented people doing some pretty amazing things in our government. Government isn't always a model of efficiency, but inefficient does not (always) equal incompetent. And in some cases inefficiency is actually a good thing: sometimes you want the government to be slow and deliberative and to do it right instead of fast. Some of the most remarkable organizations and talented people I've met are in government. Sadly, some of the worst I've met are in government as well, but my point remains. Assuming government = incompetent is clearly wrong in the face of copious evidence to the contrary.
OpenBSD is the answer (Score:2)
The Clipper chip (Score:5, Interesting)
You mention the Clipper chip and its key escrow system guaranteeing government access, but what you should remember is that the cryptosystem that chip used was:
1. Foolishly kept secret by the NSA, although it has long been understood that academic scrutiny is far more important than security through obscurity, and
2. Broken quickly: the symmetric cipher the chip used, Skipjack, was subject to a devastating attack on its first day of declassification (breaking half the rounds) and by now is completely broken. That remains rare for any seriously proposed cipher...
Since presumably the NSA did not try to make a broken cryptosystem (why would they? To help other spies? They themselves had the keys anyway!), this illustrates that yes, incompetence is a concern even at super-funded, super-powerful agencies like the NSA.
Bitcoin? (Score:5, Funny)
Obviously I haven't read the literature enough to know how it works or why it's impossible... But it would be really funny if it turned out that Bitcoin mining was actually the NSA's attempt at crowdsourcing brute-force decryption...
Re: (Score:2)
I'll raise my tin-foil hat to that theory. Either the NSA is doing that right now, or they are going to start.
Better breakers (Score:2)
Obviously the government has access to very fast computers beyond what the public has available. As computing power grows, it becomes easier for specialists to break into supposedly secure systems. We have also been in a war mode since 9/11, and all kinds of covert snooping are taking place. Deeply embedded agents do exist in this world; I have seen it first hand. Back in the 1960s, that fine young girl that spent a lot of nights in your bed and that you thought was a hippie was often some kin
Re: (Score:3)
Wait a second... a hippie from the 60's that's geeky enough to post on /.? Any girl in your bed should have been suspect!!!! ;-)
What are the odds? (Score:2)
The GSM ciphers are an interesting story (Score:3)
This property wasn't discovered until it had been fielded for years, of course, because the ciphers were developed in a closed standards process and not subjected to meaningful public scrutiny, even though they were nominally "open". The implication was that a mole in the standardizing organization(s) could have pushed for those parameters based on some specious analysis, without anyone understanding just what was being proposed, because the (open) state of the art at the time the standard was being developed didn't include the techniques necessary to cryptanalyze the cipher effectively. Certainly the A5 family has proven to have more than its fair share of weaknesses, and it may be that the bad parameter choices were genuinely random, but it makes one think.
Perhaps some reader can supply the reference?
The 802.11 ciphers are another great example of the risks of a quasi-open standardization process, but I've seen no suggestion that the process was manipulated to make WEP weak, just that the lack of thorough review by the creators led to significant flaws that then led to great new research for breaking RC4-like ciphers.
AES? Yeah right (Score:2)
Re: (Score:3)
The thing with encryption (see the Coursera course by Dan Boneh) is that the code doesn't have to be compromised for the encryption to be insecure.
And showing whether the encryption is secure or not? Well. That is not so easy.
Some smart-ass thought doing DES twice was safer than just DES. Wrong. Meet-in-the-middle attacks.
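For the curious, here's a runnable toy version of that attack, using an invented 16-bit block cipher so the tables fit in memory; the same argument scales double DES down from 2^112 to roughly 2^57 work:

# Meet-in-the-middle against double encryption, on a toy 16-bit cipher
# (invented for illustration: invertible and key-dependent, nothing more).
from collections import defaultdict

M = 1 << 16
MUL = 0x9E37                   # odd, hence invertible mod 2^16
INV = pow(MUL, -1, M)

def enc(k, b):
    return (((b ^ k) * MUL) + k) % M

def dec(k, c):
    return (((c - k) * INV) % M) ^ k

k1, k2 = 0x1234, 0xBEEF        # the secret key pair
pairs = [(p, enc(k2, enc(k1, p))) for p in (0x0042, 0x1337)]
(p0, c0), (p1, c1) = pairs     # two known plaintext/ciphertext pairs

# Tabulate all middle values from the plaintext side (2^16 work), then
# meet them from the ciphertext side (2^16 work) -- instead of 2^32.
middle = defaultdict(list)
for k in range(M):
    middle[enc(k, p0)].append(k)
for k in range(M):
    for cand in middle.get(dec(k, c0), []):
        if enc(k, enc(cand, p1)) == c1:   # confirm on the second pair
            print(f"recovered: k1={cand:#06x} k2={k:#06x}")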
Think of the scenario where random primes are picked every time directly after a device boots. A random generator didn't have enough time to get random. Those primes that are not random but in
origins of linux (Score:3, Funny)
There's a story I heard about the origins of Linux, told to me a few years ago at a UKUUG conference by a self-employed journalist called Richard. He was present at a meeting in a secure facility where the effects of "The Unix Wars" were being exploited by Microsoft to good effect. The people at the meeting could clearly see the writing on the wall: the approx-$10,000s cost of Unixen vs. the approx-$100s of Windows would be seriously, seriously hard to combat from a security perspective. Their primary concern was that the [expensive] Unixen at least came with source; Microsoft was utterly proprietary, uncontrolled, out of control, yet would obviously be extremely hard to justify *not* deploying in sensitive government departments based on cost alone. ... So the decision was made to *engineer* a free version of Unix. One of the people at the meeting was tasked with finding a suitable PhD student to "groom" and encourage. He found Linus Torvalds: the rest is history.
Now we have SELinux - designed and maintained primarily by the NSA.
The bottom line is that the chances of this speculation being true - that the NSA has placed backdoors in GNU/Linux or its compiler toolchain - are extremely remote. You have to bear in mind that the NSA is indirectly responsible for securing its nation's infrastructure; adding backdoors would be extremely foolish.
Easy answer (Score:3)
https://github.com/search?q=nsa+backdoor+extension%3Arb+extension%3Apy+extension%3Ajava+extension%3Aphp&type=Code&ref=searchresults [github.com]
Not likely (Score:4, Interesting)
This misses the dual goals of the NSA:
(1) Break other people's communications.
(2) Protect US (govt?) ones.
The trouble with backdoors is that they can be used by others to break US systems. So this is not the preferred solution from the NSA's perspective.
A good lesson in this is the DES cipher. DES was a 56-bit cipher based on IBM's original 128-bit Lucifer algorithm. When it was released, everybody worried about the S-boxes and the design, and wondered if the NSA had created a backdoor for itself. As attacks on Feistel-network ciphers (such as DES) were found, it became apparent that DES was already hardened against them: the NSA knew of these attacks and had produced the hardest 56-bit cipher possible. Their strategy became apparent: by setting the strength at 56 bits, they created a cipher they could break, because they had the processing power, but no one else could (at the time).
Similarly today: it's apparent that 22 years after PGP was created, mail is still not encrypted by default. The NSA's strategy is to help push the design of open standards to suit their goals: small-enough quantities of encryption that brute force or black-bag jobs can be used as required.
Re:Back doors are so 90s (Score:5, Funny)
Backdoors are passé.
And so is proper Unicode support...
Re: (Score:2)
^ this one. ding ding ding.
Paraphrasing old Brucie on this:
Why would an attacker spend time trying to get through your steel-plated triple-deadbolted front door, when they can throw a rock through your kitchen window and crawl in?
All it takes are some unchallengeable secret court orders, and off to your nearest cloud/service provider to suck down all your datas.