Schneier: We Don't Need SHA-3
Trailrunner7 writes with this excerpt from Threatpost: "For the last five years, NIST, the government body charged with developing new standards for computer security, among other things, has been searching for a new hash function to replace the aging SHA-2 function. Five years is a long time, but this is the federal government and things move at their own pace in Washington. NIST soon will be announcing the winner from the five finalists that were chosen last year. Despite the problems that have cropped up with some versions of SHA-2 in the past and the long wait for the new function, there doesn't seem to be much in the way of breathless anticipation for this announcement. So much so, in fact, that Bruce Schneier, a co-author of one of the finalists, not only isn't hoping that his entry wins, he's hoping that none of them wins. ... It's not because Schneier doesn't think the finalists are worthy of winning. In fact, he says, they're all good, fast, and perfectly capable. The problem is, he doesn't think the world needs a new hash function standard at all. SHA-512, the stronger version of the SHA-2 function that's been in use for more than a decade, is still holding up fine, Schneier said, which is not what cryptographers anticipated would be the case when the SHA-3 competition was conceived. 'I expect SHA-2 to be still acceptable for the foreseeable future. That's the problem. It's not like AES. Everyone knew that DES was dead — and triple-DES was too slow and clunky — and we needed something new. So when AES appeared, people switched as soon as they could. This will be different,' Schneier said via email."
Useful replacement (Score:4, Insightful)
Re: (Score:3)
Faster computation of cryptographic hashes adds weaknesses, since it makes brute-force collision finding faster: one can try possibilities more quickly.
Re: (Score:2)
This is only a problem for one single use of hashes, namely storing passwords in a database, and there are perfectly satisfactory solutions to it.
It is not a problem for most other uses of hashes.
Re: (Score:3)
There are a few such uses, but yes, it only affects certain types of collision attacks. It is a weakness in those use cases, though. Does it matter from a security standpoint if the hashing is slightly slower when checking an HMAC? From a usability standpoint I don't want to wait 5 minutes while the computer decrypts a webpage, but speed doesn't add to or detract from the security of the algorithm in such use cases.
Re: (Score:2)
I think all the finalists offer 512-bit (or longer) output modes, which make collisions far harder to find than at the current bit lengths.
If you just mean passwords, then choose a more suitable hash function; that is not what SHA-3 is for.
Re: (Score:1)
No they don't. Hashes that can be brute-forced if you can only calculate them fast enough are weak per se. 512-bit hashes cannot be brute-forced even if you can calculate 2^64 per second, so it is advantageous that they can be evaluated quickly for the mainline case of using them to hash things.
Re: (Score:2)
Hashes should be
Re: (Score:2)
Assuming you're salted and not in plain text (big assumption, alas), nothing can sensibly defend against weak keys - but that's because all the security is in the key. Weak keys are weak, end of.
Re: (Score:1)
Go right ahead then, pick any of the contestants and bruteforce a collision. You'll be famous.
We cannot even design a computer to COUNT to 2^128, so for any even minimally secure hash function of 256 bits or more, brute force won't happen. Not even if your GPU with 10000 cores can do a hash in one clock cycle.
Re: (Score:2)
That's the point, it doesn't have to: each of those 10000 cores does a hash attempt in a few clock cycles. My NVIDIA GPU (GTX 465), which is 2 years old now, can handle 2.75 million sha1 hashes a second (I tested it). Being the GeForce consumer model for graphics, only half the processing power is available for use in CUDA general-purpose computing. Salted MD5 passwords up to 12 characters can be brute-force cracked in about a month with ~$40,000 worth of off-the-shelf hardware (I dread to think how fast NSA or GCHQ could do it on their top secret supercomputers with classified performance specs).
Re: (Score:2)
2.75 million sha1 hashes a second (I tested it)
Now compare the exponential (2^n) complexity of SHA-1 with that of SHA-512.
(I dread to think how fast NSA or GCHQ could do it on their top secret supercomputers with classified performance specs).
NSA is a government agency. Figure their costs to do anything are 3x that of industry (I'm being generous).
Now, figure out what they need to spend to out-R&D Intel and look at their budget. If they have working quantum supercomputers, are they building that massive western data center just
Re:Useful replacement (Score:5, Funny)
True, I normally use an 8-bit checksum for my hashing for best performance. On passwords in particular, some people think hashing and password recovery are incompatible, but on the server I simply maintain a list of 256 complex-looking passwords so a match can be quickly looked up and e-mailed back.
Does anyone know if that idea has been thought of before? Maybe I should take out a patent.
Re: (Score:2)
SHA-512 is a cryptographic hash function. Faster computation of hashes is exactly what you *don't* want.
Re: (Score:1)
Why wouldn't you want faster cryptographic hashes? It is trivial to slow a hash down as much as you want, but when you need it to go fast, it is very difficult to speed it up.
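To illustrate the "trivial to slow down" part, here is a minimal sketch using Python's hashlib; the choice of SHA-512 and the iteration count are illustrative assumptions, not anything prescribed above:

    import hashlib

    def slowed_hash(data: bytes, iterations: int = 100000) -> bytes:
        # Start with one application of the fast hash...
        digest = hashlib.sha512(data).digest()
        # ...then feed the digest back into itself; every extra iteration
        # raises the per-guess cost for an attacker by the same factor it
        # costs the defender.
        for _ in range(iterations):
            digest = hashlib.sha512(digest).digest()
        return digest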
Re: (Score:2)
They *specifically* stated that "computational efficiency" was important in their official announcement of the FIPS 180-3 competition. That was second on their list, right after "security".
Future proofing (Score:2)
However, SHA-2 could be broken tomorrow, and this time we won't have a decade's wait while a suitable replacement is designed.
Re: (Score:3)
http://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA.27s_involvement_in_the_design [wikipedia.org]
Re:Future proofing (Score:5, Interesting)
To be fair, the NSA doesn't seem to have caused problems with the S-boxes, and differential cryptanalysis doesn't seem to have worked too well. On the other hand, COPACOBANA et al. were glorified 1940s-era Colossus machines - cracking codes via a massively parallel architecture. To me, that's the scary part. Turing's work on cryptography and massively parallel code breakers was 100% applicable to the design of DES because the key length was so incredibly short. You could build enough machines to effectively break it.
How many DES engines do you think someone could have crammed onto a wafer in the 1980s? (Remember, each die can have multiple engines, and then the dies that work can be hooked together.) Link up a bunch of such wafers and you end up with a crypto engine from hell. It would have been VERY expensive, but I would imagine it perfectly plausible that a sufficiently determined and rich organization (I would imagine the NSA might have been one such) could have built such a machine when the rest of us still thought the 6502 was a really neat idea.
Doesn't mean anyone ever did. People could have reached Mars in the 1980s, so "could have" and "did" are obviously very different things. What people actually did is anyone's guess, though "nothing" sounds about right.
Had they built such a device, though, then near-real-time breaking of DES would have been possible at the time it was in mainstream use. Certainly, there were claims circulating that such devices existed, but a claim like that without proof is hard to accept. All I can say is that it's demonstrably not impossible, merely unlikely.
Back to SHA-2. Are we in the same boat? Are there ways to build something today, even if nobody is likely to have actually built it yet, that could endanger SHA-2? (To me, THAT is the measure of security, not whether anyone actually has, because they're not likely to tell you when they have.) Quantum computing is the obvious threat, since 512 bits is a lot of security, too much to attack in parallel with a classical architecture. Quantum computing, though, should let you scale up non-linearly. The question is whether it's enough. (I'm assuming here that there are no issues with preimages or timing that can be exploited to reduce the problem to a scale QC can solve even if classical machines can't.)
There have been a few murmurs that suggest SHA's security isn't as strong as the bitlength implies. Would that be enough? If Japan can build a vector machine the size of a US football stadium, then it is not physically impossible to scale a machine to those sizes. Nobody has scaled a quantum computer beyond a few bits, but I repeat, I don't care what people have publicly done, it is what is within the capacity of people TO build whether publicly or not that matters.
If you're not 100% certain that not even a quantum computer on such a scale, where all nodes are designed at the hardware level to perform JUST the task of breaking the hash, could break it, then the hash is not safe for 20+ years. It may be unlikely, but there's nothing to say it might not be vulnerable right now. There's nothing physically impossible about it (as shown), it's merely a hard problem. And hard problems get solved. What you need in a crypto hash is something you can be sure WILL be impossible to break in a 20-year window, which means what you need is a crypto hash that is beyond anything whose components can be prototyped today. For a 30-year window, it needs to be beyond detailed theory. A 50-year window can be achieved if it's beyond any machine ANY existing theory can describe.
(It takes time to go from theory to prototype to working system to working system on the right scale. The intervals seem to be fairly deterministic in each subject. I believe this to indicate a mathematical model that underpins things like Moore's Law and which is independent of field. Know that model and you know when Moore's Law will fail. Moore's Law is merely the equivalent of Hooke's Constant for computing, failure is inevita
Re: (Score:2)
Had they built such a device, though, then near-real-time breaking of DES would have been possible at the time it was in mainstream use. Certainly, there were claims circulating that such devices existed, but a claim like that without proof is hard to accept. All I can say is that it's demonstrably not impossible, merely unlikely.
Speculation like this needs to take historical context into account. At that time, very little important information worth the effort was stored on a computer. And far less of it was connected to the internet where the NSA or CIA could access the data stream in real time. Such a machine may have been created, but I would hardly think there'd be a use for it.
Re: (Score:2)
True, for computer information, but plenty of data was sent via radio - it was simplicity itself to tune into civilian and military digital chatter. (See "The Hacker's Handbook", by "Hugo Cornwall" - pseudonym of Peter Sommer, an expert in information systems security.) For military purposes, it was much much easier to teach people to type messages into a portable machine which would digitize it and blast the digital form wirelessly (and encrypted) than to get them to key properly. Keying in morse was also
Re: (Score:2)
Re: (Score:2)
To be fair, the NSA don't seem to have caused problems with the S-Boxes and differential analysis doesn't seem to have worked too well
In fact, the NSA's changes to the S-boxes made DES stronger against differential cryptanalysis; it appears that they and IBM knew about diff crypto back in the 1970s and designed an algorithm to resist it even though the technique wouldn't be widely known for another 15-20 years.
Differential crypto only "doesn't seem to have worked out so well" because it's known and algorithms
Re: (Score:2)
Re: (Score:2)
SHA2 supports 256 bit modes, which gives you 64 bits of security, which is WELL within the reach of modern technology, and part of the debate is whether SHA3 is needed at all. Clearly it is.
128 bits might be "out of reach" of technology for the next few decades, but that is not enough. Nowhere near. Classified information has to be secure for 50 years and SHA3 must be strong enough to support that requirement for at least as long as it will take to create a SHA4 (which, to judge from SHA3, might easily be a
Re: (Score:2)
Re: (Score:2)
However, SHA-2 could be broken tomorrow, and this time we won't have a decade's wait while a suitable replacement is designed.
And SHA-3 could be broken the day after. Or some mathematical breakthrough could undermine an assumption that both use.
Re: (Score:2)
All remaining SHA-3 candidates use different mathematical assumptions from the SHA-2 algorithms, so breaking one won't just break the other.
Re:Future proofing (Score:5, Interesting)
Very true. Which is why I'm anxious that SHA-3 have as little as possible (ideally nothing) in common with SHA-2, be it algorithmically or in terms of the underpinning mathematical problems that are assumed to be hard.
I would have preferred Blue Midnight Wish to be still in the running (well, it's got a cool name, but more importantly it has a very different design).
I ALSO wish Bruce and the others would pay attention to those of us on the SHA-3 mailing list advocating a SHA-3a and SHA-3b, where -3a has the best compromise between speed and security, and -3b makes absolutely no compromise at all and is as secure as you can get. Why? Because that meets Bruce's objections. -3a may well be broken before SHA-2 is so threatened that it is unusable, because of all the compromises NIST want to include. -3b, because it refuses to bow to such compromises, should remain secure for much longer. You can afford to stick it in the freezer and let it sit there for a decade, because it should still be fresh BECAUSE no compromises were made. By then, computers would be able to run it as fast as, or faster than, -3a can be run now.
So I have ZERO sympathy with Schneier. He is complaining about a problem that he is, in part, responsible for making. Other views WERE expressed, he thought he knew better, but his path now leads to a solution he believes useless. So, to NIST, Bruce, et al, I say "next time, leave your bloody arrogance at home, there's no room for it, doubly so when you've got mine to contend with as well".
Re: (Score:2)
Because that meets Bruce's objections. -3a may well be broken before SHA-2 is so threatened that it is unusable, because of all the compromises NIST want to include. -3b, because it refuses to bow to such compromises, should remain secure for much longer. You can afford to stick it in the freezer and let it sit there for a decade, because it should still be fresh BECAUSE no compromises were made.
There are some applications where this is very important, for example the electronic signing of documents for copyright purposes (i.e. a hash published to prove authorship), public time-stamping of documents, etc. If someone can come back in 10 years' time with an alternative document that produces the same hash, you no longer have absolute proof!
Re: (Score:3)
Is this still possible? Considering SHA-2 is really a take-your-pick suite of SHA-224, -256, -384 & -512, NIST could do the same with SHA-3 and create a family.
Hell, SHA-1 is still kosher according to FIPS 180-4 as of March 2012. I expect SHA-2 to hang around for many years to come.
I admit I have not been following the mailing lists, and they might have nixed this idea totally. Thus my question to you, which is probably quicker than trying to dig through the archives.
Re: (Score:2)
Oh, it should indeed still be possible to produce a best-of-breed class as well as a best-all-round class, but the closer we get to the deadline, the more apathy and office politics subsumes the process.
It would be great to have a family. Since SHA-3 entries were to produce a fixed-sized hash, the family would consist of different breeds of hash rather than different output lengths. I don't see a problem with that. People can then use what is correct for the problem, rather than changing the problem to make
Re:Future proofing (Score:5, Interesting)
Bruce's argument is essentially "the devil you know." Five years ago it seemed like SHA-2 might succumb to an attack. However, it's been five years, and those attacks never materialized on SHA-512. That's five more years of convincing evidence that SHA-2 is fairly secure. None of the SHA-3 finalists have had the same level of scrutiny. Sure, they've been looked at seriously, but nothing like the widespread amount of attention that an actual standard gets.
Another consideration is practicality. If a new standard is published, millions of dollars will be expended changing systems around the world. Will all that money have been well spent? If there was no cryptographic reason for the change, all that money and effort was wasted.
And what about security? Will every replacement be perfect? I personally doubt it; mistakes are made and people screw up implementations all the time. An organization that hired a cryptographer to design and implement a secure solution in 2007 might feel it can do the work itself today. But we know that cryptographic protocols are notoriously finicky when it comes to unintended information leakage and other security failures. If a secure design is modified in any way, the potential to introduce a security bug means the risk of change is much higher than the risk of sticking with SHA-2.
Re: (Score:2)
I don't disagree with any of your points. The issue I have is with people; in particular with people who look at these standards and say "I have to have the newest thing, because the newest thing is the most secure." These are not cryptographers, these are CIOs and managers. They get their ideas of security from the security professionals themselves, who are constantly saying "always download and install the latest patches as quick as possible." These people will see SHA-3 as a "patch" or "upgrade" on t
Name (Score:1)
I think the next hash should be called B455 DR0PP3R 2K12
I have an idea (Score:5, Informative)
How about we link to Schneier's actual blog post? https://www.schneier.com/blog/archives/2012/09/sha-3_will_be_a.html [schneier.com]
Re:I have an idea (Score:5, Funny)
You must be new here..
Re:I have an idea (Score:5, Funny)
Besides, Bruce Schneier doesn't need his blog entries linked from anywhere - he just breaks into webservers and puts links wherever he wants.
for the uninitiated [schneierfacts.com]
Why the unneccessary government bashing? (Score:5, Insightful)
Is it really necessary to have a snide remark at supposed government inefficiency there? Can't we bury these ideological attacks that are not really supported by facts or data, add nothing to the point, and are in fact grossly misleading?
This is a hard mathematical problem. Ordinary research papers in mathematics often spend a year or more in peer review in order to verify their correctness. If you're building a key component of security infrastructure a couple of years of review is not at all unreasonable.
Re:Why the unneccessary government bashing? (Score:5, Insightful)
Yeah, that bit of snark really showed the author has no clue at all what goes into a process like this. Those years are there to give researchers time to really try and break all the candidates. You don't want to rush that part only to find out someone managed to break the winner the next year.
Re: (Score:2)
Are there cases where running stuff through the government is inefficient? No doubt. Let's look at one of your examples though, ISPs. Do you know what is the grand unifying theme of all the countries with better internet access? The government got much MORE involved, not less.
Same with public transport and infrastructure in general. It's horribly inefficient to let this stuff be driven by the free market (see the UK rail system). Government is inefficient if it is structurally underfunded, or if ideologues
Re: (Score:2)
This process is not at all "inefficient". It is slow, and deliberately so. Nobody involved would want it any faster. If anything, people would probably feel better if it was even slower.
The entire point is to give researchers enough time to attack the candidate algorithms and find as many lurking insecurities as possible. The last thing anyone wants is to find vulnerabilities after the algorithm has been standardized and deployed.
Re: (Score:2)
This process is not at all "inefficient". It is slow, and deliberately so. Nobody involved would want it any faster. If anything, people would probably feel better if it was even slower.
Here's an example of how it's inefficient. Has the NSA already compromised whatever will end up being the SHA-3 selection? Keep in mind that if a government agency builds a hashing algorithm with a hidden flaw (here, being inefficient is not the same as being unable to do something) before the contest starts, then it's a relatively simple matter to bias the contest to select that algorithm (unless the flaw is so obvious that it is caught during the contest). The NIST after all serves the same boss as the US
Why stop now? (Score:2)
So much work from everyone involved and we just throw it away??
This is a standard for many years in the future. SHA-1 is still used in some current applications and is still considered secure, and people are still using MD5.
Everyone can just ignore the new standard, and researchers can have a decade or two to try to break it before it's needed. Where is the harm?
But Seriously Folks (Score:2)
If it aint broke... (Score:1)
NIST 800 is a waste of time (Score:2)
Has anyone ever actually read NIST 800? I just had to review 800-30 and 800-39 yesterday. Hand to god, they're designed to put you in a coma. There is not enough Ritalin on the planet for that.
Quantum Botnets (Score:1)
Really, any increase in key length or change in algorithm ought to be done to save us from the issues that could arise from things like quantum computers, supercomputer botnets, or, further into the future, quantum computer botnets. I mean, we don't have those things yet, but we can kinda see them coming, and we ought to be thinking long and hard about how to break that issue permanently.
Let's keep them on standby (Score:2)
NIST started the SHA-3 competition when SHA-1 was proven weak and no one was sure how long SHA-2 would last.
No one liked the idea of relying solely on the wide-pipe SHA-512 when the underlying building blocks had been proven weak (using SHA-512 is a bit like using triple-DES).
However, it is difficult to predict advances in cryptography, and though SHA-512 is not nearly as weak as we predicted it would be a few years ago, we don't know what new cryptanalysis will show up tomorrow, forcing us to leave SHA-2
Re: (Score:2)
How is that different from just picking another one of those 5 and calling it SHA-4? It's not like they magically go away because one has been given a version number all of its own.
Disagree: There should always be two (Score:5, Insightful)
I disagree. You don't wait to build a fire escape until the building is on fire. Similarly, we need a good alternative hash algorithm now, not when disaster strikes.
I believe that, in general, we should always have two widely-implemented crypto algorithms for any important purpose. That way, if one breaks, everyone can just switch their configuration to the other one. If you only have one algorithm... you have nothing to switch to. It can take a very long time to deploy things "everywhere", and it takes far longer to get agreement on what the alternatives should be. Doing it in a calm, careful way is far more likely to produce good results.
The history of cryptography has not been kind, in the sense that many algorithms that were once considered secure have been found not to be. Always having two algorithms seems prudent, given that history. And yes, it's possible that a future break will break both common algorithms. But if the algorithms are intentionally chosen to use different approaches, that is much less likely.
Today, symmetric key encryption is widely implemented in AES. But lots of people still implement other algorithms, such as 3DES. 3DES is really slow, but there's no known MAJOR break in it, so in a pinch people could switch to it. There are other encryption algorithms obviously; the important point is that all sending and receiving parties have to implement the same algorithms for a given message BEFORE they can be used.
Similarly, we have known concerns about SHA-2 (both SHA-256 and SHA-512). Maybe there will never be a problem. So what? Build the fire escape NOW, thank you.
Re:Too slow? (Score:4, Insightful)
Disclaimer: I'm not a security expert so don't expect what I'm saying to be accurate.
Dictionary attacks have nothing to do with breaking hashes. If you mean stuff like rainbow tables, that's specific to hashes used to store passwords, which doesn't even need anything > SHA-256, because passwords don't have that much entropy to begin with.
What you need for security are essentially two properties: the entropy in the hash system (how random the values seem to be in relation to the input), and collision resistance (how hard it is to generate two inputs that result in the same hash, AND how hard it is to generate an input for a given hash value).
Cryptographic hashes are used for a lot of other purposes, and many of them DO need to be fast while having very high collision resistance. The most notable may be generating signatures for cryptographic purposes (generally, to verify a message was sent by the entity that claims to have sent it).
Re:Too slow? (Score:5, Informative)
> Dictionary attacks have nothing to do with breaking hashes.
There are two kinds of hashes you should use: those which are meant to be slow (for password hashes), and those which are meant to be fast (for message signing). SHA is meant to be fast.
Re: (Score:2, Informative)
If you rely on hashing speed to hash passwords, you are doing it wrong. Computers get faster, constantly. It's not speed that matters, it's the number of possible combinations being exponentially too large to brute force, relative to the time to calculate each hash. Who cares if you can calculate millions of hashes in one second, if you still need to spend longer than the age of the universe to get a reasonable number of inputs to use as a dictionary? It's just simpler to use a plain-text dictionary and
Re: (Score:3)
I just found an SQL injection attack and downloaded the whole password database. I now crack it at my leisure, and I can come back any time and use those usernames and passwords. Now, what is the bet that some of those usernames and passwords are used somewhere else by some of the users? When salting you need to be very specific about it: you do not want to use the same salt as another system, and you do want your salts to all be unique to a given user on your system. The usual suggestion is random data from a PRNG (technically
Re:Too slow? (Score:5, Funny)
If the passwords are decently salted and the salt is unknown good luck with that. Remember to switch planets when the Sun goes nova.
Re: (Score:2, Informative)
The sun will not go nova. It will turn into a red giant and then a white dwarf.
Re: (Score:1)
The salt is difficult to keep unknown. Every part of the web application which needs access to the salted hashed password also needs access to the salt, so if your security fails and allows access to the salted password, it probably allows access to the salt as well.
Sometimes you get lucky and the attackers get only the salted hashed password, but you cannot design your security around getting lucky.
Re:Too slow? (Score:5, Informative)
Perhaps I'm misunderstanding your point, but the idea of the salt is not to keep it secret. The idea is that each user's password is combined with a unique string (the salt), so that if you try to attack the password database with a dictionary attack you have to process each password individually.
Re: (Score:3)
That's one way to use salt. Another way is to keep the salt secret. A secret salt, for example, can be used to validate that a value you've handed to someone else hasn't changed.
Let's use this example...
I send you a session ID, uniquely identifying you. This session ID is tied to your username, and is involved in access control. If I simply send you the ID and trust the ID you return, you could easily change it, and possibly hijack someone else's session.
If I send you the session ID, and a salted hash of the
Re: (Score:2)
The scheme you describe is a Message Authentication Code, not a salted hash. If you use a salted hash when you actually need a MAC [wikipedia.org], you're potentially compromising your system's security.
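For the session-ID case above, a MAC-based sketch might look like this (using Python's hmac module; the key and token format are assumptions for illustration, not anything from the parent post):

    import hashlib
    import hmac

    SERVER_KEY = b"secret value known only to the server"  # hypothetical key

    def issue_token(session_id: str) -> str:
        # The tag lets the server detect any modification of the session ID.
        tag = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
        return session_id + "." + tag

    def check_token(token: str) -> bool:
        session_id, _, tag = token.rpartition(".")
        expected = hmac.new(SERVER_KEY, session_id.encode(), hashlib.sha256).hexdigest()
        # Constant-time comparison avoids leaking the tag through timing.
        return hmac.compare_digest(tag, expected)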
Re:Too slow? (Score:4, Informative)
Re: (Score:2)
It doesn't need to be private
However, it should be sufficiently complex. A salt that adds a few numbers to the beginning and/or end of the password string does little to no good. A salt generated by a hash of a random value is, however, very effective.
Re: (Score:2)
The salt is known if you have the password file. The point is to use enough salt that rainbow tables are infeasible.
Re: (Score:2)
I just found an SQL injection attack and downloaded the whole password database. I now crack it at my leisure.
Sure, but while the site is exploitable can't you pwn the rest of the site? You probably can pwn the rest of the database.
The solution it seems is to use different passwords for every site (or at least sites that matter). It doesn't even matter if the passwords are short. Once the hacker has enough access to get the passwords they normally have enough access to get the rest of the juicy data, or even change it.
Given the vast numbers of sites with weak security it seems a waste of time to use very long passw
Re: (Score:2)
Of course, but I'm betting even your educated users do not do that. And yeah, you need about 12 characters before brute force is out against salted MD5; this is why slower algorithms like bcrypt help (Blowfish/SHA-1/SHA-256 applied multiple times, with some special stuff thrown in to make it hard to build hardware accelerators for it).
Re: (Score:2)
Very valid point on password length, as it's what I tend to follow. For those sites with a "critical" password, I tend to use as high an entropy and length as possible. For places like /. and other forums that are not important, I use a lower-quality password, as I can and will replace the account if the forum is important enough to me. Otherwise, I post AC if I'm just sticking my nose in the tent to see if there's anything interesting.
Re:Too slow? (Score:5, Insightful)
Hashes like bcrypt are configurable too: the number of rounds is set as a power of two, so the hash can be made more secure / slower if necessary as time progresses. With 2^10 rounds it's approximately 8000x slower to compute a hash than SHA-1, which isn't a big deal server side, but it is for someone running through a dictionary.
It's so bad that attackers would probably only bother to try a subset of common passwords (e.g. top 1000 passwords) before moving onto the next one. Enforcing password quality during signup would probably block these from hitting too.
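A minimal sketch of what that cost parameter looks like in code, assuming the third-party Python "bcrypt" package (the round counts here are illustrative, not a recommendation):

    import bcrypt

    # rounds=10 means 2**10 internal iterations; raise it as hardware gets faster.
    hashed = bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=10))

    # Verification re-derives the hash from the salt and cost embedded in `hashed`.
    assert bcrypt.checkpw(b"correct horse battery staple", hashed)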
Re: (Score:2)
Users will use weak passwords*, web servers will get compromised.
Ideally you would have a separate "password checking server" that did nothing but store and check passwords and was locked down very tight, but most sites can't really justify the cost of that. So on most sites, the password database and any related secrets, such as a fixed part of the salt, are just one bug in a PHP webapp away from being revealed to an attacker.
Using a deliberately slow hashing technique will increase the time taken for the hacke
Re:Too slow? (Score:5, Informative)
If you rely on hashing speed to hash passwords, you are doing it wrong.
If you rely only on hashing speed to protect your passwords, you're doing it wrong.
The problem with fast hashing is that it facilitates brute force password searches. Salt prevents rainbow table attacks, but targeted brute force attacks against a specific password can be quite feasible given typical user password selection. There are two solutions to this: Make users pick better passwords or find a way to slow down brute force search. The best approach is to do both; do what you can to make users pick good passwords (though there are definite limits to that approach), and use a parameterized hash algorithm that allows you to tune the computational effort.
The common way to slow down the hash is simple enough: iterate it. Then as computers in general get faster you can simply increase the number of iterations. In fact, you can periodically go through your password table and apply a few hundred more iterations to each of your stored password hashes. The goal is to keep password hashing as expensive as you can afford, since whatever your cost is, the cost to an attacker is several orders of magnitude higher (since the attacker has to search the password space). Oh, and it's also a good idea to try to keep attackers from getting hold of your password file. Layered defense.
As I understand it, that's why you salt the passwords AND use a user-specific string (based on username, email and/or similarly constant data)
User-specific strings are good too, as another layer to the defense, but you have to assume that an attacker who gets access to your password file probably gets that data as well.
to introduce more variation so that they can't use generic rainbow tables or even site-specific rainbow tables.
Salt is sufficient to eliminate rainbow table attacks.
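As a concrete sketch of the salt-plus-iteration approach described above, something like Python's built-in PBKDF2 can be used; the iteration count and storage layout here are assumptions for illustration:

    import hashlib
    import hmac
    import os

    def hash_password(password: str, iterations: int = 200000):
        salt = os.urandom(16)  # per-user random salt defeats precomputed tables
        digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
        return salt, iterations, digest  # store all three with the user record

    def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
        return hmac.compare_digest(candidate, digest)

Storing the iteration count with each record is what lets you ratchet the cost up over time without invalidating existing hashes.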
Re: (Score:2)
No need for the additional overhead of using a rainbow table for this. Just apply a generic brute force dictionary attack without using rainbow tables. It will be much faster, if there is just a single leak.
The rainbow table only helps if it can be precomputed, which you can only do if the salt is leaked before the database. If a site repeatedly has leaks of its password database, and the salt remains unchanged between l
Changing pepper after breach (Score:2)
Or you drop support for the old pepper and mass-expire all the passwords and requi
Re: (Score:2)
Supporting two or supporting an arbitrary number is about the same amount of work. If you only support one at any given time, you are going to have chaos when you change it and every single user password stops working. And you are going to have problems as you gradually roll out the change, as anybody who has worked with production systems knows you should do.
Besides, if you immediately expire all the passwords on a leak of the peppe
Re: (Score:2)
The reference to site-specific rainbow tables implies the same salt was used for all passwords.
That's not salt, that's just a modification of the hash algorithm -- basically a tagged hash. Salt is defined as a per-entry random value.
Re: (Score:1)
You're missing the distinction between an online attack and an offline attack. In an online attack, where the attacker goes to the website and starts typing in passwords, then lockout will do just fine. But when the attacker has stolen the password file, he gets as many guesses as he wants, bounded only by computing power. And in that case, the hashing speed will be a limiting factor in how long it takes him to break the passwords.
Re: (Score:2)
Re: (Score:3)
It's not even just distributed computing. Some commodity hardware, like AMD GPUs, can compute current (fast) hashes at a ludicrous speed (billions per second, and no, that's not a typo, although memory throughput tends to limit the effective rate to hundreds of millions). Dedicated hardware, either custom-fabricated or using FPGAs, can improve on even that order of magnitude... and that's today's tech. As you say, hardware just keeps getting faster and faster, plus of course there's the distributed ("cloud"
Re: (Score:2)
> it is very likely they also know how you salt and/or create the user-specific string. so in that case, trying to find the password by a user still becomes trying all possible password until you find one that matches.
False, if the site is using double passwords.
If you compute passwordHash2( userId + passwordHash1( plaintext ) ), good luck even trying to "crack" that password.
Functions passwordHash1 and passwordHash2 could be the same one-way-hash or passwordHash2 could be the "super" strong one-way-hash. As
Re: (Score:2)
There are two types of dictionary attacks. One is used for breaking passwords: using a dictionary of likely passwords and trying them one by one, hoping that you find the password in the dictionary. This can be coupled with brute-forcing techniques to try things like adding a number to the end, replacing e with 3, etc., and some crackers will even start running brute force through combinations not in the dictionary when the dictionary runs out. Now, in the hash world a collision exists; this is where another set of the same
Re: (Score:1)
As I was trying to explain in the reply below, the time it takes to calculate the hash is meaningless. Relying on that time as a way to prevent intrusions would be like a bank using a maths puzzle to lock the safe, and then claiming that it takes too long to solve, so they would notice the attempt before it happens. It just doesn't work that way.
You have two strengths in preventing such intrusions: first is the exponential complexity of reversing the hashing process (brute forcing, unless the algorithm is p
Re:Too slow? (Score:5, Informative)
Not strictly true. One reason bcrypt/scrypt/PBKDF2 is recommended over straight salting and hashing is that it is slower to hash, and in the case of bcrypt it is also deliberately designed to be harder to build dedicated accelerators for, or to speed up with parallel processing, therefore slowing down a brute-force attack. Yes, time shouldn't be the only factor, but most cryptography has a time element: given enough time one can break the whole bank's password database through brute force. Don't you want to make it as slow as possible even to make attempts (offline as well as online)? If I can break this diplomatic cable, that's great, but if it takes 70 years it's already declassified before I broke it anyway; does it matter that I could break it given 70 years?
Re: (Score:3)
It's not at the scale of 70 years. Brute forcing a 128-bit space would take at best millions of years and require that most of the planet's mass be converted to energy.
Re:Too slow? (Score:4, Funny)
Conveniently, converting most of the planet's mass into energy serves as an effective substitute for diplomacy in many situations.
Re: (Score:2)
To be fair, pretty much all cryptography has a time/memory element. This element is the main limitation on brute-force attack.
The point of cryptography is to make it more time-consuming/expensive to brute force the key space than to simply guess the contents of the hash. The difficulty of breaking modern cryptography is typically described in terms of astronomical scales (to brute force this cypher, you'd need a bit of memor
Re: (Score:2)
A perfectly random one-time pad, the same length as the message and applied bit for bit, is the only provably secure algorithm; just don't ever use the same key twice, and find some secure way to manage the keys (trusted sneakernet?). But most key management systems for such cryptography might as well just carry the message instead of the key, as the message and the key are the same length.
Re: (Score:2, Informative)
would be like a bank using a maths puzzle to lock the safe, and then claiming that it takes too long to solve
Um, isn't that *exactly* how encryption works? :p
The point is, the timescales are exponential. A hash that's 100 times faster to compute only knocks 2 orders of magnitude off the time it takes to crack the hash (10^10 universe-lifetimes instead of 10^12, w00t), but it makes it 100 times more usable in the golden path case of computing a hash on an in-core string.
Re: (Score:3)
As I understood, it has to be slow to be hard to break via dictionary attacks etc. ...
No - you use long, cryptographically random salts to avoid dictionary attacks. Any hash or cryptographic function that is fast enough to use will be fast enough to attack with a dictionary unless you do this. Of course, user education and password rules forbidding short alpha-only words are important too!
Re: (Score:3)
No, they avoid certain classes of dictionary attack, like rainbow table attacks, where the dictionary has the matching hash precalculated. Me taking a dictionary, salting and hashing each word, and seeing if it matches is still a dictionary attack.
Re:Too slow? (Score:5, Informative)
No - you use long, cryptographically random, salts to avoid dictionary attacks.
There are basically two types of salt, fixed salts stored in the server configuration and per-password salts stored in the password database. They defend against different things.
Fixed salts stored in the server configuration defend against someone who has got your password DB but not your server configuration.
per-password salts stored in the password DB defend against precomputed attacks.
Neither provides a defense against someone who has both your password DB and server configuration and is going after an individual password. That is where deliberately slow hash functions come in.
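A minimal sketch of using both kinds together with a deliberately slow final step (the names, the HMAC step, and the parameters are my own illustrative assumptions):

    import hashlib
    import hmac
    import os

    PEPPER = b"fixed secret from the server configuration"  # never stored in the DB

    def store_password(password: str):
        salt = os.urandom(16)  # per-password salt, stored next to the hash in the DB
        keyed = hmac.new(PEPPER, password.encode(), hashlib.sha512).digest()
        digest = hashlib.pbkdf2_hmac("sha512", keyed, salt, 200000)
        # An attacker now needs the DB row, the server secret, and a lot of
        # compute per guess.
        return salt, digest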
Re: (Score:2)
Salt also protects against rainbow tables in itself. If I add 4 bytes of salt, your rainbow table becomes 4 billion times larger to have the same effectiveness as it did when there is no salt (assuming the 4 bytes can hold any value).
Re:Too slow? (Score:5, Interesting)
This is a common misconception. The source of this misconception is the way people have tried to protect weak passwords through the use of hashing. If you take a strong password and hash it with a hash function satisfying all the requirements for a cryptographic hash function, then that password is well protected. That construction doesn't work for weak passwords. If you apply a salt while hashing, you move the threshold for the strength of passwords which can be brute forced. This is quite clearly an improvement over plain hashing. I know of no cryptographer who has disputed that it is a good idea to use salts while hashing passwords.
But there are still some passwords which are too weak to be protected by a salted hash. This has led to some people saying this hash function is insecure because it is too fast. What they should have been saying was: this password mechanism is insecure, because it is using the wrong kind of primitive. This is an important distinction. Even if the hash function satisfies all the requirements of a cryptographic hash, a salted hash cannot protect a weak password.
When building cryptographic systems you often need to apply different classes of primitives. Common primitives are hash functions, block ciphers, asymmetric encryption, digital signatures. Examples of primitives in each of these four classes are SHA512, AES128, RSA, RSA (yes RSA does fall in two different classes, there are other primitives, which fall in only one of those two classes). If you want to send an encrypted and signed email, you typically use all those four classes of primitives.
To protect semiweak passwords better than through a salted hash you really need a new class of primitive. For lack of better term I'll call that primitive a slow function. Claiming that a hash function is insecure because it is fast would be like claiming AES128 is secure because you can derive the decryption key from the encryption key.
The formal properties I would say a slow function should have are, first of all, that it is a (mathematical) function mapping bitstrings to bitstrings, and that it requires a certain amount of computing resources to compute the full output from any single input. Properties that I would not require a slow function to have include collision resistance and fixed-size outputs. Those are properties you expect from a hash function, which is a different kind of primitive.
People have tried to squeeze both kinds of properties into a single primitive, which if they succeeded, would be both a cryptographic hash and a slow function. But they haven't always paid attention to the differences in requirements. And often the result has been called a hash function, which confuses people, since it is different from a cryptographic hash.
One nice property of slow functions as I would define them is that given multiple candidate functions, you can just compute all of them and concatenate the results. And you will have another slow function, which is guaranteed to be at least as strong as the strongest of the functions you started with.
Once you have all the primitives you need, you can combine them into a cryptographic system, that achieves some goal. If you want to protect passwords, I think you are going to need both a slow function and a hash function. For the combined system, you actually give a formal proof of the security. The proof of course assumes, that the underlying primitives satisfy the promised criteria. I guess the password protection you would implement given the above primitives would compute the slow function on the password, and then compute a hash of password, salt and output of the slow function.
Additionally, you could prove that, regardless of the strength of the slow function, the password is as well protected as it would have been with just a salted hash. That way, by handling those as separate primitives, you can reason about the security assuming the failure of one of the primitives. Such a construction would eliminate the main argument against some of the existing slow functions, which is that they haven't had sufficient review.
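A rough sketch of the construction suggested in the previous paragraph, with a stand-in slow function (all concrete choices here are assumptions for illustration, not part of the proposal itself):

    import hashlib
    import os

    def slow_function(data: bytes) -> bytes:
        # Stand-in for any "slow function" primitive; only its cost matters here.
        out = data
        for _ in range(500000):
            out = hashlib.sha512(out).digest()
        return out

    def protect_password(password: bytes):
        salt = os.urandom(16)
        slow_out = slow_function(password)
        # The final step is an ordinary cryptographic hash over password, salt and
        # the slow output, so even if the slow function fails, the result is still
        # at least as strong as a plain salted hash.
        digest = hashlib.sha512(password + salt + slow_out).digest()
        return salt, digest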
Re:Too slow? (Score:5, Interesting)
That obviously should have said "Claiming that a hash function is insecure because it is fast would be like claiming AES128 is insecure because you can derive the decryption key from the encryption key."
Put differently. If you use AES when you should have used RSA, you don't blame AES for being insecure. If you use a SHA512 when you should have used a slow function, you shouldn't blame SHA512 for being insecure.
When you reason about the security of cryptographic systems, you usually show how many times a certain primitive must be invoked to break it. And if that number is large (usually meaning 2^128 or more), then you consider the system to be secure. It is not a threat if the primitive itself is fast, because once you have to execute it 2^128 times, it will still take "forever".
But for protection of weak passwords you can't give such a guarantee. Those can be broken with a relatively small number of invocations of the primitive. At that point, the resources it takes to compute the primitive matter, and adding another requirement to a primitive means it is no longer the same kind of primitive.
Re:Too slow? (Score:5, Informative)
The proper name for these "Slow functions" is Key Derivation Function. They've been around a long time and are what OSes use to protect login credentials and what encrypted archive formats like RAR use.
Some examples are crypt (obsolete, vulnerable), PBKDF2 (repeated application of salt-and-hash), bcrypt (repeated rounds of a special extra-slow variant of Blowfish), and scrypt (an attempt to defeat GPU and custom-hardware attacks by requiring lots of low-latency RAM).
Single-round salted hash is only a "better than plaintext" hack solution, it's never been the correct way to store passwords.
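For what it's worth, scrypt is even available in Python's standard hashlib (3.6+, when built against a recent OpenSSL); a minimal sketch, with parameter values that are illustrative rather than a recommendation:

    import hashlib
    import os

    salt = os.urandom(16)
    # n is the CPU/memory cost (a power of two), r the block size, p the parallelism.
    # Roughly 128 * n * r bytes of RAM are needed, which is what makes GPU and
    # custom-hardware attacks expensive.
    key = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1,
                         maxmem=64 * 1024 * 1024, dklen=32)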
Re: (Score:3)
They are related, but not exactly the same. The slow functions that I was suggesting do not require every bit of their output to be unpredictable. They just require that the output as a whole not be easily computable. It doesn't matter if it turns out some long subsequence of the slow function output is easily computable. There is also no requirement that the output of the slow function be random looking. It could start with a sequenc
Re: (Score:2)
You are assuming that the slow function is used in an insecure high level design. The way cryptographic systems are designed is to take primitives where you make assumptions about the security properties of the primitives, then you put those primitives together in a high level construction, where you prove that it is secure. The proof relies on the security properties of the low level primitives.
When done this way, you cannot brea
Re: (Score:2)
Properties that I would not require a slow function to have includes collision resistance
Unless you are very careful about how you define slowness, I think collision resistance (or something like it) might actually be important. For example, suppose 90% of all inputs result in the same output but to determine whether a particular value hashes to that common value might still require a lot of computation. Then if I want to crack a leaked password table, I compute a rainbow table assuming the slow function is just the constant function that always produces that common value. It is an invalid a
Re: (Score:2)
That is a valid point. And that is something that the formal definition would need to take into account. But to address that point it is sufficient that the probability of two inputs producing the same output be small; it does not need to be negligible.
For example if the probability that two random input
Re: (Score:2)
I think the proper definition would have to include: "An adversary who has spent x% of the resources required to compute the output has at most x% probability of guessing the correct output." In reality the actual probability as a function of the time spent isn't going to grow linearly, but more likely as a convex function. The definition just requires a linear upper bound,
Re: (Score:2)
The main point of my proposal to split the hashing and the slowness into separate primitives is that each of them has a much smaller set of requirements, and thus a much smaller set of possible vulnerabilities. The specific problem you mention is not considered a major threat, but my proposal still protects against it.
In my suggested model, the slow function is where you could choose to u
Re: (Score:2)
If the algorithm is slow, it doesn't really help prevent it from being cracked, because the hacker can just put more computing power into it.
It also means the person who is trying to protect the data now has to get a faster computer to keep the load down and the application running at the right speed. That gives an extra cost to the defender, while the same hardware speed increase negates the disadvantage to the hacker. It is just a losing proposition for the system owner.
However if the algorithm is fas
Re: (Score:2)
FTFY