New Global Directory of OpenPGP Keys
Gemini writes "The PGP company just announced a new type of keyserver for all your OpenPGP keys. This server verifies (via mailback verification, like mailing lists) that the email address on the key actually reaches someone. Dead keys age off the server, and you can even remove keys if you forget the passphrase. In a classy move, they've included support for those parts of the OpenPGP standard that PGP doesn't use, but GnuPG does."
whitelists? (Score:4, Insightful)
Allow incoming mail only from valid e-mail accounts that are using the service. Could be useful against spam. Or will spam endure as it always has done...
Backdoors? (Score:1, Insightful)
Are there backdoors? And if there are not, what will Homeland Security or the like try to do about it?
Can they do anything about it, realistically?
Have I completely misunderstood this (a common event, unfortunately) or will this be one of the few ways of having as close to true privacy as we can realistically get?
PGP's defaults are the real problem. (Score:5, Insightful)
Had PGP's defaults been a 1-year key instead of infinite, this wouldn't be an issue.
I always create 1-year keys, but I've got a couple of keys out there over 10 years old that I FUBAR'd that'll never go away.
Re:Widespread Crypto Revolution? (Score:4, Insightful)
Re:Is there a future for PGP? (Score:2, Insightful)
Nothing wrong with the standard itself, just a lack of support and clue by ISPs.
Oh great, spammer heaven (Score:2, Insightful)
We need a new key format, that doesn't have a live email address but instead has a hash of one. You'd send the address separately so it could be compared against the hash. There'd be salting to stop brute force searches. The database server could then still verify all the addresses (by sending emails out) but the actual email addresses would stay unpublished.
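As a rough sketch of that idea in Python (the function names, PBKDF2 parameters, and layout are illustrative assumptions, not part of any actual proposed key format):

```python
import hashlib
import hmac
import os

def hash_email(email: str, salt: bytes) -> bytes:
    """Derive a salted hash of an email address for publication in a key.

    PBKDF2 with a high iteration count makes brute-force dictionary
    scans of common addresses expensive; the salt (stored alongside
    the hash) defeats precomputed tables.
    """
    return hashlib.pbkdf2_hmac("sha256", email.lower().encode(), salt, 100_000)

def matches(email: str, salt: bytes, published_hash: bytes) -> bool:
    """Check a separately supplied address against the published hash."""
    return hmac.compare_digest(hash_email(email, salt), published_hash)

# The key would carry (salt, digest); the live address travels separately.
salt = os.urandom(16)
digest = hash_email("alice@example.org", salt)

assert matches("alice@example.org", salt, digest)
assert not matches("harvester@example.org", salt, digest)
```

The keyserver could still run its mailback verification against the address it receives out of band, while publishing only the salted digest.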
Re:Backdoors? (Score:5, Insightful)
Re:Backdoors? (Score:1, Insightful)
Re:Backdoors? (Score:5, Insightful)
It doesn't matter. Keyservers are merely a method of distributing keys, not establishing trust. You can establish trust by a number of methods, such as manually verifying the fingerprint with the person yourself using a trusted medium (e.g. face to face) or having somebody you trust sign the key (after verifying their key, of course).
The real danger to public key cryptography taking off is that it will become commonplace to simply trust keys without verifying them. Everyone will feel more secure, but the security will be an illusion.
Re:Is there a future for PGP? (Score:3, Insightful)
Unfortunately I can't see a good way to make things more transparent and invisible to the end user. Most folks don't pick good passwords, yet that is absolutely essential for PGP private key security. Also, a yearly drive reformat is not uncommon, so lost keys are a huge issue. This technology partially addresses that issue, but I shouldn't need to check whether someone updated their key with every message, plus there's the trust issue with a constantly rotating keyset.
Jeff
Re:Centralization ?? (Score:2, Insightful)
Re:whitelists? (Score:3, Insightful)
Or only allow incoming mail that's signed. This won't prevent spam, but it will complicate the spammers' lives a bit, at least for a while.
Re:whitelists? (Score:4, Insightful)
It won't be any different from individuals creating their own whitelists; you can't implement whitelists at the ISP level, since most people do not use PGP and cannot be forced to use it.
It wouldn't stop spammers at all, though, since spammers could still create legitimate keys, send out a billion spams, then delete those email accounts and move on. It may slow them down a bit until some smart spammer creates a program to automate the process of creating, registering, and authenticating the key, but I doubt that will take much time or effort.
Can a central repository bring security? (Score:4, Insightful)
Re:Encrypted Spam? (Score:5, Insightful)
So if I'm willing to post my public key and verify every 6 months that I'm the same live email responder at the other end, then what assurance do I have that encrypted email sent to me isn't spam?
Another way of looking at it is from the "cost" of spamming - encrypting a spam "costs" the spammer, hence recent suggestions for charging mail senders in CPU cycles. Additionally, you'd be able to verify whether you held the spammer's public key on your keyring, and very easily "process" (i.e. delete with extreme prejudice) encrypted emails from unknown senders.
A Big Step... (Score:3, Insightful)
Feeding that will be dirt simple encryption applications that make it so EASY to encrypt and decrypt that you might as well do it. (Like, for example, the application I'm finishing right now but refuse to plug until it's released)
The biggest problem now is that if a developer wants to include public-key encryption abilities in his app, he has to create an entire key management system and force users to gather the keys of all their contacts manually, because there's just no other way. How many users are going to do that for a program they only kinda think they need?
If you want the answer to that question, look at the percentage of users who currently encrypt any large part of their communication (SSL excluded?)
Re:Encrypted Spam? (Score:3, Insightful)
It is way too computationally expensive.
Spam programs are designed to work extremely fast, using very little CPU to send a message.
That is why things like hashcash [hashcash.org] would work: they'd make spamming economically infeasible.
Encryption takes quite a bit of work (just less than unauthorized decryption).
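For illustration, here is a toy hashcash-style proof-of-work in Python. The stamp layout is a simplified stand-in for the real X-Hashcash header format, and the difficulty is lowered so the demo runs quickly; the point is only that minting costs many hashes while verifying costs one.

```python
import hashlib
from itertools import count

def mint(resource: str, bits: int = 20) -> str:
    """Search for a stamp whose SHA-1 digest starts with `bits` zero bits.

    On average this takes 2**bits hash attempts, which is the
    sender's "postage" cost.
    """
    for nonce in count():
        stamp = f"1:{bits}:{resource}:{nonce}"
        digest = hashlib.sha1(stamp.encode()).digest()
        if int.from_bytes(digest, "big") >> (160 - bits) == 0:
            return stamp

def valid(stamp: str, bits: int = 20) -> bool:
    """Verification is a single hash - cheap for the recipient."""
    digest = hashlib.sha1(stamp.encode()).digest()
    return int.from_bytes(digest, "big") >> (160 - bits) == 0

# Low difficulty (12 bits ~ 4096 tries on average) just for the demo;
# real deployments use 20+ bits.
stamp = mint("alice@example.org", bits=12)
assert valid(stamp, bits=12)
```

The asymmetry is the whole trick: a legitimate sender pays a fraction of a second per message, while a spammer sending millions of messages pays millions of times over.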
Re:Encrypted Spam? (Score:3, Insightful)
The keys themselves can be signed by a master key, by, say, PGP's new website (this does not require the PGP website to have a copy of the private key).
What this means is they could give the signing service away for free to individuals, in order to create a de facto standard, but then charge legitimate bulk emailers for the privilege of their service. PGP becomes the arbiter of who is spam and who is not, and in exchange they get to charge for permission to send bulk/commercial mail.
Sounds like a good business plan.
Of course, I'll have to RTFA once the
One word (Score:3, Insightful)
> as opposed to being just random garbage data that two people
> happened to mail to each other?
Torture.
Re:Is there a future for PGP? (Score:3, Insightful)
Re:FPCP (Score:1, Insightful)
And as a result of him doing WHAT YOU ASKED HIM TO, and thus causing you to see ONE piece of spam, you feel entitled to let him in for huge amounts of the crap? Maybe he should be entitled to take $100 from you for each challenge you send him. It would at least give him an incentive not to answer your challenges unless they're replies to messages he's sent, and it's a damned sight easier to cope with losing the odd $100 than to get yourself off huge numbers of mailing lists.
** APPLAUSE ** (Score:1, Insightful)
Well said. Anyone who thinks a C-R system is a good idea simply doesn't understand what they are doing. I also do what the GP does - respond to C-Rs that I get due to joe-jobbing or the virus du jour.
And in case any C-R users wish to respond, here in a nutshell is why C-R is explicitly worse than useless: you receive a bunch of mail. Some of it may be whitelisted, some of it may be blacklisted. Some of it may be rejected outright due to, e.g., SpamAssassin. Some of it may not be accepted in the first place due to RBLs. Whatever; at the end of all that, you have a body of messages for which you have to decide what to do. Instead of just facing up to that burden and delivering the message (or not), the C-R user passes that burden back to the purported sender. Almost all of the time this is an innocent third party. So a C-R user's burden may go down, but only at the expense of the wider net community. It's ignorant and wasteful, and little different from the modus operandi of spammers: let other people bear the cost of my own selfish actions.
If you're using a C-R system you are hardly any better than a spammer.
Re:Backdoors? (Score:3, Insightful)
No, it doesn't matter in the slightest how you got the key. PGP operates under the assumption that it's not practical to always use a trusted medium to exchange keys. It doesn't trust keys by default.
PGP uses the concept of a "web of trust" to decide whether you should trust a key or not. If you can securely verify the legitimacy of a public key, then you can sign it, so that people who trust your judgement can also trust the key. In reverse, you can trust keys that people you trust have signed.
How the keys are transferred is completely irrelevant to this mechanism. You could download a public key from Gnutella or Usenet, and as long as it's been signed by somebody you trust, or you can verify the fingerprint over a secure medium, it's trustable.
The balance between how practical and how secure your web of trust is depends on how much trust you place in others. It doesn't depend on the medium you transfer keys under in the slightest. That is why it doesn't matter if there are backdoors in the keyserver. No amount of tampering with it could make the web of trust any less secure.
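As a rough illustration, here is a toy model of that signature-path check in Python. Real PGP distinguishes key validity from owner trust (only keys whose owners you mark as trusted introducers extend trust), so this transitive walk is a deliberate simplification; the names and structure are invented for the example.

```python
# Who has signed whose key, after verifying it (toy data).
signatures = {
    "carol": {"bob"},    # carol's key is signed by bob
    "bob":   {"alice"},  # bob's key is signed by alice
}

# Keys I verified myself over a trusted medium (e.g. face to face).
my_trusted = {"alice"}

def trusted(key: str, depth: int = 2) -> bool:
    """A key is trusted if I verified it myself, or if a key I trust
    (within `depth` hops) has signed it."""
    if key in my_trusted:
        return True
    if depth == 0:
        return False
    return any(trusted(signer, depth - 1)
               for signer in signatures.get(key, ()))

assert trusted("bob")          # signed directly by alice
assert trusted("carol")        # two hops: alice -> bob -> carol
assert not trusted("mallory")  # no signature path from my trusted set
```

Note that nothing in this check cares where the keys came from; a tampered keyserver can hand out bogus keys, but it cannot forge the signatures that make a key trustable.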
If you think about your line of reasoning, if what you said were true, PGP would be pretty damn insecure to begin with, as you'd necessarily be trusting an external entity (the PGP keyserver admins) with all your communications.
Re:This presents problems with the trust path. (Score:1, Insightful)