Encryption Government Privacy Security Your Rights Online

Government Could Forge SSL Certificates 168

FutureDomain writes "Is SSL becoming pointless? Researchers are poking holes in the chain of trust for SSL certificates that protect sensitive data. According to these hypothesized attacks, governments could compel certificate authorities to give them phony certificates that are signed by the CA, which are then used to perform man-in-the-middle attacks. They point out that Verisign already makes large sums of money by facilitating the disclosure of US consumers' private data to US government law enforcement. The researchers are developing a Firefox plugin (PDF) that checks past certificates and warns of anomalies in the issuing country, but not much can help if a government starts spying on the secure connections of its own citizens."

  • by TheRaven64 ( 641858 ) on Friday March 26, 2010 @09:24AM (#31625852) Journal

    SSL is, and always has been, an ugly hack. End-to-end encryption should be done at the IP layer, not the TCP layer. Now that we have IPSEC, we have a standard way of doing it properly. The only remaining part of the problem is key distribution, but with DNSSec we can put IPSEC public keys in DNS entries and get end-to-end encryption.

    A government able to insert something into the chain of trust is still able to fake a connection, but distributing the chain of trust makes this a bit harder. The US government won't be able to insert something into a .cn domain, for example, although the Chinese government can. For the ultra-paranoid, you can publish the same IPSec public key on both and make clients compare the two. Unlike an SSL certificate, the IPSec key is visible to anyone, even people who don't try to make a connection, so it's much easier to spot if someone has tampered with the connection, and it will be cached in ISPs' DNS caches, making an unnoticed attack much harder.
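
    Something like that cross-check could be scripted today. A rough sketch, assuming dnspython, that both zones are DNSSEC-signed, and that they actually publish IPSECKEY records (the hostnames are made up):

        # Rough sketch: fetch the IPSECKEY records published for the same host
        # under two different TLDs and refuse to connect if they disagree.
        # DNSSEC validation of the answers themselves is not shown here.
        import dns.resolver

        def ipseckeys(name):
            """Return the IPSECKEY records for a name as a set of text strings."""
            try:
                answer = dns.resolver.resolve(name, 'IPSECKEY')
            except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                return set()
            return {rdata.to_text() for rdata in answer}

        # Hypothetical example: the same server publishes its key in both zones.
        keys_com = ipseckeys('www.example.com')
        keys_cn = ipseckeys('www.example.cn')

        if not keys_com or not keys_cn:
            print("Missing IPSECKEY records -- cannot cross-check, do not connect")
        elif keys_com != keys_cn:
            print("Published keys disagree -- possible tampering, do not connect")
        else:
            print("Keys match via both routes, proceed")

    If the two answers ever diverge, you know someone along one of the paths is lying to you.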

    • by tepples ( 727027 ) <tepples.gmail@com> on Friday March 26, 2010 @09:30AM (#31625948) Homepage Journal

      with DNSSec we can put IPSEC public keys in DNS entries

      Unless the government compels domain name registrars to sign phony DNS public keys.

      For the ultra-paranoid, you can publish the same IPSec public key on both and make clients compare the two.

      Which is little different from hosting something at two different domains with TLS certs from different registrars in different countries.

      • Re: (Score:3, Interesting)

        by TheRaven64 ( 641858 )

        Which is little different from hosting something at two different domains with TLS certs from different registrars in different countries.

        It's very different. The server does not need modifying at all. It only needs a single IPSec private key; you are just distributing the public key via two different routes. The client doesn't even need to try connecting in order to verify the integrity of the key; it just needs to try fetching the two different DNS records and comparing their IPSECKEY entries. If they are different, don't connect at all. Importantly, individual clients don't need to do this check; you just need someone to be perform th

    • The US government won't be able to insert something into a .cn domain, for example, although the Chinese government can.

      I hear you're looking for a reason why IPSEC and DNSSec haven't been implemented. Allow me to introduce a candidate answer.

    • by Meneth ( 872868 )

      sudo mod me up

      ok :)

    • by TheLink ( 130905 )

      The Certlock thing should help (assuming they do it right and the software itself can be trusted), but the problem could have been fixed by the browser makers long ago if they took security seriously. If I remember right, the problem was discussed years ago in a firefox bug report.

      Basically the browser should have features to allow you to be warned if:
      1) The CA has changed (still vulnerable to "Gov can forge SSL certs with CA's help")
      2) The cert has changed (paranoid mode- the Gov can eavesdrop only if they

    • by 0123456 ( 636235 )

      End-to-end encryption should be done at the IP layer, not the TCP layer.

      Sorry, but that's nonsense. Encryption should be done at the IP layer as well as by the application; if encryption is only done by IPSEC at a low level, then I have no way of knowing that my connection to my bank is secure, or is even going to the correct site.

      To give an example, for a long time I've had my XP laptop at home configured to use IPSEC to talk to my Ubuntu server. Yesterday I discovered that at some point the Ubuntu server had stopped running the startup script that configures it to require IPS

      • You are confusing so many issues here that it's difficult to know how to reply. First, you are not using DNS to distribute your keys, so your experience is not particularly relevant. You are also talking about using IPSec for a VPN, rather than for individual connections. Second, there is no reason why the TCP/IP stack could not expose a single function for getting the security state of the connection, telling you whether:
        • The connection to the DNS server is secure (using IPSec)
        • The DNS record was signe
        • by 0123456 ( 636235 )

          You are confusing so many issues here that it's difficult to know how to reply. First, you are not using DNS to distribute your keys, so your experience is not particularly relevant.

          How does using DNS to distribute keys tell me whether the connection is really, actually, physically encrypted?

          You are also talking about using IPSec for a VPN, rather than for individual connections.

          I'm talking about using it for individual connections between PCs.

          Second, there is no reason why the TCP/IP stack could not expose a single function for getting the security state of the connection

          Why should I trust the TCP stack's idea of whether the connection is secure when my web browser can determine that itself?

          There is absolutely no point in using SSL over an end-to-end IPSEC connection.

          Unless you actually want to ensure that your connection is actually, really secure, and the only way to do that is for your application to incorporate its own encryption and authentication. Rule #1 of security is

          • So I presume your web browser doesn't use OpenSSL / GNUTLS or whatever your host platform's native SSL library is?
            • So I presume your web browser doesn't use OpenSSL / GNUTLS or whatever your host platform's native SSL library is?

              Nope. Sometimes there are advantages to statically linked libraries. :P

    • The layer encryption is done on is really irrelevant to this issue; it is the trust model that is broken.

    • by mlts ( 1038732 ) *

      Even with IPSec, I'd still use SSL because it does encryption on a higher level, and in some cases, is controlled by app versus app. It also allows for client certificates, so I can lock out a host of problems by having a critical external-facing Web server only allow for authentication via a cert on a smart card. Of course, more sophisticated malware can run a MITM attack by compromising the browser and changing text in flight before it leaves the client machine, but that isn't what SSL is designed to pr

    • by durdur ( 252098 )

      SSL (more exactly TLS) is an established, mature, well-designed secure protocol. It's not the problem. The "remaining problem" you mention though has always existed with PKI: how do you manage certificates, distribute them to all the sites that need them, handle revocation, etc. That's a big issue and there is no drop in easy solution.

  • by DarkOx ( 621550 ) on Friday March 26, 2010 @09:24AM (#31625860) Journal

    If you really want to be secure and you are using certificates you should be self signing and exchanging the self signed certs with your partners out of band.

    • by Anonymous Coward on Friday March 26, 2010 @09:32AM (#31625994)

      I like the way OpenSSH does it -- Trust On First Use (TOFU). First time you visit a server you're warned of possible MITM and given a fingerprint (which you could have, say, confirmed in person). After that you never see a warning again unless the server's key unexpectedly changes. No forcing you to automatically trust CAs, and no overstated warnings about self-signed certs.
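
      A browser-side version of the same idea is only a few lines. This is just a sketch of the TOFU pattern, not OpenSSH's code, using nothing but the Python standard library (the pin file path is made up):

          # TOFU sketch: remember a server's certificate fingerprint on first
          # contact, and complain loudly if it ever changes afterwards.
          import hashlib, json, os, ssl

          PIN_FILE = os.path.expanduser('~/.https_pins.json')   # hypothetical location

          def fingerprint(host, port=443):
              pem = ssl.get_server_certificate((host, port))
              return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

          def check(host, port=443):
              pins = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
              key, seen = '%s:%d' % (host, port), fingerprint(host, port)
              if key not in pins:
                  print('First contact with %s -- pinning %s...' % (host, seen[:16]))
                  pins[key] = seen
                  json.dump(pins, open(PIN_FILE, 'w'))
              elif pins[key] != seen:
                  print('WARNING: certificate for %s changed -- possible MITM' % host)
              else:
                  print('Certificate unchanged since first use.')

      The pin store is the whole trust model: no CA list, just "is this the same key I saw last time?"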

      • by mpe ( 36238 )
        I like the way OpenSSH does it -- Trust On First Use (TOFU). First time you visit a server you're warned of possible MITM and given a fingerprint (which you could have, say, confirmed in person). After that you never see a warning again unless the server's key unexpectedly changes. No forcing you to automatically trust CAs, and no overstated warnings about self-signed certs.

        Whereas the way web browsers typically do things you could be communicating with several hundred different entities and not know it.
      • Re: (Score:3, Interesting)

        by thijsh ( 910751 )
        I really want web browsers to support this properly, with a dialog that shows that it's self-signed and provides encryption but no verification. Self-signed certificates have a lot of advantages, and the only thing holding back widespread use is those crappy browser dialogs warning you that this website is going to cause the end of life as you know it. There is a legitimate use for encryption without a CA, and this article only confirms that.

        The way I see it you have the following levels:
        - HTTP (unencryp
        • Re: (Score:3, Interesting)

          by mlts ( 1038732 ) *

          Perhaps have Web server certs trusted by multiple CAs, so you might have a level of trust above that. This way, if a website had a cert validated by a CA in the US, Israel, Germany, Russia, and China, one of those CAs might get compromised, but it would take a pretty big international agreement to compromise all of them and generate a bogus cert.
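
          The client-side rule is easy to express; the hard part would be getting certs cross-signed in the first place. A toy sketch of the "require several independent vouchers" policy (the attestation format is invented for illustration; in practice the vouchers would be CA signatures or OpenPGP certifications):

            # Toy sketch: only trust a certificate fingerprint if at least
            # `threshold` of the parties *you* trust have vouched for it.
            def sufficiently_vouched(fp, attestations, trusted_parties, threshold=3):
                """attestations maps a fingerprint to the set of parties vouching for it."""
                vouchers = attestations.get(fp, set()) & trusted_parties
                return len(vouchers) >= threshold

            trusted = {'CA-US', 'CA-IL', 'CA-DE', 'CA-RU', 'CA-CN'}
            attestations = {'ab:cd:ef': {'CA-US', 'CA-DE', 'CA-RU'}}
            print(sufficiently_vouched('ab:cd:ef', attestations, trusted))  # True: 3 of 5 vouch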

      • Re: (Score:3, Insightful)

        Exactly like Firefox (and probably other browsers): if you delete all the CA certificates, you'll get warned the first time you visit an encrypted site. Firefox shows you the certificate data, and you can accept and mark "Permanently store this exception".

    • If you really want to be secure and you are using certificates you should be self signing and exchanging the self signed certs with your partners out of band.

      Precisely. Otherwise, you are always open to a sufficiently sophisticated man in the middle attack.

  • by yup2000 ( 182755 ) on Friday March 26, 2010 @09:24AM (#31625862) Homepage

    And it took you how long to figure this out? Anyone with real security in mind would create their own certificates and sign them. What's always been missing is a convenient way to verify the identity of the person you're communicating with. CAs only help in certain situations. SSL has always been more about encrypted content than identification, no matter what people try to tell you.

    • by Dr. Evil ( 3501 )

      Well... it *is* about identity, and limiting the scope of who you have to trust.

      With SSL, you trust the computer manufacturer, your hardware configuration, your operating system, your web browser (and the root certificates in your web browser) and the Certificate Authority (which is a corporation under the U.S. government).

      Self signed certs are better, but you need a second channel to communicate the certificates securely.

    • I agree, SSL is really about facilitating secure communications between parties who don't really trust each other, like you and a bank, or you and an online store. Ultimately, that's a job only government can do, akin to (and derived from) contract enforcement, which is how trade among parties is facilitated when there are no tribal or other bonds. If the CA decided to sell your online identity to the highest bidder, what's your recourse? Sue them. So, ultimately the government is the arbiter. Ok then,
  • by Anonymous Coward

    and distribute them by mail or something. That doesn't help talking to your bank,
    but then again if the government wants your bank balance they'll just ask the bank.

  • ...all this is for your own good...government by the people for the people type of thing...smile...
  • SSL / HTTPS (Score:2, Funny)

    by bbroerman ( 715822 )
    One more nail in the coffin... (See http://nearlyperfectsoftware.com/secureajax.html [nearlyperf...ftware.com] for other hacks). Good thing I'm working on a protocol and libraries / utilities that can be used to replace it for all of my work, and my clients... Starting with a secure ajax framework, then on to things like POP, IMAP, SMTP, FTP, Telnet, etc. Should be cool once I get them all done.
    • I'm interested in your views and would like to subscribe to your newsletter, but I'm not sure if I trust the certificate governing your 'Subscribe To Our Newsletter' page.
    • How would this secure ajax framework work? A (trusted) plugin to be installed in the client's browser?

      Because, if the client-side javascript is being served by the server over the Web, it's vulnerable: an attacker could just intercept the javascript, insert whatever he wants inside, and pass that on to the client, who would be none the wiser. And as it is non-standard, there'd be no tell-tale signs, such as http instead of https, that an astute user could see.

  • Does that mean that self-signed certificates are now more secure ? :)

    • No. This is not a passive attack, it is a man-in-the-middle attack. The government gets an SSL certificate that looks like the remote one. You connect to their computer, thinking it's the remote computer, authenticate, and it then does the same with the remote machine. With a self-signed certificate, this is even easier because any certificate looks the same as the self-signed one.

      It does mean that pre-shared certificates (whether signed by a third-party or not) are more secure. Unfortunately, these

    • by fuzzyfuzzyfungus ( 1223518 ) on Friday March 26, 2010 @09:46AM (#31626242) Journal
      Self-signed certs are more secure; but only if you have some way of distinguishing them. "Self signed certs" as a generic class, are man-in-the-middle city because anybody can produce one. The feds don't even have to coerce the CA, they can just sign their own.

      A specific self-signed cert, that you have some out-of-band reason for trusting, is extremely secure because only by compromising the computer storing the signing key could an adversary produce a fake one of those.

      The problem is, outside of fairly trivial scenarios (corporate intranet with self-signed certs, worker drones' browsers trust that cert by group policy; small group of paranoiacs who know each other IRL exchange keys under the bridge at midnight), establishing that out-of-band reason for trusting a cert is a pain in the ass, and not amenable to any particularly clear solution.

      CAs are basically the ugly-not-really-all-that-good solution that has the virtue of working in practice. You trust the cert because the big corporation whose business is attesting to the trustworthiness of certs says you should trust it. Easy, simple, and actually works ok from a strictly financial perspective (i.e. the amount of money that Verisign can make by selling overpriced sequences of bits that make the magic green bar appear in consumer browsers is greater than the amount that they could make by MITM attacking a whole bunch of banking sessions and then fleeing to Namibia with their reputation in tatters).

      Where it breaks down is non-strictly-financial situations. It is highly unlikely that clandestine cooperation, for surveillance purposes, with state agencies is all that costly to Verisign or their ilk (and may in fact be lucrative, as doing various sorts of wiretaps is for the telecoms). If your threat space is just occupied by script kiddies and Ukrainian cyber criminals, commercial CAs work pretty well. If it is occupied by state entities who want information rather than money, there is no particular reason to expect them to work.
      • by alexhs ( 877055 )

        I wrote my previous post under the misunderstanding that governmental agencies could get a copy of the original certificate private key, not that they could get a different private key with the same certification information.

        That second case of course means that "trusted" certificates are neither better nor worse than self-signed certificates. This is a MITM attack that only works well in the case of a first connection (as you should be wary of a certificate change as long as the preexisting certificate didn'

      • by u38cg ( 607297 )
        I complain about this every time the subject comes up, but the problem is a contractual one. You are having to place trust in a CA with which you have no contractual relationship, so what mechanism causes that CA to work in your favour? None. Browsers should ship without any root certificates, and users should pay to subscribe to whoever they choose to provide them with root signing services. Economics would help enforce the trust issues. It isn't a magic bullet, of course, but to me it takes the biggest weakness
        • I have my doubts about that one, on two basic grounds:

          1. It isn't clear that a contractual relationship would save you from any threats that CA self-interest with respect to reputation isn't already saving you from. Any CA knows that they are dead if the browsers drop them. Further, most commercial applications of the ability to man-in-the-middle somebody are illegal. Thus, the risks of the CA selling you out for some modest commercial gain are already fairly small. On the other hand, you will have a ver
    • No. Without any further verifications, self-signed certificates can be spoofed by the common crook, whereas CA-signed certificates can only be spoofed by governments.

      With further verification (the customer manually checks the certificate's fingerprint), both self-signed and CA-signed would be secure, but then you wouldn't really rely on the signature at all, but rather on the fingerprint.

      • by X.25 ( 255792 )

        No. Without any further verifications, self-signed certificates can be spoofed by the common crook, whereas CA-signed certificates can only be spoofed by governments.

        Hi.

        Please spoof my self-signed certificate.

        Thank you.

      • a self-signed cert becomes more trustworthy than a CA-verified one.

  • by segedunum ( 883035 ) on Friday March 26, 2010 @09:30AM (#31625958)
    SSL certificates only provide the ability to encrypt communication between a browser and a server. That's all it's for. Alas, many people have tried to build in some level of 'trust' to SSL as well, and the money racket that has grown up around issuing SSL certificates on an ad-hoc basis just so someone's browser doesn't complain needs to go the journey. Those root certificates in your browser are just money for old rope. We definitely need something better.
    • Encryption is of limited utility if you aren't sure who your encrypted session is with. Since manual key exchange has practical problems browsers went for the

      Unfortunately the major browser vendors have put WAY too many CAs on the trust list, meaning that pretty much any significant government in the world can ask/order someone to generate them a cert for any domain. Some criminals probably can too.

      IMO the proper way to do it would be to have a chain of trust system as part of DNS so the only people who c

    • SSL certificates only provide the ability to encrypt communication between a browser and a server. That's all it's for.

      No, they are not; they are for providing authentication. You would not need any certificates just to encrypt communications. Of course, an attacker can then mount a man-in-the-middle attack, and you can only get around that by authentication anyway.

      Alas, many people have tried to build in some level of 'trust' to SSL as well, and the money racket that has grown up around issuing SSL certificates on an ad-hoc basis just so someone's browser doesn't complain needs to go the journey.

      This has indeed been identified. There are a couple of things to say about that. 1) SSL certificates are normally just issued to the owner of the site; that already provides some security. 2) You can get certificates that provide more trust nowadays.

      Those root certificates in your browser are just money for old rope. We definitely need something better.

      I'm not so sure that the current SSL c

  • Banking secrecy laws (Score:5, Interesting)

    by ArsenneLupin ( 766289 ) on Friday March 26, 2010 @09:37AM (#31626096)
    Not a theoretical concern, but a very real one.

    Many European countries (Germany, Belgium) now have electronic identity cards, which double as PKI signing tokens, with which you can authenticate yourself to web services, such as your bank.

    When Luxembourg introduced a similar system [luxtrust.lu] they didn't piggy back it on an id card, but issued "signing stick" and smart cards just for the purpose of PKI.

    You may wonder why, especially since an electronic id card is already in planning in Luxembourg as well.

    The answer is obvious: many customers of Luxembourgish banks are foreigners who thus couldn't get a Luxembourgish id card, but wouldn't trust their own government's id cards either, so an ad-hoc system was needed: Luxtrust.

    Unfortunately, Luxembourg doesn't have any native smartcard industry, so they had to buy the chips from the French [gemalto.com]... who just shipped units with a predictable random number generator, dramatically reducing the number of possible private keys. FAIL.

    And the BSI [www.bsi.de] institute (which "certified" the cards) "overlooked" this weakness, because the Germans too have a vested interest in spying on communications with Luxembourgish banks. DOUBLE FAIL.

    • "And the BSI [www.bsi.de] institute (which "certified" the cards) "overlooked" this weakness, because the Germans too have a vested interested in spying on communications with Luxembourgish banks. DOUBLE FAIL."

      That's a pretty serious accusation - personally I would put this strictly in the "paranoid scheming" box unless you've got anything to back up that claim.

      • It's interesting how the public perception of the various countries is different.

        You can suggest that the French can slip a backdoor into their products to compromise a neighboring country, and nobody bats an eye.

        But if you suggest that Germany may deliberately overlook a backdoor, everybody is outraged, Germany is a serious country, they don't do that.

        ... all the while, in recent news, it was Germany (not France) who bought CDs full of stolen Swiss bank customer data, thus encouraging Swiss banking empl

        • Personally, I don't give France a bad rap at all (unless it has got something to do with their previous nuclear decisions).

          I'm just saying that suggesting that BSI, a company that is credited with giving out Common Criteria certifications, is involved in counter-espionage requires at least some indication of guilt on their part.

          And they would be doing this by deliberately slipping through a bad implementation made by a (largely) French company - for their next-door neighbor Luxembourg no less? And then they

        • by he-sk ( 103163 )

          Interestingly, the Swiss High Court in Lausanne ruled in 2000 that Swiss tax authorities can use "stolen" data to prosecute tax evasion. Similarly to the recent case, the Germans got hold of a CD-ROM containing incriminating banking information and then forwarded data about Swiss citizens to the Swiss authorities.

          Source (in German): http://www.sueddeutsche.de/politik/825/502064/text/ [sueddeutsche.de]

    • by Cyberax ( 705495 )

      Are you sure they haven't just shipped Debian on these sticks? :)

  • I think we've had this debate on /. before, no?
    Who do you trust to issue certs? Certainly not Verisign...the UN, then?

  • no duh (Score:3, Insightful)

    by Sloppy ( 14984 ) on Friday March 26, 2010 @09:41AM (#31626166) Homepage Journal

    Nobody would ever seriously say that x.509's single point of failure for trusted introducers is a good idea; it just happened to be easy to deploy and got some encouragement along the way because some people could make money on it. But as soon as you look at it in terms of security, it doesn't fare very well.

    OpenPGP encourages multiple certifiers for an identity: so they're all required to conspire to sell you out. Conspiracies are hard. Build your next network app on top of Gnu TLS and make sure you test with OpenPGP, so that some day we can switch to modern (and by modern, I mean about 1990-level tech) crypto.

    BTW, governments are a great example, but always remember that they are not the only entities with the capability or motivation to point a gun at someone. And even if you don't believe that, you've got to admit there are multiple governments, and at most one of them is accountable to you. Anyone who says that the cert system should be left vulnerable because the public has an interest in making sure that communications aren't "too secure," definitely isn't thinking about all the angles. The inherent weaknesses of X.509 should never have been used as an argument for building the web on it.

  • There a "fix" that should help a lot: have browsers cache all certificates that they've accepted. Then, whenever a site *changes* its certificate, give a bit fat warning and optionally send the new certificate to some repository of questionable certificates.

    If that repository starts to see bogus certificates signed by a CA, revoke that CA's root certificate.

    To really make it work, HTTPS should have a mechanism to indicate that a certificate may not be changed until such-and-such a time, and there should be
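
    A sketch of the cache-and-compare part (it also covers the "warn if the CA changed" idea from earlier in the thread). This assumes the third-party 'cryptography' package for parsing the issuer, and the repository upload is just a stub:

        # Sketch: cache every certificate you have accepted and warn when it
        # changes. An ordinary renewal keeps the same issuer; an issuer change
        # is the suspicious case and gets reported.
        import ssl
        from cryptography import x509

        cache = {}  # host -> (issuer, PEM); a real browser would persist this

        def observe(host, port=443):
            pem = ssl.get_server_certificate((host, port))
            cert = x509.load_pem_x509_certificate(pem.encode())
            issuer = cert.issuer.rfc4514_string()
            if host not in cache:
                cache[host] = (issuer, pem)
                return 'pinned on first sight'
            old_issuer, old_pem = cache[host]
            if pem == old_pem:
                return 'unchanged'
            if issuer == old_issuer:
                cache[host] = (issuer, pem)
                return 'renewed by the same CA (probably fine)'
            report_suspect_cert(host, pem)   # the shared repository is hypothetical
            return 'ISSUING CA CHANGED -- big fat warning'

        def report_suspect_cert(host, pem):
            pass  # stub: send the certificate to the repository of questionable certs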

  • by Old97 ( 1341297 ) on Friday March 26, 2010 @09:43AM (#31626196)
    The fact that governments can use or abuse technology to spy on their citizens is not news. That's as newsworthy as saying that if I had possession of your computer long enough I could find out all your secrets. If you want protection from your government, you have to do something about your government. In democracies you have some options and generally have laws and the rule of law (mostly). In such countries being vigilant and vocal can make the government behave if enough citizens participate. In autocratic countries you have to expect the worst, I suppose, and try to work around it. Whichever the situation, you can't trust technology, especially technology relying on a shared infrastructure (e.g. the internet, CAs, etc.), to safeguard your secrets from everyone, especially anyone who has physical access to it.
    • If you want protection from your government, you have to do something about your government.

      Even assuming this were a practical solution, what about all the other governments out there, and the CAs within their jurisdictions? It only takes one CA caving to one government—not necessarily yours—to circumvent the trust authentication for any site.

      • by Old97 ( 1341297 )
        True. It's also possible that non-government organizations could corrupt a CA for their purposes. My point is that the issue really isn't governments, it's the vulnerability of the entire scheme. Governments will spy. Criminals will spy. That's a given. If you can't secure against physical access to key components of the shared infrastructure or the end points you rely on then you have a vulnerability.
    • Let's say Liechtenstein controls a CA that is trusted by your browser. They can issue a fake cert for mail.google.com and happily MITM all your GMail connections provided they can rig your hosts file or own your router.

      As we all know, there is very little you can do about the Liechtensteiner government.

  • by DrgnDancer ( 137700 ) on Friday March 26, 2010 @09:50AM (#31626326) Homepage

    Essentially, if you really want secure end-to-end communication with someone that is more or less foolproof and not subject to outside interference, you have to manually exchange keys. It's always been this way. Any time you do less you are trusting *someone* other than yourself and the person at the remote end. The simple point is that we *have* to trust someone to exist in society. We trust that the government will not suddenly decide to print "Braquats" and declare Dollars (or Pounds, or Euros, or whatever) useless. We trust that the bank won't wander off with all our money. We trust that our ISP isn't just putting up servers that pretend to be the Internet and are slowly stealing everything we type into our browsers. We trust that the grocery store hasn't poisoned all the produce. Virtually every social action we take involves some modicum of trust that the "other guy" is acting in reasonably good faith.

    Thus far the certificate authorities have been trustworthy. Could they be compromised? Of course. Could the clerk at the grocery store be bribed to poison all the produce? Of course. Do we have any reason to think the CAs *have* been compromised? Not that I'm aware of. It's pretty straightforward. Are you doing something that needs to be *completely* secret? Exchange keys with the remote end manually. Are you doing something that needs to be as secret as one can reasonably expect while still dealing with public entities? Use the CAs. They have an extremely good track record and seem to be about as trustworthy as anyone can reasonably expect.

    • by 0123456 ( 636235 )

      Do we have any reason to think the CAs *have* been compromised? Not that I'm aware of.

      The fact that someone's selling a box allowing MITM attacks with forged keys is a little bit suspicious... and since there are now so many CAs around the world it should be easy for governments to find one who'll happily sign a fake key for them, or set up trustuswerefromthegovernment.com as a new CA to sign any key they want.

    • Thus far the certificate authorities have been trustworthy.

      Not exactly; we just don't know that they have been untrustworthy. There's a pretty big difference.
    • Are you doing something that needs to be *completely* secret? Exchange keys with the remote end manually.

      That is utterly misleading. There is no such thing as complete secrecy. It is also wrong to ask yourself: Do I trust this entity unconditionally? You should only trust an institution conditionally. It all depends on what you're using the encryption for. Can you trust CAs for financial transactions? So far, apparently yes. Can you trust CAs with your international trade secrets as a non-US company? NOT a

    • ... you have to manually exchange keys.

      No. You can get most of the way there with a model like that of PGP: if multiple entities that you trust have vouched for this one, then you have some confidence. This is what the "web of trust" is all about. The CAs fail on both counts -- multiple trust paths are not required, and why should I trust any particular CA? The article is just pointing out another reason not to trust the CAs.

      With real PKI management I could choose for any particular communication whether I trusted the CAs under the jurisd

  • "A CA will protect you from anyone from whom they won't take money."
    -- Paul Vixie - on the NANOG mailing list
    (Going from memory here, so the quote may not be exact.)

    "In the real world, you prove your identity with documents provided by a government.
    In the digital world, why are we trusting certificates provided by 3rd-party business???"
    -- a former coworker
  • Here is a link to my own reply previously: http://slashdot.org/comments.pl?sid=1534366&cid=31004066 [slashdot.org]

    To summarize - I don't see how the "trust system" has any meaning. I don't actually know anyone at those 160+ companies and I sure as hell don't *trust* any one of them, not as far as I can throw them.

    In fact, I don't really trust anyone :) and based on that - see no reason that my SSL connection is more or less safe whether the cert for counterparty is signed by a "good" or "bad" CA. Frankly, I trust my b

  • Only a fool would rely on SSL based on the certificates that come with a browser to protect against government. That isn't what it is for. While I object to government snooping in principle (I object to pretty much all government activity in principle) I really have nothing to fear from the NSA learning what parts I am ordering from Jameco. Firefox, HTTPS, and Perspectives provide adequate assurance that I am communicating with the company I intend and that some clerk at some ISP can't snoop my credit ca

  • Use self-signed certs. I am not talking about being more secure when doing online shopping and other silly things, but about personal/company usage.

    For example, if I create my own CA and sign my own certs for my mailserver, I will import my CA cert (or accept the cert once, on first mail retrieval, although that is more risky), and no matter what certificate the government presents when doing a MITM, I'll get a warning.

    But if I buy a cert from Verisign and think that I am totally safe now, I would never get any warnings if governme
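
    For what it's worth, rolling your own CA doesn't even need the openssl command line. A rough sketch with the Python 'cryptography' package (paths and names are made up); the resulting ca.pem is what you import into your mail client so that a swapped cert trips a warning:

        # Sketch: generate a key and a self-signed CA certificate. Sign your
        # server certs with ca.key, and import ca.pem into your own clients only.
        import datetime
        from cryptography import x509
        from cryptography.x509.oid import NameOID
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import rsa

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, 'My Private CA')])
        now = datetime.datetime.utcnow()

        ca_cert = (
            x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)                    # self-signed: issuer == subject
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=3650))
            .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
            .sign(key, hashes.SHA256())
        )

        with open('ca.pem', 'wb') as f:
            f.write(ca_cert.public_bytes(serialization.Encoding.PEM))
        with open('ca.key', 'wb') as f:
            f.write(key.private_bytes(serialization.Encoding.PEM,
                                      serialization.PrivateFormat.TraditionalOpenSSL,
                                      serialization.NoEncryption()))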

  • So the problem is that two different CAs could issue certs for the same host, and some have essentially back doors?

    Know what I'd like to see? How about a DNS TXT record that tells you what the real, trusted CA for a given site is? Like, let people assert "for my domain, do not trust any certs unless they come from this particular CA", using DNS as an out-of-band channel that would have to be compromised separately from the SSL negotiation.

    Wouldn't that be relatively easy to implement (for those who care t
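
    The client side of that check could look something like this rough sketch. The "pin-ca=" TXT record format is made up, it assumes dnspython and the 'cryptography' package, and it only helps if the DNS answer itself is protected by DNSSEC:

        # Sketch: a hypothetical TXT record names the only CA allowed to issue
        # certs for a domain; compare it with the CA that signed the cert the
        # server actually presents.
        import ssl
        import dns.resolver
        from cryptography import x509
        from cryptography.x509.oid import NameOID

        def pinned_ca(domain):
            for rdata in dns.resolver.resolve(domain, 'TXT'):
                txt = b''.join(rdata.strings).decode()
                if txt.startswith('pin-ca='):      # invented record format
                    return txt[len('pin-ca='):]
            return None

        def issuer_matches_pin(host):
            pem = ssl.get_server_certificate((host, 443))
            cert = x509.load_pem_x509_certificate(pem.encode())
            issuer_cn = cert.issuer.get_attributes_for_oid(NameOID.COMMON_NAME)[0].value
            pin = pinned_ca(host)
            return pin is None or issuer_cn == pin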

  • Enforcing the Law — using, among other things, eavesdropping on communications — and prosecuting wars are practically the only things a government of a free country is supposed to do. Because no one else can be allowed to do these things...

    Everything else — and I do mean everything: elementary and higher education, personal retirement, health care, communications, transportation — should be left to the competing enterprises. If only because they are much easier to switch from on

  • Disable trust in the root certificates so that your browser always prompts you to verify an SSL certificate until you mark it as trusted. The first time you visit a site, hit it from a few different IP addresses and make sure the SSL cert matches on all the different connections to rule out a MITM attack, then verify the chain of trust up to the now-untrusted root CAs. If you think you can still trust whichever root CA signed the cert, mark the site's cert as fully trusted and the browser won't bug you un
  • A single false, signed certificate from anywhere provides undeniable cause to revoke a CA from all browsers.

  • A lot of these "attacks" can be prevented by properly implementing your PKI. For example, some of the articles (and several commenters) make mention of "using a Root CA to generate sub-CAs which then generate rogue certs". Sure, the system allows this to happen, but it also provides constraints to prevent it. One of the usual "basic constraints" (an X.509 attribute) of a certificate is "Max path length", which means: "how deep can a signature chain extend from me, if I am trusted". For most peop
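
    Reading that constraint out of a certificate is straightforward. A sketch with the 'cryptography' package ('ca.pem' is just an example path); a path length of 0 means the CA may sign end-entity certs but may not create sub-CAs:

        # Sketch: inspect the basicConstraints extension of a CA certificate.
        from cryptography import x509

        with open('ca.pem', 'rb') as f:
            cert = x509.load_pem_x509_certificate(f.read())

        bc = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
        print('is a CA:', bc.ca, ' max path length:', bc.path_length)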

  • This whole CA thing is based on a hierarchical chain of trust, and you can't keep its weaknesses invisible for long. Not all certs are equally trustworthy. We shouldn't get used to treating the "secure" icon as proof of a completely trusted counterparty, but just as a second-grade level of security that our (incompetent) banks rely on. When they are not simply using http...
