Twitter Joins the HTTPS By Default Party

wiredmikey writes "Following a trend of allowing users to automatically use the secure HTTPS protocol when accessing Web-based services, Twitter announced this week that it has added an option for users to force HTTPS connections by default when accessing Twitter.com. The reasons to use HTTPS when accessing any personal account aren't new, but an easy-to-use Firefox extension called 'Firesheep,' released in October 2010, heightened concern, as it enables HTTP session hijacking for the masses."

  • Good (Score:4, Informative)

    by Tukz ( 664339 ) on Wednesday March 16, 2011 @08:38AM (#35503084) Journal

    I'd like to see all community sites do that.

    I have an addon that tries to force SSL where available, and it's surprising how many sites don't have SSL enabled at all.

    • Simple tools like FireSheep are an awesome way to force websites to up their game on the encryption front and improve their security.

      I guess the addon you mention is EFF's "HTTPS Everywhere [eff.org]". Notice that the list of HTTPS sites the addon supports is growing pretty large. They will soon have to add an "exclude these sites" option rather than try to provide a massive list of included sites.

    • Yesterday my host provider (a big one) failed (again!) to renew the shared server host certificate, so I couldn't access their cPanel via HTTPS and had to open a ticket.

      That happened in 2008, 2009, 2010, and now, so I'm expecting the same situation by March 2012, 2013, etc...

      BTW, they sell me a signed certificate for my domain. Alas, they don't track its expiration (nor do I, of course!), so at some point in the year I'll have to open a ticket asking them to renew it (no big deal since I'm not doing e-business o

      • by bberens ( 965711 )
        I've never not been able to do something because an SSL certificate had expired. I get a warning from my browser, that's all. It's odd that you weren't able to function.
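        If you'd rather not rely on the provider (or the browser warning) to notice, you can keep an eye on expiry yourself by pulling the date off the certificate directly. A minimal Python sketch, with the hostname as a placeholder; note that a default (verifying) context will refuse to even complete the handshake once the certificate has actually expired:

        import socket
        import ssl

        def cert_expiry(host, port=443):
            # Connect, complete the TLS handshake, and read the peer certificate.
            ctx = ssl.create_default_context()
            with ctx.wrap_socket(socket.create_connection((host, port)), server_hostname=host) as s:
                cert = s.getpeercert()
            return cert["notAfter"]  # e.g. 'May 18 03:33:20 2033 GMT'

        print(cert_expiry("example.com"))  # placeholder hostname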
    • I run a popular site. The question is: why should I enable SSL, and thus have to expand my server farm, spending resources that you should be spending on your own daily browsing?
      • If your site doesn't expect me to login, register, provide any content or interact with it in any way that is not completely passive, then that is perfectly fine.

        Otherwise: why should I bother interacting with your site in a less passive manner than simple browsing, if you can't be bothered to enable SSL? Enabling SSL is not difficult, does not generally cause a massive amount of extra load on modern systems (if your site is relatively dynamic then your scripting language and database will be much much m
    • by Firehed ( 942385 )

      I have an addon that tries to force SSL where available, and it's surprising how many sites don't have SSL enabled at all.

      Well, SSL is not free in any sense. There's some amount of overhead simply in the encryption, of course. If you're running multiple sites from one machine (read: any shared hosting plan), you need a dedicated IP per SSL site* which costs extra. Then there's the cost of the certificate itself**, and the initial process of setting it up. And once you have the technical side of things configured, you still need to make sure that ALL resources on the page are coming in over https as well, which is easier said t

  • by Compaqt ( 1758360 ) on Wednesday March 16, 2011 @08:45AM (#35503168) Homepage

    Some years back, there was talk about dedicated SSL hardware. What's the performance penalty for HTTPS these days?

    Say you're a small startup running your "the next Twitter" app on a Xen or OpenVZ VPS instance.

    What's the hit for HTTPS?

    Any thoughts on HTTPS only for the login page, or for all pages?

    • by buchner.johannes ( 1139593 ) on Wednesday March 16, 2011 @08:52AM (#35503260) Homepage Journal

      Any thoughts on HTTPS only for the login page, or for all pages?

      You can just steal the session cookie after login, so just doing the login page is almost useless. It prevents the attacker from learning the password and re-entering the system, but a) he can change the password and b) there is no reason he wouldn't get the job done within one session.
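      One partial mitigation, short of going HTTPS everywhere, is to at least mark the session cookie Secure so the browser never sends it over plain HTTP at all. A rough sketch using Python's standard library, with a made-up cookie name and value; the flip side is that a Secure cookie is useless on plain-HTTP pages, which is exactly why half-HTTPS sites end up leaking it instead:

      from http.cookies import SimpleCookie

      cookie = SimpleCookie()
      cookie["session_id"] = "abc123"            # hypothetical session token
      cookie["session_id"]["secure"] = True      # never sent over plain HTTP
      cookie["session_id"]["httponly"] = True    # not readable from page JavaScript
      cookie["session_id"]["path"] = "/"
      print(cookie.output())
      # prints something like: Set-Cookie: session_id=abc123; Path=/; Secure; HttpOnly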

      • by Baloo Uriza ( 1582831 ) <baloo@ursamundi.org> on Wednesday March 16, 2011 @08:55AM (#35503312) Homepage Journal
        Most sites expect you to enter the current password to be able to change it, even if you are logged in.
      • True for the sessionjacking.

        Thinking out loud: I'd think a way to prevent password changes would be to require you to enter the old password in order to change the password, like passwd does.

      • You shouldn't let a user change their password without entering their current password, specifically to prevent things such as this. Even if you have already authenticated the user, you should require them to re-enter their password in order to change it. Although I agree with you that a lot could be done in a single session.
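        For illustration, a minimal sketch of that check; the user object and the hashing helper are entirely hypothetical, but the point is that a valid session alone is not enough to set a new password:

        import hashlib
        import hmac

        def hash_password(password, salt):
            # Placeholder KDF; a real site would tune this (or use bcrypt/scrypt).
            return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

        def change_password(user, current_password, new_password):
            # Re-check the current password even though the caller already has a valid
            # session, so a hijacked session alone cannot lock the real owner out.
            if not hmac.compare_digest(hash_password(current_password, user.salt), user.password_hash):
                raise PermissionError("current password does not match")
            user.password_hash = hash_password(new_password, user.salt)
            user.save()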
      • by tlhIngan ( 30335 )

        Any thoughts on HTTPS only for the login page, or for all pages?

        You can just steal the session cookie after login, so just doing the login page is almost useless. It prevents the attacker from learning the password and re-entering the system, but a) he can change the password and b) there is no reason he wouldn't get the job done within one session.

        And that's what FireSheep does. If it can't steal login credentials, it steals session cookies (which will be sent in the clear). Most sites already do the whole

        • All that is just polishing the turds. Or at least reinventing a mediocre replacement for a proper wheel.

          Just use HTTPS for everything. One certificate per year is certainly cheaper than one person-day for IT staff trying to implement a workaround. Buy cert (or get a StartSSL free one), install cert, behold a safe website.

          With governments, companies, data miners growing greedier every day, plaintext is going the way of the dodo. Only a few bearded Internet founders still believe in the good of mankind to sen

    • by hart ( 51418 ) on Wednesday March 16, 2011 @08:54AM (#35503288) Journal
      There's still a performance hit for SSL. Solutions for that include load balancers with dedicated hardware SSL support. As for what the performance hit is, try this: http://serverfault.com/questions/43692/how-much-of-a-performance-hit-for-https-vs-http-for-apache [serverfault.com]

      Re: HTTPS on everything vs. only on the login page - as the recent Facebook session hijacking proved, it's the session cookies in cleartext that are the security problem - the attack doesn't sniff your password, it steals your session cookies to access your account. HTTPS should be on everything, IMHO.

      Cheers, Leigh
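      PS. If you'd rather get a rough number for your own box than trust that thread, here's a crude client-side sketch (URLs are placeholders; this measures wall-clock latency as seen by one client, and because it opens a fresh connection per request it over-weights the handshake cost):

      import time
      import urllib.request

      def average_fetch_time(url, n=50):
          # Fetch the same URL n times and return the mean wall-clock time per request.
          start = time.time()
          for _ in range(n):
              urllib.request.urlopen(url).read()
          return (time.time() - start) / n

      print("http: ", average_fetch_time("http://example.com/"))
      print("https:", average_fetch_time("https://example.com/"))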
      • There's still a performance hit for SSL. Solutions for that include load balancers with dedicated hardware SSL support.

        Back when Usenet providers started offering full SSL transfers, I remember reading that one of the reasons they were charging more for it (at the time) was because SSL transfers saw a 400% increase in required CPU power on the back end.

        Nowadays though, SSL seems to come by default in most offerings I've seen.

        • by Firehed ( 942385 )

          400%? That doesn't sound right. Maybe we've improved our algorithms significantly since then, but I tend to hear anywhere from "rounding error" (probably hardware support) to 30-40% increase.

      • Damn, I had mod points last week... That serverfault link was very informative, but about what I expected. If you have a large site you are really going to want SSL-accelerated load balancers in front of your web servers; the load is substantial.

    • by SamSim ( 630795 )

      Any thoughts on HTTPS only for the login page, or for all pages?

      All pages. When you log in to begin with, if the login page is HTTPS then your username and password are encrypted. This is good, because it means nobody else can snoop your password and log in as you later. You are then sent back a cookie. Later, when you want to prove that you are logged in, you just send the cookie along with the HTTP request. Of course, if all the other pages are not encrypted, then the cookie is sent in the clear, which al

    • Compaqt, because HTTP session hijacking works over unsecured wireless connections, it's important to use SSL beyond just the login. Even if the password is submitted securely during the login process, once a session is established over plain HTTP, the session can be hijacked.

    • Pretty much the only real penalty for https is browser shittiness. If your identity isn't certified by a recognized CA, all the major browsers incorrectly treat your site, relative to http, as having a higher (rather than lower) risk of .. something .. so they try to scare the user with vague error messages that, even if the messages were appropriate, mislead the user rather than inform. So the penalty is that you have to pay someone to sign you.

      As for the computations, it's 2011 so CPU is so close to "

      • by Anonymous Coward

        > Encryption is free.

        You're concentrating on the client aspects of SSL, not the server-side.

        Take a low-ball estimate of "page size" at 300 kB; all elements of the page have to be encrypted.

        Take a low-ball estimate of 10,000 SSL page requests per second per server.

        Each server has to encrypt at 3 GB / second just to handle static page requests, let alone dynamic calls to DBs and CMS.

        Back to the maths for you!
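        Spelled out, for anyone checking (the 10,000 requests/second figure is the assumption doing all the work here):

        page_size_bytes = 300 * 1024        # assumed 300 kB per page, all of it encrypted
        requests_per_second = 10_000        # assumed per-server request rate
        bytes_per_second = page_size_bytes * requests_per_second
        print(bytes_per_second / 1e9, "GB/s")   # ~3.07 GB/s to encrypt per server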

        • Wait a minute, 10k requests per second per server? Highly unlikely. Ever look at those "0.09 seconds" things you get on Google? They used to be all over the place, and I don't think I've ever seen one below 10ms, certainly never below 1ms. You're asking for 100 microseconds to process a page in. On a 3GHz processor, that's 300,000 cycles. If the page is 300 kB, you have roughly one cycle to process each byte, which is flat impossible.

          But, we know they process more than a byte at a time. A 64-bit machi

      • by praxis ( 19962 )

        I wrote a long post and then realized it was tl;dr. So, I'd instead just like to point out that certificates work on a network of trust. If you present a certificate that no one else trusts, why should I? The browser behavior is absolutely the right one. "This guy is presenting a certificate he signed himself. No one I trust trusts him. What should I do?"

    • by shish ( 588640 ) on Wednesday March 16, 2011 @09:31AM (#35503670) Homepage

      Speaking as someone in exactly the situation you describe -- running our current site on a small single-core VPS, over HTTP we can serve ~400 static files per second, limited by bandwidth. Using HTTPS, we can serve 10 static files per second, limited by CPU speed. For dynamic pages, the limits are more like 50/sec (limited by CPU) and 5/sec (limited by CPU -- page load times go up a lot as the database and processing are competing with the encryption)

      Current plan to deal with this, because we want to be HTTPS by default, is to offload static files to an HTTPS-enabled CDN, and have a front-end reverse proxy or several dedicated to SSL processing -- unless anyone has any better ideas?

      • Wow, a 10:1 ratio, much more than the 2:1 (with caching) using a simple ab test from the serverfault link.

        Reading the responses above, I'm thinking that a happy medium is:

        -HTTPS for login for free users. Has the risk of sessionjacking
        -Ask for old password before changing password or major actions like "delete all"
        -HTTPS by default or by option for paid users.

      • by olau ( 314197 )

        Maybe you could use a trick with the domain or path when setting the session cookie so that static files don't get it. Then serve static files over HTTP and only the actual pages over HTTPS.

        • by shish ( 588640 )
          That causes browser warnings about parts of the page being insecure -- in most browsers it means the lock icon in the url bar is broken, which would be just about OK as we can reassure users that their actual data is safe -- but in IE the warning is a giant "THIS SITE HAS SOME UNENCRYPTED BITS. CLICK CONTINUE TO HAVE YOUR BABIES EATEN BY RABID WOLVES" or something like that...
    • Forcing SSL makes any hardware endpoint compression/optimization tools pretty much useless (Look at Riverbed products). You also put more of a strain on anything with smaller MTUs (VPN tunnels, PPPoE, dialup [Yes, it still exists]) or with higher latency (client in China, satellite users).

      Additionally, you need one IP address per https website you want to host. This isn't an issue with IPv6 (Yay IPv6) or when using a webserver/client that can use Host headers before the SSL transaction (which all older b
      • [...] the software isn't wide spread enough to reliably host multiple SSL websites on a single IP with vhosts. [...]

        I may be mistaken, but did not Apache introduce this feature in version 2? I have used SSL with name based vhosts quite heavily for years.

      • Additionally, you need one IP address per https website you want to host. This isn't an issue with IPv6 (Yay IPv6) or when using a webserver/client that can use Host headers before the SSL transaction (which all older browsers do NOT support). The main problem is that not everyone has a metric 'shitton' of IPv4 addresses and the software isn't wide spread enough to reliably host multiple SSL websites on a single IP with vhosts.

        Browsers that don't support SNI:

        • IE
        • Firefox
        • Opera
        • Safari
        • Chrome try to run an old version of Chrome.

        In short, you're safe to use it these days unless you're hosting a legacy app on an internal business server or massive shopping site trying to catch every last Christmas shopper--in either case, IPs are probably the least of your worries.

        • argh, less-than signs. That should be:

          • IE < 7
          • Firefox < 2
          • Opera < 8
          • Safari < 2
          • Chrome < 6

          There was other commentary on there, but it wasn't important.
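          If you want to check what certificate a given server actually hands back when the client sends SNI, here's a rough Python sketch (the hostname is a placeholder); the server_hostname argument is what ends up in the SNI extension:

          import socket
          import ssl

          def names_on_cert(hostname, port=443):
              # server_hostname puts the name into the TLS SNI extension, so a server
              # hosting several certificates on one IP can pick the matching one.
              ctx = ssl.create_default_context()
              with ctx.wrap_socket(socket.create_connection((hostname, port)), server_hostname=hostname) as s:
                  return s.getpeercert().get("subjectAltName", ())

          print(names_on_cert("example.com"))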

          • Also, my understanding is that some of those browsers, including IE 7/8, don't support SNI on Windows XP.
            • Well that's news to me. I'd be surprised if any other than IE worked that way, as far as I know there's no mandatory system implementation of HTTP--I know you don't have full access to the IP stack, but that's as far as it goes (I think). Anyway, I'll have to test that out sometime when I have time (unless some enterprising slashdotter tests it out for us), but thanks for the info.

              • From what I read, Google Chrome on Windows XP has the same limitation; something about using the native OS calls for SSL functions. The lack of SNI support in Internet Explorer on Windows XP is a major shortfall. At my company we can't justify having a site that doesn't work properly in Internet Explorer on XP. I'm sure we aren't unique.
    • The latency from the handshake is unavoidable. Otherwise, it is just CPU intensive. SSL/TLS can resume previous sessions, which is a tremendous help.
    • by Ant P. ( 974313 )

      If you want dedicated SSL hardware, just set up a reverse proxy running on a Geode CPU. They can push 200Mbps of AES-128 despite being ancient. Newer Intel stuff has encryption-specific instructions built in so you might not even need a separate box.
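      If you're curious what your own CPU manages, a quick throughput check is easy with the third-party 'cryptography' package (buffer size picked arbitrarily; with hardware AES support this will be dramatically faster than without):

      import os
      import time
      from cryptography.hazmat.backends import default_backend
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

      key = os.urandom(16)                     # AES-128 key
      data = os.urandom(16 * 1024 * 1024)      # 16 MB of random data to encrypt

      enc = Cipher(algorithms.AES(key), modes.CTR(os.urandom(16)), backend=default_backend()).encryptor()
      start = time.time()
      enc.update(data)
      enc.finalize()
      elapsed = time.time() - start
      print(len(data) / elapsed / 1e6, "MB/s of AES-128-CTR on this machine")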

  • by Anonymous Coward

    Users are required to change this setting themselves; there's nothing default about it. It's simply an added option.

    Now Gmail, that is HTTPS by default..
    Also, I read that mobile.twitter.com will not even switch to HTTPS? wut.

    Smarten up, Slashdot and editors.

    • You're right -- it's not SET as the default, but users can set the service to use HTTPS by default. The actual title of the article is "Twitter Enables Option for HTTPS by Default" - though I agree that the /. headline could have been clearer.

    • by Anonymous Coward

      To be fair, Gmail started off by giving this as an option, then transitioned to enabling it by default.

      Baby steps my friend, baby steps. Allowing the option is actually a really good way to test the system: you can see exactly how many people enabled it, had difficulties, then disabled it. As long as that number is nearly zero compared to the number that switched it on and left it on, you have some data supporting the move to SSL by default.

      I think this is the proper way of handling this.

    • by richlv ( 778496 )

      i searched for "slashdot" in comments. only came up in the middle of the page. i guess geeks must suck at security :)

      also, regarding slashdot and https - they probably lack the technical competency to set it up.

      YEAH. hope to see https next week, thanks.

  • A big problem I see with this is 1) Twitter isn't carrying important personal data, 2) in fact, quite the opposite, except for login credentials to sign in, and that's always been HTTPS anyway, 3) HTTPS does not cache. We should be encouraging sites to be more cachable and more ISPs to adopt proxies like Squid, not cripple their ability to reduce traffic leaving/entering the network.
    • Re:Bad idea! (Score:4, Insightful)

      by CastrTroy ( 595695 ) on Wednesday March 16, 2011 @09:01AM (#35503374)

      Twitter isn't carrying important personal data

      Tell that to the people in Libya, China, North Korea (do they have internet?) and other places around the world where the government isn't so easy on people who oppose the regime.

      • North Korea (do they have internet?)

        Most of them are lucky to have electricity, let alone internet.

        http://www.atr.org/userfiles/korea-by-night.jpg [atr.org]

      • Their problem isn't the lack of HTTPS, it's the lack of free speech. Nice scarecrow, though.
        • by ntk ( 974 ) *

          I work with independent journalists in this and other at-risk countries, and consult with those seeking to protect activists. While you are perhaps right that the threat is, at heart, one of human rights, protecting those attempting to change or document that situation is also important. And lack of on-the-wire encryption also presents an almost constant temptation to even other countries supposedly better protected by the rule of law. The pervasive data-mining conducted by AT&T on behalf of the NSA is

        • Free speech without anonymity isn't free speech. It's an invitation for thugs.

    • A big problem I see with this is 1) Twitter isn't carrying important personal data, 2) in fact, quite the opposite, except for login credentials to sign in, and that's always been HTTPS anyway, 3) HTTPS does not cache. We should be encouraging sites to be more cachable and more ISPs to adopt proxies like Squid, not cripple their ability to reduce traffic leaving/entering the network.

      HTTPS does cache pages at the browser; it is only middle-tier proxies like Squid that cannot cache the pages. Of course, if you have an interactive site then these will disable caching anyway; you don't want everyone to see your session.

      • It's the middle tier that I'm worried about, since most of the bandwidth used by Twitter is for common objects displayed on all pages, like the CSS, the images, etc. These don't change. And most browsers only cache HTTPS for a single session.
        • It is completely up to the site serving the resources. A quick look unsurprisingly shows twitter not being stupid about it and setting the correct headers to get the browser to cache resources served over HTTPS for as long as the browser can. Here are the response headers from getting their logo over HTTPS:

          Date: Wed, 16 Mar 2011 14:52:00 GMT
          Content-Length: 1159
          Content-Type: image/png
          Etag: "c53472495d431cceef1c715732db12c9"
          Expires: Wed, 18 May 2033 03:33:20 GMT
          Last-Modified: Tue, 15 Mar 2011 21:2
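          Easy enough to reproduce that kind of check from a script, too; a small sketch (the URL is a placeholder for whatever static asset you care about):

          import urllib.request

          # HEAD request to inspect the caching headers a server sends for a static asset over HTTPS.
          req = urllib.request.Request("https://example.com/images/logo.png", method="HEAD")
          with urllib.request.urlopen(req) as resp:
              for name in ("Cache-Control", "Expires", "Etag", "Last-Modified"):
                  print(name + ":", resp.headers.get(name))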

    • You can steal the session cookie from someone using twitter using an unsecured network (such as a public wifi) - and then spam the crap out of his feed, or change some settings or something.

      I'm pretty sure the ability to spoof someone else's twitter to say whatever you want is considered - "Important Personal Data".

      Login credentials aren't needed if you're nicking the cookie - see also: "Firesheep", which is script-kiddie friendly.

      • You can steal the session cookie from someone using twitter using an unsecured network (such as a public wifi) - and then spam the crap out of his feed, or change some settings or something.

        Boo hoo. It's Twitter. Who gives a shit? Until you can post more than 140 characters, unless you and your audience speak Korean, Japanese, Cherokee or some other language that uses ideograms or a syllabary instead of an alphabet, it's next to impossible to express a cogent thought on a level higher than "I'm hungry" on Twitter.

        • A short tweet like "We are attacking the Death Star tomorrow morning" is probably enough for the other side to set up a serious trap.

    • by ntk ( 974 ) *

      A large number of journalists and activists end up communicating with sources and each other using direct messaging on Twitter, so there is private information passing around. There's also the question of using login credentials to take over and fake messages. Also, there's the question of correlating Twitter identities with individuals (though I can think of a few strategies for attackers to do that even with https enabled).

    • by RichiH ( 749257 )

      > We should be encouraging sites to be more cachable and more ISPs to adopt proxies like Squid

      You have, quite literally, no idea what you are talking about. The German university and research backbone DFN is staffed with incredibly bright minds and has been pushing technology on quite a few frontiers.

      They gave up on proxying in the '90s.

      Why?

      * It's cheaper to just add more bandwidth
      * In any network of non-trivial size, there are a lot of possible routes traffic can go and you need to account for this and ch

  • by Enry ( 630 ) <enry@@@wayga...net> on Wednesday March 16, 2011 @08:55AM (#35503292) Journal

    I don't like keeping track of what sites I can and can't use HTTPS on, so I installed HTTPS Everywhere [eff.org] on my browsers and get HTTPS access to a bunch of sites by default.

    BTW, when do we get HTTPS access to /.?

  • by Chrisq ( 894406 ) on Wednesday March 16, 2011 @08:57AM (#35503326)
    It is built into Firefox 4 [mozilla.org], so soon you won't need an extension.
    • From what I understand of the article, it's there to stop:

      http://www.example..../ [www.example....]
      [redirect to]
      https://..../ [....]

      Which could be grounds for a man-in-the-middle attack. It doesn't say anything about forcing people to use HTTPS, just that it will be done automatically instead of using a redirect. So it'll make sites which force HTTPS safer, but it won't force Twitter to push HTTPS if you haven't asked for it.

      • by Chrisq ( 894406 ) on Wednesday March 16, 2011 @09:34AM (#35503710)

        From what I understand of the article, it's there to stop:

        http://www.example..../ [www.example....] [redirect to] https://..../ [....]

        Which could be grounds for a man-in-the-middle attack. It doesn't say anything about forcing people to use HTTPS, just that it will be done automatically instead of using a redirect. So it'll make sites which force HTTPS safer, but it won't force Twitter to push HTTPS if you haven't asked for it.

        There is a better explanation here [wikipedia.org]. Basically, after the header is received the browser will convert any http: requests to https:, thereby bypassing any redirect. Whether this will force you to use HTTPS depends on whether Twitter sets this header on their HTTPS site only or on both HTTP and HTTPS. Even if they set it only on the HTTPS site, it will force you to use HTTPS once you have visited the HTTPS URL even once.

        • They cannot set it via http [wikipedia.org], so you will have to visit an https page for it to take effect:

          Strict-Transport-Security headers must be sent via HTTPS responses only. Client implementations must not respect STS headers sent over non-HTTPS responses, or over HTTPS responses which are not using properly configured, trusted certificates.
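          For what it's worth, emitting the header is trivial once the site is already answering over HTTPS; a toy sketch using Python's standard library server (the TLS wrapping itself is left out, and the max-age is an arbitrary one-year value):

          from http.server import BaseHTTPRequestHandler, HTTPServer

          class HSTSHandler(BaseHTTPRequestHandler):
              def do_GET(self):
                  self.send_response(200)
                  # Ask the browser to use HTTPS for this host for the next year.
                  # Per the spec quoted above, clients only honour this when it
                  # arrives over a properly certified HTTPS connection.
                  self.send_header("Strict-Transport-Security", "max-age=31536000")
                  self.send_header("Content-Type", "text/plain")
                  self.end_headers()
                  self.wfile.write(b"hello\n")

          HTTPServer(("localhost", 8443), HSTSHandler).serve_forever()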

  • by Mikkeles ( 698461 ) on Wednesday March 16, 2011 @09:06AM (#35503424)

    So you can securely upload your private data for public dissemination?

    • More like so you can be sure that someone using the same connection at your coffee shop won't post something in your name by sniffing your cookies.

  • When will the "tweet this" button for websites be able to use SSL? Having this button in the footer of a site I worked on recently made it a bit of a hassle to create a page that's completely SSL.

    • Good point. Are browsers still putting out the notorious "not all elements on this page are SSL" errors they used to?

      • Well, I had to support IE8 with an SSL iframe because the client wanted some obscure payment processor that wouldn't break the continuity of the site's branding the way PayPal would.

        So, if you have an iframe in SSL then you've got to jump through several hoops including adding something to mod_header and removing every single non-SSL element from the page. If you don't, then cookies get broken. Without cookies, I couldn't use the session variables that made everything work in the good browsers.

        It's not a hu

        • Interesting. I'll bet you had some kind of time convincing the client that the hours billed for that research were worth it. ("If you're such a great developer, you should already know that.")

  • Facebook got dinged [sophos.com] because their Android app didn't use SSL even when the account is set up to use it. I wonder if Twitter has the same problem...
  • Better than nothing, but I don't see any HTTP Strict-Transport-Security: header.

  • It's not HTTPS by default. It's giving users the option to use HTTPS.
    HTTPS by default would be switching all users automatically, allowing them to opt out.
