The Cost of the "S" In HTTPS

An anonymous reader writes: Researchers from CMU, Telefonica, and Politecnico di Torino have presented a paper at ACM CoNEXT that quantifies the cost of the "S" in HTTPS. The study shows that major players are now embracing end-to-end encryption, so that about 50% of web traffic is carried over HTTPS. This is a nice testament to the feasibility of a fully encrypted web. The paper also pinpoints the costs of encryption, which show up as page-loading-time increases that can exceed 50%, and a possible increase in battery usage. The biggest loss due to the "S", however, is the inability to offer the in-network value-added services provided by middleboxes: caching, proxying, firewalling, parental control, etc. Are we ready to accept that? (Presentation can be downloaded from here.)
  • by Charliemopps ( 1157495 ) on Thursday December 04, 2014 @11:15AM (#48522605)

    Are we ready to accept it?

    Slashdot certainly isn't ready!

  • by Overzeetop ( 214511 ) on Thursday December 04, 2014 @11:18AM (#48522621) Journal

    "in-network value added services"

    I just read that as "advertising".

    Besides, I thought most internet traffic was Netflix now. Is that all done over HTTPS in a way that makes distributed caches infeasible? I understood that the caching was pretty robust for their traffic.

    • by yakatz ( 1176317 ) on Thursday December 04, 2014 @11:22AM (#48522677) Homepage Journal
      It includes things like local caching, which was once important but probably isn't anymore.
      • by Qzukk ( 229616 ) on Thursday December 04, 2014 @11:56AM (#48522967) Journal

        Legitimate local proxies will have the clients configured to use them and will work fine with https.

        • by petermgreen ( 876956 ) <plugwash.p10link@net> on Thursday December 04, 2014 @12:31PM (#48523295) Homepage

          In my experience, most proxies, legitimate or otherwise, just pass HTTPS through without caching it.

          It's certainly possible to set up a proxy that decrypts and caches HTTPS, but it has a number of issues (a rough configuration sketch follows this list):

          1: Legal: in some jurisdictions it may not be legal to interfere with the encryption of certain types of traffic, or it may make you liable if the information you decrypted leaks out.
          2: Client configuration: you have to explicitly add the certificate on every client. Having unmanaged client machines is not mutually exclusive with a legitimate desire to cache data.
          3: Security: your proxy just became a massive target for anyone wanting to attack your users.
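
          As an illustration of what such a setup involves, here is a minimal squid.conf sketch. It assumes a Squid build with SSL support (circa 3.4/3.5) and a locally generated CA at /etc/squid/proxy-ca.pem; all paths are placeholders, not a tested configuration:

              # Terminate TLS at the proxy and mint per-host certs signed by our CA
              http_port 3128 ssl-bump generate-host-certificates=on \
                  dynamic_cert_mem_cache_size=4MB cert=/etc/squid/proxy-ca.pem
              sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 4MB
              # Decrypt everything (hence issue 2 above: clients must trust the CA)
              ssl_bump server-first all
              # Refuse to paper over upstream certificate errors
              sslproxy_cert_error deny all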

      • Re: (Score:3, Insightful)

        by Eunuchswear ( 210685 )

        My experience with telephone-company-provided local caching is that it usually makes the web unusable. If I can reach a service via either HTTP or HTTPS, then quite often the HTTPS works where the HTTP gives you either nothing or just the start of the page.

        (This was on Free Mobile, in France).

        • Caching at the phone company is kinda pointless. The time you want proxy caching is when you have a fast local network behind a slow WAN and want to reduce the traffic over the WAN.

          AFAICT the purpose of the phone companies' proxies is not caching. Their purpose, at least in my experience, is to reduce bandwidth on the mobile network by reducing image quality.

    • by AmiMoJo ( 196126 ) *

      Netflix gives servers to ISPs to host inside their networks for "caching". It's not really caching, it's just distributing their servers more widely.

      • by dave420 ( 699308 )
        It most certainly is caching - it's a method called "push caching". The servers act just as caches, but the content is pushed to them from Netflix rather than stored when a client first accesses it.
      • More generally, CDNs aren't "in-network services" in the same sense as middleboxes and thus aren't hampered by TLS. When properly deployed they don't sit between the page server and the browser, but rather the page server links to CDN URLs for images, scripts, and other referenced content. From that standpoint they are essentially just another farm of web servers specialized for static content.

        The "in-network services" TFA talks about can only work because they can freely inspect, collect copies of, trans

    • My thoughts exactly.
      Good lord, with the IoT, and any other "value added" krap out there, are we to expect even a moment away from ads?
    • Some of them are even worse than advertising; but, yeah, "value-added services" is weasel speak for all the ghastly things that your telco would like to do to your perfectly good dumb pipe in order to charge you more for it. (In the same way that the recently revealed custom of injecting tracking IDs into the HTTP headers of traffic passing over some providers', like Verizon's, mobile data networks is called "HTTP Header Enrichment".)

      Breaking that shit isn't a cost of HTTPS, it's one of the major reasons to use it.
  • by sinij ( 911942 ) on Thursday December 04, 2014 @11:20AM (#48522643)
    Value added? More like value subtracted for most of the things on your list.

    Plus, you are ignoring the fact that nobody is planning to encrypt content like video streaming.
    • by jhantin ( 252660 )

      Plus, you are ignoring the fact that nobody is planning to encrypt content like video streaming.

      Remind me again why big corporations have been making a huge uproar about allowing unencrypted content? Oh yeah, DRM. ;-)

    • by trigeek ( 662294 )
      Actually, as the paper indicates, YouTube is encrypting most of its delivered content.
  • Yes (Score:5, Informative)

    by buchner.johannes ( 1139593 ) on Thursday December 04, 2014 @11:22AM (#48522671) Homepage Journal

    Caching: You cannot cache Facebook, for example, because the content is generated differently for every user. YouTube goes to great lengths to prevent caching (e.g. with Squid) in the first place.
    Proxying: You can proxy HTTPS just fine.
    Firewalling: You can firewall HTTPS just fine.
    Parental control: You can block websites just fine, either via DNS or IP (see the dnsmasq sketch below).
    I suspect they mean snooping for "copying that companies don't approve of" and "freedom fighters" here. And child pornography. It's kind of the point of HTTPS that it should be private. So yes, I can accept these costs.
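
    To make the DNS option concrete: a resolver like dnsmasq can blackhole a hostname for every device on the network, HTTPS or not, because the lookup happens before any TLS handshake. The domain below is a placeholder:

        # /etc/dnsmasq.conf: answer 0.0.0.0 for this domain and all its subdomains
        address=/blocked-site.example/0.0.0.0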

    • I believe the "Firewalling" comment was likely in regard to packet inspection, not port filtering.
      • by Rob Riggs ( 6418 )
        Packet inspection is certainly possible. You proxy all traffic either explicitly or via one of the many MITM SSL deep packet inspection products. Surreptitious packet inspection is not possible. And that's a *very* good thing.
    • You can even cache, if you have access to the certs on the client. Google "squid in the middle". Any school or work environment with legit reasons to filter or cache content still can.

    • Re:Yes (Score:5, Informative)

      by Aethedor ( 973725 ) on Thursday December 04, 2014 @11:58AM (#48522979)
      Caching: You can cache Facebook's images, stylesheets, and JavaScript just fine.
      Proxying: Not just fine. You need a man-in-the-middle proxy for that and its root certificate installed on every client. Otherwise, it's just routing, not proxying.
      Firewalling: Firewalling based on hostname / port, yes. Firewalling based on bad content (malware), no.
      Parental control: Same as firewalling. And blocking this kind of content is not only done by IP address, but often also by words in the hostname. This cannot be done when you can't read the hostname in the HTTP request.
    • Why can't you cache Facebook?

      Sure, nothing on there is a static page, but if a million people are sharing the same 1MB image, you can still cache that. The text after "So and So shared..." will change, and the comments/likes will change, but somewhere there is a jpeg that keeps getting reused. Not everything makes sense to cache, but for things like images shared by George Takei, caching them once at the ISP or corporate network level could stop many gigabytes of external transfer.

      • by tom17 ( 659054 )

        Even for lower use images, caching them closer to the poster could be helpful given that their circle of friends is likely, statistically, to be in the same region. One image alone would not make much difference in this case, but millions of low use images mostly coming from caches closer to most of the people viewing them would make a huge difference.

    • Re: (Score:3, Informative)

      by Anonymous Coward
      It is disingenuous to make the blanket statement "you cannot cache Facebook". After all, what takes up the most data on the page? Mostly images and scripts, and neither of those is unique per user. When someone posts an image, all viewers see the same image. It can be cached. Same with the JavaScript. It is just the unique parts of the page that can't be cached...
    • Ad injection, webpage redirection. "Value add" for network owners, not end-users. Fuck em.

  • Oh, you mean like targeted ad injection and other MITM type garbage? That's fine, you can keep those anyway.
  • Cost of certificates (Score:5, Interesting)

    by bunratty ( 545641 ) on Thursday December 04, 2014 @11:23AM (#48522681)

    The other cost of the S is the difficulty in obtaining and using certificates that are recognized by browsers without bothering the user. That's why the Let's Encrypt [letsencrypt.org] project is trying to make it free and easy.
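
    For what that workflow eventually looks like: the ACME protocol behind Let's Encrypt automates the whole domain-validation dance, and its certbot client reduces issuance to a single command. The domain and webroot path here are placeholders:

        # Prove control of the domain over HTTP, then fetch a browser-trusted cert
        certbot certonly --webroot -w /var/www/example -d example.com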

  • Yes. Please. HTTPS all the time, everywhere.

    It's one thing when the free WiFi at the shady computer store down the street does it. But even for-pay WiFi hotspots are doing it now. (Looking at you, Southwest Airlines In-flight WiFi...)

  • When will Slashdot catch up? I still can't use this site in HTTPS without subscribing. Being able to browse securely is a standard feature these days.
  • by RivenAleem ( 1590553 ) on Thursday December 04, 2014 @11:30AM (#48522735)

    What is the cost to the user of having their communications intercepted, banking details stolen, etc.?

    That's like saying that putting locks on your doors has an added cost of you requiring more time every day getting in and out because you have to take time to turn a key. It also means that local corporations can't send people by to inject "value added" services into your home without your consent! Are you ready to accept locks on your doors?

  • WTF... (Score:4, Interesting)

    by EndlessNameless ( 673105 ) on Thursday December 04, 2014 @11:30AM (#48522739)

    Stupid article. Making a mountain out of a molehill.

    How hard is it to push a certificate to your clients so they trust your proxy? How hard is it to set up a cache there? And monitoring/filtering? Not very hard.

    We do this at work, and it is dead simple for halfway competent admins to implement.

    What this really does is stop telecoms from monkeying with their users' traffic. By default, anyway.

    Most ISPs provide Windows installers/optimizers to their users, which their users dutifully click through without understanding. So they could just install their certificates and continue business as usual---with very little effort, all things considered. They might need beefier proxies to handle encryption, but CPU time is cheaper than ever.
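
    To illustrate how little effort this is on a Windows fleet: trusting the proxy's CA is one command per machine, or the same change pushed once via Group Policy (the file name is a placeholder):

        rem Add the proxy CA to the machine's trusted root store (run elevated)
        certutil -addstore -f Root proxy-ca.crt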

    • You don't even need to know how to do it - Smoothwall automates most of the process so even an A+ certified tech could figure it out. Probably.

  • Comment removed based on user account deletion
    • I've not seen any parental control software that runs as a proxy server; it's all browser plugins. I'm surprised at this: given that many homes now have several laptops, a few tablets, and a mobile phone each, maintaining one proxy is a lot less hassle than ten browser plugins across four different operating systems.

  • by RabidReindeer ( 2625839 ) on Thursday December 04, 2014 @11:36AM (#48522789)

    I've no doubt that the overhead of HTTPS could be more than paid for if website designers would lay off the Singing Flowers and Dancing Fairies. Toss the gratuitous multimedia, especially the auto-playing stuff. It's cheap and cheesy and makes me seriously consider avoiding the site altogether, whether it's local content or 3rd-party adverts.

    And while you're at it, calculate the slow-filling parts of the page in advance so that the [censored] thing doesn't bounce up and down like a demented ping-pong ball as it loads. The only thing more irritating than having a page continually re-map itself while you're reading it is to have the stupid thing auto-reload and throw you back to the top of it.

    • The only thing more irritating than having a page continually re-map itself while you're reading it is to have the stupid thing auto-reload and throw you back to the top of it.

      My very favoritest thing is when an element moves over just slightly just as I was about to click it, and I click its neighbor instead.

  • by anegg ( 1390659 ) on Thursday December 04, 2014 @11:37AM (#48522803)

    The tradeoff is between a little more time, and a little more resources, against the benefit of keeping my communications private and unaltered by all of the middlemen through which my communications pass. That's a no-brainer for me.

    In the days before the exposure of Verizon's (and others') schemes to actually interfere with the content of communications from their customers passing through their networks (I'm talking about the physical modification of the communications content, not just traffic management/prioritizing), I may have had a different opinion about the tradeoffs. But now that the "common carriers" have shown that they have no morals whatsoever with respect to the content of traffic they are carrying, SSL encryption is simply a necessary function to prevent interference.

    Today that interference may be limited to tracking user activity using an additional HTTP header that the user never knows exists. Who knows what packet re-writing magic might be used by the carriers in the future to completely "customize" each user's experience interacting with third parties to the benefit of the carrier?

  • Proxying and caching was a huge win back in the analog modem days. These days it is still a win, but not as big. Looking forward, the costs associated with having a secure connection are only going down while the value of the secure connection is holding steady or maybe increasing.

  • Let's see, on the useful side we have compression/acceleration and parental controls. Would it also interfere with ad blockers and anti-malware? Those are also useful services. Services we as consumers don't want are those ads certain low-cost carriers insert in content - though if blocking those forces the carrier to shut down we might have a problem. And of course we also don't want those Big Brother services - governmental content blocking and monitoring.
    • It interferes with non-browser-based ad blockers, which are still common on corporate networks, though rarely at home. You can still block by DNS even then; it's just not as fine-grained. Fortunately you rarely need fine-grained blocking to stop advertising.

  • What would you think about a hardened web browser which would only allow HTTPS connections? It might be a feasible idea already.
    • Current browsers prefer HTTP over HTTPS. That's why you don't get any security warning when you connect to an HTTP site, but you get a death warning when you visit an HTTPS site with a self-signed certificate.
      • We could begin with a toggleable toolbar button which would lock the browser in HTTPS-only mode. Then we could educate users that when you want to avoid man-in-the-middle attacks, flip this switch on.
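
        A related mechanism already exists on the server side: HSTS (RFC 6797) lets a site tell the browser to lock itself to HTTPS for that domain, so later visits refuse to fall back to HTTP. A single response header is enough:

            Strict-Transport-Security: max-age=31536000; includeSubDomains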
      • by bunratty ( 545641 ) on Thursday December 04, 2014 @11:58AM (#48522977)
        The problem with HTTP is that a middleman can see and alter content. If a browser doesn't warn when it encounters a self-signed certificate, then HTTPS would be no more secure than HTTP -- all the middleman has to do is use a self-signed certificate to decrypt/encrypt packets as needed. So browsers do prefer HTTPS, when the certificate can be verified. If you're using HTTPS and the certificate can't be verified, it's no more secure than HTTP unless the user is warned, and in fact it's a way of detecting that a middleman may be present. That's the whole reason for the death warning!
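
        The "death warning" corresponds to an outright verification failure at the TLS layer. A small Python sketch of the same check a browser performs (the hostname is a placeholder for any server presenting a self-signed certificate):

            import socket
            import ssl

            # The default context verifies the chain against the system CA store,
            # which is exactly what a browser does before rendering the page.
            ctx = ssl.create_default_context()
            host = "self-signed.example"  # placeholder host
            try:
                with socket.create_connection((host, 443)) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host):
                        print("certificate verified; no middleman detected")
            except ssl.SSLError as err:
                # A self-signed or otherwise unverifiable cert lands here:
                # possibly a middleman, so the user must be warned.
                print("verification failed:", err)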
    • There isn't such an extension already? If there isn't, someone should write one or alter an existing one to add that functionality, at least as an option. Then people should try it and let us know how painful it actually is to use. My guess would be: extremely painful for most users for the next several years, so painful that hardly anyone would use it willingly. Maybe some businesses could force it on their employees.
      • Using something like Tor involves some pain too. Some people would probably be interested just for the security boost.
    • What, and not be able to post on Slashdot? No thanks.
    • by cen1 ( 2915315 )
      I believe HTTP/2 will pretty much require HTTPS at all times. So maybe in 20 years?
  • by nitehawk214 ( 222219 ) on Thursday December 04, 2014 @12:06PM (#48523051)

    Or as the rest of us like to say... stopping man-in-the-middle attacks.

  • I work at a place with many distributed offices. A lot of these offices are large enough to have their own IT staff who make decisions locally.

    Some of those bozos felt the need to have very aggressive caching servers. Aggressive enough that on any non-HTTPS website, it was impossible to differentiate between users or deliver new content. So any web apps we rolled out had huge problems if multiple users were logged in, or, even better, a page would never update because it already existed in the cache.

    • Some of those bozos felt the need to have very aggressive caching servers. Aggressive enough that on any non-https website, it was impossible to differentiate between users or deliver new content.

      If the caching server ignores your Cache-Control/Pragma headers, then it is not just aggressive, it is broken. If you failed to set those headers, then your app is broken, not the proxy. You didn't provide enough information to determine which was the case, and I wouldn't be surprised if some proxies honored them and some didn't.
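
      For reference, a minimal set of response headers that tells any well-behaved cache to keep its hands off a per-user page (a sketch, not a complete response):

          HTTP/1.1 200 OK
          Cache-Control: private, no-store
          Pragma: no-cache
          Vary: Cookie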

  • Now, I'm not sure how HTTPS works, but when you use something like PGP, it first compresses the data in order to increase the entropy, making it harder to crack. So while we're spending more CPU cycles on compression and encryption, doesn't the reduced transmission payload more than offset the cost of the computation? In general, communication is WAY more expensive (in terms of energy) than computation.

    Damnit, I'm going to have to read the article to find out if they did this right. :)

    • You can still compress the content sent inside HTTPS just as you would with ordinary HTTP; it's just HTTP+SSL. But compression is available with or without the S, so it doesn't offset the cost of encryption.

    • by Bengie ( 1121981 )
      Compressed data leaks information. There have been many side-channel attacks that use the size of the compressed output to figure out what information is inside. Pretty much statistical analysis.
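
      The core trick behind those attacks fits in a few lines of Python. The "secret" below is a made-up stand-in for something like a session cookie reflected into a compressed response; the attacker only ever observes lengths, never plaintext:

          import zlib

          SECRET = "sessionid=7f3a9c"  # hypothetical secret in the response

          def compressed_length(attacker_input: str) -> int:
              # Model a response that compresses attacker-controlled input
              # together with the secret, as happens with HTTP compression.
              body = (attacker_input + "&" + SECRET).encode()
              return len(zlib.compress(body))

          # A guess matching the secret's prefix compresses better, so its
          # observable length tends to be smaller -- no decryption required.
          for guess in ("sessionid=7", "sessionid=x"):
              print(guess, compressed_length(guess))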
  • by TheGratefulNet ( 143330 ) on Thursday December 04, 2014 @12:45PM (#48523437)

    https://www.imperialviolet.org... [imperialviolet.org]

    In short, there is no meaningful CPU overhead anymore on today's compute systems; HTTPS is not a barrier due to processing, at least.

  • If it comes from an authoritative source, Slashdot is less likely to question it. If it comes from me, I'm an idiot trying to run a slow box in the 21st century [slashdot.org].

    Lots of people are commenting here about how they want to inject ads. No threads are blasting them for suggesting that HTTPS can slow your browsing experience.

    • If you are upset because https at Google slows your internet experience, then never look at the size of the source code for that 'simple' page. Last time I checked, it was 400k.
  • And thus the beginning of the end of the RESTful fad. Not that there's anything wrong with RESTful architecture per se, but as a fad it has been shoe-horned by ideologues into so many inappropriate domains lately: embedded P2P, M2M spaces, etc. Sure, it makes sense for one-to-many patterns involving human-readable, human-discoverable resources, particularly semi-static resources that can be cached and proxied by middle agents. But of course that latter part only works for unsecured transactions.

    • I don't think it's going to end soon; nearly every new web framework that comes out depends on RESTful stuff. It's going to get bigger before it gets smaller.
  • You can do caching, proxying, firewalling, parental control, etc. with HTTPS. Create a certificate on those boxes and push it out to the client devices. You can then see all encrypted traffic. Problem solved.
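
    For completeness, minting such an inspection CA is itself a one-liner; the file names and subject here are placeholders:

        # Generate a self-signed CA key and cert, valid ~2 years, no passphrase
        openssl req -x509 -newkey rsa:2048 -nodes -days 730 \
            -keyout proxy-ca.key -out proxy-ca.crt -subj "/CN=Example Inspection CA"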
