The Internet Your Rights Online

Hunting the Mythical "Bandwidth Hog" 497

eldavojohn writes "Benoit Felten, an analyst in Paris, has heard enough about the elusive creature known as the bandwidth hog. Like its cousin the Boogie Man, the 'bandwidth hog' is a tale that ISPs tell their frightened users to keep them in check, or to justify cutting off whomever they want to cut off from service. And Felten is calling them out, because he's certain that bandwidth hogs don't exist. What's actually happening is that ISPs are selecting the top 5% of users by volume of bits moved over their wires and revoking their service, even if those users aren't negatively impacting anyone else. In other words, they are targeting 'heavy users' simply for being heavy users. Felten has thrown down the gauntlet, asking any telco for a standardized data set on which he can do a statistical analysis to find evidence of a single outlier ruining the experience for everyone else. It's unlikely any telco will take him up on the offer, but his point still stands." Felten's challenge is paired with a more technical look at how networks operate, which claims that TCP/IP by its design eliminates the possibility of hogging bandwidth. But Wes Felter corrects that misimpression in a post to a network neutrality mailing list.
This discussion has been archived. No new comments can be posted.

  • by FrankDerKte ( 1472221 ) on Friday December 04, 2009 @12:03PM (#30324554)

    No, bandwidth hogs normally use file sharing, which is implemented over TCP (e.g. BitTorrent).

    The problem is that TCP distributes bandwidth per connection. Someone using more connections gets a bigger share of the available bandwidth.
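    A rough illustration of that per-connection split (the link speed and connection counts below are invented for the example): when the bottleneck shares capacity per TCP connection, the user who opens the most connections takes the largest share.

```python
# Illustrative only: per-connection fair sharing of a bottleneck link.
# A user running one connection competes against a user running 30.
LINK_MBPS = 20.0

users = {"web_browser": 1, "torrent_client": 30}  # hypothetical connection counts

total_connections = sum(users.values())
per_connection = LINK_MBPS / total_connections

for name, conns in users.items():
    share = conns * per_connection
    print(f"{name}: {conns} connections -> {share:.1f} Mbps "
          f"({share / LINK_MBPS:.0%} of the link)")
```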

  • by Anonymous Coward on Friday December 04, 2009 @12:08PM (#30324616)

    I used QoS with iproute2 and iptables (see http://lartc.org/howto [lartc.org]) when I faced that issue.
    Not that I'm saying I had roommates, but when I used BitTorrent and noticed how it abused the network, I used that howto to limit its bandwidth.

    It worked very nicely.
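    For reference, the LARTC recipe the poster is pointing at boils down to an HTB queueing discipline plus a filter that pushes BitTorrent traffic into a low-rate class. Here is a minimal sketch of that idea, driving tc from Python (the interface, rates, and BitTorrent port are assumptions; the actual howto covers far more):

```python
# Sketch only: cap traffic from a BitTorrent port with tc/HTB.
# Assumes Linux, root privileges, iproute2 installed, and that the
# torrent client listens on port 6881 behind interface eth0.
import subprocess

DEV = "eth0"          # assumed interface
TORRENT_PORT = 6881   # assumed BitTorrent port

def tc(*args):
    """Run a tc command, printing it first so the sketch is auditable."""
    cmd = ["tc", *args]
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Root HTB qdisc; unclassified traffic goes to class 1:20.
tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "20")
# Parent class at the full (assumed) link rate.
tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1", "htb", "rate", "10mbit")
# Default class keeps most of the bandwidth.
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:20", "htb",
   "rate", "8mbit", "ceil", "10mbit")
# Low-priority class for BitTorrent: 1 Mbit guaranteed, 2 Mbit ceiling.
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:10", "htb",
   "rate", "1mbit", "ceil", "2mbit")
# Classify by source port into the slow class.
tc("filter", "add", "dev", DEV, "parent", "1:", "protocol", "ip", "prio", "1",
   "u32", "match", "ip", "sport", str(TORRENT_PORT), "0xffff", "flowid", "1:10")
```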

  • by Anonymous Coward on Friday December 04, 2009 @12:11PM (#30324656)

    During a Slashdotting, the problem is rarely network-related (aside from people who use a cheap host and have very low artificial bandwidth limitations, or are hosting their site on a low-end cable connection).

    More often than not, the database goes down. MySQL is especially prone to just dying when put under any significant workload. That's why you'll often see error messages saying that the web front end can't connect to the database. You can still get to the site because the network can handle the volume of traffic just fine, and you can get the error message because the web server can also handle the volume of traffic just fine.

    The next most common problem is the server-side web apps being unable to handle the load. I don't mean the web servers, as most of those can handle huge amounts of traffic, even on ancient hardware. I mean web apps implemented using PHP, Ruby on Rails, ASP.NET, JSP and so on. Many sites don't use PHP bytecode caching, for instance, nor do they do much data caching. So it just ends up taking too long to generate pages, and your browser times out.
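    To make the caching point concrete, here's a hedged sketch of page-level data caching (the function names and TTL are invented for illustration): a tiny in-process cache in front of an expensive page-generation step keeps repeat requests off the database during a spike.

```python
# Illustrative sketch of data caching for generated pages.
# render_page() stands in for an expensive template + database query step.
import time

CACHE_TTL_SECONDS = 60
_cache = {}  # url -> (expires_at, html)

def render_page(url):
    """Pretend this hits the database and takes a long time."""
    time.sleep(0.1)  # simulated expensive work
    return f"<html><body>Content for {url}</body></html>"

def get_page(url):
    now = time.time()
    hit = _cache.get(url)
    if hit and hit[0] > now:
        return hit[1]                      # cache hit: no database work
    html = render_page(url)                # cache miss: do the slow part once
    _cache[url] = (now + CACHE_TTL_SECONDS, html)
    return html

if __name__ == "__main__":
    start = time.time()
    for _ in range(100):                   # a burst of identical requests
        get_page("/slashdotted-article")
    print(f"100 requests served in {time.time() - start:.2f}s")
```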

  • by Bill_the_Engineer ( 772575 ) on Friday December 04, 2009 @12:13PM (#30324674)

    Why do you think they are using UDP? Most of the bandwidth being used at this point, to my knowledge, is for streaming video (read: porn) and BitTorrent (read: porn). Both of them use TCP for the majority of their bandwidth usage (Some BitTorrent clients support UDP communication with the tracker, but the file is still transferred by TCP).

    Most of the streaming protocols that I dealt with used UDP as their basis. The need to deliver the next frame or audio sample as soon as possible outweighs the need to guarantee that every single frame or byte arrives. We accept the occasional dropout in return for expedited delivery of data.

    Unfortunately, in trying to sustain the data rate needed to mask those occasional dropouts, some protocols neglect to be good stewards of network bandwidth and have no throttle (i.e. congestion control).
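    As a sketch of the missing throttle (the address, rate, and packet size are placeholders, not any real protocol): a UDP sender paced by a token bucket, instead of blasting datagrams as fast as the socket will accept them.

```python
# Sketch: a token-bucket-paced UDP sender.
# Target address, rate, and packet size are placeholders for illustration.
import socket
import time

TARGET = ("127.0.0.1", 9999)   # assumed receiver
RATE_BYTES_PER_SEC = 250_000   # ~2 Mbit/s budget
PACKET_SIZE = 1200

def paced_send(payloads):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tokens, last = 0.0, time.monotonic()
    for payload in payloads:
        while True:
            now = time.monotonic()
            tokens = min(RATE_BYTES_PER_SEC, tokens + (now - last) * RATE_BYTES_PER_SEC)
            last = now
            if tokens >= len(payload):
                tokens -= len(payload)     # spend tokens for this datagram
                break
            time.sleep(0.001)              # wait for the bucket to refill
        sock.sendto(payload, TARGET)

if __name__ == "__main__":
    paced_send(bytes(PACKET_SIZE) for _ in range(500))
```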

  • by ShadowRangerRIT ( 1301549 ) on Friday December 04, 2009 @12:13PM (#30324682)

    I should point out that this sort of thing, while true, is often overstated because of poor local network configuration. When I first set up my new Vista machine a couple years back, I noticed that torrents on it would frequently interfere with internet connectivity on other networked devices in the house. I hadn't had this problem before and was curious as to the cause. I initially tried setting the bandwidth priorities by machine IP and by port, setting the desktop and specifically uTorrent's port to the lowest priority for traffic (similar to what ISPs do when they try to limit by protocol, but more accurate and without an explicit cap), but that actually made the situation worse; the torrents ran slower, and the other machines behaved even worse.

    Turned out the problem was caused by the OS. Vista's TCP settings had QoS disabled, so when the router sent messages saying to slow down, or simply dropped packets, the machine ignored that and resent immediately, swamping the CPU the router uses to filter and prioritize packets. The moment I turned on QoS the problem disappeared. The only network-using device in my house that still has a problem is the VoIP modem, largely because QoS doesn't react quickly enough for the phone's latency requirements, but it's not dropping calls or dropping voice anymore, it's just laggy (and capping the upload in uTorrent fixes it completely; the download doesn't need capping).

  • Re:Small ISP (Score:1, Informative)

    by Anonymous Coward on Friday December 04, 2009 @12:38PM (#30325040)

    If you advertise your service as "unlimited", then doing this means you need to be cockslapped to death. Not only are you treating your users like shit, but you're encouraging them to be leecher scum.

  • Re:I do it too (Score:5, Informative)

    by Colonel Korn ( 1258968 ) on Friday December 04, 2009 @12:42PM (#30325098)

    I also go through my client list and drop those that consume more of my time and resources in favour of the easier clients who ultimately improve my business at a lesser cost. What's wrong with that? My company, my rules. "We reserve the right to refuse service to anyone" -- it's in every restaurant. Why would you expect a business to serve you? Why would you consider it a right?

    Your company's service isn't based on federal subsidies meant to provide internet access to all citizens.

  • Re:Small ISP (Score:4, Informative)

    by Monkeedude1212 ( 1560403 ) on Friday December 04, 2009 @12:48PM (#30325200) Journal

    This upsets the customer. I know it sounds completely back-asswards, but most people would rather be blocked for an hour, told why they were blocked and told to change, and then resume at their normal speeds, than get no warning, have their speeds drop below what they are paying for, and be left alone and angry to the point where they take their business somewhere else.

  • Re:Bandwidth Hog (Score:5, Informative)

    by not_anne ( 203907 ) on Friday December 04, 2009 @01:43PM (#30325972)

    I work for a large ISP, and for residential accounts, we don't particularly care if you're a "bandwidth hog," as long as you're not affecting other customers around you. If we see that one person is causing significant congestion, then that's a problem that we'll address (but only when it happens repeatedly and consistently). Most of the time the customer is either unaware, has an open router, or has a virus/worm/trojan.

  • Re:Small ISP (Score:1, Informative)

    by Anonymous Coward on Friday December 04, 2009 @01:50PM (#30326068)
    You're off by a factor of 10... 2-3 Gb (gigabits) per hour is around 600-900 kbps.
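    For anyone checking the arithmetic (reading the figure as gigabits per hour):

```python
# 2-3 gigabits transferred per hour, expressed as an average bit rate.
for gigabits_per_hour in (2, 3):
    kbps = gigabits_per_hour * 1_000_000 / 3600   # kilobits per second
    print(f"{gigabits_per_hour} Gb/hour is about {kbps:.0f} kbps")
```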
  • Re:Small ISP (Score:4, Informative)

    by Elshar ( 232380 ) <elshar.gmail@com> on Friday December 04, 2009 @02:15PM (#30326434) Journal

    Well, another small ISP here. A couple of things. First off, customers are NOT paying for what's called a CIR (committed information rate). So of course the service is "oversold". Every service-provider industry is "oversold": landlines, cell phones, car mechanics, TV repairmen, satellite TV, even tech support. You think there's one guy in India sitting there waiting for you to call about your Dell? No, of course not. By definition, service providers HAVE to oversell to survive.

    Secondly, for a small ISP it's really not about just one person doing something like this. Yes, one person doing so can have a seriously negative impact on the rest of the users, but it's when you get multiple people doing it that the problem really compounds. One torrent user generally isn't too much of a problem. Get two or three with high connection limits and up/down set to unlimited, and you have a serious problem on your hands.

    Finally, equipment is expensive and commercial connections are expensive. If you don't believe me, go price out some comparable commercial internet connections from Cogent, Level3, or any of the baby bells (Verizon, Qwest, AT&T/Cingular, etc), and you'll see that you'll easily be paying 10x more than what a cable/FiOS user pays for a residential connection. There's a reason for that, and it's back up in the first point.
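    To put numbers on the overselling point (the subscriber count, advertised speed, and uplink size are all invented for illustration): the contention ratio is just total sold capacity divided by actual upstream capacity, and it only hurts when concurrent demand exceeds the uplink.

```python
# Illustrative contention-ratio arithmetic for a small ISP.
SUBSCRIBERS = 500
SOLD_MBPS_EACH = 20          # advertised "up to" speed per subscriber
UPLINK_MBPS = 1000           # actual transit capacity purchased

sold_total = SUBSCRIBERS * SOLD_MBPS_EACH
ratio = sold_total / UPLINK_MBPS
print(f"Sold capacity: {sold_total} Mbps on a {UPLINK_MBPS} Mbps uplink "
      f"-> {ratio:.0f}:1 contention")

# The link only congests when simultaneous demand crosses the uplink size.
for active_fraction in (0.05, 0.10, 0.20):
    demand = sold_total * active_fraction
    status = "congested" if demand > UPLINK_MBPS else "fine"
    print(f"{active_fraction:.0%} of sold capacity in use -> {demand:.0f} Mbps ({status})")
```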

  • by Mezoth ( 555557 ) on Friday December 04, 2009 @02:21PM (#30326550)
    Except they are not "throttling" you, they are just giving you lower priority IF you use over 80% of your bandwidth for 15 minutes AND the whole segment is over 70% utilization. This means that grandma can still get her mail when you are seeding the new release of Ubuntu, but you "lose" bandwidth if you actually hit 100% congestion.
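    The rule as described is simple enough to state as a predicate (the 80%/70%/15-minute thresholds come from the comment above; how utilization is actually measured is an assumption):

```python
# Sketch of the deprioritization rule described above.
def should_deprioritize(user_utilization_15min, segment_utilization):
    """Return True if a user should get lower (not zero) priority.

    user_utilization_15min: fraction of the user's own provisioned rate
        used over the last 15 minutes (0.0-1.0).
    segment_utilization: fraction of the shared segment currently in use.
    """
    return user_utilization_15min > 0.80 and segment_utilization > 0.70

print(should_deprioritize(0.95, 0.50))  # heavy user, idle segment -> False
print(should_deprioritize(0.95, 0.85))  # heavy user, busy segment -> True
```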
  • by Anonymous Coward on Friday December 04, 2009 @02:21PM (#30326554)

    Well, you're out of date. TCP is very far from perfect.

    TCP has proved itself entirely unsuited to heavy P2P usage like torrenting, and has repeatedly demonstrated its inadequacy for that kind of communication (not least because of terrible consumer routers and issues with firewalls/NAT, which are not entirely TCP's fault, but which you cannot work around while still using TCP).

    I can tell you as a P2P developer that TCP lets an undetected error through roughly once every few million bits, which is to say, not infrequently at all. (Why did you think we bothered with hash checking? Because it actually needs it. The packet checksum is only 16 bits, remember!)

    As for the download of a 4MB block, with current proposals you'll likely catch an error after 4KB in fact, because of hash trees. (Catching an error as late as you say makes a DoS/poisoning attack more feasible, and wastes bandwidth.) And you shouldn't be running blocks as big as 4MB anyway, it makes the first-block and last-block issues of torrent far more pronounced.
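    The hash checking in question is BitTorrent's per-piece SHA-1 verification; here is a minimal sketch (the piece size and data are stand-ins) of how a corrupted piece gets caught regardless of what the TCP checksum missed:

```python
# Sketch: verifying a downloaded piece against its expected SHA-1 hash,
# the way BitTorrent clients do before accepting data from a peer.
import hashlib
import os

PIECE_SIZE = 256 * 1024                           # assumed piece size

original = os.urandom(PIECE_SIZE)                 # stand-in for the real piece
expected_hash = hashlib.sha1(original).digest()   # comes from the .torrent file

def piece_ok(data, expected):
    return hashlib.sha1(data).digest() == expected

corrupted = bytearray(original)
corrupted[1234] ^= 0xFF                           # flip one byte "in transit"

print(piece_ok(original, expected_hash))          # True
print(piece_ok(bytes(corrupted), expected_hash))  # False: re-download the piece
```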

    Torrent has a far more reliable and suitable transport protocol, which is indeed based on UDP instead of TCP, and you'll find that a lot of the clients in the wild already use it:
    http://en.wikipedia.org/wiki/Micro_Transport_Protocol [wikipedia.org]

    You'll find the congestion control in that far more advanced than anything TCP has, and indeed, far more gentle on the other protocols on the wire. TCP relies on latency and dropped packets to detect where the limit should be, and floats around that limit in a steadily climbing sawtooth which drops precipitously on a single dropped packet. ACKs must be prioritised or saturated uploads have a terrible effect on download speeds. However, consumer connections almost always have easily-determinable asymmetric speed limits with much more download than upload, and much more defined behaviour approaching, near, at, and beyond those limits. TCP has no idea about this and saturates the connection much more easily than a UDP-based protocol!
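    The sawtooth being described is TCP's additive-increase/multiplicative-decrease behaviour; a toy simulation shows the shape (the capacity and starting window are arbitrary):

```python
# Toy AIMD simulation: the congestion window climbs by one segment per RTT
# and is halved whenever a drop is detected.
LINK_CAPACITY = 40         # segments per RTT the path can carry (assumed)
cwnd = 20.0
history = []

for rtt in range(60):
    history.append(cwnd)
    if cwnd > LINK_CAPACITY:   # queue overflows -> a drop is detected
        cwnd = cwnd / 2        # multiplicative decrease
    else:
        cwnd += 1              # additive increase

print(" ".join(f"{int(w):2d}" for w in history))
```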

    But as a P2P developer you can do a lot better. uTP is surprisingly polite, but delivers very high performance. There are of course other protocols, such as SCTP, which also fix some of the shortcomings of TCP, although I understand that SCTP may not be available or usable on Windows, which of course presents a problem for anyone wanting to roll it out to a mass market.

    I would concur that "bandwidth hogs" are perhaps not as big a problem as you might think, and that in reality the ISPs are simply unwilling to provision ahead of the natural growth in bandwidth. Over time, users use more bandwidth, and more bandwidth has to be plumbed in to meet and exceed that demand, with enough headroom that packets aren't lost and latency/jitter stays small. An ISP that shirks that demand simply doesn't have enough bandwidth, and in many cases underprovisioning is rife, to keep profit margins fat and costs down. Even though bulk bandwidth rates at core datacentres have fallen tremendously, wholesalers are often massively overcharged for actually delivering that bandwidth to exchanges. Many ISPs have therefore resorted to packet shaping, essentially selecting the most active users and throttling them down, in exactly the way this article describes.

    A reasonable level of packet prioritisation is perfectly alright; for optimal performance, latency-sensitive protocols should have priority over web browsing, which should have priority over bulk protocols which aren't latency-sensitive; but because ISPs started using the same tools to throttle protocols down to unusability or block them, workarounds were implemented, and now the ISP itself is regarded by any sensible P2P developer as part of the threat model. Rampant abuse of deep-packet inspection by Sandvine resulted in protocol obfuscation and mass use of encryption. Further attacks on P2P

  • by MoralHazard ( 447833 ) on Friday December 04, 2009 @02:23PM (#30326596)

    comcast = cable = coax style networking in modern form, no?

    that is, it's like going back to pre-hub style ethernet, where every computer is listening for the next millisecond of no signal on the coax so that it can hopefully push its next packet on there. There is a reason why this was quickly replaced with switches when said tech became available at acceptable prices...

    No, No NO! For the love of God, NO! You're completely wrong, and you have no idea what you're talking about. There is no such thing as "coax style networking", and there never has been. And the network behavior of cable broadband connectivity has nothing whatsoever to do with the fact that some cable connections use coaxial wiring.

    You are probably thinking of the old 10BASE2 Ethernet standard (http://en.wikipedia.org/wiki/10BASE2), which used coaxial cable with BNC connectors and T-connectors on a shared cable bus. Cable broadband uses the DOCSIS protocol (http://en.wikipedia.org/wiki/DOCSIS) over coaxial cable with F connectors. The cable is about the only similarity between the two technologies; everything else is pretty different.

    10BASE2, like other shared-medium Ethernet technologies, is a PURE collision-detection (CSMA/CD) protocol. The hosts share the cable segment as a broadcast medium, so a transmission by one host will be "heard" by all the rest. Each host makes its own decision about when it wants to transmit, independent of the rest, and transmits when it senses that the cable is "silent". If multiple hosts start transmitting at almost exactly the same time, they will all shortly detect the "collision". They all cease transmitting, and each picks a short random-length interval to wait before trying to transmit again, unless another host that picked a shorter timeout window starts transmitting first. Statistically, it's unlikely that two hosts will pick the same random wait timeout, so most collisions resolve quickly unless the network is particularly congested.
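    A toy simulation of that collision-and-backoff behaviour (the hosts, slot model, and backoff ranges are simplified far beyond real 10BASE2, purely to show the random backoff idea):

```python
# Toy CSMA/CD-style backoff: colliding hosts pick random waits and retry.
import random

random.seed(1)
hosts = ["A", "B", "C"]
waiting = {h: 0 for h in hosts}   # slots each host still waits before sending

for slot in range(20):
    ready = [h for h in hosts if waiting[h] == 0]
    if len(ready) == 1:
        print(f"slot {slot:2d}: {ready[0]} transmits")
        waiting[ready[0]] = random.randint(1, 4)    # pause before its next frame
    elif len(ready) > 1:
        print(f"slot {slot:2d}: collision between {', '.join(ready)}; random backoff")
        for h in ready:
            waiting[h] = random.randint(1, 8)       # each picks its own backoff
    for h in hosts:
        waiting[h] = max(0, waiting[h] - 1)
```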

    DOCSIS uses a mixture of time-division, code-division, and collision-based contention behaviors (depending on the exact revision), but the impact of contention is really limited. From a bandwidth-scheduling and congestion standpoint it's nothing like 10BASE2, because the TDMA and CDMA elements of the protocol ensure each node sees a "fair share" of throughput. Plus, modern DOCSIS supports quality-of-service tags, which (if properly implemented) are pretty much a brick wall against congestion issues.

    Mostly it seems to me that the ISPs that cry loudest are the ones that geared up when the net was mostly static web pages and FTP file transfers, able to handle the odd spike of traffic when someone clicked a link. But now the gear they have sitting around, which they were banking on not having to replace for the next decade or so barring hardware failure, is being swamped by continual "spikes". And the only way they can fix that at their end is by replacing the gear ahead of schedule, playing havoc with their earnings estimates. And rather than doing that, they break out the whip, trying to force the "cattle" back into the "pen".

    I don't think you have any kind of real grasp on the technical implications of terms like "swamped" or "spike" in this context. You certainly understand the metaphor, and I bet you could analogize extensively comparing electrical, water, or highway systems to the Internet, but you don't seem to know too much about actual networking beyond setting up your home LAN.
