The Internet Your Rights Online

Hunting the Mythical "Bandwidth Hog" 497

eldavojohn writes "Benoit Felten, an analyst in Paris, has heard enough of the elusive creature known as the bandwidth hog. Like its cousin the Bogeyman, the 'bandwidth hog' is a tale that ISPs tell their frightened users to keep them in check, or to justify cutting off whomever they want to cut off from service. And Felten is calling them out, because he's certain that bandwidth hogs don't exist. What's actually happening is that ISPs select the top 5% of users by volume of bits moved on their wires and revoke their service, even if those users aren't negatively impacting anyone else. In other words, they are targeting 'heavy users' simply for being heavy users. Felten has thrown down the gauntlet, asking for a standardized data set from any telco on which he can do statistical analysis to find any evidence of a single outlier ruining the experience for everyone else. It's unlikely any telco will take him up on that offer, but his point still stands." Felten's challenge is paired with a more technical look at how networks operate, which claims that TCP/IP by its design eliminates the possibility of hogging bandwidth. But Wes Felter corrects that misimpression in a post to a network-neutrality mailing list.
This discussion has been archived. No new comments can be posted.

  • by afidel ( 530433 ) on Friday December 04, 2009 @11:51AM (#30324356)
    They are generally using UDP, so the original assertion (that heavy users degrade other users' experience) should hold, since UDP breaks down long before TCP does. Though I do agree that if Comcast's system works as described, it's probably the best solution for a network that can't implement QoS.
  • Why? (Score:3, Interesting)

    by Maximum Prophet ( 716608 ) on Friday December 04, 2009 @11:53AM (#30324392)
    Why would any business cancel paying customers that don't negatively impact operations?
  • Re:Why? (Score:3, Interesting)

    by BESTouff ( 531293 ) on Friday December 04, 2009 @11:57AM (#30324458)
    Because they're probably heavy downloaders of "illegal" music and movies, so they inconvenience their friends the media moguls?
  • by ShadowRangerRIT ( 1301549 ) on Friday December 04, 2009 @12:00PM (#30324482)
    Why do you think they are using UDP? Most of the bandwidth being used at this point, to my knowledge, is for streaming video (read: porn) and BitTorrent (read: porn). Both of them use TCP for the majority of their bandwidth usage (some BitTorrent clients support UDP communication with the tracker, but the file itself is still transferred over TCP). Getting built-in error-checking, congestion control, and streaming functionality from TCP makes much more sense than a UDP-based protocol where you have to reimplement all of that yourself. I'm sure a few multiplayer games use UDP for latency reasons, but the data transferred for a multiplayer game is negligible and frequently loss-tolerant (if you missed where a player was one second ago, it doesn't matter once you get the new update).
  • by BobMcD ( 601576 ) on Friday December 04, 2009 @12:00PM (#30324488)

    I have personally witnessed hogging of bandwidth and, I'd wager, so have you. This term describes when an individual user uses more bandwidth resources than they were assumed to need.

    Example: My brother moves in with two of his friends. His latency is horrible. When his roommate is home, the internet is fine; when the roommate is away at work, it becomes unusable. He calls me to look at the situation, and we determine that one of his roomies is a heavy torrent user. Turns out the roommate was ramping up torrents of anime shows he wanted to watch, letting them run while he was gone at work. He was aware of the impact on his own internet experience, so he ramped them back down whenever he wanted to use the connection himself.

    If that's not hogging bandwidth, I'm not too sure what is.

    If this doesn't scale, logically, up to the network as a whole, I'm not sure why not.

    Now, to be completely clear - I feel overselling bandwidth is wrong. I feel the proper response to issues like this on the larger network is guaranteed access to the full amount of bandwidth sold at all times. On the local scale, these men should have brought in another source of internet. On the larger scale, the telco should do the same.

    Denying that the issue can happen, however, is stupid to the point of sabotage.

    An end-user can download all his access line will sustain when the network is comparatively empty, but as soon as it fills up from other users' traffic, his own download (or upload) rate will diminish until it's no bigger than what anyone else gets.

    So, if I understand this statement: if a user hogs all the bandwidth until no one gets any connectivity, it's totally fair, because everyone ends up equally slow? One user can bottleneck the pipes, but since their stuff isn't fast anymore either, we're all good?

    How does an argument of this kind help anyone but a bandwidth hog?

  • Nice theory... (Score:3, Interesting)

    by thePowerOfGrayskull ( 905905 ) <marc...paradise@@@gmail...com> on Friday December 04, 2009 @12:01PM (#30324526) Homepage Journal
    Where are the facts again? Oh, right, he tells us!

    The fact is that what most telcos call hogs are simply people who overall and on average download more than others. Blaming them for network congestion is actually an admission that telcos are uncomfortable with the 'all you can eat' broadband schemes that they themselves introduced on the market to get people to subscribe. In other words, the marketing push to get people to subscribe to broadband worked, but now the telcos see a missed opportunity at price discrimination...

    It's nice of him to declare that without evidence. Now I know it to be true.

    I'm not saying he's wrong... quite possibly he's right, but seriously - how does someone's blog entry that doesn't provide one single data point to back up the claim make it to the front page?

  • Small ISP (Score:5, Interesting)

    by Bios_Hakr ( 68586 ) <xptical@g3.14mail.com minus pi> on Friday December 04, 2009 @12:05PM (#30324566)

    Lately I've had to deal with this problem. Our solution was rather simple. We use NTOP on an Ubuntu box at the internal switch. We replicate all the traffic coming into that switch to a port that the NTOP box listens on.

    It may not be a perfect solution, but it can easily let us know who the top talkers are and give us a historical look at what they are doing.

    From that report, we look for anyone uploading more than they download. We also look for people who upload/download a consistent amount every hour. If you see someone doing 80 GB in traffic each day with 60 GB uploaded, you probably have a file sharer. When you see the 24-hour reports for the user and see 2-3 GB every hour on upload, you *know* you have a file sharer.

    After that, it's as simple as going to the DHCP server and locking their MAC address to an IP. Then, we redirect all of that traffic (extended access lists are wonderful) to another Ubuntu box. That box serves a web page explaining what we saw, why the user is banned, and the steps they need to take to get back online.

    Most users are very apologetic. We help them to set up upload/download limits on their bittorrent client and then we put them back online.
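    The ratio-and-steadiness heuristic described above can be sketched in a few lines. To be clear, this is a hypothetical reconstruction, not the poster's actual tooling; the data shape and thresholds are illustrative assumptions:

```python
# Hypothetical sketch of the heavy-uploader heuristic described above.
# Input: per-host hourly byte counters, e.g. exported from an NTOP box.
# The thresholds (ratio test, steadiness test) are assumptions.

def flag_file_sharers(hourly_stats, min_upload_gb=2.0, min_hours=20):
    """hourly_stats: {host: [(up_bytes, down_bytes), ...]} over 24 hours.
    Flags hosts that upload more than they download AND sustain a
    large upload volume nearly every hour of the day."""
    GB = 1024 ** 3
    flagged = []
    for host, hours in hourly_stats.items():
        total_up = sum(up for up, _ in hours)
        total_down = sum(down for _, down in hours)
        steady_hours = sum(1 for up, _ in hours if up >= min_upload_gb * GB)
        if total_up > total_down and steady_hours >= min_hours:
            flagged.append(host)
    return flagged
```

    With counters like these, a host uploading a steady 2.5 GB every hour gets flagged, while a host that mostly downloads in bursts does not.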

  • by Qzukk ( 229616 ) on Friday December 04, 2009 @12:11PM (#30324658) Journal

    You think you're being sarcastic, but has anyone ever seen a network go down in flames due to slashdotting, or has it always been the server?

  • by 140Mandak262Jamuna ( 970587 ) on Friday December 04, 2009 @12:18PM (#30324760) Journal
    These ISPs sold what they ain't got. They sold more bandwidth than they can sustain, and when someone actually takes delivery of what was promised, the telcos bellyache: "we never thought you'd ask for all we sold you! whachamagontodoo?". Eventually they will introduce billing by the gigabyte and by pipe size, like the electric utilities charge you by the kWh and limit the amperage of your connection.

    Then they will introduce the "friends and family" of ISP, some downloads and some sites will be "unmetered", and the sources will be the friends and family of the ISP. You know? the "partners" who provide "new and exciting" products and content to their "valued customers". Net neutrality will go down the tubes. ha ha.

    Google needs the net to be open and neutral in order to freely access and index content. When the dot-com bubble burst, Google bought tons and tons of bandwidth: the dark fibers, the unlit strands of fiber-optic lines. If the net fragments, I expect Google to step in, light up those strands, and go head to head with the ISPs by providing metro-level WiFi. Since it's not a government project, it couldn't be sabotaged the way Verizon and AT&T torpedoed municipal high-speed networks.

  • by BobMcD ( 601576 ) on Friday December 04, 2009 @12:20PM (#30324776)

    Are you advocating a system where the ISP has mandate power over the OS configuration?

  • I do it too (Score:4, Interesting)

    by holophrastic ( 221104 ) on Friday December 04, 2009 @12:22PM (#30324810)

    I also go through my client list and drop those that consume more of my time and resources in favour of the easier clients who ultimately improve my business at a lesser cost. What's wrong with that? My company, my rules. "We reserve the right to refuse service to anyone" -- it's in every restaurant. Why would you expect a business to serve you? Why would you consider it a right?

  • by Zen-Mind ( 699854 ) on Friday December 04, 2009 @12:31PM (#30324926)
    I think you pointed out the real problem. The telcos want you to pay for the 70 Mbps line, but they don't want you to use it. If you cannot support a user doing 70 Mbps, don't sell 70 Mbps. I know that building an infrastructure on the assumption that all users will use maximum bandwidth would be costly, but then adapt your marketing practices: sell a lower sustained speed and offer an easy-to-use "speed on demand" service, so that when you want or need to download the new 8 GB PS3 game, you can play it before next week. Otherwise you can always have a maximum sustained bandwidth based on high/low periods of the day, but this needs to be made clear.
  • by betterunixthanunix ( 980855 ) on Friday December 04, 2009 @12:35PM (#30324994)
    "We aren't getting the advertised bandwidth! Waaah!"

    Yes, actually, false advertising is a problem. If an ISP tells me I can make unlimited use of my 10Mbps connection, I expect to be able to make unlimited use of it -- including sustaining 10Mbps or something reasonably close all day and all night. If such a level of service is impossible for an ISP to provide and remain profitable, why the hell are they advertising these plans?

    If they are lying to consumers about the level of service they can provide, they should cover themselves by increasing the network capacity, or they should admit they lied, reduce the bandwidth they provide to users, and hope that nobody sues them over it. Kicking people off the network for trying to use what they paid for is not an appropriate response to overselling, and if the FCC had any spine they would kill the practice before it gets out of hand.
  • Re:Small ISP (Score:5, Interesting)

    by imunfair ( 877689 ) on Friday December 04, 2009 @12:37PM (#30325020) Homepage
    Is there really a problem with allowing your users to actually use their connection? By my rough calculations, 2-3 GB/hr is only about 600-850 KB/s of upload. I really don't understand why you can't handle that unless you're massively overselling. I would be a lot more sympathetic if we were talking about users maxing out fiber connections or something higher speed.
  • Network saturation (Score:2, Interesting)

    by davidwr ( 791652 ) on Friday December 04, 2009 @01:04PM (#30325436) Homepage Journal

    There have been times when telephone networks, wireless networks, and utilities have been knocked offline due to too much demand. Most utilities have "turn off" agreements with heavy industrial users who can tolerate a shutdown in exchange for a concession, such as a lower rate, to ensure consumers still have service.

    Even that doesn't always work. Witness the jammed cell-phone lines after a major event, or, in days of yore, the jammed long-distance phone lines on holidays.

    Cutting off or throttling heavy users during times when the network is saturated is sensible. A more sensible route is to charge by the GB or TB, as long as the charge is fair and reasonable.

    A fair and reasonable charge would be "X" for your initial allowance plus Y for every block of bytes after that point. X would cover fixed costs such as the cost of billing. On a per-byte basis, X would be higher than Y. A fair and reasonable charge could include time-of-week-sensitive, bitrate-sensitive, and quality-of-service-sensitive billing. For example, there could be a 20% surcharge during afternoon and evening hours, a discount when you voluntarily throttle to a very low bitrate, or a discount when you accept a lower quality of service, such as for bulk file transfers.

    Involuntary speed reductions or QOS reductions could be imposed on either heavy users or those users who volunteered for them in exchange for a discount when the network is saturated. These should not last more than the duration of the saturation, and the effect should be spread around in a fair way. Of course, since customers are paying by the bit and paying higher for higher-rate/higher-quality transmission, every time a service provider did this they would be hurting their revenue stream, at least a little.
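    As a rough illustration, the X-plus-Y scheme with a peak-hour surcharge could be computed like this (the 20% surcharge figure comes from the comment above; the function, parameter names, and dollar amounts are hypothetical):

```python
def monthly_bill(base_fee, included_gb, rate_per_gb, used_gb,
                 peak_fraction=0.0, peak_surcharge=0.20):
    """base_fee is the fixed charge (X) covering an initial allowance;
    rate_per_gb (Y) applies to each GB beyond it. The fraction of the
    overage transferred at peak hours carries a surcharge, per the
    time-of-week-sensitive billing sketched above."""
    overage_gb = max(0.0, used_gb - included_gb)
    peak_gb = overage_gb * peak_fraction
    off_peak_gb = overage_gb - peak_gb
    return (base_fee
            + off_peak_gb * rate_per_gb
            + peak_gb * rate_per_gb * (1.0 + peak_surcharge))
```

    For example, with a hypothetical $20 base fee including 50 GB at $0.10/GB thereafter, a 150 GB month with half the overage at peak hours bills at $20 + $5 + $6 = $31.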

  • by war4peace ( 1628283 ) on Friday December 04, 2009 @01:14PM (#30325588)
    (disclaimer: I am living in Eastern Europe, so things may look very differently from US, but then again, maybe it's for the better for people to get a glimpse of how things are done somewhere else on the globe)

    Well, as usual, the truth is somewhere in the middle.
    I have 2 ISPs (2 different providers). One is CAT5-based (with optical fiber going out of the area) and the other uses CaTV (together with those infamous CaTV modems I hate). To keep things short, I'll call the CAT5-based one ISP1 and the other ISP2.
    ISP1 offers max metropolitan bandwidth 24/7. My city has roughly 300K home Internet subscribers, not counting businesses. I can download from any of them at a 100 Mbps max theoretical transfer rate. When using torrent-based downloads, every single one caps at 95-97% of the theoretical maximum, which is impressive to say the least. Furthermore, I can browse at the same time without interruptions or latency. I was playing games such as EVE Online and WoW while downloading literally tens of gigabytes of data at max speed, and my latency as shown in WoW was about 150-250 ms, which is excellent in my view.
    I have never ever had any warnings from my ISP1 during last 3 years, mainly because they do not count metropolitan data transfers (I asked). They also told me why. All ISPs which offer metropolitan high speed access have an agreement to let those transfers flow freely (mutual advantage) and not count them against customers. It seems the logical thing to do. It's pretty much like throwing a cable between me and my neighbour and turning the pipe on. It's a self-managed thing, and if it works like shit, then it's our fault, not the ISP's.
    ISP2 offers CaTV-based Internet access. I have my reasons to loathe cable modems: they've proved unreliable, slower than other types of Internet access, and prone to desync, and I've had countless problems with this sort of implementation. Anyway, ISP2 downloads cap at 2 MB/s when downloading from either metropolitan or external sources. They brag about offering 50 Mbps transfer rates from metropolitan sources, but this doesn't seem to be true. I keep ISP2 for backup purposes, so it goes largely unused (I think I used their service for maybe 10 days during the last year or so).
    Maybe ISP1 or ISP2 do have a policy of capping heavy downloaders who access data from outside the metropolitan network area, but I've never heard of any case where they applied it. So either the policy exists but is not enforced, or it doesn't exist at all. Oh, I forgot to mention that ISP2 has nation-wide coverage and ISP1 is just city-wide.
    So I was wondering what makes US-based (and probably other) ISPs come to such conclusions and apply such policies. I think it's because their network implementation plainly sucks. Maybe they rely on third-party networks to get data across areas where they have no coverage, and that costs them. That makes sense for a company looking to maximize profits (I don't like this approach, though). Don't they have a minimum guaranteed bandwidth? We have it here, and if someone complains that he can only download at 2x the minimum guaranteed speed, the ISP just laughs in his face, because that's twice what they guarantee. And to that I agree :)
    Let's assume I use videoconferencing from home. A lot. I know people who participate in high-bandwidth audio+video conferences all day, from home. So they eat up a lot of bandwidth for business purposes. They would be pissed to have a cap limit enforced on them :) So what's the ISP's take on such cases?
    One more thing: if this policy is written into your contract with them, then you're legally screwed. If not, they're legally screwed. It all comes down to this in the end.
    As a conclusion, I don't think "Bandwidth hogs" exist. They're mythical creatures indeed. But what is real is piss-poor network implementation, especially on WAN.
  • by TheLink ( 130905 ) on Friday December 04, 2009 @01:50PM (#30326060) Journal
    One problem is by default many network devices/OSes do bandwidth distribution on a per _connection_ basis not on a per IP basis. So if there are only two users and one user has 1000 active connections and the other has just one active connection the first user will get about 1000 times more bandwidth than the second user.

    P2P clients typically have very, very many connections open, whereas other clients might not.

    A much fairer way would be to share bandwidth amongst users on a per IP basis. That means if two users are active they get about 50% each, even if one user has 100 P2P connections and the other user has only one measly http connection.

    Then, within each customer's "per IP" queue, to improve the customer's experience you could prioritize latency- or loss-sensitive stuff like DNS, TCP ACKs, typical game connections, ssh, telnet and so on, over all the other less sensitive/important stuff.

    Of course if you have oversubscribed too much, you will have way too many active users for your available bandwidth. A fair distribution of "not enough" will still be not enough.

    If you have two people and you give each a fair slice of one banana, they each get half a banana. Maybe both are satisfied.
    If you have 1000 people and you give each a fair slice of one banana, they each get 1/1000th of a banana. Not many are going to be satisfied ;).
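    The difference between per-connection and per-user sharing is easy to see in a toy allocation model (a sketch with made-up numbers, ignoring real TCP dynamics):

```python
def per_connection_share(link_bw, conns_per_user):
    """Each connection gets an equal slice, so a user's total is
    proportional to how many connections they hold open: this is the
    default behavior the comment above describes."""
    total_conns = sum(conns_per_user.values())
    return {user: link_bw * n / total_conns
            for user, n in conns_per_user.items()}

def per_user_share(link_bw, conns_per_user):
    """Each active user gets an equal slice regardless of how many
    connections they have open: the fairer per-IP scheme proposed."""
    active = [user for user, n in conns_per_user.items() if n > 0]
    return {user: link_bw / len(active) for user in active}
```

    On a 100 Mbps link, a user holding 1000 P2P connections against a user with one HTTP connection takes about 99.9% of the link under per-connection sharing, but exactly half under per-user sharing.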

    And that's where we come to the other problem.

    The problem with P2P is many customers will often leave their P2P clients on 24/7, EVEN when some of them don't really care very much about how fast it goes (some do, but some don't). To revisit the banana analogy, what you have here is 1000 people, and 1000 of them ask for a slice of the banana, EVEN though some of them don't really care - they'll only really feel like having a slice next week, when they're back from their holiday!

    So how do you figure out who cares and who doesn't care?

    Right now what many ISPs do is have quota limits - they limit how much data can be transferred in total. When the quota runs out "stuff happens" (connections go slow, users get charged more etc). So the users have to manage it.

    BUT this is PRIMITIVE, because if you can figure out when a user doesn't care about the speed etc, technology allows you to easily prioritize other traffic over that user's "who cares" traffic.

    So what's a better way of figuring it out?

    My proposal is to give the customers a "dialer" which allows users to "log on" to "priority Internet" (and then only something starts counting the bytes ;) ), BUT even when they "log out" they _still_ get always-on internet access except it's just on a lower priority (but NO byte quota!). A customer might be restricted to say 10GBs at "priority" a month.

    The advantage of this method is:
    1) There is no WASTED capacity - almost all the available bandwidth can be used all the time, without affecting the people who NEED "priority" internet access (and still have unused quota).
    2) It allows an ISP to better figure out how much capacity to actually buy.
    3) If there is insufficient capacity for "priority Internet", the ISP can actually inform the user and not put the user on "priority" (where the quota is counted). While the user might not be happy, this is much fairer than getting crappy access while your quota is still being deducted.

    Perhaps this system is not needed and will never be needed in countries that don't seem to have big problems offering 100Mbps internet access to lots of people.

    But it might still be useful in countries where the internet access and telcos are poorly regulated/managed. For example - you run a small ISP in one of those crappy countries and so you pay more for bandwidth from your providers- this system could allow you to make better use of your bandwidth and to be a more efficient competitor. And maybe even give your customers better internet service at the same time.

    Yes the ISP could always buy enough bandwidth so that _everyone_ can get the offered amount even though not everyone really cares all the time (believe me this is true). But that could mean the ISP's internet access packages being much more expensive than they could be.
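    The "dialer" proposal above can be sketched as a two-tier account model. The 10 GB priority quota is the figure from the comment; the class and method names are invented purely for illustration:

```python
class TwoTierAccount:
    """Sketch of the priority/background scheme proposed above: bytes
    transferred while "logged on" count against a monthly priority
    quota; everything else rides an always-on, unmetered background
    tier at lower priority. This is a hypothetical model, not a
    real ISP API."""

    def __init__(self, priority_quota_bytes=10 * 1024 ** 3):
        self.quota = priority_quota_bytes
        self.priority_mode = False

    def log_on(self):
        # The ISP can refuse priority when the quota is exhausted
        # (or, per point 3 above, when capacity is insufficient).
        self.priority_mode = self.quota > 0
        return self.priority_mode

    def log_off(self):
        self.priority_mode = False

    def transfer(self, nbytes):
        """Returns the traffic class the bytes were carried at."""
        if self.priority_mode and self.quota >= nbytes:
            self.quota -= nbytes
            return "priority"
        return "background"  # always-on, no byte quota
```

    Note that traffic always flows: exhausting the quota or logging off only demotes the user to the background tier, which is the key difference from a hard cap.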
  • Too cheap to meter (Score:3, Interesting)

    by Animats ( 122034 ) on Friday December 04, 2009 @02:17PM (#30326466) Homepage

    X would cover fixed costs such as the cost of billing.

    Billing is a very large cost. In the telco world, the cost of billing passed the cost of transmission about two decades ago. That's even more true for Internet transmission at the retail level. With proposed "economic" solutions, you have to factor in the costs of billing. Those costs apply both to the provider and the customer. If customers have to meter to control their expenses, that's a cost to the customer in attention, and drives business to competitors that require less attention.

    Handling one phone call about a billing complaint eats up months of profits from selling the service. Complex billing means big call centers.

  • One problem is by default many network devices/OSes do bandwidth distribution on a per _connection_ basis not on a per IP basis.
    Standard IP networks do bandwidth distribution on the basis of the clients backing down (or, if nobody backs down, on the basis of who can get packets in when the queue isn't full). If the systems all use equally aggressive implementations of TCP, then each TCP connection will get roughly equal bandwidth. OTOH, a custom UDP-based protocol can easily take far more or far less than its fair share, depending on how aggressive it is.

    A much fairer way would be to share bandwidth amongst users on a per IP basis. That means if two users are active they get about 50% each, even if one user has 100 P2P connections and the other user has only one measly http connection.

    It's a nice idea, but there are a couple of issues:
    1: it takes more computing resources to implement a source-based priority queue than a simple FIFO queue.
    2: to be of any use, such prioritisation needs to happen at the pinch point (that is, at the place where a queue actually builds up); unless you deliberately pinch off bandwidth or overbuild large parts of your network, predicting the pinch point's location may not be easy.
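    Issue 1 (the extra bookkeeping a source-based queue needs versus a FIFO) can be illustrated with a toy round-robin drain. This is a simplification of real schedulers such as deficit round robin, operating on hypothetical packet data:

```python
from collections import deque

def fifo_drain(packets):
    """Simple FIFO: packets leave in arrival order, so a source that
    queues many packets dominates the output."""
    return [src for src, _ in packets]

def round_robin_drain(packets):
    """Per-source queues drained one packet per source per round,
    interleaving sources fairly at the pinch point. Note the extra
    state (one queue per source) the FIFO avoids."""
    queues, order = {}, []
    for src, pkt in packets:
        if src not in queues:
            queues[src] = deque()
            order.append(src)
        queues[src].append(pkt)
    out = []
    while any(queues[s] for s in order):
        for s in order:
            if queues[s]:
                queues[s].popleft()
                out.append(s)
    return out
```

    With three packets from source A queued ahead of one from B, the FIFO emits A, A, A, B, while the round-robin drain interleaves them as A, B, A, A.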

  • by kimvette ( 919543 ) on Friday December 04, 2009 @03:18PM (#30327374) Homepage Journal

    So I was wondering what makes US-based (and probably other) ISPs come to such a conclusion and apply such policies.

    Modern American society has a sense of entitlement, and that applies even to the government-granted monopolies. ISPs were given hundreds of BILLIONS of dollars to push broadband out to every address in the US. They didn't do it, never got spanked for it, and abuse the customers and continually charge more using "service enhancements" and "upgrades" as their justification, when in actuality, the upgrades were paid for with our tax dollars, and they are REDUCING service by enforcing undisclosed caps.

    Bandwidth hogs do not exist. Well, some do - ones who "uncap" their modems and get MORE bandwidth than they are paying for. Hacking a modem you rent, not own, is a problem and those users should be punted. However, simply using the bandwidth that was advertised and you purchased is NOT hogging bandwidth by ANY stretch of the imagination. Placing limits on users who simply use the product as advertised is unethical, immoral, and actually illegal because then those ISPs are engaging in a bait-and-switch - or more succinctly, FRAUD.

  • by jmorris42 ( 1458 ) * <jmorris&beau,org> on Friday December 04, 2009 @04:32PM (#30328398)

    > I quickly found a new ISP and they lost my business.

    You say that like you think you were really 'showing them' by taking your business elsewhere. They were trying to get rid of you and when you left their attitude was more of "I pity the fool that signs that a**hole!"

    > Voting with my dollars worked for me back in the 90s, but now, there are just less viable players
    > in the field, and none of them seem much better than the others.

    Yes, there used to be a few 'hacker friendly' ISPs who were usually run by people like thee and me and utterly clueless about ISP economics. They went out of business. It isn't quite as bad in the broadband world but back in the dialup era it was just insane to keep a nethog once we got past the period when those heavy users were also helping bring in new customers. Do the math.

    Customer 1 is a nethog. They nail up a connection pretty much 24/7/365 and push as many bits through it as they can. They pay the regular price (in the dialup days, typically $19.95/mo), and you pretty much had to dedicate a modem and terminal-server port to the idiot. The ISP's cost: one modem and port, the telco charge for one business line, plus about 4% of a T1 for upstream. Hint: the ISP is paying more than $20/mo for the inbound phone line unless the ISP is AT&T.

    Customer 2 is a normal. They connect for four or five hours per day, perhaps six or seven the first six months. You can sign up four to six of these per port. And since most of their activity is bursty the impact on your upstream is minimal.

    Customer 3 is a light user. After the shiny wore off the Internet they typically do email and hit a few websites. One port will support ten or more of these people.

    It should be obvious that you should want to lose Customer 1 ASAP since they cost you more money than they pay in. If you have a good mix of the second and third type you can get six to eight customers per line and not have too much fussing. AOL used to run ten to twenty customers per dialup port.
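    The per-port arithmetic above can be made concrete. The $19.95/mo price and the customer-per-port ratios come from the comment; the $40/mo all-in port cost is an illustrative assumption:

```python
def monthly_margin(price_per_user, users_per_port, port_cost):
    """Revenue minus cost for one dialup port, given how many
    customers share it. Figures are illustrative, not real pricing."""
    return price_per_user * users_per_port - port_cost

# Assumed port cost: $40/mo (business line + amortized modem/port).
# A 24/7 nethog monopolizes the whole port:
hog_margin = monthly_margin(19.95, 1, 40.0)
# Five "normal" users sharing the same port:
normal_margin = monthly_margin(19.95, 5, 40.0)
```

    Under these assumed numbers, the nethog loses the ISP about $20/mo while five normal users on the same port clear nearly $60/mo, which is the "do the math" point the comment makes.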

    In the broadband era, the upstream is the biggest contended resource, and depending on the market it can be very expensive. Again, the P2P user is the one you want to gift to your competition, if you can do it in a way that won't lose his friends and family as customers or generate undue media attention.

    > The problem really is ignorance. Too many people just don't understand the service that they are
    > buying well enough to know when they are being offered less for their money than advertised...

    Agreed, but it is you who are ignorant. I admin at a public library. We have a 6 Mbps link delivered as four T1 circuits, and we also have a 6 Mbps business-grade DSL line that I use to push most of the http traffic through to ease the load on the main link. The DSL link is pretty much what us normal people buy and is just a little over $100/mo. It is good but doesn't always deliver full capacity. If it goes out, we just fail over to driving everything out the T1s. The T1 circuits cost a hell of a lot more, but they always deliver the goods and have a great SLA.

    Question, in your world why would we pay for the T1 lines? Why would anyone? Really, if we could lawyer up and force AT&T to give us the 6Mbps we 'paid for' 24/7 why are we paying for the dedicated service? Because we understand the real world. And the C block we get as part of the statewide WAN is a big plus. :)

  • Re:Why? (Score:2, Interesting)

    by Seumas ( 6865 ) on Friday December 04, 2009 @04:58PM (#30328762)

    Their logic is always "the average user only checks their email and maybe the sports scores and a news website." If that's the case, then what harm are the "heavy" users really doing to those users? The argument undermines itself.

    What really frustrates me is that I use a lot of bandwidth, and I would happily pay double what I pay now for double the access (i.e., pay for two accounts). Unfortunately, they won't let you do that. This "public utility", which always holds a monopoly on providing service in each region, offers one option and one option only. Period. That's pretty poor service.

    I and others in my household enjoy watching a lot of HD content on Netflix, downloading entire games on Xbox, streaming radio stations, VPNing into work, watching videos online, keeping vital backups remotely with backup services, downloading PLENTY of IPTV and podcasts in high quality, playing video games, etc. It definitely adds up to a LOT of bandwidth. And I'm willing to pay (not ridiculous jacked-up prices, mind you, but I'll pay double to use double, certainly). Unfortunately, I can't get what I as a customer and citizen am willing to pay for, even though the company is granted special access by local government on my behalf.

    Instead, they consider people like me a pariah, because we don't have the same usage patterns as someone's elderly grandmother that just emails "the kids" once a month.
