The Internet | Your Rights Online

Hunting the Mythical "Bandwidth Hog"

eldavojohn writes "Benoit Felten, an analyst in Paris, has heard enough about the elusive creature known as the bandwidth hog. Like its cousin the Boogeyman, the 'bandwidth hog' is a tale that ISPs tell their frightened users to keep them in check, or to justify cutting off whomever they want to cut off from service. And Felten is calling them out, because he's certain that bandwidth hogs don't exist. What's actually happening is that ISPs are selecting the top 5% of users by volume of bits moved on their wire and revoking their service, even if they aren't negatively impacting other users, which means they are targeting 'heavy users' simply for being heavy users. Felten has thrown down the gauntlet, asking for a standardized data set from any telco on which he can do statistical analysis to find any evidence of a single outlier ruining the experience for everyone else. It's unlikely any telco will take him up on that offer, but his point still stands." Felten's challenge is paired with a more technical look at how networks operate, which claims that TCP/IP by its design eliminates the possibility of hogging bandwidth. But Wes Felter corrects that misimpression in a post to a network neutrality mailing list.
  • by afidel ( 530433 ) on Friday December 04, 2009 @10:51AM (#30324356)
    They are generally using UDP, so the original assertion that they degrade other users' experience should be true, as UDP should break down long before TCP does. Though I do agree that if Comcast's system works as described, it's probably the best solution for a network that can't implement QoS.
    • Re: (Score:3, Interesting)

      Why do you think they are using UDP? Most of the bandwidth being used at this point, to my knowledge, is for streaming video (read: porn) and BitTorrent (read: porn). Both use TCP for the majority of their bandwidth usage (some BitTorrent clients support UDP communication with the tracker, but the file itself is still transferred over TCP). Getting built-in error checking, congestion control, and streaming functionality from TCP makes much more sense than a UDP-based protocol where you have to reimplement all of that
      • by Bill_the_Engineer ( 772575 ) on Friday December 04, 2009 @11:13AM (#30324674)

        Why do you think they are using UDP? Most of the bandwidth being used at this point, to my knowledge, is for streaming video (read: porn) and BitTorrent (read: porn). Both use TCP for the majority of their bandwidth usage (some BitTorrent clients support UDP communication with the tracker, but the file itself is still transferred over TCP).

        Most of the streaming protocols that I dealt with used UDP as their basis. The need to deliver the next frame or sound sample as soon as possible outweighs the need to guarantee that every single frame or byte arrives. We accept the occasional dropout in return for expedited delivery of data.

        Unfortunately, when trying to achieve the data rate necessary to ride out the occasional dropouts, some protocols neglect to be good stewards of network bandwidth and have no throttle (i.e., no congestion control).
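
        To make the "no throttle" point concrete, here's a minimal sketch (Python, purely illustrative and not taken from any real streaming stack) of the kind of application-level token-bucket pacing a well-behaved UDP sender would have to add itself, since the transport won't back off for it; the rate and burst numbers are arbitrary:

        ```python
        import socket
        import time

        class TokenBucket:
            """Refill `rate` bytes per second, allow bursts up to `burst` bytes."""
            def __init__(self, rate, burst):
                self.rate, self.burst = rate, burst
                self.tokens, self.last = float(burst), time.monotonic()

            def consume(self, nbytes):
                now = time.monotonic()
                self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return True
                return False

        def stream_paced(chunks, addr, rate=500_000, burst=64_000):
            """Send pre-packetized media chunks over UDP, pacing at `rate` bytes/sec."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            bucket = TokenBucket(rate, burst)
            for chunk in chunks:
                while not bucket.consume(len(chunk)):   # wait instead of blasting at line rate
                    time.sleep(0.005)
                sock.sendto(chunk, addr)
        ```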

        • Yeah, on looking around, it does look like the streaming protocols are UDP-based. That still doesn't give UDP a flat majority of traffic; BitTorrent, along with the dedicated file-sharing programs, is a huge bandwidth consumer (the customers maxing out their connections aren't actively streaming video 24 hours a day), so if overusers of congestion-unfriendly UDP are the problem, dropping the highest-bandwidth users won't solve it (because they're using the relatively network-friendly TCP
          • by TheLink ( 130905 ) on Friday December 04, 2009 @12:50PM (#30326060) Journal
            One problem is that, by default, many network devices/OSes do bandwidth distribution on a per-connection basis, not a per-IP basis. So if there are only two users, and one user has 1000 active connections while the other has just one active connection, the first user will get about 1000 times more bandwidth than the second.

            P2P clients typically have very many connections open, whereas other clients might not.

            A much fairer way would be to share bandwidth amongst users on a per IP basis. That means if two users are active they get about 50% each, even if one user has 100 P2P connections and the other user has only one measly http connection.

            Then, within each customer's "per IP" queue, you could improve the customer's experience by prioritizing latency- or loss-sensitive traffic (DNS, TCP ACKs, typical game connections, SSH, Telnet and so on) over all the other less sensitive/important stuff.
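
            A toy illustration of the difference between per-connection and per-IP sharing (Python; hypothetical numbers, not any vendor's actual scheduler):

            ```python
            from collections import defaultdict

            def per_connection_share(link_bps, connections):
                """Each connection gets an equal slice; users with more connections get more."""
                per_conn = link_bps / len(connections)
                totals = defaultdict(float)
                for user in connections:          # one entry per active connection
                    totals[user] += per_conn
                return dict(totals)

            def per_ip_share(link_bps, connections):
                """Each active user (IP) gets an equal slice, split among that user's connections."""
                users = set(connections)
                return {user: link_bps / len(users) for user in users}

            # Two users: A has 1000 P2P connections, B has one HTTP connection.
            conns = ["A"] * 1000 + ["B"]
            print(per_connection_share(10_000_000, conns))  # A gets ~99.9%, B gets ~0.1%
            print(per_ip_share(10_000_000, conns))          # A and B get 50% each
            ```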

            Of course if you have oversubscribed too much, you will have way too many active users for your available bandwidth. A fair distribution of "not enough" will still be not enough.

            If you have two people and you give each a fair slice of one banana, they each get half a banana. Maybe both are satisfied.
            If you have 1000 people and you give each a fair slice of one banana, they each get 1/1000th of a banana. Not many are going to be satisfied ;).

            And that's where we come to the other problem.

            The problem with P2P is many customers will often leave their P2P clients on 24/7, EVEN when some of them don't really care very much about how fast it goes (some do, but some don't). To revisit the banana analogy, what you have here is 1000 people, and 1000 of them ask for a slice of the banana, EVEN though some of them don't really care - they'll only really feel like having a slice next week, when they're back from their holiday!

            So how do you figure out who cares and who doesn't care?

            Right now what many ISPs do is have quota limits - they limit how much data can be transferred in total. When the quota runs out "stuff happens" (connections go slow, users get charged more etc). So the users have to manage it.

            BUT this is PRIMITIVE, because if you can figure out when a user doesn't care about the speed etc, technology allows you to easily prioritize other traffic over that user's "who cares" traffic.

            So what's a better way of figuring it out?

            My proposal is to give the customers a "dialer" which allows users to "log on" to "priority Internet" (and only then does something start counting the bytes ;)). BUT even when they "log out" they _still_ get always-on internet access; it's just at a lower priority (and with NO byte quota!). A customer might be restricted to, say, 10 GB at "priority" per month. (A rough sketch of the accounting follows the list of advantages below.)

            The advantage of this method is:
            1) There is no WASTED capacity - almost all the available bandwidth can be used all the time, without affecting the people who NEED "priority" internet access (and still have unused quota).
            2) It allows an ISP to better figure out how much capacity to actually buy.
            3) If there is insufficient capacity for "priority Internet", the ISP can actually inform the user and not put the user on "priority" (where the quota is counted). While the user might not be that happy, this is much fairer than getting crappy access while having your quota deducted anyway.
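
            A very rough sketch of the accounting this proposal implies (Python; the 10 GB monthly figure comes from the comment above, the class and field names are made up):

            ```python
            from dataclasses import dataclass

            PRIORITY_QUOTA = 10 * 1024**3        # the 10 GB/month "priority" allowance

            @dataclass
            class Subscriber:
                priority_mode: bool = False      # toggled by the hypothetical "dialer"
                priority_used: int = 0           # bytes metered against the quota

                def queue_for(self, nbytes: int) -> str:
                    """Meter traffic only while the user has opted into (and still has) priority."""
                    if self.priority_mode and self.priority_used + nbytes <= PRIORITY_QUOTA:
                        self.priority_used += nbytes
                        return "priority"
                    return "best-effort"         # always-on, unmetered, just scheduled behind priority

            sub = Subscriber(priority_mode=True)
            print(sub.queue_for(5 * 1024**3))     # "priority" (5 GB metered)
            print(Subscriber().queue_for(10**6))  # "best-effort" (dialer off: not metered at all)
            ```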

            Perhaps this system is not needed and will never be needed in countries that don't seem to have big problems offering 100Mbps internet access to lots of people.

            But it might still be useful in countries where internet access and the telcos are poorly regulated/managed. For example, you run a small ISP in one of those crappy countries and so you pay more for bandwidth from your providers; this system could allow you to make better use of your bandwidth and be a more efficient competitor. And maybe even give your customers better internet service at the same time.

            Yes, the ISP could always buy enough bandwidth so that _everyone_ can get the offered amount even though not everyone really cares all the time (believe me, this is true). But that could mean the ISP's internet access packages being much more expensive than they otherwise could be.
            • One problem is that, by default, many network devices/OSes do bandwidth distribution on a per-connection basis, not a per-IP basis.
              Standard IP networks do bandwidth distribution on the basis of clients backing down (or, if nobody backs down, on the basis of who can get packets in when the queue isn't full). If the systems all use equally aggressive implementations of TCP, then each TCP connection will get roughly equal bandwidth. OTOH, a custom UDP-based protocol can easily take far more or far less than its fair share depending on how aggressive it is.

              A much fairer way would be to share bandwidth amongst users on a per IP basis. That means if two users are active they get about 50% each, even if one user has 100 P2P connections and the other user has only one measly http connection.

              It's a nice idea, but there are a couple of issues:
              1: it takes more computing resources to implement a source-based priority queue than to implement a simple FIFO queue (a rough sketch of the difference follows this list).
              2: to be of any use, such prioritisation needs to happen at the pinch point (that is, at the place where a queue actually builds up); unless you deliberately pinch off bandwidth or you overbuild large parts of your network, predicting the pinch point's location may not be easy.
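
              For concreteness, here is a minimal per-source round-robin queue next to the FIFO it would replace (Python, purely illustrative; real equipment would use something like deficit round robin or SFQ rather than this toy):

              ```python
              from collections import deque, OrderedDict

              class FIFOQueue:
                  """The cheap baseline: one shared queue, first in first out."""
                  def __init__(self): self.q = deque()
                  def enqueue(self, pkt): self.q.append(pkt)
                  def dequeue(self): return self.q.popleft() if self.q else None

              class PerSourceRR:
                  """One sub-queue per source address, served round-robin, so 1000 connections
                  from one source can no longer crowd out a single connection from another."""
                  def __init__(self): self.queues = OrderedDict()
                  def enqueue(self, pkt):
                      self.queues.setdefault(pkt["src"], deque()).append(pkt)
                  def dequeue(self):
                      if not self.queues:
                          return None
                      src, q = next(iter(self.queues.items()))
                      self.queues.move_to_end(src)          # rotate to the next source
                      pkt = q.popleft()
                      if not q:
                          del self.queues[src]
                      return pkt
              ```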

              • by TheLink ( 130905 ) on Friday December 04, 2009 @02:15PM (#30327330) Journal
                > it takes more computing resources to implement a source based priority queue than to implement a simple fifo queue.

                Similarly, it takes more computing resources to do what ISPs are already doing (throttling, disconnections based on XYZ) than to implement a simple FIFO queue.

                > to be of any use such prioritisation needs to happen at the pinch point (that is at the place there is actually a queue built up), unless you deliberately pinch off bandwidth

                I'm sure they can find the pinch points - they're the spots where they keep having congestion (which should show up on one of their monitoring screens).
      • Re: (Score:3, Insightful)

        by afidel ( 530433 )
        Modern torrent clients that support DHT (most of them) generally default to UDP. Since the torrent protocol already includes block checksumming, there's no reason to also use TCP for that; congestion control generally isn't an issue with torrent traffic either, you just push the pipe till it's full. For video, unless you have significant buffering, there's little reason to have error checking or congestion control, because if you can't get the bits in fast enough without retransmits then the video's not going t
        • I acknowledged my error on streaming video, but BitTorrent (and other file sharing programs) are still big TCP users.

          While DHT is UDP based, the file transfer is still TCP based, and no client I know of allocates more than 10% of its bandwidth to DHT use (usually much less). Beyond the protocol compatibility issues, why waste a download of an up to 4 MB block when you could have TCP fix the error much earlier? TCP has a rough error interval of one bit in every trillion bits (that's from memory, but it's wit

          • by war4peace ( 1628283 ) on Friday December 04, 2009 @12:14PM (#30325588)
            (Disclaimer: I live in Eastern Europe, so things may look very different from the US, but then again, maybe it's good for people to get a glimpse of how things are done elsewhere on the globe.)

            Well, as usual, the truth is somewhere in the middle.
            I have 2 ISPs (2 different providers). One is CAT5-based (with optical fiber going out of the area) and the other uses CaTV (together with those infamous CaTV modems I hate). To keep things short, I'll call the CAT5-based one ISP1 and the other ISP2.
            ISP1 offers max metropolitan bandwidth 24/7. My city has roughly 300K home Internet subscribers, not counting businesses. I can download from any of them at a 100 Mbps max theoretical transfer rate. When using torrent-based downloads, every single one caps at 95-97% of the theoretical maximum, which is impressive to say the least. Furthermore, I can browse at the same time without interruptions or latency. I was playing games such as EVE Online and WoW while downloading literally tens of gigabytes of data at max speed, and my latency as shown in WoW was about 150-250 ms, which I consider excellent.
            I have never had any warnings from ISP1 during the last 3 years, mainly because they do not count metropolitan data transfers (I asked). They also told me why: all ISPs which offer metropolitan high-speed access have an agreement to let those transfers flow freely (mutual advantage) and not count them against customers. It seems the logical thing to do. It's pretty much like throwing a cable between me and my neighbour and turning the pipe on. It's a self-managed thing, and if it works like shit, then it's our fault, not the ISP's.
            ISP2 offers CaTV-based Internet access. Now, I have my reasons to loathe cable modems, because they have proved to be unreliable, slower than other types of Internet access, and prone to desync. I've had countless problems with this sort of implementation. Anyway, ISP2 downloads cap at 2 MB/s when downloading from either metropolitan or external sources. They brag about offering 50 Mbps transfer rates from metropolitan sources, but this doesn't seem to be true. I keep ISP2 for backup purposes, so it goes largely unused (I think I used their service for about 10 days during the last year or so).
            Maybe ISP1 or ISP2 do have a policy to cap heavy downloaders who access data from outside the metropolitan network area, but I've never heard of any case where they applied it. So either the policy exists but is not applied, or it doesn't exist at all. Oh, I forgot to mention that ISP2 has nationwide coverage and ISP1 is just city-wide.
            So I was wondering what makes US-based (and probably other) ISPs come to such a conclusion and apply such policies. I think it's because their network implementation plainly sucks. Maybe they rely on third-party networks to get data across areas where they have no coverage, and that costs them. Makes sense for a company looking to maximize profits (I don't like this approach, though). Don't they have a minimum guaranteed bandwidth? We do have it here, and if someone starts complaining that they can only download at twice the minimum guaranteed speed, the ISP just laughs in their face, because that's twice what they guarantee. And to that I agree :)
            Let's assume I use videoconferencing from home. A lot. I know people who participate in high-bandwidth audio+video conferences all day, from home. So they eat up a lot of bandwidth for business purposes. They would be pissed to have a cap enforced on them :) So what's the ISP's take on such cases?
            One more thing: if this policy is written in your contract with them, then you're legally screwed. If not, they're legally screwed. It all comes down to this in the end.
            As a conclusion, I don't think "bandwidth hogs" exist. They're mythical creatures indeed. But what is real is piss-poor network implementation, especially on the WAN side.
            • Re: (Score:3, Interesting)

              by kimvette ( 919543 )

              So I was wondering what makes US-based (and probably other) ISPs come to such a conclusion and apply such policies.

              Modern American society has a sense of entitlement, and that applies even to the government-granted monopolies. ISPs were given hundreds of BILLIONS of dollars to push broadband out to every address in the US. They didn't do it, never got spanked for it, and abuse the customers and continually charge more using "service enhancements" and "upgrades" as their justification, when in actuality, the

    • Re: (Score:2, Informative)

      No, bandwidth hogs normally use file sharing, which is implemented with TCP (i.e., BitTorrent).

      The problem is that TCP distributes bandwidth per connection. Someone using more connections gets a bigger share of the available bandwidth.

  • Why? (Score:3, Interesting)

    by Maximum Prophet ( 716608 ) on Friday December 04, 2009 @10:53AM (#30324392)
    Why would any business cancel paying customers that don't negatively impact operations?
    • Re: (Score:3, Interesting)

      by BESTouff ( 531293 )
      Because they're probably heavy "illegal" music/movie downloaders, so they inconvenience the ISPs' friends, the media moguls?
    • Re:Why? (Score:4, Insightful)

      by godrik ( 1287354 ) on Friday December 04, 2009 @10:59AM (#30324468)

      Because the operators pay for the bandwidth. The high bandwidth users are less profitable than the other ones.

      • Re:Why? (Score:5, Insightful)

        by olsmeister ( 1488789 ) on Friday December 04, 2009 @11:12AM (#30324664)
        I guess it's cheaper to sacrifice 5% of revenue than to have to undertake a network upgrade.

        This mentality is part of why the U.S. lags so much in broadband.
      • by Bob9113 ( 14996 )

        Because the operators pay for the bandwidth. The high bandwidth users are less profitable than the other ones.

        That is why tiered services would solve the problem. High bandwidth users should be more profitable than other ones. Then the ISPs would be profit motivated to encourage heavy bandwidth usage, and the users would be cost motivated to be efficient with their usage.

        No single thing is more at the root of this problem than the word "unlimited."

    • Re:Why? (Score:4, Insightful)

      by Trepidity ( 597 ) <[delirium-slashdot] [at] [hackish.org]> on Friday December 04, 2009 @10:59AM (#30324480)

      They don't negatively impact operations in the sense of taking up a scarce resource that degrades other customers' performance. However, they do still use above-average amounts of bandwidth, which costs ISPs money. When offering a flat-rate, unlimited-use service, your economics come out ahead if you can find some way to skew your customers towards those who don't actually take advantage of your claimed "unlimited use".

    • Because they're worried that if they don't, they'll have to pay for equipment upgrades to handle the extra load, and I doubt they have the monitoring in place to figure out whether a "hog" is actually impairing the experience of other customers (after all, you'd need to analyze a whole lot of factors at each link in the chain where connections join, and that costs money too). Their paranoid belief is that half the customers will up and leave because their connection is one step shy of perfect, so they
  • by BobMcD ( 601576 ) on Friday December 04, 2009 @11:00AM (#30324488)

    I have personally witnessed hogging of bandwidth and, I'd wager, so have you. This term describes when an individual user uses more bandwidth resources than they were assumed to need.

    Example: My brother moves in with two of his friends. His latency is horrible. When his roommate is home, the internet is fine; when the roommate is away at work, it becomes unusable. He calls me to look at the situation, and we determine that one of his roomies is a heavy torrent user. Turns out the roommate was ramping up torrents of anime shows he wanted to watch, to run while he was gone. He was aware of the impact on his own internet experience, and so ramped them back down when he wanted to use the connection himself.

    If that's not hogging bandwidth, I'm not too sure what is.

    If this doesn't scale, logically, up to the network as a whole, I'm not sure why.

    Now, to be completely clear - I feel overselling bandwidth is wrong. I feel the proper response to issues like this on the larger network is guaranteed access to the full amount of bandwidth sold at all times. On the local scale, these men should have brought in another source of internet. On the larger scale, the telco should do the same.

    Denying that the issue can happen, however, is stupid to the point of sabotage.

    An end-user can download all his access line will sustain when the network is comparatively empty, but as soon as it fills up from other users' traffic, his own download (or upload) rate will diminish until it's no bigger than what anyone else gets.

    So, if I understand this statement: if a user is hogging all the bandwidth until no one gets any connectivity, then since it is all the same, it is totally fair. One user can bottleneck the pipes, but since their stuff isn't fast any more either, we're all good?

    How does an argument of this kind help anyone but a bandwidth hog?

    • Re: (Score:2, Informative)

      by Anonymous Coward

      I used QoS with iproute2 and iptables (see http://lartc.org/howto [lartc.org]) when I faced that issue.
      I don't mean that I had roommates, but when I used BitTorrent and noticed how it abused the network, I used that howto to limit its bandwidth.

      It worked very nicely.

    • Re: (Score:3, Insightful)

      by randallman ( 605329 )
      This could just be a problem with your router. Maybe it struggles to handle all of the torrent connections.
      • I've seen a similar thing with a neighbor of mine in our apartment building. We're on Comcast, and the congestion stopped when he switched to DSL. (He had been sharing the connection of another neighbor, who moved out.)
    • You pay for a 70 Mbps connection. The ISP is saying that if you buy that service and then have the audacity to use the service you bought, you're doing something wrong. Taking up 60 Mbps and leaving 10 Mbps for your roommate is one thing, but if the two of you are paying for 70 Mbps, you should get to use it.

      The ISP should be required to provide the service paid for. If they throttle, they should be required to specify, say, a 70 Mbps instantaneous rate and a 10 Mbps sustained rate, for example. That would provide a clear descriptio

      • by BobMcD ( 601576 )

        We agree on this. Do we also agree that denying that hogging bandwidth is possible is not helpful to the discussion?

      • Re: (Score:3, Interesting)

        by Zen-Mind ( 699854 )
        I think you pointed out the real problem. The telcos want you to pay for the 70 Mbps line, but don't want you to use it. If you cannot support a user doing 70 Mbps, don't sell 70 Mbps. I know that building an infrastructure based on the assumption that all users will use maximum bandwidth would be costly, but then adapt your marketing practices: sell a lower sustained speed and offer a "speed on demand" service that is easy to use, so when you want/need to download the new 8GB PS3 game you can play before the nex
        • Re: (Score:3, Insightful)

          by bws111 ( 1216812 )
          A more accurate way to put it is this: the telcos want you to pay for a 1Mbps line, but let you run it at 70Mbps if resources are available.
      • Re: (Score:3, Insightful)

        by bws111 ( 1216812 )
        You pay for an 'up to' 70 Mbps connection. 'Up to' means exactly what it sounds like: you are never going to go above that rate. It says absolutely nothing about the minimum or average rate. Since they make no claims at all about minimum or average rate, there is no false advertising. Every consumer is well familiar with what 'up to' means. How many times do you see an ad that says 'Sale! Save up to 50%'? Does that imply that you are in fact going to save 50% on everything you buy? No, it implies th
    • The basic counterargument is that TCP "fairness" assumes everyone wants the same experience. As you pointed out, a true bandwidth hog doesn't care about the latency during their hogging sessions, since they plan around them, and therefore arguing that TCP is fair because it treats all packets the same is pure rubbish. If everyone (including the hog) is trying to make a VoIP call or play WoW, then sure, the system is fair because the hog has degraded service just like everyone else. The enterprising hog si

    • by ShadowRangerRIT ( 1301549 ) on Friday December 04, 2009 @11:13AM (#30324682)

      I should point out that this sort of thing, while true, is often overstated because of poor local network configuration. When I first set up my new Vista machine a couple years back, I noticed that torrents on it would frequently interfere with internet connectivity on other networked devices in the house. I hadn't had this problem before and was curious as to the cause. I initially tried setting the bandwidth priorities by machine IP and by port, setting the desktop and specifically uTorrent's port to the lowest priority for traffic (similar to what ISPs do when they try to limit by protocol, but more accurate and without an explicit cap), but that actually made the situation worse; the torrents ran slower, and the other machines behaved even worse.

      Turned out the problem was caused by the OS. Vista's TCP settings had QoS disabled, so when the router sent messages saying to slow down the traffic, or just dropped the packets, the machine ignored it and resent immediately, swamping the CPU resources the router uses to filter and prioritize packets. The moment I turned on QoS the problem disappeared. The only network-using device in my house that still has a problem is the VoIP modem, largely because QoS doesn't kick in quickly enough for the latency requirements of the phone, but it's not dropping calls or dropping voice anymore, it's just laggy (and capping the upload in uTorrent fixes it completely; the download doesn't need capping).

      • Re: (Score:3, Interesting)

        by BobMcD ( 601576 )

        Are you advocating a system where the ISP has the power to mandate OS configuration?

    • by betterunixthanunix ( 980855 ) on Friday December 04, 2009 @11:17AM (#30324728)
      "If this doesn't scale, logically, up to the network at a whole, I'm not sure why."

      Plenty of reasons why that won't scale up to the network as a whole. First and foremost, your ISP's network topology is a lot more effective for many users than the simple "star" topology most home router/switch combos give you. Beyond just the topology, the ISP uses better equipment that can cap bandwidth usage and dynamically shift priorities to maintain a minimum level of service for all users even in the presence of a very heavy user. The ISP also has much higher capacity links than what you have at home, and certainly more than the link they give you, and so even if there were a very poor topology and no switch level bandwidth management, it would be very difficult for a single user to severely diminish service for others.

      I do not have any sympathy for ISPs when it comes to this issue. If they sell me broadband service and expect me to not use it, then they are supremely stupid, and retaliating against those users who actually make use of the bandwidth they are sold is just insulting. They oversold the bandwidth and they should suffer for it; blaming the users is just misguided.
      • by BobMcD ( 601576 )

        They oversold the bandwidth and they should suffer for it

        I agree.

        blaming the users is just misguided.

        I disagree. The system wasn't designed, nor sold, with torrents in mind. End points are supposed to be content consumers, not content providers. It isn't exactly the ISP's fault that those end users want the system to function in a way contrary to its design.

        The ISP should redesign the system. Absolutely, without a doubt. Meanwhile those users that aren't getting what they want shouldn't necessarily be ruining it for everyone else, should they?

        • "It isn't exactly the ISP's fault that those end users want the system to function in a way against which it is designed."

          I would agree if the ISPs were honest about how they built their network, but they continue to lie and then complain about people believing their lies. If an ISP designed its network with downloading in mind, and provides only the minimum upload capacity needed to facilitate such a service, they should be very clear about that. It is not "unlimited Internet access," it is "Intern
          • by BobMcD ( 601576 )

            The ISP lied, and that is where the blame stops. Nobody should be blamed for believing that their ISP is selling them the service that was advertised.

            Except you're comparing the advertising tagline with the execution of the actual contract.

            One is very short, the other is not.

            It is not "unlimited Internet access," it is "Internet download service."

            I'd agree to an extent that this should be rephrased to "unlimited Internet access for most users, your experience may vary".

        • Re: (Score:3, Insightful)

          by gbjbaanb ( 229885 )

          They oversold the bandwidth and they should suffer for it

          I agree.

          I disagree. Overselling is fine; the problem comes when they squeeze too much overselling out of what they've got.

          For example, ISP A has 100gb of bandwidth and 1000 customers. They sell each customer 0.1gb; everyone's fine, but no customer will use that much bandwidth, so most of the network capacity is wasted, and since the upstream ISP sells it to you at quite a large sum, you'll find you have no customers, as the price you have to charge them is proh

        • by Hatta ( 162192 ) on Friday December 04, 2009 @12:13PM (#30325578) Journal

          The system wasn't designed, nor sold, with torrents in mind. End points are supposed to be content consumers, not content providers.

          This is incorrect. TCP is designed so that every computer on the network is a peer. There is no fundamental difference between my computer at home and slashdot.org. The great promise of the internet is that everyone can be a content provider. The ISPs seek to destroy this notion in favor of simply creating a content distribution mechanism that they control. That is far, far worse than any "bandwidth hog" could ever be.

        • by sjames ( 1099 )

          The Internet is and was always intended to be peer to peer from day one. That's not exactly a secret. They offered the Internet. Big surprise people expect to have peer to peer capability.

          Torrents are just one application that fits within the meaning of peer to peer. It isn't the user's fault the ISP failed to design their network to handle the service that they sell.

          If the ISPs cared and/or weren't packed with bumbling incompetents, they would have implemented fair queueing years ago and it would be impossi

      • Re: (Score:3, Insightful)

        by tibman ( 623933 )

        Companies overselling is a very popular and acceptable thing too (for them). Airlines, hotels, and movie theaters often do this, expecting no-shows and cancellations. But I expect the percentage oversold is based on historical data for that particular day the previous year. ISPs might have been able to oversell that much in the past, but as more content moves from TV/phone/radio to the internet, typical usage might be outstripping the previous year's numbers. Just my thoughts.

    • Re: (Score:3, Insightful)

      by Hatta ( 162192 )

      That's just bad configuration, not bandwidth hogging. Prioritize ACK packets and you can run torrents all day without affecting other uses of the network.
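
      As a sketch of what "prioritize ACK packets" means in practice (Python; the packet fields are hypothetical, and on a real home router this would be done with QoS rules like the iproute2/iptables setup mentioned elsewhere in this thread):

      ```python
      def classify(pkt):
          """Toy two-class decision: keep bare TCP ACKs (and other tiny packets) ahead
          of bulk data so a saturated upload doesn't starve the downloads' ACKs."""
          bare_ack = (pkt.get("proto") == "tcp"
                      and pkt.get("flags") == {"ACK"}
                      and pkt.get("payload_len", 0) == 0)
          return "interactive" if bare_ack or pkt.get("payload_len", 0) < 64 else "bulk"

      print(classify({"proto": "tcp", "flags": {"ACK"}, "payload_len": 0}))            # interactive
      print(classify({"proto": "tcp", "flags": {"ACK", "PSH"}, "payload_len": 1460}))  # bulk
      ```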

      One user can bottleneck the pipes, but since their stuff isn't fast any more either, we're all good?

      If the "bandwidth hog" isn't fast anymore, he's no longer a hog.

      • by BobMcD ( 601576 )

        Of course he is. A saturated network is as hogged as any, even when no one gets what they want.

    • If that's not hogging bandwidth, I'm not too sure what is.

      Yes. Your brother's room-mate was hogging the available bandwidth in their apartment.

      If this doesn't scale, logically, up to the network as a whole, I'm not sure why.

      It will only scale up to the network as a whole if you're overselling your bandwidth.

      Now, to be completely clear - I feel overselling bandwidth is wrong.

      Err... Well, if it's wrong to oversell bandwidth... Then it is wrong to create a situation where it is possible to hog bandwidth...

      If I buy a 5 Mbps connection from a small ISP here in town, I expect to be able to get roughly 5 Mbps. And unless they make it very clear to me ahead of time, I'm going to expect that "unlimited" really mean

      • by BobMcD ( 601576 )

        So... Yes, it is possible to hog the bandwidth. But only if the ISP oversells their bandwidth. Which means that if the ISP is being honest in its marketing and sales material, it should be impossible to hog the bandwidth.

        And it has been generally accepted that the network is oversold by design and that using it in a manner that pretends that it was not oversold is hogging it.

        Yes, overselling is wrong.

        Pretending that it was not oversold is also wrong.

        And to go one further, pretending that your price for a non-oversold network would be the same is also wrong.

    • by Archangel Michael ( 180766 ) on Friday December 04, 2009 @11:46AM (#30325166) Journal

      Now, to be completely clear - I feel overselling bandwidth is wrong.

      Overselling isn't wrong; it is necessary for services like this. The fact is, all service providers oversell their capacity. The trick is to manage the overselling to a ratio that, on average and within a specific scope, doesn't cause bandwidth jams beyond a prescribed statistical level.

      Having run an ISP, I can tell you the oversell ratio is about 10:1 to 15:1, depending on who your subscribers are and their usage patterns. This means you can put 10-12 people on a data circuit that is really designed to handle one fully loaded client. This statistical approach only works at large scales, and as the scale increases it may actually raise the oversubscription to 20 or 25 to one.
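
      A back-of-the-envelope version of that arithmetic (Python; only the oversell ratio comes from the comment, the link and plan speeds are made-up examples):

      ```python
      def subscribers_supported(uplink_mbps, plan_mbps, oversell_ratio):
          """How many subscribers fit on an uplink at a given oversubscription ratio."""
          return int((uplink_mbps / plan_mbps) * oversell_ratio)

      # Hypothetical numbers: a 100 Mbps uplink selling 10 Mbps plans.
      print(subscribers_supported(100, 10, 1))    # 10  -> everyone gets full rate, always
      print(subscribers_supported(100, 10, 15))   # 150 -> fine until too many are busy at once
      ```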

      I guarantee you that if everyone wanted to torrent all the time, at full speed, the internet could not handle the traffic. It wasn't designed to. It has been oversold.

      BitTorrent is not a normal traffic pattern. A torrent creates a congestion point on the internet at a place where one is not expected. Most people don't need 80 gigabytes of streaming data, day in and day out. If this were DVD movies, you'd be downloading more movies than you could watch.

      I don't have any complaints about ISPs that throttle torrents and take other measures against "high usage" users who are file sharing. I'm not against file sharing; I'm against idiots who cause congestion because they don't know how to configure a BitTorrent client to be "reasonable".

      • Re: (Score:3, Insightful)

        Comment removed based on user account deletion
  • If there's no such thing as a bandwidth hog, then why is anyone worried about "hunting" them?

    Something tells me PETA is behind this...

    PS: Yes, we'd all like to be able to download 20 TB of movies a month for free. We'd all also like free gasoline so we can drive Humvees with 30-inch chrome wheels.

  • Nice theory... (Score:3, Interesting)

    by thePowerOfGrayskull ( 905905 ) <marc,paradise&gmail,com> on Friday December 04, 2009 @11:01AM (#30324526) Homepage Journal
    Where are the facts again? Oh, right, he tells us!

    The fact is that what most telcos call hogs are simply people who overall and on average download more than others. Blaming them for network congestion is actually an admission that telcos are uncomfortable with the 'all you can eat' broadband schemes that they themselves introduced on the market to get people to subscribe. In other words, the marketing push to get people to subscribe to broadband worked, but now the telcos see a missed opportunity at price discrimination...

    It's nice of him to declare that without evidence. Now I know it to be true.

    I'm not saying he's wrong... quite possibly he's right, but seriously - how does someone's blog entry that doesn't provide one single data point to back up the claim make it to the front page?

    • Re:Nice theory... (Score:4, Insightful)

      by eldavojohn ( 898314 ) * <eldavojohn@gm a i l . com> on Friday December 04, 2009 @11:23AM (#30324838) Journal

      I'm not saying he's wrong... quite possibly he's right, but seriously - how does someone's blog entry that doesn't provide one single data point to back up the claim make it to the front page?

      The important thing that he's doing is trying to shift the burden of proof back onto the ISPs and telcos. They just declared that some people are bandwidth hogs and terminated their connection. They didn't give the public any proof that they were ruining the internet experience for anyone else ... nor did anyone come forward after the purge and say, "Gee, my internet sure is fast now that the bandwidth hogs are disconnected!"

      So he calls for proof, since he hasn't seen any. He has to say that there are no bandwidth hogs in order to get a response from the telcos. Saying someone might be wrong does not have the same impact as calling someone a liar. Yes, he's basing this on an assumption, but that's no different from everyone assuming there were individuals out there ruining the experience. All of us just let the telcos terminate the service of whoever they wanted and then moved on with our lives.

      I welcome his opposing viewpoint and challenge to "because we said so." They can release anonymous usage data without harming anyone so why not open it up to a request?

    • Actually, that's logic. You don't need evidence for simple symbol manipulation.

      What he states in that quote is that telcos call people hogs when they maximize the utilization of the connection they were sold. The telcos blame them for causing network congestion; ergo, they believe they cannot provide what they sold to their customers.

      The telco T claims they can provide bandwidth B to the customer C. The average customer Q never uses what they've been sold, while the alleged hog H does, all the time

  • Small ISP (Score:5, Interesting)

    by Bios_Hakr ( 68586 ) <xptical@@@gmail...com> on Friday December 04, 2009 @11:05AM (#30324566)

    Lately I've had to deal with this problem. Our solution was rather simple. We use NTOP on an Ubuntu box at the internal switch. We replicate all the traffic coming into that switch to a port that the NTOP box listens on.

    It may not be a perfect solution, but it can easily let us know who the top talkers are and give us a historical look at what they are doing.

    From that report, we look for anyone uploading more than they download. We also look for people who upload/download a consistent amount every hour. If you see someone doing 80 GB in traffic each day with 60 GB uploaded, you probably have a file sharer. When you look at the 24-hour report for the user and see 2-3 GB every hour on upload, you *know* you have a file sharer.
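
    A rough sketch of that flagging heuristic (Python; the thresholds come from the comment above, the function and field names are hypothetical, and this is obviously not their actual NTOP tooling):

    ```python
    def looks_like_file_sharer(daily_gb_up, daily_gb_down, hourly_gb_up):
        """Flag accounts matching the patterns described above."""
        uploads_exceed_downloads = daily_gb_up > daily_gb_down
        heavy_day = (daily_gb_up + daily_gb_down) >= 80 and daily_gb_up >= 60
        steady_hourly_upload = len(hourly_gb_up) == 24 and all(gb >= 2 for gb in hourly_gb_up)
        return (uploads_exceed_downloads and heavy_day) or steady_hourly_upload

    # Example: 60 GB up / 20 GB down in a day, ~2.5 GB uploaded every hour.
    print(looks_like_file_sharer(60, 20, [2.5] * 24))   # True
    ```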

    After that, it's as simple as going to the DHCP server and locking their MAC address to an IP. Then we redirect all that traffic (extended access lists are wonderful) to another Ubuntu box. That box serves a web page explaining what we saw, why the user is banned, and the steps they need to take to get back online.

    Most users are very apologetic. We help them to set up upload/download limits on their bittorrent client and then we put them back online.

    • Why not just throttle them? Or limit the maximum bandwidth provided to a level that is less likely to allow one user to disrupt service for everyone else?
      • Re:Small ISP (Score:4, Informative)

        by Monkeedude1212 ( 1560403 ) on Friday December 04, 2009 @11:48AM (#30325200) Journal

        This upsets the customer. I know it sounds completely back-asswards, but most people would rather be blocked for an hour, told why they are blocked and told to change, and then resume their normal speeds, as opposed to getting no warning, having their speeds drop below what they are paying for, and being left alone and angry to the point where they will go somewhere else.

    • Re:Small ISP (Score:5, Interesting)

      by imunfair ( 877689 ) on Friday December 04, 2009 @11:37AM (#30325020) Homepage
      Is there really a problem with allowing your users to actually use their connection? By my rough calculations 2-3gb/hr is only 60-90kb/s upload. I really don't understand why you can't handle that unless you're massively overselling. I would be a lot more sympathetic if we were talking about users maxing out fiber connections or something higher speed.
      • "unless you're massively overselling."

        Hammer, meet the head of the nail.
      • It's more like 600-800 KiB/s (assuming the grandparent was just lazy in not capitalising any part of the 'gb's they used). 2-3 GiB per hour is about 700 KiB/s. 2-3 Gib is only about 80-100 KiB/s.

        Handy listing of prefixes (si followed by binary for each):
        k: 1000 (yes, the 'k' prefix is supposed to be lowercase)
        Ki: 1024 (yes, they decided to capitalise this 'K')
        M: 1000000
        Mi: 1048576 (1024 * 1024; 2^20)
        G: 1000000000
        Gi: 1073741824 (1024 * 1024 * 1024; 2^30)

        Handy listing of units:
        b: bit (a single zero or one)
        B: byte (eight bits)
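
        For anyone who wants to check the arithmetic, a quick sketch (Python, just reproducing the conversions above):

        ```python
        KIB, GIB = 1024, 1024**3

        def gib_per_hour_to_kib_per_sec(gib):        # gibibytes per hour
            return gib * GIB / KIB / 3600

        def gibit_per_hour_to_kib_per_sec(gibit):    # gibibits per hour
            return gibit * GIB / 8 / KIB / 3600

        print(round(gib_per_hour_to_kib_per_sec(2.5)))    # ~728 KiB/s for 2.5 GiB/hr
        print(round(gibit_per_hour_to_kib_per_sec(2.5)))  # ~91 KiB/s for 2.5 Gib/hr
        ```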

      • Re:Small ISP (Score:4, Informative)

        by Elshar ( 232380 ) <elshar@[ ]il.com ['gma' in gap]> on Friday December 04, 2009 @01:15PM (#30326434) Journal

        Well, another small ISP here. Couple of things. First off, customers are NOT paying for what's called a CIR (committed information rate). So, of course the service is "oversold". Every service provider industry is "oversold": landlines, cell phones, car mechanics, TV repairmen, satellite TV, even tech support. You think there's one guy in India sitting there waiting for you to call about your Dell? No, of course not. By definition, service providers HAVE to oversell to survive.

        Secondly, for a small ISP it's really not about just one person doing something like this. Yes, one person doing so can have a seriously negative impact on the rest of the users, but it's when you get multiple people doing it that the problem really compounds. One torrent user generally isn't too much of a problem. Get two or three with high connection limits and up/down set to unlimited, and you have a serious problem on your hands.

        Finally, equipment is expensive and commercial connections are expensive. If you don't believe me, go price out some comparable commercial internet connections from Cogent, Level3, or any of the baby bells (Verizon, Qwest, AT&T/Cingular, etc.), and you'll see that you'll easily be paying 10x more than what a cable/FiOS user pays for a residential connection. There's a reason, and it's up in the first point.

  • You need a TCP/IP stack with queueing. You give each IP address a fair chance to transfer and/or receive some data, and as always you drop any traffic for which you have no time.... but you drop them from the bottoms of queues first, and the queues are per-IP (or per-subscriber, which is harder to manage unless you are properly subnetted... in which case, they can be per-network, for which computation is cheap.) This should be the only kind of QoS necessary to preserve network capacity for all users and pre

    • by godrik ( 1287354 )

      Ensuring fairness at one point of the network won't ensure fairness over the whole network, which is what you really want.

  • by 140Mandak262Jamuna ( 970587 ) on Friday December 04, 2009 @11:18AM (#30324760) Journal
    These ISPs sold what they ain't got. Sold more bandwidth than they can sustain, and when someone actually takes delivery of what was promised, these telcos bellyache, "we never thought you will ask for all we sold you! whachamagontodoo?". Eventually they will introduce billing by the gigabyte and by pipe size, like the electric utilities charge you by the kWh and limit the amperage of your connection.

    Then they will introduce the ISP version of "friends and family": some downloads and some sites will be "unmetered", and the sources will be the friends and family of the ISP. You know, the "partners" who provide "new and exciting" products and content to their "valued customers". Net neutrality will go down the tubes. Ha ha.

    Google needs the net to be open and neutral for it to freely access and index content. When the dot-com bubble burst, Google bought tons and tons of bandwidth, the dark fiber, the unlit strands of fiber optic lines. If the net fragments, I expect Google to step in, light up those strands, and go head to head with the ISPs providing metro-level WiFi. Since it is not a government project, it could not be sabotaged the way Verizon and AT&T torpedoed municipal high-speed networks.

    • by teg ( 97890 )

      These ISPs sold what they ain't got. Sold more bandwidth than they can sustain, and when someone actually takes delivery of what was promised, these telcos bellyache, "we never thought you will ask for all we sold you! whachamagontodoo?

      Basically, they want to sell a product with high speed - but not continual use. A product where more of the bandwidth is used - or dedicated, not oversubscribed - is vastly more expensive, and is what they sell to businesses. To fix this problem, they should start with met

  • I do it too (Score:4, Interesting)

    by holophrastic ( 221104 ) on Friday December 04, 2009 @11:22AM (#30324810)

    I also go through my client list and drop those that consume more of my time and resources in favour of the easier clients who ultimately improve my business at a lesser cost. What's wrong with that? My company, my rules. "We reserve the right to refuse service to anyone" -- it's in every restaurant. Why would you expect a business to serve you? Why would you consider it a right?

    • Re:I do it too (Score:5, Informative)

      by Colonel Korn ( 1258968 ) on Friday December 04, 2009 @11:42AM (#30325098)

      I also go through my client list and drop those that consume more of my time and resources in favour of the easier clients who ultimately improve my business at a lesser cost. What's wrong with that? My company, my rules. "We reserve the right to refuse service to anyone" -- it's in every restaurant. Why would you expect a business to serve you? Why would you consider it a right?

      Your company's service isn't based on federal subsidies meant to provide internet access to all citizens.

    • by hitmark ( 640295 )

      Because the net has become as integral to modern life as water and electricity?

      With a restaurant, one can go somewhere else to get a ready meal, or one can make one's own from ingredients sold almost anywhere. The net, however, is not something one can make at home if needed, and rarely does one find more than 2 suppliers (or even that many) in an area...

    • Re:I do it too (Score:5, Insightful)

      by Ephemeriis ( 315124 ) on Friday December 04, 2009 @11:56AM (#30325336)

      I also go through my client list and drop those that consume more of my time and resources in favour of the easier clients who ultimately improve my business at a lesser cost. What's wrong with that? My company, my rules. "We reserve the right to refuse service to anyone" -- it's in every restaurant. Why would you expect a business to serve you? Why would you consider it a right?

      Let's say you sell widgets.

      You have 5 people come to you, each one wants to buy 1 widget. And another guy shows up and wants to buy 5 widgets.

      You only have 5 widgets in stock, you need 10, but you really want their money. So you sell each of those people a coupon for their widgets, and tell them to pick it up at your warehouse. You figure they won't all run over there right now, and you'll probably have time to get a couple more widgets in stock before anybody notices.

      Of course you don't tell your customers this. You don't tell them, "I only have 5 right now, you'll have to wait 'til the next shipment." You just take their money and leave them with the impression that the widget is there, waiting for them, available for pickup whenever they want.

      So all of them show up at the warehouse about 5 minutes later. All of them want their widgets now. But you don't have enough widgets to go around. So you call the guy who bought 5 widgets a "widget hog", cancel his order, and throw up a hastily-made sign that says "limit 1 per customer."

      Legal? Yeah, I guess... Assuming you refund his money.

      Right? Not so much. You should have clearly explained that you only have 5 widgets in stock, or that the coupon couldn't be redeemed for a week, or that there was a limit of 1 per customer, or something. You misrepresented what you were selling to your customers.

      Likely to leave a good impression on your customers? Nope.

    • "We reserve the right to refuse service to anyone" -- it's in every restaurant.

      Actually, only sort of. If there's a pattern to who you refuse service to, it can get you into big trouble. For instance, if you consistently refuse service to black people, you are in violation of a number of civil rights laws.

  • Ted Nugent has 10 of these hanging on his walls.
  • If you bought a month of internet use at up to a certain speed, you can't be blamed if you use it, even if you use all of it. If doing that causes problems for other customers or the ISP, it's the ISP's fault for selling more than what they have, not yours.
  • Bandwidth Hog (Score:5, Insightful)

    by Aladrin ( 926209 ) on Friday December 04, 2009 @11:49AM (#30325212)

    First of all, I am, and always will be, a bandwidth hog. Why? Because I'm better at using the internet than everyone around me. That means I find more things, and bigger things, to download. If someone banned P2P, I'd still have more streamed video than anyone I know. If they banned that, too, I'd still download more images. If they banned that, I'd still have more web traffic, email, IM, etc. I will always be a 'hog' in any environment. I was even told that I was the "#1 abuser" of the "Unlimited" service when I was on dial-up in a small town, and they tried to charge me an extra $300 that month. As another provider had just come into town, I switched, obviously.

    I don't pay for the top tier of residential service to just let it sit idle. I'm going to -use- it.

    I have absolutely no sympathy for people that sell me something and then get upset when I actually use it within the original limitations. I have only a small amount of sympathy for people that sell me something and I use it beyond their arbitrary limitations, even if I agreed to them.

    Why?

    America has -crap- for internet compared to other developed countries. We are quickly falling behind the rest of the world in terms of internet bandwidth. This is purely from greed and laziness on the part of the ISPs. They refuse to upgrade and simultaneously try to prevent competition. Sprint even has the nerve to advertise Pure and claim that it's faster than cable internet, despite being 1/10th of the speed.

    • Re:Bandwidth Hog (Score:5, Informative)

      by not_anne ( 203907 ) on Friday December 04, 2009 @12:43PM (#30325972)

      I work for a large ISP, and for residential accounts, we don't particularly care if you're a "bandwidth hog," as long as you're not affecting other customers around you. If we see that one person is causing significant congestion, then that's a problem that we'll address (but only when it happens repeatedly and consistently). Most of the time the customer is either unaware, has an open router, or has a virus/worm/trojan.
