Enhancement To P2P Cuts Network Costs

psycho12345 sends in an article on a study, sponsored by Verizon and Yale, finding that if P2P software is written more 'intelligently' (by localizing requests), the effect of bandwidth hogging is vastly reduced. According to the study, redoing P2P into what they call P4P can reduce the number of 'hops' by an average of 400%. With localized P4P, less of the sharing occurs over large distances; instead, requests go to geographically nearby clients. The NYTimes covers the development from the practical standpoint of Verizon's agreement with P2P company Pando Networks, which will be involved in distributing NBC television shows next month. So the network efficiencies will accrue to legal P2P content, not to downloads from The Pirate Bay.
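The core idea of "localizing requests" can be sketched in a few lines: rank candidate peers by an ISP-supplied network-distance score and fill requests from the nearest peers first. This is an illustrative sketch, not Pando's or Verizon's actual implementation; the peer names and hop counts are made up.

```python
# Hypothetical sketch of P4P-style localized peer selection.
# hops_to maps each candidate peer to an ISP-provided distance score.

def pick_peers(candidates, hops_to, want=3):
    """Return the `want` candidates with the fewest network hops."""
    return sorted(candidates, key=lambda p: hops_to[p])[:want]

hops = {"peer_same_isp": 1, "peer_same_city": 3,
        "peer_cross_country": 12, "peer_overseas": 20}

print(pick_peers(list(hops), hops, want=2))
# prefers the in-ISP and in-city peers over the distant ones
```

A plain BitTorrent client would instead connect to a random subset of these peers, which is exactly the behavior the study says wastes long-haul bandwidth.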
  • by TubeSteak ( 669689 ) on Friday March 14, 2008 @10:25AM (#22750776) Journal

For other ISPs to reap the benefits Verizon did in the test, they too would have to share information about their networks with file-sharing companies, information that they normally keep close to their chests.
    Excuse my ignorance, but what about their network is secret, other than the prices they're paying?
    Network topology isn't & can't be a secret...
  • New math (Score:5, Interesting)

    by ZorbaTHut ( 126196 ) on Friday March 14, 2008 @10:34AM (#22750844) Homepage
    Reducing hops by 400%, eh? That's a nice trick. Can we reduce bandwidth usage by the same amount? I wouldn't mind some free bandwidth.

I honestly can't figure out where "reduce by 400%" came from. They say the average hops were reduced from 5.5 to 0.89, which is either 84% if you're not an idiot or 618% if you are. So I'm really quite confused here. Go figure.
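The arithmetic behind the parent's numbers, as a quick check (5.5 and 0.89 are the figures the study reports; everything else follows from them):

```python
# Average hops before and after P4P-style guidance, per the study.
before, after = 5.5, 0.89

reduction = (before - after) / before * 100  # percent reduction in hops
ratio = before / after                       # "X times fewer hops"

print(round(reduction, 1))  # 83.8 -> the "84%" reading
print(round(ratio, 2))      # 6.18 -> a ~6x improvement, not "400%"
```

Neither reading yields 400%, which supports the parent's confusion about the summary's figure.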
  • by FredFredrickson ( 1177871 ) * on Friday March 14, 2008 @10:42AM (#22750922) Homepage Journal
For this reason, Verizon doesn't suck for broadband users. In my area, I have Verizon DSL (they haven't given us FiOS yet, but they ran the fiber cables a few years back) and I don't have any port blocking (that's right folks, I can send email to ANY server), and they don't limit P2P or BitTorrent (my downloads are fast and fresh). And they haven't turned records over to the government (or at least not reportedly, yet). So far, in the category of BIG ISPs, Comcast vs. Verizon, Verizon is the underdog. Which is funny, because start arguing cell phone policies and prices, and watch the argument change completely.
  • by darthflo ( 1095225 ) on Friday March 14, 2008 @11:14AM (#22751244)
ISPs could easily achieve this without changing a single bit in most BitTorrent implementations: jack up the bandwidth within their backbone to whatever's possible. Instead of limiting that ADSL2+ line to 5 Mbps, run it at 25 and throttle traffic to/from it to 5 Mbps at the edge of the network. Connections within the ISP's network would tend to max out those 25 Mbps; given some fiber connectivity and recent hardware, users could seed at gigabit throughputs within the provider's network.
Going back to the 25 Mbps example, this could reduce the outside traffic for, say, an average 1.4 GB movie to roughly 280 MB without any software optimisations: at 25 Mbps aggregate, the download takes about 450 seconds, during which the 5 Mbps connection to the outside world carries about 280 MB, with peers inside the ISP's network supplying the rest. If the industry started doing something like this, most P2P clients would probably use it. If they used it, ISPs would save even more bandwidth (== money).
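The back-of-the-envelope math above, worked through in code. The rates and file size are the commenter's assumptions, not measurements:

```python
# A 1.4 GB movie on a line running at 25 Mbps aggregate, with traffic
# to/from the outside world throttled to 5 Mbps at the ISP's edge.
movie_bits = 1.4e9 * 8   # 1.4 GB expressed in bits
total_rate = 25e6        # aggregate download rate, bits per second
edge_rate = 5e6          # cap on traffic crossing the ISP edge, bits/s

seconds = movie_bits / total_rate        # time to finish the download
outside_bytes = edge_rate * seconds / 8  # bytes fetched from outside

print(round(seconds))                 # 448 seconds, about 7.5 minutes
print(round(outside_bytes / 1e6))     # 280 MB leaves the ISP's network
```

So the ISP's external links carry about a fifth of the movie; local peers at the unthrottled internal rate supply the other ~1.12 GB.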
  • Re:400%? (Score:2, Interesting)

    by LandKurt ( 901298 ) on Friday March 14, 2008 @11:15AM (#22751260)
    Well, technically going from 25 to 100 is a 300% increase, since the increase is 75. But I realize that whenever the ratio between numbers is four to one it's going to be commonly referenced as 400%, regardless of whether it should actually be a 300% increase or 75% decrease. The mind fixates on the factor of four and wants to use 400 as the percentage. The correct numbers just feel wrong.

Interestingly, this mistake doesn't happen with small changes like 10 or 20 percent. But as soon as something doubles, it's called a 200% increase rather than the mathematically correct 100% increase.
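The distinction the parent is drawing, stated as code:

```python
# A factor-of-four change is a 300% increase (or a 75% decrease),
# not "400%"; doubling is a 100% increase, not "200%".

def pct_increase(old, new):
    return (new - old) / old * 100

def pct_decrease(old, new):
    return (old - new) / old * 100

print(pct_increase(25, 100))  # 300.0
print(pct_decrease(100, 25))  # 75.0
print(pct_increase(1, 2))     # 100.0
```

The "400%" in the summary conflates the ratio (4x) with the percentage change (300%).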
  • by leuk_he ( 194174 ) on Friday March 14, 2008 @11:42AM (#22751600) Homepage Journal
ISPs are always very reluctant to reveal that they have little redundancy in their outside links to the rest of the internet. That information just is not available. And how peering agreements work is mostly hidden.

They simply do not tell, and there is no established protocol to get that information reliably. This P4P would supply that information in a form usable by P2P applications.

One disadvantage of P4P is that not everyone will be equal according to P4P. It might reason that all Americans can be served at a lower cost than people in Europe. To Europeans who have as good a connection to the US as to neighboring states, it might look like the American community is leeching off them: they prefer to serve each other and leave the scraps to foreigners. As a result, trackers in Europe might ban US leechers, making P4P less useful. (This is just an example.)

This P4P is only useful to users if it provides ADDITIONAL bandwidth that was not available before. Currently, however, most connections are asymmetrical: it is very easy to saturate the upload while plenty of room is left on the download side.

On the connection I have now, I have very little trouble using up all the available upload bandwidth.

  • Re:P2P - P4P? (Score:3, Interesting)

    by budgenator ( 254554 ) on Friday March 14, 2008 @03:46PM (#22754122) Journal
No, seriously, it hasn't. In fact, because the US is first-to-invent instead of first-to-file, they won't file a patent application until everybody is using it!
  • by laird ( 2705 ) <lairdp@gmail.ERDOScom minus math_god> on Friday March 14, 2008 @08:19PM (#22756372) Journal
    "Why not just do a 'traceroute' to all of the seeds as you discover them, and penalize the ones that are more hops away?"

That would help peers pick between known peers to exchange data with. The problem is that if you're in a large swarm, you'll only know about a small subset of the swarm, and thus almost certainly miss the best peers to connect to. For example, if you're in a swarm with 10,000 peers and you know about a random 50 peers, you are 99.5% likely not to find out about the closest peer on the first announce (for BitTorrent, which I'll use as the example, since it's well known). The tracker has global knowledge, so it can tell a peer about the closest peer 100% of the time. Yes, it's true that over time BitTorrent will converge on good data sources, but in large swarms it takes a very long time to connect to and test all peers, so the time it takes to find a good, nearby data source could well be much longer than the download itself.
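The 99.5% figure comes from simple sampling math, using the parent's swarm and sample sizes:

```python
# Chance that one specific peer (say, the closest one) is NOT among a
# uniformly random sample of 50 peers drawn from a 10,000-peer swarm.
swarm_size, sample_size = 10_000, 50

p_missed = 1 - sample_size / swarm_size
print(p_missed)  # 0.995 -> 99.5%
```

A tracker with global knowledge sidesteps this entirely, which is why guided (P4P-style) peer introduction beats client-side probing like traceroute on the first announce.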

    What we found in the P4P field test is that guided peer connections yielded much faster download speeds almost immediately, because the first peer connection was "close" in the ISP's network, resulting in fast connection and transfer times, and that while the BitTorrent connection logic eventually found good data sources, on average the downloads were over 200% faster (for FTTH users) when the p2p connections were guided.
