
Enhancement To P2P Cuts Network Costs

psycho12345 sends in a News.com article on a study, sponsored by Verizon and Yale, finding that if P2P software is written more 'intelligently' (by localizing requests), the effect of bandwidth hogging is vastly reduced. According to the study, reworking P2P into what they call P4P can reduce the number of 'hops' by an average of 400%. With localized P4P, less of the sharing occurs over large distances, instead making requests of nearby clients (geographically). The NYTimes covers the development from the practical standpoint of Verizon's agreement with P2P company Pando Networks, which will be involved in distributing NBC television shows next month. So the network efficiencies will accrue to legal P2P content, not to downloads from The Pirate Bay.
  • by Anonymous Coward on Friday March 14, 2008 @10:33AM (#22750834)
    The answer is "a lot"

    How much capacity a device has, how many links it has, how much it might cost a carrier to use those links. How much capacity the switching devices in that network have, what firewall/filtering might be in place. Where the devices are physically located.

    There's a lot more to a network than just IP addresses.
  • innumeracy (Score:4, Informative)

    by MyNymWasTaken ( 879908 ) on Friday March 14, 2008 @10:37AM (#22750874)

    reduce the number of 'hops' by an average of 400%
    This glaring example of innumeracy is from the submitter, as it is nowhere in the article.

    On average, Pasko said that regular P2P traffic makes 5.5 hops to get to its destination. Using the P4P protocol, those same files took an average of 0.89 hops.
    That works out to an average 84% reduction.
  • Re:400%? (Score:4, Informative)

    by IndustrialComplex ( 975015 ) on Friday March 14, 2008 @10:38AM (#22750880)
    They probably discussed the number so many times that they lost track of how it was referenced. Let's say they cut it down to 25 from 100. Going from their method back to the old method would quadruple the hopcount, which someone could sloppily report as a "400% increase."

    Sloppy, but we can understand what they were trying to say.
  • Re:400%? (Score:4, Informative)

    by MightyYar ( 622222 ) on Friday March 14, 2008 @10:42AM (#22750920)
    The number 400% appears nowhere in the article.
  • by ThreeGigs ( 239452 ) on Friday March 14, 2008 @10:47AM (#22750966)
    less of the sharing occurs over large distances, instead making requests of nearby clients (geographically).

    How about a BitTorrent client that gives preference to peers on the *same ISP*?

    Yeah, fewer hops and all is great, but if an ISP can keep from having to hand off packets to a backbone, they'll save money, and perhaps all the hue and cry over P2P will die down some. I'm sure Comcast would rather contract with UUnet to carry half as much of the traffic destined for other ISPs as they do now.

    Sort of a 'be nice to the ISPs and they'll be nicer to the users' scenario.
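
    A rough sketch of how a client might implement that preference, assuming no ASN database is at hand and using shared address-prefix length as a crude stand-in for "same ISP" (the addresses below are documentation examples, not real peers):

    ```python
    import ipaddress

    def shared_prefix_bits(a: str, b: str) -> int:
        """Leading bits two IPv4 addresses have in common; a crude
        proxy for 'same ISP / same region' without ASN data."""
        diff = int(ipaddress.IPv4Address(a)) ^ int(ipaddress.IPv4Address(b))
        return 32 - diff.bit_length()

    def rank_peers(my_ip: str, peers: list[str]) -> list[str]:
        # Prefer peers whose addresses share the longest prefix with ours.
        return sorted(peers, key=lambda p: shared_prefix_bits(my_ip, p), reverse=True)

    print(rank_peers("203.0.113.10", ["198.51.100.5", "203.0.113.77", "203.0.114.2"]))
    # -> ['203.0.113.77', '203.0.114.2', '198.51.100.5']
    ```

    Prefix length is a weak heuristic (ISPs announce many disjoint blocks), so a real client would want an IP-to-ASN lookup instead; the point is just that the preference is easy to express once any locality signal exists.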
  • by truthsearch ( 249536 ) on Friday March 14, 2008 @10:50AM (#22750988) Homepage Journal
    My guess is geographic location of IPs, since they're not just talking hops, but distance. If the hops are all geographically local, the data likely transfers between fewer ISPs and backbones. I don't know much about the details, so this is just my interpretation of the claims.

    But wouldn't a protocol that learns and adjusts to the number of hops be nearly as efficient? If preferential treatment were given to connections with fewer hops and the same subnet, I bet they'd see similar improvements.
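
    One way a client could learn that on its own, with no ISP cooperation, is to probe candidate peers and prefer those that answer fastest, treating connect time as a stand-in for hop count. A minimal sketch, assuming hypothetical peer addresses and a BitTorrent-style port:

    ```python
    import socket
    import time

    def connect_ms(host: str, port: int = 6881, timeout: float = 1.0) -> float:
        """TCP connect time in milliseconds; infinity if unreachable."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return (time.monotonic() - start) * 1000.0
        except OSError:
            return float("inf")

    # Prefer the peers that answer fastest; they tend to be the fewest hops away.
    peers = ["192.0.2.10", "198.51.100.20", "203.0.113.30"]
    peers.sort(key=connect_ms)
    ```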
  • Re:New math (Score:5, Informative)

    by MightyYar ( 622222 ) on Friday March 14, 2008 @10:54AM (#22751034)
    I think I figured out their math, and you aren't going to like it:

    5.5 * 0.89 - 0.89 = 4.0050 or 400%

    As opposed to:

    ( 5.5 - 0.89 ) / 5.5 = 84%
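
    For anyone who wants to check the two readings, a quick sanity check in Python (the 5.5 and 0.89 figures are from the article; the rest is arithmetic):

    ```python
    old_hops = 5.5   # average hops, generic P2P (from the article)
    new_hops = 0.89  # average hops, P4P (from the article)

    # Reduction relative to the old value -- the sensible reading:
    print(f"{(old_hops - new_hops) / old_hops:.0%}")   # -> 84%

    # The parent's reconstruction of where "400%" may have come from:
    print(f"{old_hops * new_hops - new_hops:.4f}")     # -> 4.0050
    ```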
  • by br00tus ( 528477 ) on Friday March 14, 2008 @10:56AM (#22751052)
    it's called Mbone [wikipedia.org]. It was created 15 years ago by a bunch of people including Van Jacobson, who had already helped create TCP/IP and wrote traceroute, tcpdump, and so forth.


    It would have made Internet broadcasting much more efficient, but it never took off. Why? Because providers never wanted to turn it on, fearing their tubes would get filled with video. So what happened? People broadcast videos anyhow, they just don't use the more efficient Mbone multicasting method.

    Furthermore, when I download a video via BitTorrent, there are usually only a few people, whether they have a complete seed or not, who are sending out data. So how local they are doesn't matter. If there are more people connected, most of them are sending data out at less than 10K, while one (or maybe two) are sending data out at anywhere from 10K to 200K. So I usually want to be hooked to them, no matter where they are - I am getting data from them at many multiples of what the average person sends.

    I care about speed, not locality. The whole point of the Internet and World Wide Web is that locality doesn't matter. Speed is what matters to me. For Verizon, however, it's preferable that most traffic stay on their own network - that way they don't have to worry about exchanging traffic with other providers and so forth.

    Another thing: there is tons of fiber crisscrossing the country and the world, and we have plenty of inter-LATA bandwidth. The whole problem is bandwidth from the home to the local Central Office. In a lot of countries, natural monopolies are controlled by the government - I always hear about how inefficient and backwards that would be, but here we have the "last mile" controlled by monopolies, and they have been giving us decades-old technology for decades. In fact, the little attacks by the government have been rolled back; in a reversal of the Bell breakup, AT&T now owns a lot of the last mile in this country. Hey, it's a safe monopoly that the capitalists, I mean, shareholders, I mean, investors can get nice fat dividends from instead of re-investing in bleeding-edge capital equipment, so why give people a fast connection to their homes? Better to spend money on lawyers fighting public wifi and the like, or commissars and think tanks to brag about how efficient capitalism is in the US of A in 2008.

  • by mr_mischief ( 456295 ) on Friday March 14, 2008 @11:01AM (#22751112) Journal
    You seem so certain.

    Your traceroute program doesn't tell you when your traffic is being routed four hops through a tunnel to cut down on visible hops and to save space in the ISP's main routing table. Without the routing tables at hand, you don't know the chances of being routed through your usual preferred route versus a backup route kept in case of congestion. Nothing from the customer end shows where companies like Level 3 and Internap have three or four layers of physical switches with VLANs piled on top between any two routers. Nothing tells you when you're in a star build-out of ten mid-sized cities that all go to the same NOC versus when you're being mesh-routed over lowest latency-weight round robin, although you might guess by statistical analysis - and mesh routing of commercial ISP traffic outside the main NAPs is getting more and more rare.

    There's a lot you can easily deduce, especially if your ISP uses honest and informative PTR records. There's still much that an ISP can do that you'll never, ever know about.

    I worked for one ISP where we had 5 Internet connections in four cities to three carriers, but we served 25 cities with them. We had point-to-point lines from our dial-in equipment back to our public-facing NOCs. We had a further 18 or so cities served by having the lines back-hauled from those towns to our dial-in equipment. We had about 12k dialup customers and a few hundred DS1, fractional DS1, frame relay, and DSL customers. Everyone's traffic went through one of two main NOCs on a good day, and their mail, DNS, AAA, and the company's web site traffic never touched the public Internet unless we were routing around trouble. In a couple of places we even put RADIUS slaves and DNS caching servers right in the POP.

    I worked for another that served over 40k dial-up and wireless customers by the time they sold. We had what we called "island POPs". Each local calling area we served had dial-in equipment and a public-facing 'Net connection. Authentication, Authorization, and Accounting, DNS, Mail, and the ISP's website traffic all flowed over the public Internet except in the two towns we had actual NOCs. There were tunnels set up between routers that made traffic from the remote sites to the NOCs look like local traffic on traceroute, but that was mainly for our ease of routing and to be able to redirect people to the internal notification site when they needed to pay their late bills. We (I, actually) also set up L2TP so that we could use dial-up pools from companies like CISP who would encapsulate a dial-in session over IP, authenticate it against our RADIUS, and then allow the user to surf from their network. We paid per average used port per month to let someone else handle the customer's net connection while we handled marketing, billing, and support.

    The first ISP I worked for had lines to four different carriers in four different NAPs in four different states, lots of point-to-point lines for POPs, and a high-speed wireless (4-7 MBps, depending on weather, flocks of birds, and such) link across a major river to tie together two NOCs in two states. Either NOC could route all of the traffic for all the dozens of small towns in both states as long as one of our four main connections and that wireless stayed up (and all the point-to-point ones did, too). If the wireless went down, the two halves of the network could still talk, but over the public Internet. That one got to about 10k customers before it was sold.

    At any of those ISPs, I couldn't tell you exactly who was going to be able to get online or where they were going to be able to get to without my status monitoring systems. On one, all the customers could get online even without the ISP having access to the Internet, but they could only see resources hosted at the ISP. Yet that one might drop five towns from a single cable break. Another one might keep 10k people offline due to a routing issue at a tier-1 NAP, but everyone else was okay. However, if that one's NOC went offline, anyone surfing in other
  • by kbonin ( 58917 ) on Friday March 14, 2008 @11:17AM (#22751296)
    Some of us working on the bleeding edge of p2p have been playing with these ideas for years to improve performance (I'm building open VR/MMO over P2P). Here are the basics...

    Most true p2p systems use something called a Distributed Hash Table (DHT) [wikipedia.org] to store and search for metadata such as file locations and file attributes. Examples are Pastry [wikipedia.org], Chord [wikipedia.org], and (my favorite) Kademlia [wikipedia.org]. These systems index data by IDs, which are generally a hash (MD5 or SHA1) of the data.

    Without going into the details of the algorithms, the search process exploits the topology of the DHT, which becomes something called an "overlay network" [wikipedia.org]. This lets you efficiently search millions of nodes for the IDs you're interested in, in seconds, but it doesn't guarantee the nodes you find will be anywhere near you in physical or network topology space.

    The trick some of us are playing with is including topology data in our DHT structure and/or search, to weight the search toward nodes which happen to be close in network topology space.

    What they are likely doing is something along these lines, since they have the real topology instead of what we can map using tools like tracert.

    If they really want to help p2p, then they would expose this topology information to us p2p developers, and let us use it to make all our applications better. What they're likely planning is pushing their own p2p, which will be faster and less stressful on their internal network (by avoiding peering point traversal at all costs, which is when bandwidth actually costs THEM). The problem is their p2p will likely include other less desired features, like RIAA/MPAA friendly logging and DRM, and then they'll have a plausible reason to start degrading other p2p systems which aren't as friendly by their metrics, such as distributing content they don't control or can't monetize... Then again, maybe I'm just a cynic...
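
    As a toy illustration of that weighting idea (not anyone's actual protocol; the blend factor, the normalizations, and the per-peer "metro hop" numbers are all invented for the example):

    ```python
    import hashlib

    def node_id(name: str) -> int:
        # Kademlia-style 160-bit ID from a SHA-1 hash.
        return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

    def biased_score(target: int, peer: int, metro_hops: int, alpha: float = 0.25) -> float:
        """Blend DHT distance with topology distance. Pure Kademlia uses XOR
        distance alone; alpha > 0 nudges lookups toward nearby peers."""
        dht = (target ^ peer).bit_length() / 160.0  # normalized log-XOR distance
        net = min(metro_hops / 10.0, 1.0)           # normalized topology distance
        return (1 - alpha) * dht + alpha * net

    target = node_id("some-content-key")
    peers = {"peer-far": 9, "peer-near": 1}  # name -> invented metro-hop count
    best = min(peers, key=lambda p: biased_score(target, node_id(p), peers[p]))
    print(best)
    ```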
  • obvious (Score:3, Informative)

    by debatem1 ( 1087307 ) on Friday March 14, 2008 @02:29PM (#22753404)
    This is really freaking obvious. I wrote a p2p application that cached based on search requests and then fetched based on router hops years ago, and presumed it was nothing new then. I strongly doubt this will be an unencumbered technology if it ever sees the light of day.
  • by tattood ( 855883 ) on Friday March 14, 2008 @03:53PM (#22754200)

    In the case there are not enough peers to fill your tube it will work the same as P2P

    This is true, but if 80% of your P2P bandwidth stays within your ISP's local network, then only 20% goes out over their interconnect links, which is better than the 95% that would happen normally. And by better, I mean better for the ISP. If less traffic goes out over their interconnect links, there is more available bandwidth for other, non-P2P traffic. I think their goal is to allow P2P to happen while having minimal impact on the non-P2P customers.
  • Re:400%? (Score:5, Informative)

    by laird ( 2705 ) <lairdp@@@gmail...com> on Friday March 14, 2008 @04:53PM (#22754790) Journal
    Speaking as the guy that ran the test, I should explain the "hop count" decrease observed in the test in more detail than the article. First, I should clarify that the 'hop' is a long-distance link between metro areas, because that is the resource that is scarce - we ignored router hops, because they aren't meaningful, and generally aren't visible inside ISP infrastructures for security reasons. This means that data that moves within a metro area is zero hops, data pulled from a directly connected area is one 'hop', and so on.

    So in the field test we saw data transmission distance drop from an average of 5.5 'hops' to 0.89 'hops'. This happens because P4P provides network mapping information, allowing the p2p network to encourage localized data transfers. Generic p2p moved only 6.27% of data within a metro area, while p4p intelligence resulted in 57.98% same-metro-area data transfer. Thus deliveries are both faster and cheaper.
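
    For concreteness, a minimal sketch of how that metric could be computed over a transfer log (the sample counts here are invented, not the test data):

    ```python
    # Inter-metro links crossed per transfer: 0 = stayed within the metro area,
    # 1 = pulled from a directly connected metro area, and so on.
    hops_per_transfer = [0, 0, 0, 1, 0, 2, 0, 1]  # invented sample

    avg = sum(hops_per_transfer) / len(hops_per_transfer)
    same_metro = hops_per_transfer.count(0) / len(hops_per_transfer)
    print(f"avg metro hops: {avg:.2f}, same-metro share: {same_metro:.0%}")
    ```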
  • by laird ( 2705 ) <lairdp@@@gmail...com> on Friday March 14, 2008 @08:47PM (#22756538) Journal
    "If they really want to help p2p, then they would expose this topology information to us p2p developers, and let us use it to make all our applications better. What they're likely planning is pushing their own p2p..."

    P4P isn't a p2p network. P4P is an open standard that can be implemented by any ISP and any p2p network; it has been tested so far with BitTorrent (the protocol, not the company) and Pando software, on the Verizon and Telefonica networks. Participants include all of the major P2P companies and many major ISPs. Participation in the P4P Working Group is open (and free) to any P2P company or ISP. Email marty@dcia.info, laird@pando.com, or doug.pasko@verizon.com if you're interested in joining the working group, or in getting email updates.

    There's more information at http://www.pandonetworks.com/p4p [pandonetworks.com] and at http://www.dcia.info/activities/ [dcia.info].
  • Re:400%? (Score:3, Informative)

    by laird ( 2705 ) <lairdp@@@gmail...com> on Saturday March 15, 2008 @01:57PM (#22760288) Journal
    "wouldn't it be a MASSIVE improvement if the ISPs just gave you a flat list of IPs within your metro area, no routing or anything like that?"

    That's an improvement, but with information about the structure of the ISP's network you can connect people within that network much more efficiently. For example, Verizon Internet has customers all over the US, Japan, Europe, etc., and it's better to connect people within (for example) the New York metro area to each other first, and avoid moving data through trans-Atlantic and trans-Pacific links. So far in talking with ISPs, these network maps aren't hard to generate, because they use automated systems to configure their routers, and the same data can generate network maps for P4P.
