Ian Clarke on Peer-to-Peer

Simone of O'Reilly writes "On Freenet, the more popular information gets, the more copies it generates--and the easier it is to find and download. That's just one significant feature of this promising peer-to-peer network. Freenet inventor Ian Clarke may not be talking about his new company, Uprizer, but he has a lot to say about how decentralized architectures can fix what ails the Internet. Here's the interview." We've heard from Clarke before, but this is an interesting piece.
  • Ever heard the term slashdotted? Come on, Web pages NEED a distributed solution.

    Freenet does support frequently updated content: see here [sourceforge.net] for instructions on web pages on freenet. The solution used is arguably better than the current one, in that older websites/old versions of websites are not lost.

  • The problem with Usenet is that it indiscriminately distributes information. By making the propagation decision based on popularity, size [and to a lesser extent, location], you solve the problem Usenet has.

    Whatever happens, local storage is always going to be cheaper than bandwidth, so caching is good.

    I'm a bit freaked out, actually. About three years ago, I came up with a similar design called 'Osmo'. (Unfortunately, I wasn't really in a position to do anything about it.) In that interview, Ian used almost identical arguments and examples.

    The thing is, Freenet will build the virtual infrastructure. The next step is to remodel content. By building state machines coded in, say, Java (or any other mobile language), you can model a dynamic resource as a set of transforms from raw data (either from a fixed source, such as a corporate database, or from distributed files) to displayed content.

    These state machines can be transmitted the same way static data currently is. By making the propagation decision based on weighing up the amount of data lost by transmitting the state machine versus the amount of data saved by using the transform (through the removal of undisplayed data), you can balance the system across a number of machines from source to destination.
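
    In the terms used here, such a transform might be nothing more than code shipped around like static data; a minimal sketch (all names invented for illustration):

        import java.io.Serializable;

        // A mobile "state machine": code that travels the network like static
        // data and turns raw upstream data into displayable content at any node.
        interface ContentTransform extends Serializable {
            byte[] render(byte[] rawData);
        }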

    This also gives the benefit of allowing two-way interactive stream-based content -- something the page-based web cannot handle.

    Now, the next step is metadata: one thing Freenet currently doesn't handle too well. By propagating metadata, the entire Freenet network would act as a single, distributed directory... a worldwide decentralised Yahoo! with a lot of redundancy.

    Once that's done, logging/usage data needs to be tackled. This can be done by packaging the information stream and reverse-propagating it: almost like magnetic data that slowly works its way back home. This can be done with a high importance but a low urgency.

    So, we've got Popularity, Size, Location/Distance-From-Home (little use in keeping a copy of something when the canonical source is only one hop away), Importance, Urgency, Cost, Benefit and Local resources.

    Convert all of those to fractions between zero and one. Multiply them together. You get an 'X' factor. If that 'X' factor is in the top, say 10% of items you currently store, try to propagate it. If it's in the bottom 10%, ditch it.
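
    A minimal sketch of that scoring rule, assuming each factor has already been normalised into (0, 1] (the factor list and the 10% thresholds are this comment's proposal, not anything in Freenet):

        // Hypothetical cache-scoring sketch of the scheme described above.
        import java.util.*;

        class ScoredItem {
            final String key;
            final double[] factors; // popularity, size, distance, importance, etc., each in (0, 1]
            ScoredItem(String key, double... factors) { this.key = key; this.factors = factors; }
            double xFactor() {
                double x = 1.0;
                for (double f : factors) x *= f; // multiply the fractions together
                return x;
            }
        }

        class CachePolicy {
            // Propagate the top 10% of stored items by X factor; ditch the bottom 10%.
            static void apply(List<ScoredItem> stored) {
                stored.sort(Comparator.comparingDouble(ScoredItem::xFactor).reversed());
                int tenth = Math.max(1, stored.size() / 10);
                for (ScoredItem item : stored.subList(0, tenth))
                    System.out.println("propagate " + item.key);
                for (ScoredItem item : stored.subList(stored.size() - tenth, stored.size()))
                    System.out.println("ditch " + item.key);
            }
        }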

    When requesting an object from another node, make sure the other node tells you if it knows anywhere else you could've got it from within that node's immediate vicinity. Cross-reference these. Work out what nodes are near and have the information you want the most, and reconfigure your peering list. Hey presto: a dynamic, self-repairing, self-optimising network.

    Put it all together and you get the replacement for the world-wide-web.

    Persuade Microsoft to write 'Word 2010' using the mobile state machine architecture, and you have truly distributed applications. If they increase the resource requirements in 'Word 2011', don't bother running around all 60,000 PCs in your organisation with a screwdriver and a buttload of DIMMs. Just pop them all into a big application server on your network, and make sure you overspec all new PCs. Then the equation will rebalance in favour of leaving parts of the application on other machines, like your colleague's while they're on a business trip.

    So, now we've not only got a new world-wide-web, we've got a true '.Net' strategy. Chew on that one, Microsoft.

    Okay, I've rambled there, but since it looks like Ian Clarke's being a bit tight-lipped about his ultimate goal, there's one scenario which might work.
  • That's why fnnews was created. It hasn't been updated for Freenet 0.3 (I'm very busy, and some other people who are thinking of reimplementing it, such as Brandon Wiley, haven't gotten around to doing so). What it is is a Usenet-like, enumeration-based newsgroup system. Individual posts are static content, but fnnews newsgroups behave like dynamic content (of course, you have to have a client which supports it - and fnclient hasn't been updated for Freenet 0.3.x). Once fnnews is updated, Freenet can really have discussion over it. There is also fnindex, which IIRC may have been updated by Brandon. It is an index system similar to fnnews, except that it is designed for in-Freenet key indices rather than discussion.
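
    If I understand the enumeration idea correctly, a client can poll a group simply by counting key names upward until a fetch misses; a rough sketch (the fetch() interface and key layout are stand-ins, not fnnews's actual format):

        import java.util.*;

        class NewsgroupPoller {
            interface Fetcher { byte[] fetch(String key); } // returns null on a miss

            // Collect new posts by requesting "group/n" for increasing n
            // until one is absent; posts themselves stay static content.
            static List<byte[]> poll(Fetcher net, String group, int firstUnseen) {
                List<byte[]> posts = new ArrayList<>();
                for (int n = firstUnseen; ; n++) {
                    byte[] post = net.fetch("fnnews/" + group + "/" + n);
                    if (post == null) break; // nothing numbered n yet; try next poll
                    posts.add(post);
                }
                return posts;
            }
        }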
  • On more technical grounds, you may also be able to get around the AUP, given that many clients are also servers and many servers are also clients.

    If we take the definition of "server" to be any system for the transmission of data to someone requesting it (i.e., a "client"), then when a webserver looks at the cookie it planted on your machine, you are being a server and the webserver is the client in that transaction. So if they put a "no servers" clause in the AUP, I suggest you mail them about this; it might send their lawyers into an infinitely recursive loop.
  • by Eimi Metamorphoumai ( 18738 ) on Thursday November 16, 2000 @04:49PM (#619175) Homepage
    Actually, very little content is stored directly under a named key. What happens is you store the data under a key whose name is a hash of the contents. Then, in a separate key with a real name, you include a redirect to the hash key. So you would only have one copy of, say, the GPL, even if it has a dozen names. MP3s might have a lot more "duplicates", but none exact (i.e., you'd have one at 128 kbps, one at 112, another at 128 that wasn't quite ripped as well, etc.). Nothing can be done about that (well, not easily).
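
    A sketch of that two-level scheme (the in-memory store is a stand-in for a node's data store, and the key prefixes are just labels, not Freenet's real key syntax):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.util.HashMap;
        import java.util.Map;

        class ContentStore {
            final Map<String, byte[]> store = new HashMap<>();

            // One copy of the data lives under its content hash; any number of
            // human-readable names can redirect to that single hash key.
            void insert(String name, byte[] data) throws Exception {
                byte[] digest = MessageDigest.getInstance("SHA-1").digest(data);
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));
                String hashKey = "hash/" + hex;
                store.put(hashKey, data);
                store.put("name/" + name,
                          ("redirect:" + hashKey).getBytes(StandardCharsets.UTF_8));
            }
        }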
  • Mail over Freenet has already been figured out and implemented (not by me - I only designed and created the in-Freenet indices (fnindex) and the Freenet newsgroups (fnnews)). Those were implemented back in the summer of 2000, and now that everyone is busier, no one has had the time to reimplement them for Freenet 0.3.
  • The point is not to replace the WWW - but to provide an alternative to distributing static content which is free from censorship, the slashdot effect, and inefficient network infrastructures. None of the Freenet team seriously think that Freenet will replace the WWW completely (if you disagree - please provide evidence).

    --

  • by Anonymous Coward
    ... for data on your computer in a non-centralized data environment? I imagine you are. So when that oh-so-popular (insert favorite illegal item, e.g. juvi porn) that is stored on your computer is found by (insert your favorite government agency here), they are sure to understand that it is not your responsibility. Try explaining to (insert your favorite computer-illiterate judge) that the aforementioned item was encrypted to protect you from the data, not the data from them. P.S. How big a partition do you use for Linux swap? And then, is the space for Freenet to use half that amount or twice it?
  • Popularity only makes it closer to the people who want content.

    That's all.

    It doesn't cause unpopular things to disappear.
    I'm not a guru on Freenet, but the FBI can track data as it moves between any two IP addresses. It doesn't matter if the data file is encrypted; it can be tracked unless every machine-to-machine exchange is uniquely obfuscated.

    End-to-end encryption is implemented. All significant (>32KB) data is inserted under its SHA hash. Forgery is impossible. Pointers to data may be inserted encrypted and signed with the publisher's public key.

    Link-by-link PKI will emerge shortly. Traffic analysis is nevertheless possible. More complex node behaviors have been proposed (transferring filler data, regular intervals for transfers, etc.) but may prove too costly for widespread deployment. An ultra-secure variant of Freenet is a plausible endeavor.
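
    Checking such a signed pointer on retrieval is plain java.security work; a sketch (the pointer layout and the choice of SHA1withDSA are illustrative assumptions):

        import java.security.GeneralSecurityException;
        import java.security.PublicKey;
        import java.security.Signature;

        class PointerCheck {
            // Accept a pointer only if it verifies against the publisher's
            // public key; a forged pointer fails this check.
            static boolean verify(PublicKey publisher, byte[] pointerBytes, byte[] sig)
                    throws GeneralSecurityException {
                Signature s = Signature.getInstance("SHA1withDSA");
                s.initVerify(publisher);
                s.update(pointerBytes);
                return s.verify(sig);
            }
        }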

    Wheelchair salesmen wield chainsaws in search of new customers.

  • Freenet is only good for 'popular' information. Ian says that it drops the least popular information. Just because the masses like something doesn't mean it's good. I may not be interested in a Britney Spears MP3 or a 'get rich quick' article. I may be interested in something obscure, and by its very definition, Freenet will make that bit of information/file/whatever difficult to find, if it stores it at all. That's a pretty serious flaw in this system. On the Web, you can find any information, no matter how obscure or strange it is. If the Web were like this, the only thing we'd have would be a lot of news and shopping web pages. Ugh!

  • I'm now on a cable modem, and we're not permitted to run servers by the AUP. Nor do I have a choice, because DSL is unlikely to become available any time soon - I'm too far from my CO. Based on the probability of increased legal pressure by parties like the RIAA, MPAA, etc., who want publication to remain a 'big-shop' capability, I expect the no-server trend to continue and increase. From the ISP's point of view, it's a way to control liability. (Common-carrier status is far from assured; erosion has occurred already.) From Joe Sixpack's point of view, no big deal. Small-time hobby-oriented web space at the ISP and places like GeoCities serves just fine.

    Give it less than 5 years, and the ability to 'Peer' may well be a big-budget item, past the range of the small-timer.
  • Instead of clicking on a reply link, you write a counter-argument or supplement to a piece by another author and upload it. If it's well-written and informative, then it will presumably be passed around by lots of people and spread through Freenet. Which would also (in theory) create interest in other pieces on the same subject....

    Yes, I am sure something like slashdot could be built within freenet - but the end user experience would be radically different. I could not respond to your response as quickly as I am today - heck, I might not even see your response. It just would not be the same.

    Look at Usenet: it is distributed, and some people have attempted to build status and moderating systems into it using smart clients (as another respondent suggested), but it still produces nothing like the experience of the real Slashdot or other discussion-based web sites.

    There are some services that a distributed system with the current communication limitations of the internet cannot provide.

    -josh

  • "Instead of clicking on a reply link, you write a counter-argument or suppliment to a piece by another author and upload it. If its well-written and informative, then it will presumably be passed around by lots of people and spread through Freenet. Which would also (in theory) create interest in other pieces on the same subject...."

    Wow! You've just reinvented fidonet!!

    On a more serious note, I really don't want to go back to something that takes that long to proliferate, if that is indeed what would be required for this to work. Slashdot's strength is in its real-time nature and its relatively small downtime. If I have to wait days, hell, even hours, to post and reply in discussions, then I'll just not bother. This may sound a little bit conceited, but I believe that there is such a thing as sacrificing _some_ security in favour of speed, assuming that the actual gains outweigh the failings. Since Slashdot updates very quickly and is fairly easy to use anonymously, I'll stay with it. Freenet is a good idea, but I don't know enough about its logistics to know if it can even be done at the scale currently in use, let alone the speed...

    "Titanic was 3hr and 17min long. They could have lost 3hr and 17min from that."
  • From the article:
    On Freenet's conceptual forebears: "The intention of the original Arpanet was ... to create a decentralized system, the idea being that if there was a nuclear war, the only two things to survive would be cockroaches and the Internet. ... I think that really Freenet in some ways is the realization of the original creators of the Internet."

    Wasn't it shown to be a myth that part of the design of the Internet was to withstand a nuclear war? I remember hearing a quote that even the military isn't stupid enough to build a system that will still be around when there wouldn't be anyone left to use it.

  • Yes, there are some kludges using subspaces, but fundamentally each document stored is a static document.

    Suppose I don't care about storing versions of my web site? Suppose I change prices on my products on a daily basis and I cannot afford to have a single straggling copy of a web page with the wrong price on it? Suppose I have to absolutely positively guarantee that every portion of my web site is viewable to every user who visits?

    There are just some things freenet will never do well.

    That said, when is someone going to come up with some good solutions for mirroring and distributing dynamic content?

    -josh
  • I attribute this fault to Freenet's coarse-grained object model (which I in turn blame on Unix, but that's an entirely different story). It's built with heavy-weight "documents" in mind, not light-weight, generalised, abstractly-modeled "objects". A proper implementation of the Freenet concept would be like a mega-Slashdot (or more appropriately, a mega-Everything2 [everything2.org] or somesuch): a gigantic distributed object database. Better yet, instead of having a predetermined interface, the software could have transparent bindings for all popular languages; it could also support easy object format changes; et cetera...

    But of course, that's "awfully hard" to understand and implement... especially when you're dealing with stupid, static languages... sigh

  • The data is always the same. It is encrypted. The key is only known by Freenet - not by the author, not by anyone.

  • IIRC, if you know the key for the official copy of the data, then that's what you're getting. Freenet's architecture includes checksum-based keys -- meaning that if you know the key (which includes the checksum), your client can verify that the file is correct.

    That, and you can just use public-key authentication for verification of authenticity.
  • Seriously, most of Stirling's works are about authoritarian worlds where slaves are kept because it is "better" for them. You see, in his novels people are too stupid to run their own lives, and his protagonists enslave others for the good of humanity. At least John Norman's Gor novels had some good aliens to go with the S&M, bondage, etc.

    I believe he came out of the Belgian Congo and still thinks like an Afrikaner from the 1930s. S.M. Stirling's Draka novels reflect his personal ideas just a little too much, from what I see of his postings in Baen's Bar: http://www.baen.com/bar/Default.htm

  • then it's a problem. You know the banner ad that looks like a Windoze dialog box? It says something like "Warning: Your web site has not been submitted to all search engines. Click here to submit it to 10,000 search engines."

    I can imagine it now. You're using Freenet, and you see this banner ad that says, "Warning: Your freenet content may not be permanent! Click here to subscribe to our service, which guarantees to request your content 10,000 times a day." It would be a kind of popularity inflation effect. Everybody who didn't abuse the system would get their content labeled "unpopular."

    --

  • It's not all that bad.

    If you want your node to store your information, you can force your node to retain that information. OTHER nodes may not store it if nobody else requests it, but that's fine -- nobody'll mirror a big web page that nobody wants to read, nor should they be forced to.

    Anyhow, once enough people run nodes, information will need to be quite unpopular to be dropped completely off the network. Only marginally unpopular stuff may become rare enough to take longer to retrieve (not mirrored close to you), but that's a big difference from entirely unavailable.
  • IIRC, VC has been discussed on the Freenet mailing list -- indeed, I think proposals exist for implementing it.
  • Ahh, I remember those days... the free Unix BBS systems... Free-Nets! Societies of computer people who knew how to use Unix. Are people getting dumber? Because it seems that now people can't use Unix or even Windows anymore, yet ages ago they could! Hrmm, what's going on here?

  • I don't think you read the article too closely. I am NOT a Freenet guru, but I think I can address this. True Freenet gurus, feel free to jump in and correct.

    1 - There is no way to identify who asked for the document. All you should know is which of your neighboring nodes requested it. You should have no idea if it was that node, or one 30 hops away.

    2 - Please elaborate. What makes you think that could mess up the system? In what way?

    To deal with your third point, which is blatantly wrong: this is not Gnutella. There is no Gnutella-style search - you have essentially a URL. You request it, and it gets passed hand over hand to the correct node, NOT to every node in the net. Then it does its cool little data-caching thing, wherein it may move the data closer to you if a lot of people in that "direction" are requesting it. I don't know the internals of the URL, but I KNOW it doesn't work like craptella.
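
    A toy model of that hand-over-hand routing, with caching on the return path (the closeness metric and hops-to-live limit here are illustrative; real Freenet routing is more involved):

        import java.util.*;

        class Node {
            final String id;
            final List<Node> neighbors = new ArrayList<>();
            final Map<String, byte[]> store = new HashMap<>();
            Node(String id) { this.id = id; }

            // Forward the request to the unvisited neighbor judged "closest"
            // to the key, decrementing hops-to-live so requests can't wander
            // forever; cache the answer on the way back toward the requester.
            byte[] request(String key, int hopsToLive, Set<Node> visited) {
                if (store.containsKey(key)) return store.get(key);
                if (hopsToLive == 0) return null;
                visited.add(this);
                Node best = null;
                for (Node n : neighbors)
                    if (!visited.contains(n) &&
                        (best == null || distance(n.id, key) < distance(best.id, key)))
                        best = n;
                if (best == null) return null;
                byte[] data = best.request(key, hopsToLive - 1, visited);
                if (data != null) store.put(key, data); // data migrates toward demand
                return data;
            }

            static long distance(String a, String b) { // toy closeness metric
                return Math.abs((long) a.hashCode() - b.hashCode());
            }
        }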

    Everything I have read about this system says that it is brilliantly designed. I am waiting anxiously for it to hit prime time.

    Cheers,
    Jason
  • So what exactly is your point?

    My point, in case you missed it the first time, was that even rabid info-anarchists like Clarke get nervous about their own information "wanting to be free".

    The disposable credit card numbers do not "effectively" solve the problem. You still have a permanent account number that, if made public, can allow someone to have a field day with your money.

    You really think all information "wants to be free"? Fine. Gimme all your passwords. While you're at it, give me whatever information is necessary to drain your bank account.

  • There are already HTML documents on freenet which link to other documents using the freenet: URI scheme. This makes it possible to write a spider to index freenet, much like web search engines do today. This also means that searchability doesn't have to be built into the protocol itself, just as searchability isn't built into http. You just need hypertext, a spider, and a search engine. Of course, this way, searches would take place outside the freenet domain (probably via a regular web site), but that would be fine for regular use.

    Other, "sensitive" lists of links could be published via freenet periodically in someone's own key subspace and with a predictable date-based format.

  • "On the Web, you can find any information, no matter how obscure or strange it is..."

    Not if you try to use any of the big search engines! You'll find links to porno javascript popup windows from hell instead!


    "Titanic was 3hr and 17min long. They could have lost 3hr and 17min from that."
  • by Alioth ( 221270 ) <no@spam> on Thursday November 16, 2000 @12:52PM (#619199) Journal
    The problem with Freenet (and Ian Clarke has never really discussed it) is that data that's not accessed a lot will get dropped.

    The great thing about the Internet now is that I, as an individual, can publish pretty much anything. I can write music and put it out, I can write fiction and put it out where people can come by and access it. Until the early 90s this was just not possible. If my stuff is not enormously popular - so what - people who enjoy that kind of thing can still get it. I can publish to my heart's content and the few hundred readers can read it. Similarly, I can go and get obscure stuff myself - something that wasn't possible before the internet showed up in its current form due to publishing barriers.

    But Freenet will just drop this stuff because it's not popular - and this seems like a retrograde step to me. It re-erects those old barriers to publishing that the Internet is destroying - and eventually, Freenet just holds what the Sheeple want. We end up with a network that's no better than TV or the print press - containing only what's popular. We end up with masses of Britney Spears or Blink 182, but you can't find something like the Bottom Feeders or Bradley N. Litwin.

    So to summarize: Automatic for the Sheeple.

  • I agree that that would be a problem, because the whole point of the Internet (besides ARPANET and all that, so don't flame me) is for people to be able to share information regardless of who they are or what they think. Only making the most popular documents available makes Freenet more like mass media on TV than the Internet. Perhaps a good search engine could solve this. P.S. I know someone is going to post and say that the Internet is dominated by biased information from the same sources that give us TV media, but I wish to point out that the other information is far more accessible there than it would be on Freenet.
  • Umm, no. What's Hotline? And how did you get negative karma? You're not a troll!
  • You might be right, but I thought it was not so it would last, but so the country could keep communications open during an attack and coordinate the defense/retaliation (most likely). The distributed idea is what made it so useful for this, because knocking out one node would not kill the whole network.
  • Firstly, I told the journalist that it was a joke and then checked it to prove it to him. If my credit card number was publicly available then it wouldn't take Freenet to distribute it widely - it could be posted on Usenet, any number of mailing lists, or even here on Slashdot, and I would have no legal recourse. If you want to keep a secret, please do, but don't rely on the law to enforce your secrecy if you are careless.

    --

  • This guy clearly hasn't even looked at the Freenet webpage, nor does he have any idea of how Freenet works.

    --

  • Most of these issues are addressed in the FAQ and the rest are addressed elsewhere on the website.

    --

  • Freenet's primary goal is to provide people with free speech - Mojonation's isn't. Freenet is also supposed to be globally scalable (i.e. the whole system acts as one giant network). Mojonation relies on servers - sure, you can have more than one, but there is more than one Napster server too - it is still a centralized architecture. Freenet is designed to rely on no person, computer, or organisation. Mojonation relies completely on a "bank" to sign Mojodollars - and were this bank to be corrupted, it could kill Mojonation by flooding the system with newly forged currency.

    --

  • As he said in the interview, as far as having info closer, it aids in nullifying the Slashdot Effect, and also allows you to view popular information quicker. The relative value of pieces of information has absolutely nothing to do with the project at all.

    Today was just a day fading into another-Counting Crows

  • "No legal recourse"? If someone posted the numbers but nobody used them, maybe. But if someone started spending your money without your permission, that would be fraud, and you certainly would have legal recourse. The person who posted the numbers could be busted for facilitating a crime, or consipracy, or acting as an accomplice, etc.
  • Not really. What about the situation where the web page is just a (and possibly not the only) 'front-end' interface to a real-time backend system?
  • Fortunately, Freenet would propagate the content towards the node that requests it. Eventually, the node would be sending 10,000 requests to the nearest node - itself. That would likely result in that node flooding itself with requests, and not likely affect the system as a whole. Sure, this would likely contribute to the permanence of your content, but at what cost? For what they would charge you for the service, you'd probably be better off storing the content on a static web site if permanence is your goal. Or if you need distributed permanence, look into the Eternity Service.
  • You really think all information "wants to be free"? Fine. Gimme all your passwords. While you're at, it give me whatever information is necessary to drain your bank account.

    Sure thing. Just give me your real name, address, the time you will be doing the transaction, the time and place the police can pick you up, and the accounts you've placed your ill-gotten gains in. This information will also want to be free.

  • Couldn't the nodes that cache the information in the existing structure respond to cache timeout values generated by the original source of the content? Then the intermediary nodes could update the information they store periodically, keeping it current, and distributed. How would you handle sites that store client-based state information though? Would cookies still work?
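
    That could look something like HTTP's expiry model applied to a node's store; a sketch (Freenet has no such timeout field today, as this thread notes, so everything here is hypothetical):

        class CachedEntry {
            final String key;
            byte[] body;
            long fetchedAt;          // when this copy was last refreshed
            final long ttlMillis;    // timeout suggested by the original source

            CachedEntry(String key, byte[] body, long ttlMillis) {
                this.key = key; this.body = body; this.ttlMillis = ttlMillis;
                this.fetchedAt = System.currentTimeMillis();
            }

            // An intermediary node would re-request the key when this turns
            // true, keeping its copy current while staying distributed.
            boolean stale() {
                return System.currentTimeMillis() - fetchedAt > ttlMillis;
            }
        }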
  • by Apotsy ( 84148 ) on Thursday November 16, 2000 @01:03PM (#619213)
    Remember that Time Magazine article [time.com] from last year that talked about Ian Clarke? They went through all the usual "information wants to be free" stuff, then the last paragraph of the article describes him coming across a file called "Ian Clarke's credit card numbers". He checked to make sure that it was just a joke and didn't really contain his credit card numbers. Guess not all information wants to be free -- eh, Ian?

    Note: For those of you too lazy to read the whole thing, the part I am talking about is on the second page [time.com] of the aforementioned article.

  • What happened to good old-fashioned anonymous FTP?
    What happened to FSP, similarly?

    Phil
  • On a time scale, this works wonderfully for daily updates, where a slightly popular piece of content is pretty sure to be mirrored nearby. But for Slashdot, it would probably generate just as much traffic (or more, depending on the overhead of requesting updates to the latest second). For dynamically generated content using what-have-you, it would just not work at all. All my Slashdot pages say "Pink Daisy" on them somewhere... I'm sure no one else would want to cache those. I could find my computer at work caching a copy of Slashdot for each of my coworkers, who skip over it for the latest version. Also, and I'm not entirely sure about this, I thought that old content becoming less popular and being removed was one of the serious difficulties with Freenet.

  • Peer-to-peer is not just about file sharing; it's about collaboration and communication at a higher level. For the ISPs to stifle it will be about as difficult as taxing email - simply put, it ain't gonna happen.

    Bandwidth is just getting faster, and the P2P technology is getting fancier. To block any kind of 'incoming call' from any source on the Internet to DSL/broadband customers basically turns the Internet into television.

    I can understand your fear, but rest assured: your prophecy of a non-contributing Internet nation is not reality.

    This was not meant to be a flame. Cheers




    --------------------
  • As you say, the regular old internet of today is a great place for "niche" information that is of interest to only a few. But, as Ian says, it doesn't scale well as "niche" information becomes more popular. For example, if you're distributing a piece of software from your little web site and suddenly all of /. wants to download it, you're hosed. But if you move it to Freenet, suddenly you're a lot less hosed than before.

    In short, I don't think Freenet's distributed distribution system (for lack of a better term) and the ftp/web's centralized distribution system need be exclusive - there's room enough for both. And we need something to ease the problem of getting popular data (like the latest Linux kernel, or a new distribution, or whatever) distributed.

  • Is there any possibility of building Freenet capability into existing web browsers? So we could just type a URL like fnet://key=k3jdJd8LDjdk/ and access the file?
  • Ah. A number of myths. There is a bank in MojoNation, but it is only an arbitrage service for mojo, allowing you to convert between dollars and mojo. Anyone else can set up a rival arbitrage service. You don't ever need to interact with the bank, even to set up a server.

    There is no central server either: anyone can run any service, and everyone has to do *something* to obtain mojo. The fundamental difference between MojoNation and Freenet is the different ways they seek to tackle the free-rider problem. I think MojoNation does it in a way that better ensures longevity of unpopular data.

    I suggest you check the FAQ to avoid spreading myths.

  • Ouch! Sorry, Ian, I don't think you're ignorant. I stand by the content of my other post, though. In particular, I think the point about longevity is important: if I want to keep some large and boring-but-important historical archives around, I don't need to win a beauty contest with MojoNation, and so I think it is better for this kind of application. Freenet deals with some free speech issues, but it doesn't deal with them all.
  • Ian: actually I think we are on the same side. We intend for Mojo Nation to provide people with free speech, and we intend for Mojo Nation to be globally scalable, just as you intend for Mojo Nation.

    The only reason Mojo Nation was launched as a separate project is because the founders believed that an anarchic system could never scale without integrated microcurrency to solve the Tragedy of the Commons.

    I would rather discuss how Freenet and Mojo Nation can cooperate than how they can compete, at this stage. We are both open source projects with the same goals, and the whole point is to share information between peers, so it seems natural to link the two networks together.

    If you'd like to talk, e-mail me at "zooko@mad-scientist.com".

    Regards,

    Zooko

    Evil Geniuses For A Better Tomorrow

  • Duh -- obviously I meant to say "just as you intend for Freenet". Sorry!

    Zooko

  • I don't buy this at all. It's mostly true, but it's not a mantra, and it doesn't happen automatically.

    In particular this 'local not global' doesn't address the issue of supporters of an idea scattered all about.

    I can imagine ideas and files dissipating prior to attaining critical local mass, although its supporters may attempt to contact one another and set up such a node where their ideas would be popular. So the idea of popularity being local has merit, but its attainment is not automatic - that is to say a desirable equilibrium might not be reached except by considerable effort. It's not an 'invisible hand' kind of equilibrium.

    There may indeed be enough demand for a file, but maybe it will disappear before the demand coalesces.
    ---

  • A few questions/comments on my mind that weren't touched were:

    The Web is becoming much more of a cached entity, such that the "1000 people in the UK causing a page to cross the Atlantic 1000 times" scenario is becoming less true, especially for the Yahoo!s, CNNs, etc. Given that this is the way the Web is going, besides adding encryption and anonymity, how is Freenet really different from a hierarchy of web caches?

    Isn't replication of data the reason that we were supposed to have URNs (Universal Resource Names) in addition to URLs? My understanding was that a URN would address data in a location-independent way, and would resolve into a valid URL which the browser would use to retrieve the data. It seems to me that URNs were skipped because they posed a difficult problem and now we're paying the price for taking the easy path in the early days.

    One of the great things about the advent of Mosaic was that it placed a unified interface in front of what were, up until that time, separate services (gopher, nntp, http, and to some extent telnet). Why is it that we haven't seen more growth in the protocol (e.g. "http://") area of URLs, so that newer technologies can leverage the public's acceptance of the browser interface? It seems to me that this spawning of applications will lead us back to the confusion of having to use specific applications for specific services. In other words, I'd like to see freenet://... (or gnutella, napster, etc.) URLs (er, URNs) in Mozilla someday, if not in the commercial browsers. Is there any chance of that happening?

    Anyway - I hope these comments aren't complete drivel. :-)
  • This really isn't a problem. If somebody wants to set up a server farm with 10,000 machines, each requesting a certain document, then you are free to do that... even if you distribute them across the world.

    With Freenet, it isn't a problem, because you are then saying that all the nodes have a specified piece of information, and it is all done on your dime anyway, so who is getting hurt? All that really happens is that if I decide to access that data as well along the line, the likelihood of finding a node with the data is going to be pretty high. Indeed, in such a scheme it still wouldn't be a problem even for the people running Freenet, because you've just added 10,000 nodes to Freenet, and at least some of the server space on those nodes will still be available to store stuff that belongs to other Freenet users. It would be a win-win situation.

    As far as making a bot to keep requesting a piece of information, all that it would affect is your local node, so it would at least allow others to grab it off of your node if it somehow became a piece of "popular" data. That sounds like a very good piece of software you should write... so please submit it!
  • even rabid info-anarchists like Clarke
    I disagree with Clarke's position against all ownership of information, but that doesn't make him "rabid." The guy is an idealist who's done something about his ideals (unlike Shawn "Bertelsmann" Fanning).

    get nervous about their own information "wanting to be free".
    Traditional anarchists aren't against all social organization, and they certainly don't envision a situation where roving bands of skinheads come to your home and walk off with the furniture. Anarchism involves replacing legal protection of property with a system of voluntary social cooperation. I don't know if Clarke would object to "info-anarchist" or not, but advocating an end to ownership of information is not the same as saying that there can be no privacy. It just means that you protect your privacy by other means besides intellectual property laws.

    --

  • by vergil ( 153818 ) <vergilb&gmail,com> on Thursday November 16, 2000 @01:08PM (#619227) Journal
    Recently, Kuro5hin hosted a discussion [kuro5hin.org] that focused on sci-fi author S.M. Stirling's rabid reaction to the concept of Freenet.

    According to the K5 article, Stirling advocated the implementation of laws requiring that ID tags be affixed to data traversing Freenet.

    "I propose a law requiring a transparent tag showing origin and history on any file on any server, and that the file be immediately accessible on request. The authorities should develop and send out a "sniffer" intelligent agent program to detect files not meeting these criteria. Immediately shut down any server/node that doesn't reply properly. With really... severe... penalties for anyone owning hardware harboring pirate files. Sufficient to make them take elaborate precautions not to do so."

    Furthermore,

    Stirling claims that he talked to the FBI, who told him that they have the ability to penetrate Freenet's anonymity. I suspect that either they were (a) blowing happy smoke Stirling's way, or (b) they were thinking of Carnivore catching the evil copyright violator's insertion at the ISP, before it actually enters the Freenet.

    To some extent, I can empathize with Stirling's fears as an author -- I wouldn't necessarily want someone to reproduce my copyrighted works with impunity and scatter the texts to the winds. However, I find Stirling's "draconian" (to use his own word) reaction unsettling.

    I'm wondering about the viability of Stirling's proposed restrictions on Freenet. Are such measures feasible (legally and technologically)?

    Sincerely,
    Vergil

  • I can think of several problems with Freenet, and how malicious people will gum up the works. I'd like to see these points addressed somehow.
    1. The issue of goodwill: Freenet can be subverted. Because anyone can run a Freenet node, it would be trivial for a black-hat to claim to have any information which is to be censored, and either return something else or look to see who is submitting the requests.
    2. The issue of spoofing. Merely faking the metadata on documents could really mess up the system.
    3. The issue of request propagation. A document which is widely distributed will be returned quickly, but a document which is present in only one place potentially requires the request to visit every Freenet node. Suppose that someone generates a bunch of bogus requests for documents which do not exist? Each request goes to every Freenet node, flooding the system.
    I know next to nothing about these issues, but I was still able to formulate the questions. Why don't we have answers? If Freenet is supposed to be able to function even in a very hostile environment, shouldn't it be proof against, or at least resistant to, these attacks? And we know it will be attacked, by bored script kiddies if no one else.
    --
  • by SaidiaDude ( 155962 ) on Thursday November 16, 2000 @01:25PM (#619229) Homepage
    Thoughts regarding P2P: what are the implications for security in the P2P world? It seems like it would be very easy for someone to crack the local client software and figure out how to breach security on a peer's machine by sending scripts, etc. If this is possible, the implications could be profound, as a cracker could gain access to hundreds of machines as the crack propagated itself around the P2P network. Infected clients could update software from a site other than the one intended by the end user (and thus infect more computers, etc.). The possibilities for security violations are endless... how do we prevent/reduce the chances of such harm to P2P networks? I.e., besides using regular security measures, open source, etc., what else would work? Redesign P2P clients to use more of a client/server architecture for software updates/patches (but maintain P2P connections for data - though issues of passing cracks disguised as data remain)? What else?
  • Suppose I change prices on my products on a daily basis and I cannot afford to have a single straggling copy of a web page with the wrong price on it?

    Suppose that you change your prices every time a customer comes to your site. Oh wait, someone already thought of that. :)
    _____________

  • Stop smirkin' those pills, mate.

    knock, knock, crash ...

    Err! Never mind
  • The problem with Freenet (and Ian Clarke has never really discussed it) is that data that's not accessed a lot will get dropped.
    Well, if you are so concerned about having your stuff dropped, you can just write a bot that requests it once in a while.

    --
    Americans are bred for stupidity.

  • In Mojo Nation we deal with the same problems of herding cats - in this case, the problem of how you can cache content efficiently and effectively without knowing what is really going on. Markets are one such distribution system, with a very long history of people tweaking them (and trying to cheat, which builds up protections against most types of fraud), and they actually do solve some of the problems you address. We are not as gung-ho on the whole "information must be free" bit as Ian, but we do know that these sorts of P2P systems are the ideal infrastructure for the next step in Internet content distribution.

    jim
  • Is this the wanker whose idea of utopia is the ubiquitous existence of "truth machines" that make all forms of dishonesty "impossible"? As if politicians won't find technological means to nullify the truth machines... while requiring everyone else to have no protection.
  • I am not sure that it matters for items of low value, but if a network is to be highly valuable, does it not also need to provide some level of versioning? It seems that access is part of a larger more complex problem.

    As information is distributed, there is the opportunity for errors (or active manipulation) to change the meaning or value of the document.

    (The book Darwin's Dangerous Idea touches on some of these issues and is the seed fueling my comment.)

  • The problem with doing dynamic content is that the P2P model fights against a lot of the basic simple assumptions a content provider can make in the current web model. We deal with these same problems in Mojo Nation [mojonation.net] and have been working on a few simple hacks for mutable content. Our data is basically a hash tree to preserve integrity (what you get is exactly what you asked for) but this means that when one comment changes the whole tree changes and the top-level ref is somewhere else; we could just point people to the new "master ref" for this tree and provide a slashdot-like experience but the tradeoff is that we are back to centralization.

    The next step for these systems is to pass around the code to turn the database of objects (which P2P systems like Mojo Nation and Freenet are good at distributing) into something dynamic and structured on the local client. Imagine giving the user a chunk of the /. database for an article, along with code explaining how everything should be formatted, etc. The presentation and organization are local (along with any dynamic effects), while the data is just a selection from the pool of possible objects. This would also mean that when you download the articles, you can pre-load the higher-ranked articles or use collaborative filters to trim out the bits you are not interested in and avoid having to download them in the first place.
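
    The "whole tree changes" effect mentioned above is easy to see in miniature; a sketch of a hash-tree root (SHA-1 stands in for whatever digest such a system actually uses):

        import java.security.MessageDigest;

        class HashTree {
            static byte[] sha(byte[] left, byte[] right) throws Exception {
                MessageDigest d = MessageDigest.getInstance("SHA-1");
                d.update(left);
                d.update(right);
                return d.digest();
            }

            // Root over a list of leaf hashes: change any one leaf and the root
            // (the "master ref") changes, so each new version needs a new ref.
            static byte[] root(byte[][] leaves) throws Exception {
                byte[][] level = leaves;
                while (level.length > 1) {
                    byte[][] next = new byte[(level.length + 1) / 2][];
                    for (int i = 0; i < next.length; i++) {
                        byte[] right = (2 * i + 1 < level.length) ? level[2 * i + 1] : level[2 * i];
                        next[i] = sha(level[2 * i], right);
                    }
                    level = next;
                }
                return level[0];
            }
        }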

    Freenet has some good caching mechanisms in there, but there is a balance which needs to be maintained between decentralization (which provides the censorship-resistant features of systems like Mojo Nation and Freenet) and dynamic information features that require a trusted codebase for execution. If Java had lived up to some of its hype, perhaps we could be passing around dynamic objects that contained information and presentation all in one bundle, and we would run these in our browsers without fear, but it just didn't turn out that way...

  • So freenet can't do everything. Big deal; it's not intended to. HTTP sucks at doing "real" dynamic content, too; that's why chat rooms, "news tickers" and suchlike are done as Java applets.

    While you can't do full database-lookup things, you could do versioning fairly easily by adding an "update key" to stuff that could be updated. This would be the owner's public key and a string encrypted with the owner's private key. When a message with the appropriate tag comes in to the local server, it supersedes the old information. It might simply add a "superseded by" pointer to the old version; there are good reasons for keeping old versions around, not the least of which is to guard against post-facto censorship. To get an old version, you simply add the version number to your request.
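
    A rough shape for that update-key idea (the field layout and signature algorithm are guesses at what this comment describes, not Freenet's actual mechanism):

        import java.security.GeneralSecurityException;
        import java.security.PublicKey;
        import java.security.Signature;

        class VersionedDoc {
            final PublicKey owner;
            final int version;
            final byte[] body;
            Integer supersededBy; // old versions stay around, merely marked superseded

            VersionedDoc(PublicKey owner, int version, byte[] body) {
                this.owner = owner; this.version = version; this.body = body;
            }

            // Accept an update only if it is newer and signed by the same owner;
            // then point the old version at its successor instead of deleting it.
            boolean supersede(VersionedDoc update, byte[] sig)
                    throws GeneralSecurityException {
                if (!update.owner.equals(owner) || update.version <= version) return false;
                Signature s = Signature.getInstance("SHA1withDSA");
                s.initVerify(owner);
                s.update(update.body);
                if (!s.verify(sig)) return false;
                supersededBy = update.version;
                return true;
            }
        }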

    As to something like Slashdot, note that it is not really "dynamic" information. It is a sequence of static submissions. Submissions could simply be Freenet messages. The only thing that is really dynamic is the moderation scores. These could either live on a normal website or go out as frequently updated messages (see above) from the main site. It should be fairly easy to do a Java applet that pulls it all together. (It'll be a while before browsers understand "freenet://" URLs. :-)

    BTW, I'm not familiar with the exact architecture of Freenet, but I would assume that when you put a message into the system, your local system tags it as the "definitive" version. It is then outside of the LRU cache scheme. It may take a while to get it from elsewhere in the network, but it's there.

    The big problem that I see with Freenet is that 95% (at least) of the data will be pr0n and warez. The warez kiddies are collectors and love to play "minez bigger than yourz" games. If one of them has, say, 100 pirated versions of Microsoft Office, he'll happily pump them all into Freenet just for bragging rights. Other kiddies will check them out and, like as not, relabel and retransmit them.

    There's no way that I can see to stop them; basically, it's a built-in denial of service attack.

    --
  • We have created some software called "FProxy" which does something similar - which is to allow you to type URLs like http://localhost:8080/thisisafreenetkey and the web browser will (after a short delay) return the data for that key. With this it is possible to create hyperlinked websites on Freenet.
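
    So any HTTP client can treat the local node as an ordinary web server; roughly like this (the key string below is a placeholder, not a real key):

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.net.URL;

        class FproxyFetch {
            // Ask the local FProxy gateway for a Freenet key over plain HTTP.
            static byte[] fetch(String key) throws IOException {
                URL url = new URL("http://localhost:8080/" + key);
                try (InputStream in = url.openStream();
                     ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                    byte[] buf = new byte[4096];
                    for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
                    return out.toByteArray();
                }
            }
        }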

    --

  • Isn't there a fundamental conflict between Freenet's anti-censorship goals and its popularity metric? In a democracy like the U.S. (excluding Florida), the temptation is for government to try to respond to the will of the majority by promoting popular speech (Christian prayers at football games,...) and suppressing unpopular speech (porn, socialist tracts, DeCSS,...). It's become pretty much axiomatic with organizations like the ACLU that unpopular speech is the kind of speech that needs protecting. So how does it make sense for Freenet to rely so heavily on popularity? (I suppose the opposite might be true in China, where the government promotes unpopular speech and suppresses popular speech...?)

    Another thing that worries me is that one of the characteristics of censorship is that it's mysterious: you don't know what you're missing due to censorship. This sounds a lot like what happens when your speech mysteriously disappears from Freenet, presumably due to low popularity. How do you know that it was really due to low popularity, and not to someone cracking Freenet? To me, the issue isn't really permanence, since dead-tree format is the only format that's really permanent on time scales of more than 30 years. It's the issue of not knowing how long your information is supposed to last. I'd rather know that my information will be there until I stop paying the bill to my webhost.

    Finally, it seems that a lot of the agenda behind proposals like the .porn TLD is to make it easier to recognize unpopular speech so that it can be censored. Doesn't it seem like running a Freenet node is the ultimate red flag being waved at the censors, saying, "Secret police, here I am"? Maybe the information is already out there and free, but your own wetware is now in a smelly jail cell...

    --

  • If you think that there is no more to Freenet than its caching effect then you really need to do more research [freenetproject.org] before demonstrating just how little you know.

    --

  • The author looks at the terrible P2P implementations in Napster and Gnutella and concludes that the P2P community in general understands the nature of the net so poorly as to make the same mistakes. If you look in the archives of the decentralization list on eGroups, you'll see that some people have addressed, and are addressing, the very issues he says will be a stumbling block. Just because the author can't think his way through the problems of bandwidth, infrastructure, and reliability doesn't mean that people with better minds than his can't.
    --
  • by cpt kangarooski ( 3773 ) on Thursday November 16, 2000 @01:43PM (#619253) Homepage
    It's a myth alright. As we saw last year, the Internet has trouble with a well-placed backhoe. Things are getting more robust all the time, but there's always a shortage of bandwidth, and when any significant amount is lost it's acutely felt by everyone.

    Packet-based networks were pretty much the development of people who had seen the benefits of then-new timesharing. The ARPANET was bandwidth-sharing. (There just weren't that many data lines back then, though early maps of the ARPANET show how few links there were between IMPs.) For any number of nodes n greater than 2, a minimum of n-1 lines is needed, yet there isn't the danger of having a single potential point of failure as in a star topology. (Naturally, you want a hell of a lot more lines than n to guard against failure, but it took years to get to that stage.)

    The nuclear war thing comes from an unrelated but contemporary (late '60s) RAND paper on the subject.
  • Mojo Nation uses digital token technology. The Mojo tokens actually sit on each users drive. However, they are digitally signed with RSA, so you can't make new tokens. Each token can only be spent once. Go read Applied Cryptography for an in-depth discussion of how it works.

    You get Mojo when you actually provide service for someone else (i.e. you let someone download a block from you, accept a block from someone, return search results, or relay messages for others). Tokens are given directly to the counterparty (though they exchange them for fresh ones right away so you can't spend the Mojo behind their back).
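
    In outline, accepting such a token is just a signature check plus double-spend bookkeeping; a sketch (the token fields here are assumptions for illustration, not Mojo Nation's actual wire format):

        import java.security.GeneralSecurityException;
        import java.security.PublicKey;
        import java.security.Signature;
        import java.util.HashSet;
        import java.util.Set;

        class TokenLedger {
            final PublicKey mint;                      // the token server's RSA public key
            final Set<String> spent = new HashSet<>(); // each token is good exactly once

            TokenLedger(PublicKey mint) { this.mint = mint; }

            boolean accept(String tokenId, byte[] tokenBytes, byte[] sig)
                    throws GeneralSecurityException {
                if (spent.contains(tokenId)) return false;   // already spent
                Signature s = Signature.getInstance("SHA1withRSA");
                s.initVerify(mint);
                s.update(tokenBytes);
                if (!s.verify(sig)) return false;            // not minted by the server
                spent.add(tokenId); // holder should now redeem it for a fresh token
                return true;
            }
        }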

  • by rjh ( 40933 ) <rjh@sixdemonbag.org> on Thursday November 16, 2000 @02:35PM (#619258)
    Are such measures feasible (legally and technologically)?

    Nope. Both of them are utter pipe dreams. The "transparent tag showing origin and history" already exists today, except it has a much shorter name and a much more spotty record. They're called "watermarks", and they're pretty much a joke. Just look at SDMI, which has had some brilliant minds tackling the watermark problem and, even after millions of dollars in research, they still haven't managed to come up with a way to stop a really determined 15-year-old.

    Translated into modern idiom,
    • I propose a law requiring watermarks on every file on every server, and that the files be immediately accessible on request.
    Problem number one: watermarks don't work.

    Problem number two: if the law is going to require that every file on every server be immediately accessible on request, that's going to play hob with e-commerce. Do you really want to place that order for Naked Amazon Women In Bondage from Amazon.com, knowing that anyone can send an email to Amazon saying, "Hi! Pursuant to the new Federal laws, I want to investigate your site to make sure you're not using any of my IP. Please send me all of your customer purchase records."

    The alternative to this, which Stirling probably means, is that the watermark be kept available, although the file may not necessarily be. That defeats the purpose of a good watermark; one of the principles of good watermarks is they can't be removed.
    • The authorities should develop and send out a `sniffer' intelligent agent program to detect files not meeting these criteria.
    Stirling, meet the First Amendment. If I don't want to include watermarks in my original works, neither you nor the government get to say whoopty-doo about it. :)

    On a technological note, I've got some experience with smart agents. At the present time, they're really not very smart. Remember that there exist such things as countermeasures; once people figure out what ruleset the expert system behind the agent is using, they'll figure out ways to avoid triggering the agent.
    • Immediately shut down any server/node that doesn't reply properly
    Violates due process of law. Shutting down a server does Nasty Stuff to online businesses, and would require that a court hearing be held. Remember, nobody can be deprived of life, liberty or property without the due process of law.

    This is the only proposal which is feasible technologically, BTW. After all, to take down a server all you need is a fire axe and strong arms.
    • With really severe penalties for anyone owning hardware harboring pirate files
    Violates the legal principle of mens rea, which basically means--"if you had no criminal intent, then you didn't commit a crime". If I'm an ISP and someone is running warez off their shell account, I'm not liable until I'm notified of the illegal copying and I have time to verify the allegations myself.

    Technologically unfeasible, too, given that many systems will be harbored in foreign countries which are not signatory to any such ludicrous treaty as Stirling is suggesting. To penalize the owners of those servers would require... well, a small Special Forces team could probably convey the US's displeasure, but that seems like overkill, doesn't it?
    • Stirling claims that he talked to the FBI, who told him that they have the ability to penetrate Freenet's anonymity
    Maybe true, maybe false. Sounds more like happy smoke to me. Think about this: if the FBI does have this capability, why in God's name would they tell anyone about it?

    Stirling needs to talk to his dealer about the purity of his rock.
  • What metric do you expect to use for this problem? There is a finite amount of space in which to store information, and more information than we have space for. Something has to give. In the end it will always come down to making this sort of choice. The name for this problem is "distributed resource allocation", and Freenet and Mojo Nation are two systems that at least consider it and provide a stab at an answer. Popularity is actually a pretty good metric for most things, and if you really want eternity storage then check out Mojo Nation [mojonation.net], where you can spend your credits to make sure that the one file you really care about sticks around regardless of whether or not someone else reads it (by paying others for its storage, of course...)

    This is not erecting new barriers to publishing, it is lowering them and letting anyone get in on the action. Nothing is for free, but if people work together we can make the cost so close to free that no one will really care. In the end, you need to have at least some cost for publication or else you are just shifting the problem from one of publishing to one of filtering out all of the crap that everyone else is publishing (which turns into its own set of messy problems.)

    jim


  • (I believe this is in our faq.)

    Freenet is a work in progress, and it isn't even half done at the moment. Ways to update data on the network have been on the table for almost half a year. It's not an easy thing to achieve, but we believe that it can be done, and I think we are 90% agreed on the method (I wrote up a detailed proposal a few months ago, which should be somewhere on the webpage). Don't hold your breaths, but I would certainly like to get started on it in the somewhat near future.

    That said, there are of course things that cannot be done on Freenet the way they are done on the web. Obviously you can't allow limited lookups against a database for example, but more often than not it will be a question of thinking different (let Freenet be the database...)

    / Oskar Sandberg
  • I hope you're right and I'm wrong. I just remember the RIAA boast about how they'd firewall us in. Quickly retracted, but I suspect it was a mistake of revelation rather than intent.
  • Unfortunately, in my experience, MojoNation is still too early in development to be usable. It also seems to be too centralized.

    Check out the new 0.920 version, just released yesterday! A much better install, faster, and a lot less buggy as well. The centralization issue is being worked on (there is a single bank, although peers use micro-credit between any two counterparties, so bank failure != system failure) and we are pushing things out to the edges as quickly as we can.

    Part of the advantage we think we have with a market-based structure is that it is easy for us to be flexible about control decisions and let local choices provide emergent behavior. For now, some stuff is centralized just because it was easier for us to do it that way and move on to the important bits that needed to get coded -- we are paying the price of this and going back to replace certain centralized features with more distributed solutions. But in the end, it is simple for anyone to build a better mousetrap to solve a problem within Mojo Nation and replace an existing market actor by offering the other agents a better deal. :) jim

  • by Gumby ( 425 ) on Thursday November 16, 2000 @12:02PM (#619266)
    Here was something I didn't understand from the explanation of this decentralized caching system. If I want to post an encrypted document that only I know about for later retrieval (say, in 5 years), how does the system prevent it from getting deleted from all nodes for unpopularity? If there is no central authority, doesn't that imply either that (1) documents can be lost, or (2) each peer has to be able to talk to all other peers to preserve unique but unpopular files? DoS sounds like a problem with this as well.
  • The problem isn't that fewer people know how to use them; it's that more people are using computers. That makes the proportion of knowledgeable users look smaller.

    ...I still get my email through the local freenet (which is no longer labelled as such)...
  • Ian Clarke mentions reliability of the network, but he doesn't mention reliability of the data. I'd rather spend a minute getting reliable data from across the Atlantic than get possibly falsified data from a neighbor.

    Anybody remember util-linux with a backdoor on a server in Holland?

    I'm not going to rely on any data from untrusted sources.
    I don't mean slashdot :-)

  • It is not my intention to compete with MN - I think that it is a good solution to the tragedy-of-the-commons issue, but I still assert that the aims of the projects are different. With Freenet we feel that people's ability to participate in the network should not be proportional to their personal wealth, which is the case with MojoNation when you really get down to it. This is not a criticism, it is just a difference in our goals.

    --

  • Last month I had to prepare a presentation about the perspectives of P2P networks, and the CNET article The P2P Myth was one of the most useful sources of both ideas and links for further reading.
  • by Lazarus Short ( 248042 ) on Thursday November 16, 2000 @12:11PM (#619273) Homepage

    It doesn't!

    From the Freenet FAQ [sourceforge.net]:

    Freenet is not intended to be an eternal archive. Because the system is completely democratic, it does not inherently distinguish between the U.N. Universal Declaration of Human Rights and my kindergarten drawings -- documents are scored solely by requests. It is anticipated, however, that the current low cost of storage will make enough storage available to Freenet that documents will only rarely have to be discarded.


    --
  • by TheDullBlade ( 28998 ) on Thursday November 16, 2000 @01:50PM (#619274)
    Look around. How much of your web-surfing time is spent reading totally static documents?

    Don't you spend far more time on sites with some form of interactivity, or at the very least, which are updated from hour to hour?

    Incidentally, I think the terminal-client-to-terminal-client approach is technologically backward. It may have some advantages in preventing censorship (though I'm willing to bet that it would be pretty easy to spoof Freenet, one way or another, to lower its signal-to-noise ratio below Slashdot in flat mode, ignoring moderation scores), but it would make far more sense with a true "web" structure than with the Internet, which is closer in many ways to a free tree. Caching on machines that are only connected to each other through a backbone makes much less sense than caching on the backbone itself.

    --------
  • by TheTomcat ( 53158 ) on Thursday November 16, 2000 @02:04PM (#619279) Homepage
    It doesn't!

    Not directly, but with storage being so cheap (as the FAQ says), and the number of Freenet nodes growing, it's VERY likely that the document in question will stay available -- perhaps not indefinitely, but at least for a long time.

    On that note, I just read the FAQ and didn't see anything about how they're dealing with duplicate information. Sure, redundancy is good, but just looking at something like Napster, one could assume that a LOT of 'content' on a given Freenet node will be a duplicate of another piece of 'content' on the same node. Back to the Napster thing: it reports almost 7 terabytes of 'content,' but we all know that a large portion of that is a seemingly infinite number of duplicates of the same top-40 song.

    Maybe there's really only 2 terabytes of unique data on Napster. It would be great if I could search for "Phish Live" and not have 90% of the results be files with the same name, bitrate, length and filesize. This seems trivial to implement.
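    The client-side half of that really is simple. Here is a toy sketch (modern Java, my own illustration -- not Napster's or Freenet's actual code) that treats two hits as duplicates exactly when those four fields match, and shows one row per distinct file:

        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class Dedup {
            // A search hit; record equality compares all four fields,
            // which is exactly the duplicate test suggested above.
            record Hit(String name, int bitrateKbps, int lengthSec, long sizeBytes) {}

            // Collapse duplicates into one row per distinct file,
            // counting how many peers serve each copy.
            public static Map<Hit, Integer> collapse(List<Hit> hits) {
                Map<Hit, Integer> unique = new LinkedHashMap<>();
                for (Hit h : hits) {
                    unique.merge(h, 1, Integer::sum);
                }
                return unique;
            }
        }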
  • The great thing about the Internet now is that I, as an individual, can publish pretty much anything. I can write music and put it out, I can write fiction and put it out where people can come by and access it. Until the early 90s this was just not possible. If my stuff is not enormously popular - so what - people who enjoy that kind of thing can still get it. I can publish to my heart's content and the few hundred readers can read it. Similarly, I can go and get obscure stuff myself - something that wasn't possible before the internet showed up in its current form due to publishing barriers.

    But Freenet will just drop this stuff because it's not popular - and this seems like a retrograde step to me. It re-erects those old barriers to publishing that the Internet is destroying - and eventually, Freenet just holds what the Sheeple want. We end up with a network that's no better than TV or the print press - containing only what's popular. We end up with masses of Britney Spears or Blink-182, and nothing like the Bottom Feeders or Bradley N. Litwin.

    Hm, I guess I had assumed that if you continued to request something from Freenet, Freenet would continue to supply it. If you were the only person to request it, then presumably it would be stored in the node closest to you... that is, on your computer. Would this be much different from having a web page on your server that no one else ever reads?

    But then, I don't understand how freenet works. No one does at this point, I suppose, because it's not finished. In any case, saying that "unpopular" stuff will be dropped isn't quite the same as saying that only the 1000 (or whatever) most popular documents will be kept in the system. Could you explain why you believe that Freenet would drop data that's still requested, but only by a small number of people?

    Currently, yes, anyone can put something on the web, but only if they're willing to pay. Sure, you can get free web space if you're satisfied with poor bandwidth and banner ads, but if you want to get your stuff out to more than a few people, then you need resources.

    ---Bruce Fields

  • by blanu ( 128654 ) on Thursday November 16, 2000 @04:06PM (#619285)
    As a Freenet developer, I feel compelled to correct some of the inaccuracies being presented by commenters as fact.

    "Freenet is an attempt to replace the web." - This is more true than saying that Freenet is a replacement for Napster, but it's still not true. Freenet is better than the web in a couple of ways, mainly anonymity and decentralization. If you don't need these features, then by all means use the web.

    "You can't create Slashdot on Freenet because Freenet doesn't have dynamic content." - Sure you can. A web forum was already created, but is currently being overhauled. We already have a web frontend and newsgroups, mail, and hyperlinked documents in Freenet. A web forum is just an HTML frontend to a newsgroup with some bells and whistles. The reason that they use dynamically generated pages is because they use RDMS backends so that the servers can handle the load. Since the load in Freenet is distributed, this isn't necessary. Sometimes you really do need dynamically generated content, but in the case of web forums it's mostly just a performance enhancement.

    "Popular == worthless. Freenet will be filled with worthless stuff." - Popularity is local, not global. If you connect to your friends instead of random strangers then the local network will be filled with items of shared interest.

    "The problem with Freenet is that unpopular items are dropped." - Popularity is local, not global. You want items that no one in your local network is requesting to disappear. Files go to where they are wanted and disappear from where they are not wanted.

    "I can't trust the information that I get out of Freenet." - We have tamper-proof keys that rely on digital signatures and content hashes. If you are worried about authenticity, then use those.

    "Freenet must track what people request because it knows what is popular. That leaves an audit trail that compromises anonymity." - Popularity is local, not global. Your node discards items that have not been requested in a while. There is no global rating or tracking of any kind.

    "Freenet requires a high-speed connection" - No, but it would certainly be nice.
  • by Pac ( 9516 )
    Sorry, I pressed "Submit" too early:
    The P2P Myth [cnet.com]
  • Why? Easy, surface area to volume ratio.

    Think of the net's elite as the surface, and the Joe Users as the volume. Now, as the net's volume increases, so does the surface area. It's just that the ratio is getting exponentially higher.

    There are more smart users than there were 10 years ago. It's just that their number has been growing more slowly than the number of keyboard-monkeys.

  • By using a decentralized network structure thingy, I can help to spread love and joy to the whole world! And then I can switch gears in mid-stream to move into brainwash mode: world domination begins! Spread the love! Spread the love!

    Oh, wait... we're talkin' 'bout 'puter stuff here, aren't we? Damn... I was hoping to pollute the world's water supplies with LSD.

    The colors, the colors... I can see the music!

    Mojo Nation deserves a plug [mojonation.net]. It has an ingenious solution to the freeloader problem, namely an internal currency system, which may make the system more scalable than Freenet. Advogato [advogato.org] also runs some good discussions of these issues.
  • by b0z ( 191086 ) on Thursday November 16, 2000 @12:19PM (#619291) Homepage Journal
    I see too many limitations in peer-to-peer networking in anything but small groups. I have read some about Freenet, and it seems like a good idea, but it isn't like Napster: it really needs high-speed, always-connected computers. While there may be a technological solution to help alleviate the legal problem of dealing with unwanted material, I do not see anything that can help as much as a legal solution. By legal I mean we have lawyers go in there and fight with other lawyers and politicians and get some of these laws straightened out. Every once in a while some technology comes out that is so big it can change the law, such as the printing press, but the majority of technology does not have that big an impact on our lives. I do think we are heading in the correct direction, but I don't think Freenet is the final solution. I still prefer the traditional client-server relationship, which helps things be a little less chaotic and easier to use.

    It seems to me like Freenet also needs to track data to a certain extent, because it caches the most popular content on other sites. That means there is an audit trail -- even more than if I set up an FTP server on the different IP addresses I get with a dialup account and just sent an encrypted email telling my friends to upload/download the illegal content from there. I would think there is less chance of me getting caught that way, even though I have to do something considered illegal more actively. Also, keep in mind that I realize Freenet is not only going to be used for illegal stuff. But it is the illegal content (illegal is not always immoral, but often unpopular with the powers that be) that is going to need the most protection and need to be cached the most. DeCSS mirrors are one example. We might be able to build on the peer-to-peer model some, but we still need a strong structure based on a server passing clients to each other, or something. The only problem is that we know they won't allow this easily. Just look at Napster, which does not distribute MP3s itself but is getting in trouble because its users do.

    Anyone else have any better ideas?

  • by joshv ( 13017 ) on Thursday November 16, 2000 @12:20PM (#619292)
    Ian basically wants to replace the web with Freenet and has said as much. But what he doesn't get is that he is not going to replace the web as we know it with static documents (which is all Freenet serves up).

    Come on, how could a web site like Slashdot possibly exist on Freenet? It couldn't. It is simply too dynamic, too frequently updated, and reliant on a coherent and consistent database of comments and articles that simply cannot exist in a distributed network.

    Freenet will be a boon for the archival of static and infrequently updated content and web sites, but for anything more dynamic, Freenet fails to offer a solution - and as such will nicely complement, but never replace, the web.

    -josh
  • by Sanity ( 1431 ) on Thursday November 16, 2000 @04:25PM (#619296) Homepage Journal
    I have never claimed that the current incarnation of Freenet could replace the WWW, merely that it could, as you suggest, be used for static content (which is probably most of the content on the WWW anyway).

    - Ian Clarke

    --

  • Of course we address this issue - we never stop discussing it. Freenet is not intended to be a permanent archive of information (and neither is the WWW); in fact it can't be a permanent archive of information.

    --

  • The intention of Freenet is not to be a permanent archive of information.

    --

  • Clarke never claimed that the current Freenet could replace all aspects of the WWW, merely that it could be a better way to distribute static content efficiently and without fear of censorship. There are future possibilities for certain systems such as web-logs to be implemented on Freenet, but it is unlikely that it could ever replace everything the WWW can do.

    --

  • Couldn't dynamic sites be restructured as static bits of information, assembled by a smart client? Each comment is a metadata-tagged item; the client finds them and assembles the page, much like a newsgroup reader. You could even publish the client programs as signed .jar files distributed over Freenet. I'm not saying this is better or easier, but it is possible, and interesting to consider.
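    For instance, here is a toy sketch of the assembly step (all names are hypothetical, not a real Freenet API): each comment is a static document carrying a thread key, a parent id and a timestamp, and the client fetches everything under the thread key and orders it locally.

        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class ThreadAssembler {
            // One static, metadata-tagged comment as found on the network.
            record Item(String threadKey, String parentId, String id,
                        long timestamp, String body) {}

            // Given every item found under one thread key, order them for display.
            public static List<Item> assemble(List<Item> fetched) {
                List<Item> thread = new ArrayList<>(fetched);
                thread.sort(Comparator.comparingLong(Item::timestamp)); // oldest first
                return thread; // a real client would also nest replies by parentId
            }
        }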

    --

  • Actually, from what (little) I've read about Freenet, something like Slashdot would be possible - albeit in an altered form. Since it's much "cheaper" for a user to upload something to Freenet than it is to create and acquire hosting for a web site, Freenet itself could be used for the kind of discussion Slashdot- and Kuro5hin-like sites try to achieve. And in a much more permanent form. Instead of clicking on a reply link, you write a counter-argument or supplement to a piece by another author and upload it. If it's well-written and informative, then it will presumably be passed around by lots of people and spread through Freenet. Which would also (in theory) create interest in other pieces on the same subject....


    -RickHunter
