Patents / Your Rights Online

Transparent Web Caching Patented

JohnQPublic writes "BIND author and all-around Internet personality Paul Vixie and Mirror Image Internet have recently received US patent 6,581,090, specifically '...technology that efficiently stores and retrieves content requests and balances Web traffic between origin servers to improve performance and speed' - which sounds an awful lot like what Akamai does. There's a press release from last week that gives some lovely 'details', including this little gem from CEO Alexander M. Vik: 'We anticipate that these patents and our technology solutions will encourage large groups of corporations to become customers of Mirror Image services. We also recognize that this technology is a critical component of other content delivery services and we'll be attempting to work cooperatively with our competitors and their customers to address this issue.' Can you say 'patent infringement suit'?"

  • squid (Score:5, Interesting)

    by Unominous Coward ( 651680 ) on Thursday June 26, 2003 @06:35AM (#6301222)
    This better not affect squid.

    If it does, I can see a number of ISPs going belly up due to the increased cost of traffic.

    • Re:squid (Score:2, Interesting)

      by indros ( 211103 )
      Only if iptables is set up to forward all port 80 requests to port 3128 automatically would it be!
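
      Something along these lines, for the record (a rough sketch -- the interface name is just an example, and squid's default port 3128 is assumed):

        # divert outbound HTTP from LAN clients to the local squid
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
          -j REDIRECT --to-ports 3128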
    • Re:squid (Score:2, Interesting)

      by tijsvd ( 548670 )
      If this would affect squid, it would be a very strong case of prior art.
      • Re:squid (Score:3, Insightful)

        by Alan ( 347 )
        I'm not sure when they actually registered for the patent, but we were doing transparent web caching in 1998 with squid and linux. Prior art case, here we come.

        Course, I have a feeling these people patent things just to see who will fall for it, or to get other people up in a tizzy.
      • Re:squid (Score:5, Interesting)

        by Zeinfeld ( 263942 ) on Thursday June 26, 2003 @10:17AM (#6302998) Homepage
        I strongly suspect that this was actually a defensive patent filing. Of course such filings frequently fall into the wrong hands. I suspect that the real issue here would be interference between the Mirror Image patent and the Akamai patents. Both are, in my view, unenforceable: because of prior art, and, in the case of the Akamai patents, because of a failure to disclose relevant information.

        Yet again we see the old patent lawyer trick of reciting large amounts of prior art in the description, then making claims that directly cover that prior art. Essentially the inventive step here is claiming ownership of all possible embodiments of an idea that have not already been invented - the fact that the contributions of the inventor are minuscule not being considered relevant in the corrupt USPTO system.

        If this would affect squid, it would be a very strong case of prior art.

        There are much earlier examples of prior art. Tim Berners-Lee described the basic concept of Web caches in his CHEP/Annecy address in 1992. CERN distributed a caching Web proxy in 1993, and the HTTP specifications were extensively adapted in 1995 to support cache use, with input from Jeff Mogul and Jim Gettys. People can also find W3C notes published around that time that describe extended cache architectures by Phill Hallam-Baker. This was the original purpose of the W3C log format.

        The Akamai scheme is also compromised by prior art. The W3C deployed a system for serving web pages from multiple servers in 1995. Requests from Europe went to the French server, first at CERN, then at Inria. This is a particularly important piece of prior art since I told the alleged inventor about it, and it was in any case operating on the same floor as one of the alleged inventors. Rohit Khare and I had extensive discussions concerning the alleged invention but we are not listed as inventors, another problem for the Akamai patent.

        Looking at the later claims, some would appear to be pre-empted by the Open Market patent application several years earlier. That was an EU patent filing that was hastily withdrawn after a ton of prior art was dumped on the applicants.

        The idea of transparent web caching is not new either. TIS created a transparent Web proxy sometime before 1998, and the concepts of web proxies and web caches have always been closely related. The combination is both obvious and covered by prior art.

        There are also extensive discussions of transparent caches on the HTTP WG mailing list, and on the www-talk list before that. Transparent caches are generally considered a bad thing.

        What should happen here is prosecution of the USPTO under RICO. Their activities resemble a protection racket more closely with every corrupt patent they issue.

    • Re:squid (Score:5, Informative)

      by wfrp01 ( 82831 ) on Thursday June 26, 2003 @07:24AM (#6301463) Journal
      Oops [oops-cache.org] is a more than worthy alternative that was developed outside the US. I'm not sure how patent law applies in such a situation.
      • Re:squid (Score:3, Insightful)

        by michael_cain ( 66650 )
        IANAL, but -- if it infringes on the US patent, and you use it in the US, you're infringing. Patents give the holder complete control over the use of the invention for the period of the patent. Having an implementation done outside the US, which may be perfectly legal, doesn't gain you anything in terms of being able to use it legally inside the US.
    • Re:squid (Score:3, Interesting)

      by Xformer ( 595973 )
      Unless you're using squid to balance the load between servers all over the Internet, it doesn't look like it will. It seems like all of the claims are based around a system that redirects requests from a client to a server to a mirror of that server that may be closer and, consequently, quicker to access.

      Basically, if you're in Finland and try to hit google.com, something like this might automatically redirect you to google.fi instead, so you're not going over a slower trans-Atlantic link.
    • Re:squid (Score:5, Informative)

      by cait56 ( 677299 ) on Thursday June 26, 2003 @09:34AM (#6302543) Homepage

      I believe the patent predates Squid, so there could be a problem to whatever degree that Squid infringes. Just because a later developer is open source does not mean that the original claim was invalid.

      However, reading the patent carefully, you realize that it actually only describes a very specific solution. Specific enough that it truly is describing a solution, not a problem. And specific enough that it might legitimately be considered novel for the time it was filed (I really don't have time to search the source code of all proxy servers in the 1996 time frame -- let someone with a financial stake do that).

      Specifically the patent deals with websites that are identified by their IP Address and where certain content (by default all) is held in an alternate (and presumably closer) server.

      There is nothing in this patent about determining if the content is fresh. The description presumes that the cached copies were pushed by the server.

      So this would only seem to apply to proxy servers that are transparent to the user, but not to the servers. The proxy servers that are of most interest to an ISP would either be transparent to the server as well, or follow more of an Akamai-style strategy where the first-response page is localized to fetch pre-positioned material directly from edge caches.

      Interestingly, the patent seems to be worded to cover a single box which handles both the intercept and the decision to proxy, but does not handle the actual proxy response. A firewall transparently redirecting a port to a proxy server is prior art. The basic claim to being novel here is that the client does not have to be configured to use the proxy, and diversions only take place if certain content is requested; non-proxied sites are passed through "unaffected" (which is a false claim, BTW, which I'll deal with in a moment).

      There are some serious omissions in the description, which could undermine its enforceability.

      • It speaks about identifying "requests" and forwarding those that are not "web requests" to their original destination "unmodified". It fails to disclose that TCP does not naturally delimit "requests", and that identification of a complete "web request" is a complex matter.
      • It does not disclose that "other requests" are not amenable to the same parsing algorithms as for "web requests", and that in fact they must be dealt with at another protocol layer.
      • It does not disclose that there can only be a single "request" per session, and a single "reply" from either the original source or the alternate. Specifically there is no disclosure on how to splice responses, which it obviously does not do, or on the lifespan of a session that makes the short-version possible.

      Perhaps most importantly, the invention described here works as an application level gateway. It is incapable of quickly identifying TCP connections that do not require proxying and leaving those connections truly unaltered. Terminating a TCP connection, examining the first request in it, and then deciding to actually forward the request to the real server is not "transparent".

      The "preferred embodiement" either a) deferred establishiing the connection until the "true source" was to be known (clearly unacceptable, what if the "true source" is not accepting connections?, or b) established the connection, and then aborted it, once the decision to substitute was made.

      The implications are not discussed or disclosed. Which isn't surprising, because this patent describes techniques that only work for HTTP 1.0.

      Caching for HTTP 1.1 is a new problem. You have to deal with caching hints, persistent connections, cookies that might affect the material supplied, etc.
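
      Just to illustrate what a cache has to interpret under 1.1, this is the sort of response header block involved (the values and ETag here are made up):

        HTTP/1.1 200 OK
        Date: Thu, 26 Jun 2003 10:00:00 GMT
        Cache-Control: public, max-age=3600
        ETag: "3e86-410-3dbfa3b2"
        Vary: Accept-Encoding, Cookie
        Connection: keep-alive
        Content-Type: text/html

      Cache-Control, ETag and Vary simply don't exist in the HTTP 1.0 world the patent describes.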

  • akamai overseas ? (Score:3, Interesting)

    by selderrr ( 523988 ) on Thursday June 26, 2003 @06:38AM (#6301231) Journal
    if they move all their assets to India or other low-cost countries, they hit a double whammy: cheaper labour + no patent fees.

    The US patent office is well on its way to pushing every profitable tech offshore... hey, maybe I should patent that!
    • by Ice Tiger ( 10883 )
      The legal landscape in the US is very favourable to pushing the tech industry overseas, to places with less costly labour and, probably more importantly, different IP laws.
    • Why? It's not profitable. Unless of course your only plan was to sue them when they did it. But nobody does that, right?
    • if they move all their assets to India or other low-cost countries, they hit a double whammy: cheaper labour + no patent fees.

      Any servers they have in America (and their business requires them) would still be subject to American patent law.
    • Might not really work. As long as the customers of Akamai are in the US and not in India, I would think the customers can be sued for dealing with a company which breaks US law. It can probably be argued that the "use" of the technology happens when a US customer clicks on the webpage or something.
      As long as the main market for services is in the US, nobody is going to break any US laws. When the rest of the world starts demanding stuff, then it's time to start thinking of registering companies in the Maldives or
    • by Peer ( 137534 ) on Thursday June 26, 2003 @07:00AM (#6301316) Homepage
      if they move all their assets to India or other low-cost countries

      Isn't the only added value of Akamai that their servers are always close to you? Moving to India wouldn't do them any good. They'd still need servers in the US.
    • Re:akamai overseas ? (Score:2, Informative)

      by Groote Ka ( 574299 )
      My experience is that quite a few US companies only file patent applications in the US. On the other hand, Japanese and European companies file at home AND in the US.

      Guess who's really laughing...

      Mirror Image Internet, Inc., since they were wise enough to file almost everywhere, unlike quite a few others... Go to Espacenet [espacenet.com], the European Patent Office search database, and search for Mirror Image Internet as applicant.

      The fat lady will be singing for quite a while in this case.

    • by WCMI92 ( 592436 ) on Thursday June 26, 2003 @08:33AM (#6301896) Homepage
      "The US pattent office is well on it's way to push every profitable tech offshore... hey, maybe I should pattent that !"

      You are dead on there... The United States is PATENTING itself out of it's tech lead.

      Patents, when they work as originally intended, are supposed to ENCOURAGE, not stifle, innovation. Our current "rubber stamp" patent process encourages innovation only in the Microsoft meaning of the word: command and coercion.

      Software and business method patents should NOT be granted. Patents should only be granted for MATERIAL things, actual PRODUCTS.

      But this is the 21st Century world. Intellectual property is the new oil, the new gold, and the rules are set up so that only the largest multi-corporate cartels can own, develop, or exploit any of it.

      Anyone outside of that stratum who comes up with a new idea will either find it stolen from them (because they can't afford to patent it or even FIGHT patent infringement by a corp), or will find themselves sued out of existence by any IP cartel that finds itself threatened.

      Look at Napster... one of the last truly "killer apps" invented. Instead of BUYING it and running it as a paid service, which they could have done, the RIAA chose to sue the world.

      The p2p battle is just the beginning of what is going to happen all over the place in the coming years: UNDERGROUND IP.
  • GPL'd patents (Score:5, Interesting)

    by Anonymous Coward on Thursday June 26, 2003 @06:38AM (#6301232)
    What we need is people to allow their patented software to be used royalty-free in GPL'd software. This way, they would still get royalties from commercial vendors (which they should) without hurting open source. It seems like this would be a perfect candidate.
    • The whole goal of open source is to get enhancements back into the source tree to improve the software. If a big company tweaks some open source software they have incentive to publish the patches so they don't have to keep patching every version. Everyone benefits.

      Just look at the linux kernel - we all benefit from changes submitted by commercial companies. IBM isn't charging users to use the substantial features they've contributed to the kernel.

      Also - an individual developer is probably more likely
    • Re:GPL'd patents (Score:3, Informative)

      by JoeBuck ( 7947 )

      This is already being done. Here are some examples:

      • IBM and Rice University have both licensed patents having to do with register allocation, so that GCC can use them.
      • IBM has licensed its RCU patent, which is used by the Linux kernel (this is a case where SCO is claiming ownership of the technology even though IBM owns the patent!)
      • Raph Levien, of Advogato [advogato.org] and Ghostscript fame, has licensed a whole series of patents he holds with respect to printing technology for use in GPLed code.

      In all cases, t

  • by Eric(b0mb)Dennis ( 629047 ) * on Thursday June 26, 2003 @06:41AM (#6301240)
    I am patenting a system for the moderation of online discussion boards using different models of messages and a scoring system that can allow users to browse only high-rated or sometimes-funny but nefariously modded down posts.

    They will range like so:
    Insightful
    Funny
    Offtopic
    Troll
    Informative
    Flamebait

    The scoring will range on a -2 through 5 basis; users may boost certain categories of messages by as many points as they want, further allowing them to see what they want to see.

    This system is free to use, but you must agree to the EULA

    EULA:

    Me ( Eric(b0mb)dennis ) hereby reserves the right to freely use any property you own, including, but not limited to:
    Kitchen and/or pantry (I need to eat)
    Toiletry (The bush just ain't cutting it anymore)
    Your personal computer(s) (My name is Juan and I am posting this message from a chicken coop in The Country Formerly Known as Iraq, I am using RFC 1149 in conjunction with a new form of animal telepathy to post this message -=patent pending=-)

    I also reserve the right to change the moderation of any post I see fit... (This should be modded +5 Troll; if it's not done, expect some SCO action on your butt)

    Thank you, and please be sure to invite me over for dinner... 50,000 Dinars isn't even buying me a zucchini anymore
  • ...for the lazy (Score:5, Informative)

    by Anonymous Coward on Thursday June 26, 2003 @06:42AM (#6301246)
    The patent [uspto.gov]
  • This is an excellent example of why software patents are bad.

    Right now, if you are a European citizen, like I am, then write to your European Member of Parliament (MEP) and tell them that you think software patents are a bad thing, and that they should vote against them on June 30th.

    The forthcoming European vote was covered here on slashdot [slashdot.org] a few days ago, but did not make the front page, so did not get much coverage.

    You can find a list of European Members of Parliament here [ark.ac.uk]. To avoid annoying them, do write to your own MEP, not to a party leader. If you have several, please take a look at which issues they cover, and choose the one that takes an interest in trade/technology etc.

    Remember, write NOW! We don't want this sort of cr*p in

    • by biehl ( 580274 )
      Hi,

      If you read the texts published by the EP members who published, e.g.,

      this [eu.int]

      then you will see that they claim to avoid these kinds of "frivolous patents". I guess one has to really read the proposal to be sure. At least make sure that you say something other than "Don't do it - it will be horrible like in America" - because the MEPs are told that it won't be like that.

      You should remark that the examples of "ok to patent" from the linked text

      # an invention in which an X-ray apparatus wa
    • ... I am a resident of one of the many countries that are joining, but do not yet have a voice in the European Parliament?

  • by Manic Ken ( 678260 ) on Thursday June 26, 2003 @06:43AM (#6301250)
    This is starting to get out of hand!
    I don't live in the US (god bless that nation), but I am afraid that this "IP property" enforcement is going too far. I use the term "IP property" in general even though I shouldn't, but you know what I mean. Will patents and court decisions made in the US affect me in Europe? Will I have to move to some obscure island where no enforcement of US interests exists?
    • Uh - which island would that be?

      The only obscure lands where there is no enforcement of US interests are lands where there are no US interests in the first place. What might make a land have no US interests?

      1. Absence of important resources. (i.e. don't plan on living in the Middle East)
      2. Absence of potential terrorist training camps. (a deserted island next to an Al Qaeda hideout is probably not a safe place)
      3. Absence of civilization. (civilization means people - people means a market for exported go
  • by Sique ( 173459 ) on Thursday June 26, 2003 @06:43AM (#6301252) Homepage
    Having worked with Mirror Image I have to say that the way Mirror Image is doing the caching differs strongly from Akamai's.

    While Akamai is putting cache servers in many IP providers' locations (I think more than 5000 so far), Mirror Image is concentrating its caches in about 20 locations connected to the big exchange and peering points. The Mirror Image presenters were explicitly stressing this point, and that this other approach is the key to Mirror Image's success. So I guess the patent covers the Mirror Image Way Of Doing Things rather than the idea of caching websites to speed up transfer rates.
    • by Bartmoss ( 16109 ) on Thursday June 26, 2003 @06:57AM (#6301301) Homepage Journal
      That difference is absolutely negligible. Even if it's a difference in the fine points of the company strategy, it should NOT be sufficient for a patent. To me this is definitely another case of patent abuse.
      • That difference is absolutely negligible.

        It is not. I did actually read the presentation of the invention given in the patent, and I remember some parts of the presentation we once got.

        Here are the main differences to Akamai and Squid:

        1. Mirror Image determines which source to use by looking where the request comes from first. This is actually done by having ".mirror." (or another subdomain similar to this one) as part of the requested URL and the subdomain's resolver being part of the Mirr
      • That difference is absolutely negligible. Even if it's a difference in the fine points of the company strategy, it should NOT be sufficient for a patent. To me this is definitely another case of patent abuse.

        So basically you're saying that you, some anonymous guy on Slashdot, know US patent law better than the patent examiners who work for the US Patent Office.

        Yeah, okay, right.
        • No. He's saying that he knows technology better. And that the technical difference between the two approaches is tiny.

          I somehow doubt that the USPTO examiner has so much as looked at, e.g., squid.

          And could this claim made now be used successfully as a defense in the future? Do you want to bet *your own* company/job+bank account on that?

    • by dpilot ( 134227 ) on Thursday June 26, 2003 @07:00AM (#6301317) Homepage Journal
      So I guess this means that I probably don't have to worry about Mirror Image filing suit if I were to (only hypothetically, of course, ;-) ) run a Squid+IPTables transparent proxy at home.

      Who's going to patent adding SquidGuard to that mix?
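
      For anyone playing along at home, the squid side of that setup is only a handful of lines -- this is the Squid 2.x style (directive names changed in later versions, so treat it as a sketch), paired with the usual iptables REDIRECT rule:

        # squid.conf -- accept traffic that iptables has redirected to port 3128
        http_port 3128
        httpd_accel_host virtual
        httpd_accel_port 80
        httpd_accel_with_proxy on
        httpd_accel_uses_host_header on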
    • More like 15-20K
      -an ex Akamaite
    • The point, however, is not how Mirror Image actually operates, but what their patent covers. If you read the patent, it really describes any transparent proxy. The system just needs to intercept the request, see if the request is for a particular web page, then redirect it if it is or pass it on if not. From just browsing the published patent it appears that its priority date is in 1997, so this patent has been tossing around in the PTO for a while. It might be a pain to find some decent p
  • by Rosco P. Coltrane ( 209368 ) on Thursday June 26, 2003 @06:43AM (#6301253)
    Can you say 'patent infringement suit'?

    Yeah, I bet Akamai even installed a vixie-cron job to launch the suit automatically in the next few days.
    • Can you say 'patent infringement suit'?

      Yeah, I bet Akamai even installed a vixie-cron job to launch the suit automatically in the next few days.


      From Akamai's /var/log/cron:


      Jun 25 04:00:00 main.akamai.com CROND[5248]: (root) ALL UR CACHE R BELONG TO US!!!


    • Why does the image of launching a suit appeal to me?

      Suit: Um... What are you doing with that... Thing?
      Tech: Oh, nothing... Just stand right there and wait for the cron job to kick off.
      Suit: "Cron" what???

      Device: *poomf!*
      Suit: Aieeeee!
      Laws of physics: *splat!*

  • Pretend web caching. That's where you: mv hellworld.c superduperwebcacherino.c and charge joe cluelesscorp 25 grand for it. It's so perfect you don't even have to reconfigure your browsers! Then you go to Mexico and live like a king for 3 years before coming back with a new identity and patent pretend IDS.
  • Why (Score:2, Insightful)

    by mgcsinc ( 681597 )
    Does the patent office seem more interested in getting new patents out of the hideous "pending" phase than in investigating the ramifications of such patents? Patent law is specific, and it is not supposed to be this easy to get a patent on a technology that everyone has been using for so long...
  • by brunes69 ( 86786 ) <slashdot@keir[ ]ad.org ['ste' in gap]> on Thursday June 26, 2003 @06:47AM (#6301267)
    If we want to fight this patent battle, the solution is not to sit back, wait until a patent comes out, then bitch about it. We need to be pro-active.

    The EFF and/or the FSF should be actively going out and patenting every software idea that comes out of free software under the sun. They could then just release the patents with some kind of liberal license, with a revocation clause.

    This would give you two things... it would supposedly prevent other companies from patenting the same thing, and it would also give us a portfolio a-la IBM to use as defense... as in if these guys try to sue squid, the EFF comes in and says "well we have a patent on using the color red on menus... so if you sue squid we will revoke your patent license and sue you!" Stupid example but you get the idea.
    • Yeah but then wouldn't they have to somehow exercise their patents over evil or lose them anyway?
    • by femto ( 459605 ) on Thursday June 26, 2003 @07:09AM (#6301363) Homepage
      There is no need for EFF/FSF to patent stuff.

      Instead, EVERYONE should be writing 'Free' patents on every idea under the sun and publishing them far and wide. Once it is published, it's prior art. Surely 100,000 geeks can match a bunch of lawyers? We have the advantage that we don't have to pay for 'free patents' and there is no red tape.

      Maybe EFF/FSF's contribution could be to set up a wiki which can be used to make a permanent, easily searchable record of all these ideas?

      I guess such a scheme addresses the 'prevention aspect' but doesn't address the 'bargaining' aspect.

      If a patent is only going to be used as a bargaining chip, it probably doesn't have to be particularly strong, so it might be possible to D.I.Y. and eliminate legal fees. That way, it might be affordable to patent some of the 'better' ideas.

      • If you only publish it, someone will patent it anyhow. I personally came up against a patent for which I also found prior art, but I can't afford to fight it, and the EFF ignored me when I tried to talk to them about it.
    • by kinnell ( 607819 ) on Thursday June 26, 2003 @07:09AM (#6301365)
      The EFF and/or the FSF should be actively going out and patenting every software idea that comes out of free software under the sun

      This is completely pointless. Simply by making the software available, you can claim prior art, which invalidates any patents filed later by third parties on the same technology. Similarly, if the software source has been made public, it can't be patented anyway by anyone.

      • by roystgnr ( 4015 ) <`gro.srengots' `ta' `yor'> on Thursday June 26, 2003 @07:40AM (#6301538) Homepage
        By documenting prior art, you can invalidate any patents filed later by third parties on the same technology.

        By filing a defensive patent, you can negotiate against any patents filed later by third parties on different but still obvious (*cough* - 1-click anything - *cough*) technology.

        Since the patent office seems to currently approve patents on "doing something people have done for centuries... but with a computer!" this may be necessary.

        Of course, I'd want to see in writing something which grants open source software a perpetual license to use such patents, as opposed to say Red Hat's "we promise not to enforce our patents against free software unless someone buys us out or we change our minds" promise.
      • Simply making software source publicly available doesn't constitute publishing w.r.t. IP law. It has to be published using only technology that existed at the time the U.S. Constitution was originally authored, i.e. the printing press.


        The only thing I've found that comes close is disclosing things in a usenet discussion group, and even that does not constitute publishing. It simply makes it a little harder for a patent applicant to show that no prior art exists.

        • OK, if it needs to be "published" in dead-tree format, then perhaps someone should start a new magazine. Published monthly, it could collect ideas online and publish them to preserve their freedom.

          On the other hand, with patents if you were to very generally describe an idea, others could still patent a very specific implementation. If you were to be specific, others would just have to be specifically different. It would still be better to actually patent some of these ideas (ala IBM) for defensive use.

          Al
    • Sorry... I don't see where the 'interesting' mods come from unless maybe the poster has 5 aliases. Last I heard it costs $20k a pop to get a patent through. That's not counting the $3000 some companies like HP pay the inventor for filing (or at least they used to). I don't know much about the EFF's financial situation, but I seriously doubt that they're going to come up with that kind of money. And the funny part is that they don't have to. If they publish the idea it becomes prior art and no one else can get
      • The EFF should not just publish stuff. The EFF should publish stuff, and keep an anti-patent database that can be mirrored, downloaded, printed out, and whatnot.

        The source for these things should be recorded, if the person desires [for example, I can have this under my name, and give contact info. That makes me a possible expert witness for $$$, or it makes me in line for consultation, if someone wants to carry this through] or it can be anonymous.

        If they did this, I would frequent that location often,
    • If we want to fight this patent battle, the solution is not to sit back, wait until a patent comes out, then bitch about it. We need to be pro-active.
      But if the EFF and/or FSF argue that software patents are an illegitimate attempt to appropriate obvious or inevitable innovations of the computer using public, they can't then turn around and participate in the same evil scheme without undermining their own position.
  • by AndrewNelson ( 171986 ) on Thursday June 26, 2003 @06:48AM (#6301274) Journal
    I worked for MII a few years ago (the company I worked for was bought out by them) and everything I saw indicated they were on their way to destruction and should make their time.

    The Vik brothers have been using MII as part of a shell game for years - I wonder when they got the bright idea to try and make money with it?
  • Vixie (Score:5, Insightful)

    by technoid_ ( 136914 ) on Thursday June 26, 2003 @06:49AM (#6301275) Homepage Journal
    I commented before about not trusting Vixie very much due to some of the things he has done in recent years, and he proves me right.

    Makes MAPS a pay service after ORBS (their biggest competitor at the time) closes up shop.

    Creates the "members-only" only bind list which deals with bind related security problems.
    (http://slashdot.org/article.pl?sid=01/ 02/03/16562 43&mode=nested&tid=95)

    I don't know if maybe he is pissed off since he isn't thought of as a god or what, but turning internet community-related items into pay services isn't a way to become the net god.

  • by tyagiUK ( 625047 ) on Thursday June 26, 2003 @06:50AM (#6301278) Homepage

    The generally accepted term for this type of technology is "Content Distribution Networking" or "Content Delivery Networking". Akamai [akamai.net], Speedera [speedera.com], Digital Island [digitalisland.net] etc. are Content Distribution companies which will (according to the necessary commercial agreements), take a customer's content and distribute it around their overlay CDNs. Generally speaking, these CDNs overlay the traditional Internet using co-located space in customer or exchange point datacentres. There are, however, some CDN organisations who take the approach of building their own infrastructure.

    "Transparent Web Caching" on the other hand is generally a term applied to the transparent redirection of TCP port 80 IP traffic on access equipment through a set of HTTP proxy devices. This technique is used by many ISPs to force users to use their Webcaches even if the user thinks they are being clever by disabling the pre-defined HTTP Proxy settings in their Web browser.

    Until recently, you could build your own CDN ($$$) using software from people such as Inktomi [inktomi.com]; you can still use devices from other manufacturers such as Network Appliance or Cisco Systems [cisco.com].

    • The usual sites that echo back your browser's client parameters (e.g. IP address, etc.) to determine whether your ISP is doing this won't work, because they set (or should set) HTTP header attributes that prevent the returned page from being cached, for privacy reasons. You'd need a site that deliberately set the headers to fake out a cache server and returned innocuous data, to allow you to determine what is going on.

      This is worrisome since most ISP terms of use agreements state that they can track customers we

    • build your own CDN ($$$)

      Build your own Canadian dollar? I know it's a weak currency, but that's going a bit far....
  • Filed in 1996? (Score:3, Informative)

    by pgregg ( 185457 ) on Thursday June 26, 2003 @06:51AM (#6301283) Homepage
    From the article: "Mirror Image developed the transparent Web caching patent in 1996"

    From Mirror Images "About Us"
    1997: Mirror Image Internet Inc. is founded.

    The earliest date on the Patent itself is September 30, 1997.

    IIRC Squid also was around in '97.

    The exact dates will be interesting.

    • Re:Filed in 1996? (Score:4, Informative)

      by mpsmps ( 178373 ) on Thursday June 26, 2003 @07:19AM (#6301430)
      This is a quirk in US patent law. Prior art needs to exist a year before the filing date to invalidate a patent, provided the patent holder can provide evidence that they developed the technology before then.

      By contrast, non-US patents can be invalidated by any art prior to the filing date.
    • For what it's worth, US patent law can actually protect inventions starting from the "date of invention", usually meaning the first time that a description of the invention was written down.

      And anyway, it doesn't look to me like all transparent caching is covered, just MII's old (late-90s) "supercaching" system. Nevertheless, the language in the patent is reasonably broad, and if they wanted to pay lawyers to stir up a ruckus, they probably could.
  • by dschuetz ( 10924 ) * <david&dasnet,org> on Thursday June 26, 2003 @06:53AM (#6301287)
    They might want to watch out, because from what I understand AOL has the world's largest internet cache system (all running Linux, actually). And I'd bet that it's been in place since before 1996.

  • I've been doing this with OpenBSD IP Bridging and Squid for about 4 years. I'm sure others have been doing it longer. I wonder if there is prior art in that (or if the non-obvious clause applies).
  • Wrong attitude (Score:5, Insightful)

    by banana fiend ( 611664 ) on Thursday June 26, 2003 @07:11AM (#6301387)
    'We anticipate that these patents and our technology solutions will encourage large groups of corporations to become customers of Mirror Image services'

    It's the technology that should be getting the customers, not the patents. It's this kind of attitude (we're using patents to encourage customers - read: FUD) that leads to forgetting why patents are there: to allow you to enjoy the benefit of the attractiveness of your INVENTION, not the attractiveness of your patent.

  • This one looks like it's perfectly legitimate, but given the nature of the business and the patent system I doubt he will get very much out of it.
  • by aeaeae ( 96565 ) on Thursday June 26, 2003 @07:35AM (#6301515)

    The patent is at Delphion [delphion.com] (free registration required) and the USPTO [uspto.gov]. Paul Vixie is listed as an inventor but probably has no ownership rights, or even the ability to collect on royalties. So don't lynch him yet...

    The first base (or independent) claim is:

    1. A method for transferring information via the Internet, comprising the steps of:
      • intercepting a message from an Internet user directed to a content provider address;
      • determining whether or not the message is an information request;
      • sending the message to the Internet without being affected if the message is not an information request;
      • determining whether or not said information request relates to a content provider address having a corresponding alternative address, said alternative address providing at least part of the information provided at said content provider address; and
      • directing said information request to said corresponding alternative address, if existing, or sending said information request to the Internet without being affected, if not.

    Doesn't sound much like my understanding of how Akamai works (I didn't think Akamai "intercepted" requests -- the origin servers actually pointed to the cache servers in their img src tags). It does sound an awful lot like a transparent proxy however.

    There are 36 claims, but only 3 are independent -- the rest are derived from those 3 (dependent claims). It's only the claims that are worth reading and worth worrying about. Press releases, abstracts and summaries are all irrelevant to what a patent actually covers. I find them more confusing than useful.

    Let's concentrate on the 3 independent claims then. Here's the other 2:

    • 15. A system for transferring information via the Internet, comprising:

      • first means for intercepting a message from an Internet user directed to a content provider address;
      • second means for determining whether or not the message is an information request;
      • third means for sending the message to the Internet without being affected if the message is not an information request;
      • fourth means for determining whether or not said information request relates to a content provider address having a corresponding alternative address, said alternative address providing at least part of the information provided at said content provider address; and
      • fifth means for directing said information request to said corresponding alternative address, if such a corresponding alternative address exists, or sending said information request to the Internet without being affected, if not.
    • 36. A method for efficiently delivering cached information to Internet users, comprising the steps of:

      • intercepting a message from an Internet user directed to a content provider, the message requesting specific information;
      • determining whether or not the message relates to a content provider address having a corresponding alternative address, the corresponding alternative address providing at least part of the information provided at the content provider address;
      • determining whether or not the specific information is within the at least part of the information provided at the corresponding alternative address; and
      • providing the at least part of the information to the Internet user, if the specific information is within the at least part of the information, or sending the message to the Internet, if not.

    As you can see, the differences between these claims are very subtle. I'd need to spend more time reading those claims to understand

    • To me, what they seem to be describing is a particular application of DNAT. Implementing what they describe on Linux to logically replace an entire server (claim 1) is indeed trivial with iptables out of the box using DNAT, and quite obvious (see the sketch at the end of this comment). In fact, I've always done it without thinking when moving HTTP servers (or SMTP servers or squid caches or IMAP servers or anything else with a host and port) to new boxes.

      However, the second two claims mention replacing/mirroring only part of the content of a server,
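
      For what it's worth, the whole-server case really is a one-liner (the addresses here are placeholders):

        # send HTTP aimed at the old box to its replacement
        iptables -t nat -A PREROUTING -p tcp -d 192.0.2.10 --dport 80 \
          -j DNAT --to-destination 192.0.2.20:80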

  • by Bazzargh ( 39195 ) on Thursday June 26, 2003 @07:41AM (#6301544)
    The press release also mentions patent nos. 5,991,809, 6,370,580 and 6,480,893.

    It turns out the last two deal with offloading requests for static content to a separate webserver. Well, isn't that a common use for mod_rewrite? It certainly existed back then; this is the earliest page [archive.org] I can find where it became an 'official' part of Apache (I am sure there's more in the CVS logs) - that's from Jan 97, version *3* of mod_rewrite. The patents weren't filed until 5 months later.

    Before it was an official Apache module, mod_rewrite was released in 1996 [google.com], and there is evidence of people using it for offloading requests from one server to another that same year [google.com]. A sketch of the sort of rule involved follows below.

    -Baz
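
    Here's that sketch -- hostnames are made up, and the R flag just sends the client an HTTP redirect to the second server:

      # httpd.conf -- push static image requests off to a separate server
      RewriteEngine on
      RewriteRule ^/images/(.*)$ http://images.example.com/$1 [R,L]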
  • Stop the madness... (Score:2, Interesting)

    by andreMA ( 643885 )
    Regardless of how original an idea might be, the USPTO should be required to consider the economic impact of granting a patent. Patents are often a useful thing and, used properly, do spur innovation by rewarding inventors, but that needs to be balanced against harm to others. That balance seems to be sorely lacking at times.

    I wonder if there's any way that victims of patent extortion involving patents which have later been overturned can engage in a class action lawsuit against the USPTO and force a more careful revi

  • by kriston ( 7886 ) on Thursday June 26, 2003 @07:49AM (#6301591) Homepage Journal
    I'm not sure what the poster meant by "sounds like what Akamai does", but Akamai is far from a "transparent cache proxy". Anyone who has used Akamai for more than 2 minutes would realize this. The only thing Akamai does that could somehow be linked to the patent is that Akamai is a cache that distributes data regionally. It's not transparent, and it's not terribly easy to use for the web developer, either.

    Kris
  • If so, maybe it's a good thing (although it was always amusing when the site went down before the 5th post).
  • "We also recognize that this technology is a critical component of other content delivery services"

    Doesn't that suggest they are aware of a lot of people already doing this? I bet they didn't mention those other 'services' on their patent application. Obviously it's a case of "gee let's patent something everyone is doing and charge them for it because the PTO only recognizes prior art if it's in prior patents".

    IANAL but I thought if a company can show that they were doing something before a patent applica

    • They filed the application in 1997, but the patent was just granted this month. MII is simply asserting that other companies have started using the patented method/apparatus in the time between when they filed and when the PTO granted their patent: nothing mysterious there.
  • We used Mirror Image where I'm at. For a while they were competing with Akamai for our business. During one tech evaluation the engineer started using Linux *against* Akamai, calling it shareware. I started yelling at her, screaming that she was wasting her time with that line of reasoning, that everybody in the room knew it was ridiculous, and that she was losing points with me fast.

    The account manager stepped in quickly. :)

    Ah, the good old days. We did pick Mirror Image, and were quite happy with them.

  • by kriegsman ( 55737 ) on Thursday June 26, 2003 @08:14AM (#6301759) Homepage
    It looks to me like Mirror Image's original "transparent supercache" system is what's described in this newest patent (not so much their Content Delivery Network). The patent looks like it's fairly broadly worded, and probably covers some similar models too, but on the other hand, they cite plenty of prior art in their own patent. So overall I would guess that "ordinary" transparent caching is not covered by this patent, but then again IANAL, and in particular IANAPA.

    Mirror Image's original business plan was to provide a client-side supercache service to client-side ISPs in places where upstream bandwidth was scarce/expensive (i.e., Europe in the 90s). MII would 'mirror' popular high-traffic (American) content onto supercaches located just a few hops from the ISPs. ISPs subscribing to the MII service could then configure their proxies to do a "look aside" and access popular content from the local MII supercache rather than have to send requests across the ocean and pull the content all the way back (one way such a look-aside could be wired up in squid is sketched at the end of this comment). It worked nicely for ISPs that needed it, but there were fewer and fewer client-side ISPs willing to pay for access to the MII supercaches. So MII expanded into the server-side part of the caching business: "Content Delivery Networks".

    In 2001, MII bought an existing CDN technology company (Clearway Technologies) and in the process acquired a nifty server-side software agent (your choice of Apache module or IIS plug-in) that automatically "Mirrorizes" *coughcoughlikeAkamizescough* all of the output from an origin Web server, so getting your server's content onto the MII CDN only takes a couple of minutes and you don't have to alter any of your Web content. That agent and its associated methods are covered by the other patents mentioned in MII's press release.

    Personally, I believe that if MII wanted to sue Akamai for patent infringement, they probably could make a case for it these days, but --as always-- it's unclear that that would be the best use of their resources.

    -Mark Kriegsman
    Former Chief Scientist, Mirror Image Internet;
    Founder, Clearway Technologies;
    Inventor, US Patents 5,991,809, 6,370,580 and 6,480,893 (now assigned to MII)
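
    Roughly speaking, an ISP could wire up that kind of look-aside in squid with a peering entry like this (hostname and domains are placeholders, not the real setup, and the actual MII hook-in may have worked differently):

      # squid.conf sketch -- treat the supercache as a parent for a few
      # popular overseas domains, and go direct for everything else
      cache_peer supercache.example.net parent 8080 0 no-query
      cache_peer_domain supercache.example.net .cnn.com .nytimes.com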
  • by Fefe ( 6964 ) on Thursday June 26, 2003 @08:15AM (#6301770) Homepage
    First he takes over maintenance of BIND, then proceeds to blackmail his "customers" into paying him to be told about the copious amount of security problems, essentially capitalizing on the work of others.

    Then he had BIND 9 written, which has nothing to do with BIND. Why call it BIND, then? Again, to capitalize on the work of others (not that the BIND name is something to carry proudly).

    Now he patents what Linux could do at least since 2.0 (1996) with a patched squid 1 (been there, done that). Granted, his patent appears to be earlier than Akamai (1998) but Digital Island may have been early enough to invalidate the patent. They were bought by Cable and Wireless a while ago.
  • by Compulawyer ( 318018 ) on Thursday June 26, 2003 @08:18AM (#6301788)
    I just read claim 1 in this patent (traditionally, claim 1 is the broadest claim in the patent) and my first thought was that this patent has to have a heck of a prosecution history with a lot of comments that will narrow the interpretation of this claim in order for this to have been allowed. Then again, the filing date is 1999 and I don't know how much caching was being done then.

    Bottom line: SOMEONE had to invent caching. It very well could have been these guys. But if they approached one of my clients with an offer to license this thing, I would want to do a LOT of investigation before advising one way or the other.

  • by BigBadBri ( 595126 ) on Thursday June 26, 2003 @08:23AM (#6301824)
    The real problem here, as it appears to me, is the patenting of business processes as a series of 'desirable outcomes'.

    Their patent claims read like a first-pass analysis of the caching/load-balancing problem.

    To my mind, this sort of thing should not be patentable - it is merely a problem analysis, not an innovative solution.

    Most competent people who understood networks and the issues behind content provision would have come up with a similar, if not identical solution.

    This could have been rejected on the 'non-obviousness' requirement, but wasn't.

    It seems that the technical abilities of the patenting authorities are not of a level to deal with patent applications such as this one competently (or indeed in a timely fashion, if it was filed in 1996).

    The best solution for everyone (except greedy patent factories such as this example) is to outlaw the patenting of software and software designs. There's plenty of copyright law out there to protect the code you write, and if it's not the best code at the most appropriate price, then you'll fail in the market and too bad.

    If on the other hand you have a true innovation in your field, you'll have a good 12 months to build the initial market before the competition comes to play.

    If America really is true to the principles of the Founders, it's time the politicians looked at this issue and came up with a solution to stamp on parasitical patents such as this. Even if these people are unsuccessful in their aims of extracting money, the cost to others in the field of dealing with their attentions is a disruption to trade, and needs to be eliminated.

  • Lots of prior art .. (Score:2, Informative)

    by LionsFate ( 513762 )
    OK, I'm not anywhere near familiar with reading patents, but as far as I can guess, we have plenty of prior art.

    From reading the basics of it, and having almost gone into convulsions attempting to understand it, here's what I can gather.

    Re-directing a user to an "alternate address" is covered. So it doesn't have to be transparent in the proxy sense; the client can be re-directed.

    We all know CPAN, right?

    CPAN redirects you to a mirror automatically. Thus CPAN is covered by this patent, if I read it correctly
  • No wonder that jerk is getting a software patent. What else would you expect from an asshole who is in stupid admiration of Ayn Rand [vix.com]. What a moronic caveman!!!
  • by lifegonehazy ( 644446 ) on Thursday June 26, 2003 @08:57AM (#6302098) Homepage
    This [squid-cache.org] is the earliest reference to transparent caching that I could find in the squid-users archive.
