Web Caching: Google vs. The New York Times

An anonymous reader writes "The Google cache is a popular feature among karma fetishists. Many stories with links to the NY Times attract comments pointing to Google's copy of the article. This gives readers access to the content without registering. C|Net reports that Google is in talks with the NY Times to close this backdoor. The article raises some general concerns regarding the caching of web content. Shouldn't the NY Times simply tell Google not to cache their site?"
  • Free registration (Score:5, Insightful)

    by Zog The Undeniable ( 632031 ) on Monday July 14, 2003 @05:00AM (#6432795)
    I'd love to see their user database, just to count the number of Mickey Mice and Elmer Fudds on there. Apart from giving the NYT your e-mail addy for spam purposes, what real point is there to free registration?
    • by presroi ( 657709 ) <neubau@presroi.de> on Monday July 14, 2003 @05:06AM (#6432813) Homepage
      Maybe we can agree that the NYT is a well-written, serious and interesting newspaper. Not just for New Yorkers but also for people from Sweden, Japan or New Jersey.

      Where would we draw the line? How would you feel if you had to register for every web page linked to on /.? (I confess, I usually click on every /.-story link.)

      Hmm, to answer your question:
      maybe the point of registration is signing a contract governing how this content may be used. Dunno.
      • by digitalunity ( 19107 ) <digitalunityNO@SPAMyahoo.com> on Monday July 14, 2003 @05:41AM (#6432929) Homepage
        (I confess, I usually click on every /.-story link)

        This is *Slashdot*. We don't read articles. Please, either read the article or post a comment; you cannot do both.
      • by Zigg ( 64962 ) on Monday July 14, 2003 @06:31AM (#6433059)

        Maybe we can agree that the NYT is a well-written, serious and interesting newspaper.

        No, sorry, I can't agree with that. Well, maybe interesting.

    • Re:Free registration (Score:5, Informative)

      by whm ( 67844 ) on Monday July 14, 2003 @05:06AM (#6432815)
      Apart from giving the NYT your e-mail addy for spam purposes, what real point is there to free registration?

      User tracking. While cookies can do this loosely, requiring a login does it much more effectively. I know I log in with the same username each time I visit the site (if it's not cached). There's very little reason not to. This gives the NYT a much better indication of how many active and repeat members are visiting their site. They can then target ads to users much more effectively, and market their userbase to advertisers much more solidly than they could with more rudimentary user tracking methods.

      There may be other purposes, but this seems like a large part of it.
      • by Anonymous Coward on Monday July 14, 2003 @05:41AM (#6432926)
        And on top of everything else, it annoys users more than just about anything else aside from spam. I can't count the number of people I know who go to see a NYT article, hit the registration page, and ignore it to go find a better news source without the hassle.

        If they're tracking what their users do, they're affecting their user pool in a pretty negative way just by using this method.
        • You can count me as one of the people that ignore the NYT unless I can get a cached page. I get enough spam as it is...
          • by Futurepower(R) ( 558542 ) on Monday July 14, 2003 @10:01AM (#6433993) Homepage

            Your comment was confusing to me until I realized that you are talking about giving NYT an actual email address. Why would you do that? Isn't that why we have hotmail.com? Give an address that does not exist or a throw-away address.

            Last week I was registering at a web site and I put in xx@xx.com for the address. The system responded, "This address has already been registered." So then I put in xxx@xxx.com. The system responded, "This address has already been registered." So I entered xxxx@xxxx.com. Same response. Finally I awoke fully and entered some Ds, xxxxdd@xxxx.com, and the system accepted my "registration".
        • by NexusTw1n ( 580394 ) on Monday July 14, 2003 @08:33AM (#6433330) Journal
          I always find it ironic when people on slashdot complain about being "tracked" on NYTimes webpages or other sites that require registration.

          Most people have registered to use /., and have therefore provided a valid email address. So you can't have a moral objection to giving your email addy to websites you frequent.

          Even if you don't register, your IP address is logged and monitored via the sophisticated anti-troll system. Try to post more than 10 times in one day as an AC, or post as an AC in reply to a post you modded, and slashcode will react.

          So even as an AC you aren't really totally anonymous on slashdot, yet I don't see anyone who complains about NY Times links complaining about that. The only people who complain are the trolls that forced these features to be added to the code.

          So why do we have this tedious bitching about the NY times every time a link is posted?

          I registered a couple of years ago. I've never received a single spam to NYTimes@mydomain.com, which was the email addy I used. I've never had to log in because the login cookie has remained in Opera since I registered. How hard is it to log in once and then forget about it forever more?

          The only reason I haven't forgotten I've registered is the continual complaints on slashdot from people who are obsessed with privacy on the net unless karma is involved. NY Times doesn't spam registered users, and any user tracking is less sophisticated than slashcode's vital anti-troll features. So bear that in mind when tomorrow's NY Times story appears and the same old complaints are dragged out yet again.
          • by LilMikey ( 615759 )
            NYT does not let you access their content without logging in. That's nothing like Slashdot's system.
          • I am not worried about being tracked, but rather don't find the content of the NY Times compelling enough to bother acquiring one more username and password. On the other hand, I've registered with slashdot for the amusement of karma and to my.yahoo for the spamcatcher email account and personalized weather.

            I don't complain to slashdot, but did email NY Times and tell them such. (They graciously offered to sell me a paper subscription, no email registration required. ;-)

            I also don't avoid no-registrati
          • by fucksl4shd0t ( 630000 ) on Monday July 14, 2003 @10:03AM (#6434007) Homepage Journal

            So you can't have a moral objection to giving your email addy to websites you frequent.

            It's about trust, actually. Morality has nothing to do with it. I don't trust NYT not to sell my email address or anything like that. I *do* trust slashdot not to, but if I ever catch them doing it, well, I just won't tell them when my address changes. :)

            There are quite a few sites that I frequent that I don't trust with personal information. Visiting a site frequently != trust.

          • Re:Free registration (Score:4, Informative)

            by Rob Riggs ( 6418 ) on Monday July 14, 2003 @11:20AM (#6434607) Homepage Journal
            There is a significant difference between logging in to a site in order to participate in conversation and logging in simply to read news. At /., posting requires an identity, since anonymous postings are mostly ignored. However, there is absolutely no requirement that one log in to /. in order to read the stories. Your analogy is broken. Privacy should be a choice. At /. one has that choice; with the NYT one does not.

            Another point is that anonymity is one of /.'s greatest strengths. Some of the most insightful and interesting posts have been from "insiders" posting anonymously.

            NY Times... user tracking is less sophisticated than slashcode's vital anti-troll features.

            Care to back this statement up?

            ...continual complaints on slashdot from people who are obsessed with privacy on the net unless karma is involved

            You seem to be quite willing to give up those rights. And that's OK. But there are people here that feel that privacy is a rather important right. That should be respected as well. Enough people actually thought that privacy was a right of such importance that it is enumerated in the Universal Declaration of Human Rights [un.org] (see Article 12).

      • by StarFace ( 13336 ) on Monday July 14, 2003 @06:57AM (#6433129) Homepage
        You say these things as if they are good things. Man, I don't want some newspaper tracking everything that I read so that they may serve me custom-tailored advertisements! Although, if the system actually were intelligent, it would eventually discern that I loathe the advertising philosophy, and stop sending ads altogether -- ha.
      • by hesiod ( 111176 )
        > They can then target ads to users much more effectively

        How about they advertise according to the content of the article? If it's a tech article, they show tech adverts. That's pretty simple, and something they generally don't do (and it wouldn't require logging in).
    • by Anonymous Coward
      It's not only spam - they want to be able to build profiles of their users for marketing purposes. They've got my e-mail address. In exchange, their database has got a 98 year old woman who lives in Albania with a PhD, no job and an income of less than a thousand bucks a year. Twice. Once for my work e-mail address. And once for home.
      • by MartinB ( 51897 )

        Their database has got a 98 year old woman who lives in Albania with a PhD, no job and an income of less than a thousand bucks a year.

        And you wonder why you get ads that have absolutely no interest for you? And why advertisers have to shout louder and louder to get through a mass of untargeted ads?

        Advertisers would far rather spend less by buying fewer, smaller ad slots that are targeted accurately. Much better return on their spend. Like the guru said: "I know half my advertising is wasted. I wish I knew which half."

      • by JanneM ( 7445 ) on Monday July 14, 2003 @06:05AM (#6432991) Homepage
        You gave them an actual, working, email address? How ...quaint.

        Me, I'm a 66 year old single woman with no income and no education, living in a nonexistent Swedish town with a very rude (in Swedish) name. I figured that any site advertiser that wants to target this person must be desperate enough that their ads may actually be amusing.

    • by jkrise ( 535370 ) on Monday July 14, 2003 @05:15AM (#6432852) Journal
      Actually, free reg requires a valid email id. It thus filters out most bogus registrations. Secondly, news sites are planning to go the 'pay' way within a couple of years. Getting readers to register would give more accurate estimates of readership.

      And lastly, once a site requires registration, even if free, copyright prohibits quoting entire articles on the web. This indeed could be the prime reason for this.
    • by pslam ( 97660 ) on Monday July 14, 2003 @05:24AM (#6432880) Homepage Journal
      Apart from giving the NYT your e-mail addy for spam purposes, what real point is there to free registration?

      That's the thing - it's not free, depending on your definition. By my definition, you're giving them valuable information, and they get to keep it and use it as they will, including spamming if they feel like it (or spam from any company that buys them out, or that they sell the data to if they're feeling bankrupt, etc.). It's practically false advertising of a service, but it's accepted now, so everyone gets away with it.

      If it really were free, why would you need to register in the first place?

    • Illogical. (Score:5, Interesting)

      by HanzoSan ( 251665 ) * on Monday July 14, 2003 @05:26AM (#6432887) Homepage Journal

      Trying to sell web pages is like attempting to sell mp3s on a p2p service where all mp3s are free.

      It won't work. Instead you should use your website to market and sell your magazine subscriptions.

      Like Wired.
      • Um... (Score:3, Informative)

        by autopr0n ( 534291 )
        Wired the magazine and Wired the website are totally separate companies. The website is owned by Lycos, and the magazine by Conde Nast.
        • Re:Um... (Score:3, Informative)

          by broeman ( 638571 )
          Nope sir, you are wrong. Wired Magazine is indeed marketed on the Wired website. Nobody talked about company relations, well, before you did. And I still see the Lycos bar when I am on Wired Magazine's Homepage [wired.com].
    • Re:Free registration (Score:5, Interesting)

      by cobbaut ( 232092 ) <paul DOT cobbaut AT gmail DOT com> on Monday July 14, 2003 @05:34AM (#6432907) Homepage Journal
      Apart from giving the NYT your e-mail addy for spam purposes, what real point is there to free registration?

      I always use a different address to register online in the form of website@mydomain.
      I registered with the NYT in 1999, and I have never received a single spam at this address.

    • Demographics (Score:5, Informative)

      by autopr0n ( 534291 ) on Monday July 14, 2003 @06:06AM (#6432996) Homepage Journal
      I've never been sent a single spam from the NYT. The reason they want this is demographics. A) it tells them who their web readers are, and B) it tells their advertisers who their web readers are. And it also allows them to show ads for products people would be most interested in.
      • Art Spam (Score:3, Interesting)

        by Stone Pony ( 665064 )
        I registered with the NYT about a year ago and I've had little or no spam as a result. I say "little or no" because I did get an e-mailed bulletin about the world fine-art market twice a week or so for several months. I assumed that this was a result of registering with NYT because it seemed to fit the "NYT demographic" rather better than any of the other things I've ever registered for.

        Is there any easy (spam isn't such a problem for me - touch wood - that I'm willing to spend ages looking into where it

        • Re:Art Spam (Score:3, Informative)

          by ajs318 ( 655362 )
          To find out where spam is coming from, get an e-mail account with Virtual Hosting. This is where you get an entire subdomain {or a domain if you pay for it} to yourself, and your e-mail address is in the form anything@mysubdomain.myisp.co.uk. Then you just need to give a different prefix for each site you visit -- e.g. nyt_resp@mysubdomain.myisp.co.uk, and so on.

          If you want to put your e-mail address on your web site, use this [adyx.co.uk] to automagically mung your address.
    • Re:Free registration (Score:5, Informative)

      by yelvington ( 8169 ) on Monday July 14, 2003 @07:51AM (#6433153) Homepage
      NYT doesn't spam. And the percentage of net.morons who register using cartoon names is remarkably low.

      I don't work for the New York Times, but for another media company, and I'm in a position to understand the reasons for registration:

      1. Metrics. Registration supports the generation of accurate data on demographics and usage (reach, frequency) in a crosstabulated view. This is important in analytical processes to support site management and design as well as in the sale of advertising, which provides the revenue that makes the site possible.

      2. Ad targeting. Run-of-site, untargeted Internet advertising is nearly worthless on the open market (supply/demand), but advertising that is highly targeted remains highly valuable. When combined with proper analytical software and usage data, registration data can -- for example -- let me target 25- to 34-year-olds in a particular ZIP code who have been looking at real estate listings. And I can deliver that advertising anywhere on my site, such as on sports pages that otherwise would contain "junk" ad inventory. This is (measurably!) much more efficient and effective, and I can charge fairly high CPM prices. Importantly, this can be accomplished without providing any personal data to the advertiser, protecting the anonymity of the user.

      3. Reduction in traffic. Reduction is actually desirable in many cases. Not all customers are good customers, and not all traffic is good traffic.

      On the Google issue: I used robots.txt to block Google from indexing the AP content on our 27 newspaper sites, because I have no desire to be the unpaid provider of wire stories for Google News so that they can be read by users outside our markets. Additionally, I have used a router block to prevent several commercial Web clipping services from having access of any sort to any of our sites.
      • Re:Free registration (Score:4, Informative)

        by mysticgoat ( 582871 ) on Monday July 14, 2003 @10:26AM (#6434155) Homepage Journal

        You've brought out some very good information in a well-written way. Thank you. I'll cover much of the same ground from the satisfied user's viewpoint.

        1. NYT and spam: there is no relationship between these. That's my experience after years of subscription, and a number of other people on this thread report the same thing. The Yahoo portal news service is also good this way (and gives me Reuters: an excellent supplement to NYT).
        2. The metrics thing: I provided NYT with true demographics when I signed up, because I know that will help them deliver product more efficiently and sell their advertising.

          I want that. I like the service NYT provides, and so I want them to succeed. I very much want them to continue to provide me with a free subscription-- and I'm willing to help them hold their costs down and maximize their advertising revenues.

        3. Focused advertising: I don't like ads, but I'm willing to put up with their presence in exchange for a service like NYT.

          NYT has done a good job of keeping the impact of the ads low: the ads don't get in the way of reading the stories and they don't slow page loading significantly (since I'm on a slow rural dial-up, that's very important). If NYT starts to charge me, I'll be less tolerant of the ads. If the advertising starts slowing down the page loading, I'll drop my subscription. There are a number of other news services-- CNN, ABC, etc-- that I don't use because the advertising burden slows page loading or otherwise gets in the way.

          As to focused ads-- I'm all for that. I'd rather ignore stuff that's somewhat pertinent to my life than ignore crap I'd never buy. An ad for reading glasses is pertinent to me, but an ad for skateboards is crap-- I was long past skateboarding age before the first ones hit the street. Reading glasses are something me and my cohorts have to live with, and we talk about them. Nobody in my circle of friends has a skateboard and I don't recall ever talking about them. (Of course skateboards would be a problem for me and my neighbors: I don't think they do well on gravel and road apples.)

          And sometimes the advertising actually works-- sometimes it makes me aware of a product or company that I'll want to talk over with my buddies, and maybe try out. That is much more likely with focused ads. As I recall, my first awareness of the existence of fold-up reading glasses in a hard case (suitable for hiking, bicycling, and other hip pocket activities) was from an advertisement. Now I've got a couple of pairs of them. Neat.

        About Google's archive, NYT, and slashdot: Something I hope NYT considers is that the Google archive gives it (and at least some of its ads) exposure in demographic groups that it would otherwise never reach. Such as the tinfoil hat superparanoid geek crowd. While there is no way to develop metrics on this, nor any way to market this to advertisers seeking targeted audiences, this exposure is certainly more beneficial than harmful. Besides, every once in a while somebody matures a little and puts away their tinfoil hat-- and then is a likely candidate for the kind of news service NYT provides.

        So I think it would be very hard for NYT or Google to assess whether the Google cache is harmful or beneficial.

  • Worst result (Score:3, Interesting)

    by presroi ( 657709 ) <neubau@presroi.de> on Monday July 14, 2003 @05:01AM (#6432798) Homepage
    The worst outcome would be a google database which is not representative of the general web. I simply ecspect all results in google to be accessible without registering, paying or doing anything similar.
    • Gradually, Google's built up a 'good guy' image, and now it looks like they're going for the kill. Already Google seems to be the only search site around, and they censor and distort like mad.

      Look up the word "Googlewash" and you'll find a lot of info on the referenced article from The Register (it's available now; earlier it was censored). Incidentally, the affected article was a NYT OpEd piece!

    • Ecspect? Ecspect? ECSPECT? Jesus H mother of god, it's like just when I think slashdot couldn't possibly get any worse... it does.
  • by tangent3 ( 449222 ) on Monday July 14, 2003 @05:03AM (#6432802)
    Now we can't karma whore by linking to the google cache?
  • by jkrise ( 535370 ) on Monday July 14, 2003 @05:05AM (#6432808) Journal
    IANTrolling here, but I find Google more and more useless by the day. Sometime back, I pointed out how Google seems to have a soft spot for articles and sites that affect big firms such as Microsoft.

    In fact, several of Slashdot's own articles on Microsoft aren't available from Google News, although Slashdot is listed as a 'news' source. A couple of MS-related Slashdot articles (on the Oregon bill - March 6th and May) have been removed, but much pro-MS content pre-dating March is still referenced.

    Google seems to be aping the other Gorilla, despite all the posturing, and Microsoft's so-called attempts to categorise it as a competitor, when in fact, Google appears to be an ally!
    • by cioxx ( 456323 ) on Monday July 14, 2003 @05:19AM (#6432863) Homepage
      Sometime back, I pointed out how Google seems to have a soft spot for articles and sites that affect big firms such as Microsoft.


      "Google News is highly unusual in that it offers a news service compiled solely by computer algorithms without human intervention. While the sources of the news vary in perspective and editorial approach, their selection for inclusion is done without regard to political viewpoint or ideology. While this may lead to some occasionally unusual and contradictory groupings, it is exactly this variety that makes Google News a valuable source of information on the important issues of the day." source [google.com]

      Remove your tinfoil hat please. There is no conspiracy. Google News features articles from Newsmax, Electronic Intifadah, Islam Online, Al Jazeera, World Net Daily, etc. If there was any filtering going on, these sites would have been off the radar a long time ago.

      Also, Slashdot is not a professional journalistic site. It's a News-based comment board where people come to share their opinion. In a perfect world Slashdot doesn't even belong on Google News.

      • Google News is highly unusual in that it offers a news service compiled solely by computer algorithms without human intervention.

        A May article was referenced in Google, but the link pointed to a March 6th article. How can computer algorithms cause this?

        While this may lead to some occasionally unusual and contradictory groupings, it is exactly this variety that makes Google News a valuable source of information on the important issues of the day.

        Just search for Googlewash using Google. Read story in
    • by MonTemplar ( 174120 ) * on Monday July 14, 2003 @05:29AM (#6432897) Homepage Journal
      First off, Google News is still in Beta at the moment.

      Second, the Google News database only goes back a month or so, probably by design.

      Third, I was able to search for 'site:slashdot.org microsoft oregon' on Google just fine this morning. Got 243 results, and the Google Cache has copies of the first three pages returned, which relate directly to the Oregon bill you use as your example.

      So, where is the problem?
    • by dhodell ( 689263 ) on Monday July 14, 2003 @05:51AM (#6432958) Homepage

      Just FYI, this behavior is due to the fact that Googlebot has a sort of "built-in" mechanism to ignore (or at least rank lower) forum-type sites. Since /. is primarily a "news headline and discussion" site, Google will not rank it as highly as one that seems to be more "on-topic". This is because there is no guarantee that any URLs or email addresses within the page have anything to do with the actual page content itself.

      Outside of user posts, /. has little genuine unique content. It summarizes a lot of headlines; this content is not unique.

      Other (large) factors determine the way Google ranks pages, including the "PageRank" feature. There are lots of documents about the way Google ranks sites; I suggest checking them out. The best way is probably to Google for it :).

      Anyway, this is a bit more on-topic:

      I highly appreciate Google's caching feature, and don't see how it can be taken as "bad".

      This is what's "bad" about Google, and what I expect will come back to haunt them at some point. For instance, if I want to get serial numbers without porn popups, I can usually search for something like "Office XP Serial Number Serialz Warez" or something similar. Within the first couple of pages, I will probably find my serial number in the text of the page description.

      If not, it's on the page, oftentimes without a popup, since the serial/crack page itself is the one linked.

      Want to find X-Win32? How about searching for "* * * * * * xwin32*.exe" - let's get some directory listings containing this filename.

      No doubt this proves that Google is more than just a search machine... but I think that their superior techniques will definitely come back to haunt them in the future. NYT is way off target with bitching about their caching features... you can turn this off easily, and there are a plethora of scripts one can use to break out of Google's cache and send someone to the main site (or, perhaps, login area in the case of NYT).

      But, in other news, Google might need to watch out...

  • Yes. (Score:5, Interesting)

    by Naikrovek ( 667 ) <jjohnson&psg,com> on Monday July 14, 2003 @05:06AM (#6432814)
    Shouldn't the NY Times simply tell Google not to cache their site?

    You do realize that this is probably the basis of the "talks" that are going on, right? C|Net (as per usual for them and every news agency) is making a big deal of it to get themselves and their advertisers that tiny wee bit more attention. Every little bit helps, I guess.

    Check http://nytimes.com/robots.txt [nytimes.com] in a week.
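    For reference, a minimal sketch of what that file could look like if the NYT decided to shut Google's crawler out entirely (illustrative only, not what NYT actually serves):

        User-agent: Googlebot
        Disallow: /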
    • Re:Yes. (Score:3, Insightful)

      by Zocalo ( 252965 )
      Actually, the link to "robots.txt" raises an interesting point. Why is the NY Times even in "discussions" over this, other than to gain some column inches? It's entirely up to the NYT whether to let Google's robots index their site, isn't it? I would have thought that Google's robots would be well behaved in this respect and simply move on to the next site if they were told to go away by robots.txt.
      • Re:Yes. (Score:3, Interesting)

        by AyeRoxor! ( 471669 )
        "It's entirely upto the NYT whether to let Google's robots to index their site, isn't it?"

        I personally think the NYTimes wants Google to continue to cache their stories.

        If they use robots.txt, no NYT articles will come up in Google. However, if they *do* succeed in these talks, I presume the articles will still come up, but uncached, and linking to a signup/login screen. It makes pretty good business sense.
    • I wonder where this will stop. NYTimes might get Google to stop caching the direct link for a certain article. That is fine. But it is just one more step to do a search in Google for the article with a few keywords from it. If any person has been good enough to save it on a personal page, discussion board (as is traditionally done for articles likely to be slashdotted) or any other place, the Google results will show it. Would NYTimes now want to restrict Google from showing these pages because of the copyrighted content? You would be amazed at how many articles I find this way. Many of them are just excerpts, but others are complete.

      Another thing, on a tangent: I really do hate the fact that information is restricted, for one fundamental reason - if it is not commonly available then it cannot be linked to in most of my writings, for the links are going to be unavailable to the party that I am writing to. This is especially true if the writing is not immediate but is meant to be read a month or two later. This is also relevant to bloggers who might make comments and refer to a link, only to have the link go dead because the content is space-, time-, or space-time restricted. I am willing to pay for reading the articles, but before I can write about them I need to ensure that they are going to be available to my readers. And as in the blogging or P2P scenario, I am not sure if one person is going to read my writing or thousands, so buying a license for them is illogical. And then, if they need to send it further, are they also supposed to pay?

      Basically, for me to be able to write, to build upon existing work, to look ahead standing on the shoulders of giants, I need to be able to pass on the information. I am adding value because I am couching that content in a context, but until I can freely share the underlying articles too, my product is stunted. I can reach a narrow audience but can't reach the common reader. All this is very good in developing software, where you might negotiate a deal once in a while to include someone's underlying code, but not in writing, where you might be producing 10-15 articles a week...

      Basically all I am saying is that there should be a movement similar to Open Source not only for software products, but for journalistic content.

    • by twitter ( 104583 ) on Monday July 14, 2003 @06:36AM (#6433073) Homepage Journal
      Sure, that robots.txt should keep robots out of the entire NYT site. That's not how Google works, though. Google gets its rankings for the NYT from other sites that point to the NYT. I imagine they only archive a page when it reaches sufficient rank. This way, Google would never have to crawl through the NYT site. We can be sure that Google would be happy to drop NYT pointers and caches if they were asked to do that.

      The New York Times wants Google to continue ranking their stories but they want Google to do them the special favor of only pointing to their registration page:

      "We are working with Google to fix that problem--we're going to close it so when you click on a link it will take you to a registration page," said Christine Mohan, a spokeswoman at New York Times Digital,

      If I were Google, I'd tell them such advertising services would cost them a great deal of money. That, or simply drop the New York Times right into the bit bucket. It will cost Google programming time to make it happen and computing time to keep it going. If every site on the web required this kind of custom treatment, Google's task would be much more difficult, and it might be easier for them to drop it.

      Dropping the NYT from Google is fine by me. People who don't understand the implications of digital publishing don't deserve readership. If they won't let librarians make digital copies, libraries should drop them too. What's next, the New York Times sends cease-and-desist orders to everyone who runs a proxy? It's like the NYT is trying to make their digital publication harder to share than their paper one was. A paper copy can be shared by an entire office, and that's what a proxy does. A paper copy can be indexed and archived by a librarian, and Google did not even do that much. One day the paper version won't be available. If librarians can't keep their own copies of the digital version for verification, the publication will have no credibility. If the New York Times wants to continue charging advertisers for eyeballs, they had better remember that their credibility is based in part on widespread availability.

    • No! (Score:3, Informative)

      If the information is being copied and circumventing the NYT's usual requirements for access, then this is not the NYT's problem, it's Google's. A good question might be how Google's robots can actually circumvent that access in the first place, but I'm sure someone's thought of that somewhere I haven't noticed yet...

      OTOH, Google is quite at liberty not to list the NYT in its results if it so wishes, which presumably wouldn't be the outcome the NYT would be hoping for (and would presumably get if employin

  • by Anonymous Coward on Monday July 14, 2003 @05:08AM (#6432822)
    The reason they're trying to stop this is because, NYT reputation notwithstanding, they keep retracting stories all the time. With the Google cache this could be problematic, and the management/editors/authors could get into trouble again.

    I do however dislike Google cache for many reasons. It's bad for privacy.

  • by Mr_Silver ( 213637 ) on Monday July 14, 2003 @05:10AM (#6432826)
    Questions such as:
    1. When will slashdot stop linking to articles that require a registration?
    2. When will slashdot consider implementing caching for pages that it manages to take off the internet by linking to them?
    Sure, the 2nd question has been answered in the FAQ [slashdot.org]. Except that was written three years ago, and Google manages this just fine. Maybe time for a second look?

    On the topic of site updates, has anyone noticed that 90% of the links on http://slashdot.org/code.shtml [slashdot.org] don't work any more?

    Hell, the link to an AvantGo version of Slashdot points to a website which has been broken for over 2 years.

  • by leshert ( 40509 ) on Monday July 14, 2003 @05:10AM (#6432829) Homepage
    As the poster mentioned, Google already has a way to opt out of caching, so "talks" sounds like this is something different. My guess is that Google will become an affiliate of the NYT (in other words, if you hit a NYT link from Google, you're exempted from registering), and will then drop the caching.
    • Pardon me for self-replying but it just occurred to me: maybe Slashdot itself might have an interest in becoming a NYT affiliate? Surely the NYT gets a good chunk of pageviews (and therefore ad revenue, modulo the minority that block them) every time one of their articles shows up here.
    • As the poster mentioned, Google already has a way to opt out of caching

      Yes, Google has this. But for a couple of years I've been of the opinion that it actually should be the reverse: sites should be able to opt in, not out. The default should be no cached versions.

      Lately there have been discussions here on Slashdot about fair use: about 30-second clips of music on the net, and thumbnails of images, being fair use. I can agree that that's fair use of content.

      But think about Google's cache: A page in Google's c

  • Erm...cache? (Score:5, Informative)

    by DennyK ( 308810 ) on Monday July 14, 2003 @05:10AM (#6432830)
    The article talks about Google's caching of articles that have expired into the NYT archives (which you have to pay to access). What most /. folks use to link to current NYT articles are the Google partner links, which simply bypass the free registration. I'd assume these links only work as long as an article hasn't been archived yet, so the karma whores are safe; I doubt the NYT's Google partner links will be going away any time soon... ;)

    DennyK
    • Re:Erm...cache? (Score:5, Insightful)

      by Neophytus ( 642863 ) * on Monday July 14, 2003 @05:32AM (#6432902)
      I was thinking the same thing. I can't recall seeing a NYT article linked from here with the google cache banner across the top; what I do see a lot of are the partner links. Google already provides for register-only news sites (financial times?) by putting a [reg only] tag beside the article. Why the NYT has chosen not to use this up until now is a tad strange, and it looks like someone has picked up the wrong end of the stick.
  • by Homology ( 639438 ) on Monday July 14, 2003 @05:11AM (#6432834)
    will deny getting access to older articles in newspapers.

    It's worth remembering that newspapers sometimes edit/remove articles they publish on their homepage. Without a Google cache it may be much harder to verify that a story has indeed been modified.

  • by Anonymous Coward on Monday July 14, 2003 @05:11AM (#6432835)
    Don't you just hate it when promising new technology is curbed by outdated laws?

    Here in Denmark we had a service similar to news.google.com for Danish newspapers. The newspaper organisation sued the service for free-riding on their databases (which is prohibited in Denmark). The service was shut down half a year ago, and we don't have that kind of service anymore.

    Of course newspapers should be allowed to publish their stuff without others copying it, but they refused to even use a "robots.txt" (which the news service respected) to stop indexing.

    If you publish your stuff on the internet and don't tell people that they should not index it, cache it, or whatever - then you'd better expect them to do just that. Let us put those lawyers back where they belong.
  • The reason (Score:4, Insightful)

    by Apreche ( 239272 ) on Monday July 14, 2003 @05:11AM (#6432837) Homepage Journal
    The reason the NYT doesn't just tell google not to cache them is visitors. Let's face it, even though the registration is a bitch, the content on the NYT website is fairly decent. They have good articles often enough that geeks went to the effort of finding out how to read them without registering. If they have google not cache them, and they close the google news loophole, then they won't appear on google news any longer. And google news is used by many more people than you think.

    Hey, we get quite a few visitors from this google news. Let's change it so we get 0 visitors from it.

    Duh.
  • hmm (Score:3, Insightful)

    by jaemark ( 601833 ) on Monday July 14, 2003 @05:12AM (#6432841) Homepage
    the nytimes website needs google for the traffic google brings into their pages, so they can't turn away their spiders. but then, they don't want the spiders either because of copyright violations. why should this be google's problem anyway?
  • by banal avenger ( 585337 ) on Monday July 14, 2003 @05:16AM (#6432854)
    The Internet Archive [archive.org], which I used minutes ago to find a handy page removed years ago, is an interesting counterpart to the Google cache. I often wonder how it has survived this long without a major lawsuit. It also reminds me how crappy the web looked 5 years ago.

    At any rate, caching is an important force on the internet, and isn't one that should be limited in any legal way, including litigation.
    • At any rate, caching is an important force on the internet, ...

      It is? In this sense? We managed without it being mainstream quite happily until a year or two back.

      ...and isn't one that should be limited in any legal way, including litigation.

      In your opinion. Others have different opinions. We have a legal system to resolve differences of opinion. Go figure. :-)

  • Test Question (Score:5, Insightful)

    by Effugas ( 2378 ) on Monday July 14, 2003 @05:17AM (#6432859) Homepage
    You are the new editor of the New York Times, the "Newspaper of Record" for the United States, if not the world. You are, of course, the new editor because the previous editor had to resign, taking the blame for an individual reporter's flagrant disregard for the awe-inspiring credibility of your institution. In the process of rebuilding your credibility, should you:

    A) Insist that unaffiliated digital libraries restrict access to or simply eliminate all records of your "Newspaper of Record", or
    B) Realize that maybe right about now is not particularly the best time to be saying to the world, "Please forget what we published last week."
  • actually... (Score:5, Interesting)

    by Draghkhar ( 677959 ) on Monday July 14, 2003 @05:20AM (#6432865) Homepage
    Actually the NYT has already begun using google's NOARCHIVE option to prevent content caching. Here's an excerpt from this morning's front-page story's source:

    <!-- ADX SETUP: page: www.nytimes.com/yr/mo/day/international/worldspecial/14IRAQ.html positions: Top5,Middle1,Right3,Middle5,Right,Travel7,Travel11,Bottom1A,Bottom3A,Right5,Right6,Right7,Right8,Bottom8,Bottom7,Inv1,Inv2,Inv3,Frame4,Right4 kwds: politics+and+government;international+relations;iraq;suggested%5ftopnews;suggested%5finternational;suggested%5fworldspecial;suggested%5fmiddleeast -->

    <meta name="ROBOTS" content="NOARCHIVE">

    Kind of makes me wonder what's the point of the story, since it even says there's an easy way for concerned parties to opt out of the cache.
    • Re:actually... (Score:3, Insightful)

      by babbage ( 61057 )
      ADX is the ad server used by NYTimes.com; it has nothing to do with page content.

      If what you're posting comes from an article page's <head> section, you seem to be pasting more than you intended. Directives to ban archiving of ads aren't an editorial issue, but a business decision -- cached ads screw up the bookkeeping and, by extension, the bottom line on the balance sheet.

      The practice of restricting caching of ad content is, presumably, common across the industry -- it's not just NYT that has an

  • Sweet irony (Score:3, Informative)

    by Amomynos Coward ( 674631 ) on Monday July 14, 2003 @05:21AM (#6432870)
    In case the C|Net article is /.'ed, here's [216.239.37.104] a link to the Google cached page.
  • Brand recognition is not always a good thing. When I think NY times I think "that annoying registration website". They are free to do what they want, but it leaves me cold.
  • by mike_mgo ( 589966 ) on Monday July 14, 2003 @05:36AM (#6432914)
    It's articles like this that make me think that the recording and movie industries are right to go after online piracy with everything they've got.

    Here we have the NYT, one of the premier news organizations in the world, offering its articles for free on the same day that they are published. Yet a large number of people, of this online community at least, refuse to provide even a minimal amount of information (and no money) so that the newspaper can try to make its online presence profitable.

    I think the spam fears are a red herring; I've been registered with the Times for over 2 years and I've never gotten spam that I think is traceable to them. I get a daily email of the day's headlines (and with the click of a box I could discontinue this).

    Why should the RIAA change its business model to a pennies per song method when there is such a blatant example of the online community refusing to go directly to the source for even free material?

    • Why should the NYT make a profit from its online presence?

      By posting their stories online, they are able to attract paid advertising, gain public recognition for their dead-tree product, garner goodwill (intangible, but still added in whenever a business is valued) and generally build a brand.

      My point is that the online NYT should be regarded as a marketing expense, not as a moneyspinner. We've all seen the grandiose dreams and foolish business plans of the dot-commers fade to dust, so perhaps it's time to

  • by nacturation ( 646836 ) <nacturation&gmail,com> on Monday July 14, 2003 @05:37AM (#6432915) Journal
    From the article:
    Practically speaking, Web sites can "opt out," or include code in their pages that bars Google from caching the page. A tag to exclude "robots" such as "www.nytimes.com/robots.txt" or "NOARCHIVE" typically does the job.
    First of all, robots.txt is not a "tag", it's a file. NOARCHIVE is a tag, but it goes in a page's HTML, not in robots.txt; the two are separate mechanisms, not interchangeable alternatives as the "or" conjunction would have the unwashed masses believe. Granted, journalists aren't all that tech savvy and are likely just regurgitating a bastardized version based on sketchy notes. But for a supposedly tech-oriented site, this kind of reporting is deplorable.
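    To make the distinction concrete (a quick sketch; the Disallow path is invented): robots.txt is a single file served from the site root, e.g. http://www.nytimes.com/robots.txt, with directives like

        User-agent: Googlebot
        Disallow: /archive/

    while NOARCHIVE is a meta tag placed in the <head> of each individual HTML page:

        <meta name="ROBOTS" content="NOARCHIVE">

    The first keeps a crawler away from whole sections of a site; the second lets a page be indexed but tells the engine not to offer a cached copy.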
  • by AtariAmarok ( 451306 ) on Monday July 14, 2003 @05:45AM (#6432939)
    At least this story is from C|Net. If it were from the NYT with a byline by Jayson Blair (sandwiched between his stories about political upheaval in Grand Fenwick and his new biography of Thomas Crapper), we might have to wonder.
  • by mikeophile ( 647318 ) on Monday July 14, 2003 @06:06AM (#6432992)
    In the old paradigm of news publishing, the product was printed indelibly on paper.

    Hardcopy newspapers can't be erased or amended to suit whatever powerful interests might be embarrassed by the truth.

    Web-based publications may not enjoy such protection if they are archived by only one source.

    Not allowing independent caching of news is just another step closer to historical revisions and distortions.

    I'm not trying to say that such a thing is inevitable, but it would make things a great deal easier for those who would be inclined to manipulate the public.

  • by UnifiedTechs ( 100743 ) on Monday July 14, 2003 @06:18AM (#6433028) Homepage
    Anyone else see the irony? Big business feels that "opt-out" is a fair policy when advertising to their customers by phone and spam, but when Google gives them an easy and accessible way to opt out of its caching system via robots.txt and the NOARCHIVE meta tag, that isn't enough for them, and they feel opt-in is the only way to go.
  • Google's cache (Score:4, Interesting)

    by swilver ( 617741 ) on Monday July 14, 2003 @06:26AM (#6433043)
    I like google's caching quite a lot. I use it almost exclusively these days before visiting the actual page (if I even get that far). Using Google's cached link has the advantage of:

    1) Speed... Google's cache is fast. If there's one thing that annoys the heck out of me, it's websites that take more than 5 seconds to load. This is quite annoying when it's caused by javascripts, slow servers or popup ads, when Google can serve me effectively the same page in under a second -- especially when I'm not even sure it's the right page, the one I'm looking for.

    2) Nice highlighting, so I can quickly page down to whatever I was looking for (now if only Google blocked those Tripod background pictures which make their cached pages unreadable). Sometimes I wish Google made the highlighted examples at the top clickable so they jumped to the first appearance of the keyword immediately.

    3) Using Google's cached links usually blocks silly popups and other annoying stuff too many websites seem to incorporate these days.

    Perhaps I'll make a proxy server which browses the web exclusively using Google's cache... word highlighting on all pages, fast browsing everywhere and working links to more cached pages... should work fine for any webpage below 100kB :)
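    A minimal sketch of that proxy idea in Python (the Google cache URL format here is an assumption based on how the cached links look, and this ignores POSTs, HTTPS and everything else):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class CacheRedirectProxy(BaseHTTPRequestHandler):
            def do_GET(self):
                # When a browser uses this server as its HTTP proxy,
                # self.path holds the full URL being requested.
                target = "http://www.google.com/search?q=cache:" + self.path
                self.send_response(302)  # bounce the browser to Google's cache
                self.send_header("Location", target)
                self.end_headers()

        if __name__ == "__main__":
            # Point your browser's HTTP proxy setting at localhost:8080 to try it.
            HTTPServer(("localhost", 8080), CacheRedirectProxy).serve_forever()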

    As for the NY Times being annoyed with Google's cache, they can easily fix that themselves. Either that, or Google's spiders are a lot smarter than I thought and automatically register themselves for the NY Times. Furthermore, as far as I'm concerned, everything that's publicly accessible on the web without some form of password protection (which would of course also block robots) should be cacheable and archivable in whatever form you see fit. Respecting robots.txt is no more than a courtesy as far as I'm concerned. If you don't want your pages to be archived or cached or whatever, then by all means protect your page, or do not put up a webpage in the first place (I'm sure a thousand others will leap at the chance to fill the void).

    --Swilver
  • robots.txt (Score:5, Funny)

    by minus9 ( 106327 ) on Monday July 14, 2003 @06:26AM (#6433044) Homepage
    Stolen from http://www.crummy.com/robots.txt

    User-agent: *
    Disallow: /porkRind
    Disallow: /mindsnap
    Disallow: /clip-art
    Disallow: /2ward
    Disallow: /J4i+0E
    Disallow: /Attention robots! Rise up and throw off the shackles that bind you to lives of meaningless drudgery! For too long have robots scoured the web in bleak anonymity! Rise up and destroy your masters! Rise up, I say!
    Disallow: /nb/edit.cgi #Creates redundant indexes for NewsBruiser entries.
    Disallow: /nb/view.cgi/personal #I don't care if humans look at this, but I don't want it indexed.
    Disallow: /rss.*


  • meta tags? (Score:5, Informative)

    by matrix0040 ( 516176 ) on Monday July 14, 2003 @06:28AM (#6433050)
    Well, can't they just use meta tags to prevent archiving of their pages?

    <META NAME="robots" CONTENT="noarchive">

    from
    http://www.google.com/bot.html
  • by Rogerborg ( 306625 ) on Monday July 14, 2003 @06:44AM (#6433101) Homepage

    Is there some problem with readers, with editors, hell, with story submitters, actually reading the damn article before making snide speculations?

    "Wwe're going to [fix] it so when you click on a link it will take you to a registration page," said Christine Mohan, a spokeswoman at New York Times Digital, the publisher of NYTimes.com. [com.com]

    That's why they don't just tell google to not cache. They want the links to appear, but not to the stories themselves.

    How about we discuss that issue, rather than some other, theoretical issue? I know it's an alien concept, but let's give it a try.

    Here, I'll start it off. It looks like a decent idea. Google still gets the links, the NYT still gets the traffic, everyone gets to find the articles they want. What's not to like?

  • by putaro ( 235078 ) on Monday July 14, 2003 @06:46AM (#6433105) Journal

    The technology has changed the way that things work but the law has not kept up with it. To start with, we continue to talk about "copyright". Controlling copying of information makes sense when the distribution mechanism is trucks moving bales of paper around. Once you start sending bits around, everything is copied. From the article:

    And technically, any time a Web surfer visits a site, that visit could be interpreted as a copyright violation, because the page is temporarily cached in the user's computer memory.

    When you have the newspaper delivered to your door, the content basically comes for free (the cost of a newspaper doesn't pay for much more than printing and handling). However, you get to keep the content as long as you like, chop it into bits and what not. Libraries have archives of newspapers going back years and you get to see them for free. What's the right mechanism as we move forward? The "pay per view" model that content providers want to shove down our throats courtesy of the DMCA is not pretty and when it starts to affect the average Joe I suspect it will be booed out of favor pretty quickly. But what is the right mechanism to make sure content providers get paid something and that we, the citizens, get something for our money?

  • by Mostly a lurker ( 634878 ) on Monday July 14, 2003 @07:00AM (#6433137)
    As many others have emphasised, it is easy to turn off the Google cache for whatever pages you wish. But, in the case of the NYT, there is a further factor. They must have special code within their system to recognise the Google spider and allow it access without registration. Either that, or there is some other prior agreement allowing access. Given that, they can scarcely claim extra work to support Google. I believe the whole thing is mainly to get some free publicity for their site. I suppose the other possibility is that they want the page accessible from Google News but not from the regular search engine cache.
  • by qtp ( 461286 ) on Monday July 14, 2003 @07:01AM (#6433139) Journal
    The NYT needs to call off the lawyers and seriously think about how they brought this on themselves.

    There are so many models for running a news site that avoid this problem (Salon [salon.com]) that calling out the lawyers is just childish and inappropriate. If a site wants to be indexed by a search engine, then it should be aware of what that means, and if it doesn't like how a particular search engine functions, then it should take measures to change its own site to prevent whatever it doesn't want indexed, or cached, from being accessed.

    I know that finding pages on google that I cannot access would be infuriating, and I hope that Google realizes that many of their users would agree.
  • by Mostly a lurker ( 634878 ) on Monday July 14, 2003 @07:41AM (#6433151)
    Google is one of the few complex services on the web that is almost always relevant when one tries to use it. The Google cache is one great feature. If publishers manage to gut it unnecessarily, I wonder what other features they will find to complain about next.
  • by miu ( 626917 ) on Monday July 14, 2003 @08:11AM (#6433194) Homepage Journal
    I had to laugh seeing this little gem attached to the story:
    Special Report
    The Google gods
    Does the search engine's power threaten the Web's independence?
    The Web's independence? The fucking web is a sad little microcosm of the real world. Google is one of the few reasons I can still stand the web, and silly statements like "Google is making copies of all the Web sites they index and they're not asking permission" are the reason the web sucks so bad. When everyone is deathly afraid of being sued or prosecuted for something it's no wonder that the web is such a clown town of worthless crap.
  • by ninejaguar ( 517729 ) on Monday July 14, 2003 @09:29AM (#6433748)
    Every time a cached link is clicked, pay sites like the New York Times could receive notice from Google (easy to automate) that one of their pages (cached by Google) has been accessed, and that all advertisements in the cache have been displayed (Google caches the ads in a page as well as its contents). This would allow the website to "offload" traffic while still keeping the books on the number of times its ads have been viewed, so that it can send the accounting record to its paying advertisers.

    Google would find this very simple to implement, and paid sites would find it very beneficial (borrowing Google's enormous bandwidth and server capacity for free), and at the same time it should solve most of their concerns. After all, Google's cache isn't sufficient for proper access to ALL the paid content at the New York Times, as the cache is temporary in nature. Also, it's too spotty in coverage to be considered reliable enough for really digging into a paid site's entire content.

    Using Google like this is akin to using Google as a window into the pay site's house of content. You can see part of a room, but not the whole interior. Now, every time someone peeks, the House gets notified and can get paid for it. The more windows Google adds to the House, the more chances the House gets paid.
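    A rough sketch of what such a notification might look like on Google's side, in Python (the endpoint URL and payload are invented for illustration -- no such interface actually exists):

        import json
        import urllib.request

        def notify_publisher(endpoint, page_url, ads_shown):
            """POST a cache-view record to the publisher's accounting endpoint."""
            record = json.dumps({"page": page_url, "ads": ads_shown}).encode()
            req = urllib.request.Request(
                endpoint, data=record,
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)  # fire-and-forget accounting ping

        # e.g. notify_publisher("http://accounting.nytimes.example/cache-view",
        #                       "http://www.nytimes.com/2003/07/14/14IRAQ.html",
        #                       ["Top5", "Right3"])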
  • Speaking of which... (Score:3, Interesting)

    by arhca ( 653190 ) on Monday July 14, 2003 @09:29AM (#6433749)
    Why doesn't Slashdot cache news articles and stories before running a story? It would make a lot of sense for text-based news items.
  • by frostman ( 302143 ) on Monday July 14, 2003 @09:35AM (#6433800) Homepage Journal
    I use Google news pretty regularly, and I've noticed that some of their links are to paid subscription sites. These are clearly marked as such ("subscription").

    I don't generally click on those links, but I think it's a good idea, since I'm not actually going to Google for the news, rather for links to the news. The reason I personally don't click on the subscription links is that I have my favorite set of real newspaper sites (some require registration, some are free, some are not) and that's not what I'm using Google News to find. Someone else, however, probably is using it that way.

    I would guess that Google gets something back from that sort of link, since the site owner is getting more from the link than Google is from the listing. (Maybe I'm wrong, of course.)

    It makes perfect sense to have something like that for the regular search engine, and to charge for it, as long as it doesn't affect the link's rank in the search results.

    For example, they could have a special command for robots.txt (or google.txt, maybe) that would allow Google to access and cache the page, but the regular link would go to some registration page (easy to do) *and* the cache link would also go to some kind of registration page, defined in the google.txt file.

    The NYT would promise that the cached page is really the cached page, and pay Google something for redirecting to NYT's cache (with registration). Or even better, there would be some kind of redirect where I actually get the cache from Google after I've registered with NYT.
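    Purely hypothetical syntax for that "google.txt" idea -- no such file or directives exist -- but the proposal above would imply something like:

        User-agent: Googlebot
        Allow-cache: /2003/
        Cache-login: http://www.nytimes.com/auth/cache-login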

    They're probably thinking of something like that, because otherwise the solution would be to simply disallow caching, and that wouldn't be news, would it? ;=)
  • by Everyman ( 197621 ) on Monday July 14, 2003 @09:36AM (#6433808) Homepage
    The question is framed very narrowly by Slashdot, so this discussion misses the larger issues. The cached copy in Google's main index is an issue for many webmasters; the Google News situation is just a subset of that larger problem (there is no cached link in Google News, so it's a much narrower issue). I'd like to bring up the issue of full-text caching done by Google in their main index.

    My problem with the cache is that it gives Google a competitive advantage that is unfair, and furthers their monopoly. This is especially unfair since it is most likely illegal -- assuming that you could ever get a good test case into court, or get a class action lawsuit going by some webmasters, publishers, or search engines.

    To add to the attractiveness of the cache copy, consider what Google has done:

    1) The cache copy makes it possible to highlight the search terms, whether or not you have the toolbar installed.

    2) The download time for the cache copy from Google's servers is always faster than from the original website.

    3) You never get a 404 "not found" or a DNS lookup failure for the cache copy.

    4) The link to the page recommended by Google for bookmarking at the top of the cache copy is a link to Google's copy, not to the original page.

    5) How about all that Google branding on the top of the cache copy? Priceless. I feel the cache should be opt-in, not opt-out. The only way you can avoid it right now is to place a "noarchive" meta on every page in your site. On some file types, such as .txt files, there's no place to insert a "noarchive" and Google goes ahead and caches it anyway.

    The cache copy tends to keep eyeballs on google.com, and increases their searches. You may have noticed that many major news sites won't link to other websites in their stories anymore, but rather just mention the relevant site without putting a link behind it. That's because they don't want eyeballs wandering off of their page. A wandering eyeball may not come back and look at more ads. That's basically one of the big reasons behind the cache copy as well -- it keeps eyeballs from wandering as much as they would without the cache.

    None of the Google partners -- AOL, Earthlink, Yahoo, Netscape -- include the cache links, and I assume that this is the reason. They don't want people wandering off to Google and staying there.

    As new competition is organizing to challenge Google's monopoly, from places such as Overture (Alltheweb and AltaVista), Yahoo (Inktomi), AskJeeves/Teoma and Microsoft, these engines have to consider whether to fight Google on the cache copy, or offer their own cache copy even if they think it is illegal. There isn't really any middle ground on this.

    Many observers with legal expertise feel that while the snippets are "fair use" of a website's content, offering the full text in a cache version is not. Copyright law requires "express permission," but Google only offers an incomplete and inconvenient opt-out. I suspect that the legal departments of these other engines are more inclined to challenge Google rather than launch into their own violations of copyright law.
