The Courts The Internet

Link Rot and the US Supreme Court

Posted by samzenpus
from the getting-old dept.
necro81 writes "Hyperlinks are not forever. Link rot occurs when a source you've linked to no longer exists — or worse, exists in a different state than when the link was originally made. Even permalinks aren't necessarily permanent if a domain goes silent or switches ownership. According to new research from Harvard Law, some 49% of hyperlinks in Supreme Court documents no longer point to the correct original content. A second study on link rot from Yale stresses that for the Court, footnotes, citations, parenthetical asides, and historical context mean as much as the text of an opinion itself, which makes link rot a threat to future scholarship."
  • 404 Not Found (Score:5, Interesting)

    by themushroom (197365) on Monday September 23, 2013 @01:33PM (#44926427) Homepage

    Which is not what you want to see in, say, an Apple versus Samsung style case where "previous art" and earlier applications are all that separate you from being successfully sued into the Stone Age.

    • Re:404 Not Found (Score:4, Informative)

      by TemporalBeing (803363) <bm_witnessNO@SPAMyahoo.com> on Monday September 23, 2013 @04:45PM (#44928479) Homepage Journal

      Which is not what you want to see in, say, an Apple versus Samsung style case where "previous art" and earlier applications are all that separate you from being successfully sued into the Stone Age.

      FYI - the courts require that web content have screen shots taken with time-date stamps to avoid this exact issue. The screen shots must also present the information in a certain manner; only then can they be used as evidence/exhibits. If the lawyers are not doing that, then they are not properly writing/citing their court paperwork (briefs, etc.).

      And no, it does not amount to a copyright violation.

      IANAL, but that's my understanding thanks to Groklaw and other sources.

  • by Anonymous Coward on Monday September 23, 2013 @01:35PM (#44926467)

    They should just start linking through the Wayback Machine.

    • by Half-pint HAL (718102) on Monday September 23, 2013 @01:45PM (#44926573)

      They should just start linking through the Wayback Machine.

      ...which is precisely what I did when I went back to university. I gave all citations with the original URL and an archive.org one. I hoped the uni would pick up on it and recommend it to the other students, but it never happened....

    • by Frojack123 (2606639) on Monday September 23, 2013 @01:53PM (#44926679)

      They should just start linking through the Wayback Machine.

      Interesting concept, but Wayback is not always complete.

      Perhaps the court should create an exemption to copyright that allows the creation of an internal copy (perhaps in image or PDF format) of the page for anti-link-rot protection.

      I'm sure that with clever wording they could manage to restrict this to lawyers and court proceedings; however, I could make the case that it should apply universally.

      After all, if you ever publicly put up a page on the net containing content you rightfully owned, you declared that version of that page to be a public document, and anyone should have the ability to make a static image of it. There are all sorts of copyright corner cases involved, but it is really no different from publishing your screed in the New York Times or your local paper. There is no way to unpublish it, and no way to prevent it from being archived.

    • That seems like a good idea, but what happens if the Wayback Machine decides to change their link format?

      • by ibwolf (126465) on Monday September 23, 2013 @02:14PM (#44926931)

        Links to the WBM contain the original URL and a timestamp so it would be easy to redirect it. The issue is however unlikely to come up as Wayback links are meant to be long-term stable. They've already survived one complete rewrite of the underlying application.
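
        The structure the parent describes is easy to see by building one of these links by hand; a minimal sketch in Python (the target URL and date are just illustrative):

        ```python
        from datetime import datetime, timezone

        def wayback_url(original_url: str, when: datetime) -> str:
            """Build a Wayback Machine link: the 14-digit timestamp
            (YYYYMMDDhhmmss) and the original URL are both embedded
            in the link itself."""
            stamp = when.strftime("%Y%m%d%H%M%S")
            return f"https://web.archive.org/web/{stamp}/{original_url}"

        def original_from_wayback(link: str) -> str:
            """Recover the original URL from a Wayback link: it is
            everything after the timestamp path segment."""
            prefix = "https://web.archive.org/web/"
            rest = link[len(prefix):]
            return rest.split("/", 1)[1]

        link = wayback_url("http://www.supremecourt.gov/opinions",
                           datetime(2013, 9, 23, tzinfo=timezone.utc))
        # Because the original URL survives inside the archive link, a
        # hypothetical future format change could be handled by a redirect.
        ```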

        • by Minwee (522556)

          Links to the WBM contain the original URL and a timestamp so it would be easy to redirect it. The issue is however unlikely to come up as Wayback links are meant to be long-term stable. They've already survived one complete rewrite of the underlying application.

          And how are they going to survive being bought by Rupert Murdoch?

    • by Darinbob (1142669)

      Why not just cite the original document? Does it no longer exist? I.e., you cite the newspaper article itself, or a journal article, and so forth.

  • Appendices? (Score:4, Interesting)

    by spamchang (302052) on Monday September 23, 2013 @01:35PM (#44926471) Journal

    Should documents then start including snapshots of the site (Wayback Machine-style) in document appendices? It's more work, sure, but it seems to be an obvious solution.

    • by kimvette (919543)

      . . . until robots.txt wipes out the archive.

    • I often print to PDF the pages that I would like to link to. Then I put both a live link and a PDF link on my pages.
      When the live link rots, I remove it and substitute the PDF link. I make very little effort to track down revised pages. (Putting in redirects is their job, not mine.)

      So far, because of the topic area I do web construction for, I've only been called to task for this once, and that was by an agency that had an updated version of their rotted link (and didn't know enough about redirects).

      Most rotte

    • "Should documents then start including snapshots of the site (Wayback Machine-style) in document appendices? It's more work, sure, but it seems to be an obvious solution."

      This is simply and purely a failure of Government to properly archive, long-term store, and maintain its records.

      Anything cited in a Court decision should be saved, cataloged, and stored in a permanent place. Failure to do so is a big failure, indeed.

  • by Dunbal (464142) * on Monday September 23, 2013 @01:37PM (#44926495)
    Link rot could be "a threat to future scholarship"? WHO SAID TRAINING FEWER LAWYERS WAS A BAD THING? I just don't see the problem.
    • Science also relies on scholarship. Legal scribes are not the only scholars.
    • Well, training lawyers with incomplete information could lead to an unexpected revolution where we discard long-established precedents and rewrite all law as if it were born yesterday.

      Especially in law which relies heavily on precedent, being able to find the actual precedents for comparison's sake would be critical, IMHO.

    • "The same number of lawyers, now less informed" =/= "fewer lawyers".
    • by Albanach (527650)

      WHO SAID TRAINING FEWER LAWYERS WAS A BAD THING? I just don't see the problem.

      Presumably you've never needed a lawyer.

      If you ever do, and your lawyer wants to cite to a case with a broken link referenced, it could impact you directly. Even if the linked page is still available somewhere, you might be paying $500/hour for your lawyer to find it.

      • by Dunbal (464142) *
        Oh, I have, but I live in a small, civilized country where all the laws and codes fit into one medium-sized cardboard box, not that nightmare of a country whose legal system has made it fashionable to see who can get the highest page count on their bills this session ("let's pass it and then we can find out what's in it").
    • by forkazoo (138186)

      However many lawyers you have, it's obviously better for them to understand the logic behind a decision, rather than just accepting the whole system as "just the way it is."

  • by the_scoots (1595597) on Monday September 23, 2013 @01:43PM (#44926553)
    Good thing the NSA has it all backed up!
  • by onyxruby (118189) <.onyxruby. .at. .comcast.net.> on Monday September 23, 2013 @01:50PM (#44926625)

    This has been a well-known problem for at least a couple of decades. Google had its cache, famous for saving people's hides or exposing people's mistakes. The people who run the Wayback Machine have been fighting this problem for many, many years.

    There is a natural resistance to preserving content as it was at the time. People, companies, and governments like to write revisionist history, pretending that certain things never happened or changing them after the fact. Specialized companies help with reputation management, ensuring that such things disappear for good.

    The problem ranges from tech-support documentation that disappears to old employers that have changed their name and moved location. The only way to resolve the issue is to preserve the content as it was, for posterity. Always assume your links will vanish, and turn the pages you need into archive files. If you really want to do something about it, donate [archive.org] to the Internet Archive.

    • All good points. If I may add to it:

      I really wish web sites would list the DATE, as in month + year, on their articles so you can tell how old (or relevant) the information may be.

      Microsoft has a really annoying habit of moving pages around. At least Microsoft and Apple put a unique identifier on each page, so even if pages get moved you can find them.

      Whenever I come across anything interesting I usually "Print to PDF" so at least I have a semi-permanent form where I can search for keywords used in the document.

    • Groklaw [groklaw.net] noted a number of instances where websites were changed as a result of information revealed in court ... or on Groklaw as a result of research prompted by court documents.

  • by ffflala (793437) on Monday September 23, 2013 @01:51PM (#44926641)
    For fuck's sake, this is one reason why PURLs exist. The trainwreck that is a constant string of dynamic URLs *printed* out in court opinions is an example of shameful institutional incompetence, regardless of whether it's willful ignorance or just plain ignorance.

    What is required to address this is an official government domain that hosts static screencaptures of web pages, provides PURLs to point to them, and ideally uses a URL-shortening function like goo.go or bit.ly.

    Then, instead of including a long, difficult-to-retype URL in the opinion, the short, easy-to-type PURL appears in the opinion. The supplemental info for the citation includes things like original URL and date accessed, and the given PURL will point to the material in question.

    Opposed to this idea will be copyright owners who fear that court opinions will eliminate their revenues by providing free access to material they usually charge for. Because this kind of opposition is an easy way to score political points (big government! wasting taxpayer dollars!! eminent domain of the little guy's copyrighted material!!!) and to make money, getting to this obvious solution will be long delayed. When it is ultimately decided upon, it will be thousands of times more expensive than it needs to be, take three times as long to roll out, be built on shoddy technology that breaks very quickly, and be used as yet another example of government failure.
    • by TheRealMindChild (743925) on Monday September 23, 2013 @01:54PM (#44926697) Homepage Journal
      and ideally uses a URL-shortening function like goo.go or bit.ly

      WHY? I never click on such links for the elementary fact that I have no idea where they lead
      • by gstoddart (321705)

        WHY? I never click on such links for the elementary fact that I have no idea where they lead

        Agreed. If I have no idea where a link is going, I'm sure as hell not trusting it enough to click on it.

        I want to know what domain I'm going to, and since .ly is Libya, not exactly an entity I'm going to give blanket trust to.

        I don't trust URL shorteners because I have no idea who controls them or what's on the other end of a link. They've always struck me as a terrible idea.

        • by Albanach (527650)

          Agreed. If I have no idea where a link is going, I'm sure as hell not trusting it enough to click on it.

          Did you even read the original post before replying? The poster wanted a US government site that hosts static screen captures of web pages referenced in official documents. By implication you would know exactly where the link was taking you - to a government server that hosts images or PDFs of web pages.

          Excepting the possibility that the NSA might be monitoring the documents you access, I can't really see

        • by cdrudge (68377)

          So you never go to a new domain? You only stick to the well-known domains that you're used to? I'm sure nefarious individuals would only use domains like .ly and would never use a .com/.net/.org domain name.

          • by gstoddart (321705)

            So you never go to a new domain?

            Not without knowing what it is, and not in a browser that's allowed to do much more than load pages, ignore scripts, and block cookies.

            I don't make it a practice to follow random links when I can't tell where I'm going, or to visit a web site I've never heard of unless I have some idea of what it is.

            I place very little trust in the internet as a rule.

        • Clicking on a link is blanket trust?

          Who the fuck are you people that are scared of clicking on a link?

          • People using Windows and Internet Explorer?

          • by lgw (121541)

            Anyone whose work connection is monitored has reason to distrust links that might go to porn. Anyone whose brain hasn't already had those circuits permanently fused has reason to distrust links that might be goatse, or worse. Plus of course there are always zero-days for every browser, and it's sometimes easier to link spam to get traffic than hijack an ad server.

      • by 0123456 (636235)

        WHY? I never click on such links for the elementary fact that I have no idea where they lead

        That's not hard to work around. Bit.ly, for example, seems to use amzn.to for all shortened links that go to Amazon... if you click one of those links, you know it goes to Amazon and not Goatse. The US government could presumably afford to set up a link-shortening service of their own which you can trust to go to a government site.

      • by ffflala (793437)
        Great question. I'm not talking about a URL shortener that anyone can use and so could take you to blue waffles, goatse, or lemon party. The shortened links would be to a .gov domain, would originate from judges' chambers, and go through multiple levels of editing and review, including being approved by the Reporter of Decisions before publication.
    • PURLs and the like assume that there's going to be someone around to maintain the content, and maintain the linkage to the content.

      If a document is officially 'published' and given some sort of persistent ID (e.g., DOI, ARK, Handle, whatever), then citing documents *should* use those over URLs.

      If, however, you're just citing an example that's just some web site on the internet ... then you're SOL. They have no obligation to leave their materials unchanged, keep a given version around 'til the end of time, or inform

    • by Minwee (522556)

      What is required to address this is an official government domain that hosts static screencaptures of web pages, provides PURLs to point to them, and ideally uses a URL-shortening function like goo.go [sic] or bit.ly.

      Indeed. Nothing lends more credence to a US Government document than putting all of its external references under the control of Greenland (.gl) and Libya (.ly).

      • by ffflala (793437)
        I am aware that .ly indicates Libya, etc. Given that this is slashdot, I would have thought this knowledge would be accepted as commonly understood.

        I didn't think it needed explanation here, but yes you are indeed correct: obviously this model would have to end in .gov and be hosted in the states.
    • I would also add that this should be done with a "write once" kind of storage back-end. This way we have some small assurance it was not modified.

      You could go even further and keep a running log on the same medium recording an MD5 of each previous content item, which is then MD5'd together with the current one.
      This seems (to me at least) like it would provide a verifiable trail that shows the written contents were not tampered with.

      Would this kind of scheme be useful? Or am I missing something obvious?
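
      The running log described above is essentially a hash chain; a minimal sketch in Python, using SHA-256 rather than MD5 (which is no longer collision-resistant):

      ```python
      import hashlib

      def chain_append(log: list, content: bytes) -> None:
          """Append an entry whose hash covers both the new content and
          the previous entry's hash, so an earlier entry cannot be altered
          without breaking every hash that follows it."""
          prev = log[-1]["hash"] if log else "0" * 64
          digest = hashlib.sha256(prev.encode() + content).hexdigest()
          log.append({"content": content, "hash": digest})

      def chain_verify(log: list) -> bool:
          """Recompute every link in the chain; tampering anywhere breaks it."""
          prev = "0" * 64
          for entry in log:
              expected = hashlib.sha256(prev.encode() + entry["content"]).hexdigest()
              if entry["hash"] != expected:
                  return False
              prev = expected
          return True

      log = []
      chain_append(log, b"snapshot of cited page, 2013-09-23")
      chain_append(log, b"snapshot of another page, 2013-09-24")
      assert chain_verify(log)
      log[0]["content"] = b"tampered"   # altering history...
      assert not chain_verify(log)      # ...is detectable
      ```

      On a write-once medium this does give a verifiable trail: each entry commits to everything before it.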

  • Everything goes on one massive drive, and you grep keywords. Bring along a donut and coffee - it may be a while!
  • by Covalent (1001277) on Monday September 23, 2013 @02:02PM (#44926775)
    This should be a mission of the Library of Congress - to archive everything ever used by the government (including court cases), be it on the Internet or not.

    While they're at it, they can probably archive nearly everything else.
  • I would like to see participation in the Memento project / stable URIs (http://mementoweb.org [mementoweb.org]) become a fundamental element of being considered "a journalist," part of the media, etc., in order to get the protections of that status. The lack of a consistent history in web-based media is harmful, and more than one massive corporation has used the "fluidity" of the web and hyperlinks to be more than fluid with the truth.
    http://www.metafilter.com/98913/Ancestors-we-will-never-know-presage-fe [metafilter.com]
  • For one person to create all the content needed to educate someone from K-12 through college is impossible. However, you could write a hypertext document that links to content covering K-12 through college. I did not do this, however, because of link rot. The obvious solution for these lawyers is to back up any page they want and have it documented, and not simply use URLs.

    Someday, someone will have a good system to educate people spoon fed style on the Internet. For now, learning on the Internet can
  • Isn't this precisely the type of thing archive.org exists for?

  • Sounds like the Justice Dept. needs a better CMS.
  • Company intranets also suffer from link rot, and some make it worse by using tools that inherently promote it.

    The point is that files are moved around on filesystems now and then "for better structure," "making it easier to find," and other lame excuses, but if every file had a unique ID that could be used for linking, then they could move the files around as much as they like without causing harm.
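
    The scheme is one level of indirection; a toy sketch (the IDs and paths are hypothetical):

    ```python
    # Links reference stable IDs; a single table maps each ID to the
    # file's current location.
    locations = {
        "doc-0042": "/teams/eng/specs/widget.odt",
    }

    def resolve(doc_id: str) -> str:
        """Look up the current path for a stable document ID."""
        return locations[doc_id]

    # Reorganizing the filesystem means updating the table, not every link.
    locations["doc-0042"] = "/archive/2013/specs/widget.odt"
    assert resolve("doc-0042") == "/archive/2013/specs/widget.odt"
    ```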

  • I have always found that whenever an opinion cites a URL the courts are careful to indicate the date that it was accessed. A hard copy (or at least a PDF) of the page as it existed at that time is then retained by the clerk in the case file. There's usually a footnote concerning this arrangement.

    It's not that hard. No need for fancy technology or mass archiving of the Internet. The only thing they need is a basic PDF writer. Problem solved.

  • For something as important as court cases surely you make a copy so it can't be lost.
  • Court documents actually just list a link? With no copy/printout of what it links to? Really? If ever there was a doubt about how stupid and clueless judges are, it's the fact that they allow shit like this to exist in official court documents.

    What next? Someone puts in their court document to "Google it"? Seems that would probably be better than a permanent link.

  • by EuclideanSilence (1968630) on Monday September 23, 2013 @05:03PM (#44928735)

    The only link that matters still works.

    http://www.archives.gov/exhibits/charters/constitution.html [archives.gov]

    Too bad they can't reference this one more.

  • -- member of Project Xanadu
  • This is because the current IP protocols are Dumb when it comes to data. I mean that with a capital D. Not that the designers are dumb, but the protocol itself is just dumb, in that it knows nothing about the data.

    We suffer from the fact that IPv4 and IPv6 do not have store and forward. Instead of / in addition to endpoint IDs, all the routers need to have a large cache for versioned content. You can still have your frackin' unversioned uncacheable content; however, we need a more permanent store-and-forward service. This will reduce bandwidth consumption, and it is essential for bringing the Internet to space; it's part of the Interstellar DTN (delay-tolerant network).

    Imagine the entire Internet as a hybrid between a decentralized distributed file store, and the current IP stack. Instead of requesting an endpoint we could request the data hash. A distributed hash table could serve the content from within the Internet. ISPs can vastly decrease bandwidth by increasing their cache duplication size (as we have currently), but when a cache miss happens it could be served by another cache in the distributed hash table on up the chain to the origin. "What about updates to documents? My cached pages!" Fools, the doc will have a different hash. We could actually SOLVE issues whereby resource names must be changed by simply requesting them based on their internal content hashes. Additionally, we can fix the issue of mixed secure / insecure content while we're at it. A resource referenced inside a secure document can include THE HASH ID of the resource. Thus, you know the insecure and cacheable content you're pulling in is unmodified...
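
    The content-addressing idea above is simple to sketch: the request key *is* the hash of the bytes, so any node holding matching bytes can serve them, and tampering is self-evident (a toy in-memory dict stands in for the distributed hash table):

    ```python
    import hashlib

    store = {}  # toy stand-in for a distributed hash table of caches

    def put(content: bytes) -> str:
        """Store content under the hash of its own bytes."""
        key = hashlib.sha256(content).hexdigest()
        store[key] = content
        return key

    def get(key: str) -> bytes:
        """Fetch by hash and verify: a corrupted or substituted copy
        cannot match the key it was requested under."""
        content = store[key]
        if hashlib.sha256(content).hexdigest() != key:
            raise ValueError("content does not match its hash")
        return content

    key = put(b"the opinion as cited, frozen forever")
    assert get(key) == b"the opinion as cited, frozen forever"
    # An updated document hashes differently, so old citations keep
    # resolving to the old bytes -- no link rot.
    new_key = put(b"the revised opinion")
    assert new_key != key
    ```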

    Nope, we can't have nice things because you fuckers regard the old farts who designed the current antiquated systems as if they were gods, even though store and forward works beautifully for packet radio. (Hint: The FCC disallows any use of store and forward by unlicensed civilians.) Otherwise we could have a decentralized unsnoopable high-speed (largely) wireless Internet that grows organically with demand with little or no fees (everyone's a node hosting data, buy a box once and you're done).

    The main barriers to solving the problem are ISP greed, draconian copyright laws, and desire for a surveillance state.

    Note, this WILL all happen eventually anyway, you idiots are just too foolish to realize it, so it'll turn out to be a cluster fuck like "The Web" is now because the end result will be evolved by bolting on shite to the current systems over the years instead of being designed with the desired end result problem space in mind. Eg: Colocation fees? WTF? This is a hack to move data closer to endpoints... like store and forward achieves by design.
    kthx.

  • Here's how to fix this: You quote, in total, the web page or article in question. Then, you *attribute* it to the url where you found it from, the date that you found it there, and the author/copyright holder. Now, it doesn't matter if the page changes or the site goes away. The content is preserved, the source is attributed. And, copyright troll lawyers aside, no one is harmed!
  • Every time you make a citation, copy it into a legal database, and then reference that entry into the database IN ADDITION to original URL. Include date and time... end of controversy.
