Copyright Tool Scans Web For Violations
The Wall Street Journal is reporting on a tech start-up that proposes to offer the ultimate in assurance for content owners. Attributor Corporation is going to offer clients the ability to scan the web for their own intellectual property. The article touches on previous approaches like DRM and in-house staff searches, and the limited usefulness of both. It specifically cites the pending legal actions against companies like YouTube, and wonders what those companies' attitudes will be toward initiatives like this. From the article: "Attributor analyzes the content of clients, who could range from individuals to big media companies, using a technique known as 'digital fingerprinting,' which determines unique and identifying characteristics of content. It uses these digital fingerprints to search its index of the Web for the content. The company claims to be able to spot a customer's content based on the appearance of as little as a few sentences of text or a few seconds of audio or video. It will provide customers with alerts and a dashboard of identified uses of their content on the Web and the context in which it is used. The content owners can then try to negotiate revenue from whoever is using it or request that it be taken down. In some cases, they may decide the content is being used fairly or to acceptable promotional ends. Attributor plans to help automate the interaction between content owners and those using their content on the Web, though it declines to specify how."
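The article doesn't say how Attributor's fingerprinting actually works, but the "as little as a few sentences" claim is consistent with standard shingle-based matching. A minimal sketch of that general idea (word n-gram shingles hashed into a set; the window size and hash are assumptions, not Attributor's method):

```python
import hashlib

def shingles(text, n=8):
    """Split text into overlapping word n-grams ("shingles")."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def fingerprint(text, n=8):
    """Hash each shingle; the set of hashes is the document's fingerprint."""
    return {hashlib.sha1(s.encode()).hexdigest() for s in shingles(text, n)}

def overlap(original, suspect, n=8):
    """Fraction of the suspect's shingles that also appear in the original."""
    orig, susp = fingerprint(original, n), fingerprint(suspect, n)
    return len(orig & susp) / len(susp) if susp else 0.0
```

Even a short excerpt of the original produces shingles that all land in the original's fingerprint, which is why a few copied sentences are enough to trigger a match.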
Can't they just use google or torrent sites? (Score:3, Informative)
If users can find items they want, presumably the copyright holders could use the same methods...
Re:i don't like robots.txt anyway. (Score:5, Informative)
Let's take a fun legitimate site like, oh... Wikipedia [wikipedia.org]:
(They also disallow certain specially generated pages like Special:Random, and any of the pages which actually let you edit the site.) Let's see, what are some other sites? Ooh. Take a look at Slashdot's robots.txt [slashdot.org]! (disallows a variety of fun pages.) Microsoft's? [microsoft.com] How about whitehouse.gov [whitehouse.gov]? Google [google.com]?
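The kind of rules being pointed at look roughly like this (a hypothetical robots.txt in the Wikipedia style, not any site's actual file):

```
User-agent: *
Disallow: /wiki/Special:Random
Disallow: /wiki/Special:Search
Disallow: /w/index.php?title=*&action=edit
```

Of course, robots.txt is purely advisory: a well-behaved crawler honours it, but nothing forces a copyright-hunting spider to.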
Re:Wager (Score:3, Informative)
Here's how to block two subnets using .htaccess and mod_rewrite on Apache:
Line 1 activates the rewrite engine.
Line 2 sets a condition matching remote addresses 63.148.99.224-255, with the [OR] flag to allow further processing.
Line 3 sets a condition matching remote addresses 63.146.13.64-95.
Line 4 sets the rule that any URL be forbidden.
So, save your bandwidth by denying access to your content from unauthorized viewers (bots).
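The four lines described above would look something like this (a sketch; the exact regexes for those address ranges are my reconstruction, not the commenter's original file):

```apache
RewriteEngine On
RewriteCond %{REMOTE_ADDR} ^63\.148\.99\.(22[4-9]|2[34][0-9]|25[0-5])$ [OR]
RewriteCond %{REMOTE_ADDR} ^63\.146\.13\.(6[4-9]|[78][0-9]|9[0-5])$
RewriteRule ^ - [F]
```

The `[F]` flag returns 403 Forbidden for any request whose remote address matches either condition.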
Re:search by hash? (Score:4, Informative)
But yeah, it might make sense for Google to become "aware" of unique content and variations of it, but I doubt they'd ever use that openly to aid in hunting down copyright infringement, simply for PR reasons.
Re:i don't like robots.txt anyway. (Score:5, Informative)
And dynamic content is, of course, the answer. If I were going to put up copyrighted content in the future, I'd use one of a dozen schemes that regenerate the download link on a per-session basis. Obviously they're not going to honour robots.txt, but why are your links readable by such a basic spider? You need to:
Anyone who follows the above steps (and most sites already do most or all of this) won't be found by the spider. Period.
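The per-session link idea can be sketched like this (a minimal example; the HMAC scheme, secret, and expiry window are my assumptions, not the commenter's actual setup):

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret"  # hypothetical; kept on the server, never sent to clients

def make_link(path, session_id, now=None, ttl=600):
    """Build a download URL whose token is bound to one session and expires."""
    expires = int(now if now is not None else time.time()) + ttl
    msg = f"{path}|{session_id}|{expires}".encode()
    token = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?session={session_id}&expires={expires}&token={token}"

def check_link(path, session_id, expires, token, now=None):
    """Reject links from other sessions, expired links, or forged tokens."""
    if int(now if now is not None else time.time()) > int(expires):
        return False
    msg = f"{path}|{session_id}|{int(expires)}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(good, token)
```

A spider that harvests a link from one session can't replay it from another, and even a leaked link goes stale after the TTL, so there's nothing durable for a scanning service to index.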
The only thing I can think of that this product would be useful for is to find people who have blatantly copied my website, but I'm sure you could find those people equally easily with Google.
mandelbr0t
I've experienced it from both sides. (Score:3, Informative)
I've experienced this from both sides.
I have a bunch of my books on the web, and every once in a while I do a search on some text from my own books to see who else is mirroring them. The books happen to be copylefted (dual-licensed GFDL/CC-BY-SA), but I'd like to know who's mirroring them, and check whether they're violating the license. A lot of people just seem to be hoarding the PDF files on their university servers, maybe because they're afraid my web site will disappear; that's flattering. One guy was selling them on CDs on eBay, violating my license (he claimed they were PD and didn't propagate the license). Another guy translated them to HTML, with lots of errors, changed the license to a more restrictive one, and put his own ads up; he fixed the licensing violation when I complained, and in a way it was a good thing, because it motivated me to make my own HTML versions (which are now bringing me a significant amount of money from AdSense every month). One kind of annoying thing about mirroring is that the people who are mirroring never bother to update their mirrors, but in general I just figure there's no such thing as bad publicity :-)
From the other side, I once received an e-mail from a museum in the UK that was complaining that I was using a 17th century oil painting of Isaac Newton. I guess they own the original, and they may also have been the ones who did the scan that I found in a google image search, but under U.S. law (Bridgeman Art Library, Ltd. v. Corel Corp.), a realistic reproduction of a PD two-dimensional art work is not copyrightable. What really surprised me was that they came across it at all, because at that time I think my book was only in PDF format, and hadn't been indexed by google because the file size was too big.
The whole thing doesn't seem negative to me in general. It makes just as much sense as people doing a vanity search in Google before they apply for a job, or authors watching their amazon.com sales rankings obsessively. I guess the most obvious potential for abuse would be if they send a nastygram to your webhost, and your webhost is a low-end one that figures it's not worth their time to keep your account, so they just shut off your account.