Image Detecting Search Engines' Legal Fight Continues 220
Mr. steve points to this New York Times article about sites like ditto.com and the new google image-search engine, writing: "Search engines that corral images are raising Napsteresque copyright issues." Expect to see a lot more sites with prominent copying policies and "no-download" images, and trivial circumvention of both. If an image is part of your site's design, you wouldn't truly want to prevent downloads, would you? ;)
property (Score:1)
Re:property (Score:3, Interesting)
Images on my site are my property. In every JPEG image (and PowerPoint, Word, and text file) I create, I place my copyright statement. I also have a robots.txt file to prevent copying by search engines. To Google's credit, they obey the robots.txt file, but others are not so considerate.
Recently, I had the occasion to place a number of images and other copyrighted works on a website hosted on one of my machines. The copyrighted works were available for a period of about 20 minutes, long enough for my friend (who paid me in beer, including many pints tonight just before I typed this, apologies for typos and bad grammar) and his brother to retrieve the works. My friend used AOL Instant Messenger to tell his brother which URL to find the images, including the obscure URL.
After I saw the two of them had retrieved the images, I left the site up for some stupid reasons (end of work day, beers calling, phone calls from idiots). Apache was running on an obscure port (28962) on an IP address with no DNS/reverse DNS entry. About 14 hours after my friend had sent the URL to his brother via AIM, I saw an AOL spider crawl my site for those works.
It's pretty fucking obvious that AOL is sucking up every copyrighted work they can, presumably to have copies of everything of value that passes by AIM. Their EULA grants them an unlimited license to anything that passes through their systems, even if it is hosted on a third-party system that doesn't agree to their EULA.
The machines involved slowly crawled the site, about one hit per minute from 4 different IP addresses. Machines like:
spider-loh-ta012.proxy.aol.com, spider-loh-ta016.proxy.aol.com, cache-loh-aa01.proxy.aol.com, and
cache-loh-ab02.proxy.aol.com carefully worked the site, following every link, and grabbing every (huge) jpeg and ppt file. Stupid of me to not filter AOL from my website, but I've learned. From now on, only password protected protocols that can't be easily picked up in plaintext streams.
Since that incident, I've been able to work this demonstration into my security reports. A client can set up a totally fake URL on a random port, send a message by AIM, and within 24 hours, the site is spidered by AOL, regardless of the robots.txt file. Sending an FTP username and password will result in the site being accessed within 24 hours. AOL hasn't responded to any of my queries, so that makes the whole thing even more interesting from a security aspect, and makes me even more money.
So don't place any intellectual property on any internet connected machine, if you want to retain control of your copyright. Large corporations will take your works, and if they happen to have great value later on, you won't see any recompense. I actually feel bad for the RIAA/MPAA giants, because they can't defend themselves, even with the DMCA and new European laws. You may own the IP for a work, but the internet doesn't care. "Get over it".
the AC
Re:property (Score:2)
STILL the wrong question (Score:2)
No-download policies (Score:1)
bullsh*t (Score:2, Informative)
http://techienews.utropicmedia.com [utropicmedia.com] help us beta!!
Re:bullsh*t (Score:2)
Don't sign up for NYTimes: (Score:5, Informative)
blah (Score:2)
I wish I could say the same, but that would involve thinking about the client (the web-surfer), which is against corporate policy.
Re:blah (Score:1)
If you don't want your images or text to be downloaded, don't put them on the net.
search engines designed to circumvent copyright (Score:1)
Re:search engines designed to circumvent copyright (Score:2)
well... (Score:4, Insightful)
Re:well... (Score:1)
Re:well... (Score:1)
After all, one could argue quite effectively that the copyright violations of Napster benefited the music industry and the artists. That didn't stop them from putting the smack down.
Sooner or later we'll all realize that intellectual property is crap, a myth. People will still innovate because it's a natural human desire to, not because of any potential gain. If you're innovative solely for personal gain, chances are you're not coming up with anything very useful anyway.
-J5K
Re:well... (Score:2)
Re:well... (Score:2)
My point still stands that these are useful services that benefit everyone.
As shown by giving your site as the only example? Here's an example. What if I run a "Nicole Kidman nude pictures site" with permission from Nicole Kidman and the photographers. I place ads at the top of my site, and nude pictures of Nicole Kidman at the bottom. Now Google comes along and lets people type in "Nicole Kidman nude" and see my pictures without my ads. Not only is that an illegal derivative work, it is harmful to my business.
Caching is one thing. Extracting just the images without the rest of the site is another thing entirely, and as long as we continue to have copyright law it should not be legal.
Re:well... (Score:3, Informative)
Re:well... (Score:2)
Re:well... (Score:2)
http://images.google.com [google.com]
Eventually all will be available to search (Score:1)
Ask before you take (Score:1)
I spend a lot of time editing my images in photoshop until they are perfect. I generally don't care if someone thinks they are cool and would like to use them. I do have an issue with people who take my images and claim them as their own work.
These search engines make it seem as though it is OK to take whatever you want and not credit the source.
That is not cool at all. If you want an image, ask the developer if you can use it. Nine times out of ten they don't care, as long as you give them credit and a link.
Re:Ask before you take (Score:2)
Re:Ask before you take (Score:2)
So say no to the robots :) (Score:3, Informative)
Allows really useful features like marking given directories, pages, or files off-limits to a specific robot or all robots in general. Boy... a technical solution to a technical problem instead of a new round of lawsuits?
Quickie examples (this is SO simple folks):
User-agent: *
Disallow: /
Boom! No more google telling that horrible world of pirates and thieves about your site. Not many visitors either though....
So maybe you want to exclude just googlebot from your images and image directory with the following:
User-agent: googlebot
Disallow: /images/
This will still allow your main pages to be indexed according to your meta keywords, but will disallow any 'napsterization' of your image directory (here /images/, as an example). Of course, since it requires people running sites to do work and understand technology, lots of people will probably decide lawsuits are easier.
Robots.txt DOES require you to run your own domain. If you don't, try using meta tags in the head of the html code for a similar effect, but it is harder to implement (must be on each page rather than site wide) and less supported. Info here [robotstxt.org].
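For those stuck without their own domain, the per-page equivalent is a robots meta tag in the head of each page you want excluded (support varies by engine, but the big ones honor it):

&lt;head&gt;
&lt;!-- ask well-behaved robots not to index this page or follow its links --&gt;
&lt;meta name="robots" content="noindex, nofollow"&gt;
&lt;/head&gt;

Tedious compared to one robots.txt at the root, but it works on shared hosting.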
If you spend that much time on the images... spend 5 minutes making a robots.txt file to indicate you don't want them taken by bots. But always consider anything you put on the net as published, if something's private don't put it on the net.
Re:So say no to the robots :) (Score:2)
And he should also look at the trivial ways of setting up his webserver to prevent serving an image, if the referer isn't from his local site.
This is a textbook example of an overcaffeinated ignoramus. God bless America, land of the dumb.
Re:So say no to the robots :) (Score:2)
If you run your own server, you could use .htaccess to require a password in order to access the site. The password could be displayed on the homepage as plain text or in an image; this would allow humans to get to "the good stuff" with a minimum of extra effort while making it nearly impossible for a generic 'bot to access the site
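A minimal sketch of that .htaccess approach (the realm name and file path here are made up; the password file is created with the standard htpasswd utility):

# .htaccess in the directory to protect
AuthType Basic
AuthName "The Good Stuff"
# hypothetical path; create with: htpasswd -c /home/user/.htpasswd guest
AuthUserFile /home/user/.htpasswd
Require valid-user

A human reads the password off the homepage and types it once; a generic 'bot just gets a 401 and moves on.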
Re:Ask before you take (Score:2)
Granted, this won't stop someone from taking the GIMP and cropping or airbrushing the image to remove your logo, but at least you are making them work at it -- it would be very difficult to automate this process, particularly if you vary the placement, size, and color of your watermark. Yes, it obscures part of the image and pollutes the artistic purity of your work -- but it's the only simple way to discourage wholesale theft of your work.
Of course, if you are really concerned about your art, don't put it on the web. Putting an image out on a public website is like putting cookies on a table with a big sign that says "Free Cookies, please help yourself".
What right do they have? (Score:3, Interesting)
If someone takes a picture of me out on the street, i have no right to keep them from publishing it. If i don't want people to take pictures of me doing something, i don't do it in a public place.
If you don't want Google picking up your pictures, and you don't want people saving your pictures to their hard drives, don't put the pictures on the web.
Re:What right do they have? (Score:2)
If someone takes a picture of me out on the street, i have no right to keep them from publishing it. If i don't want people to take pictures of me doing something, i don't do it in a public place.
I think you might be mistaken here. I believe they call these "slice-of-life" photos, and while generally they don't have any rights issues involved, I have heard of a number of legal cases where the person photographed successfully sued. It had something to do with a failure to attribute the photo *of* the person, *to* the person.
Wish I could remember the details; everyone knows that legal rulings are all about the details so..
Especially since robots.txt lets you disallow this (Score:4, Informative)
Allows really useful features like marking given directories, pages, or files off-limits to a specific robot or all robots in general. Boy... a technical solution to a technical problem? Who'd a thunk it?
Quickie examples (this is SO simple folks):
User-agent: *
Disallow: /
Boom! No more google telling that horrible world of pirates and thieves about your site. Not many visitors either though....
So maybe you want to exclude just googlebot from your images and image directory with the following:
User-agent: googlebot
Disallow: /images/
If you want to do this for multiple directories, you add on more Disallow lines:
User-agent: *
Disallow: /images/
Disallow: /photos/
Now if you put meta keyword tags like &lt;meta name="keywords" content="..."&gt;
in your code to show up high on the search engines, you shouldn't be surprised or upset when you SHOW UP HIGH ON THE SEARCH ENGINES.
Not all robots follow the robots.txt standard, and there's no way of forcing them to. But Google does, and that seems to be the big concern here.
A real life example, Slashdot's robots.txt file (at slashdot.org/robots.txt [slashdot.org]):
Re:Especially since robots.txt lets you disallow t (Score:2)
Re:What right do they have? (Score:2)
Re:What right do they have? (Score:2)
http://www.loc.gov/rr/print/195_copr.html [loc.gov]
Re:What right do they have? (Score:2)
Thinking about it. (Score:1)
No credit for all that hard work...for shame, for shame... you might want to check out Digimarc [digimarc.com], though
-PONA-
My God... (Score:1)
Search Engines that keep the data of webpages stored in a database are violating copyright.
What next? Copyrighted URL's?
In a nutshell (Score:2)
Copyright is simply NOT an appropriate guide to IT policy. Society has spent trillions creating technology allowing information to be instantly copied.
Copyright law was created to regulate BOOKS, not ELECTRONS. And it wasn't aimed at individuals, but at publishing houses.
This is why we have absurd situations where publishers claim that the information in buffer memory represents a copy - that streamed audio is creating a copy in fixed tangible form. What copy? Where?
Craziness... And of course we certainly wouldn't want to consider creating a copy to allow indexing by search engines to be fair use, would we? Why, that would instantly destroy our whole society and open an interdimensional gateway for demons to pour forth and devour our children.
Re:My God... (Score:2)
If you wondered why DMCA wasn't mentioned explicitly in the article, this is probably why. DMCA is relatively nice to search engines, but the issues here go beyond that. We are concerned with whether images are different from text and what is a fair use. The article does a good job of outlining the issues.
For more on DMCA you might try this summary [loc.gov].
And for the really adventurous there is always the full text [loc.gov].
Note: Both links are PDF files.
It's the same damned thing as copying text (Score:2)
It's just a bunch of people wanting to raise a big hooplah and create a big stink about the bandwidth consumption problem that this poses.
Wouldn't that? (Score:4, Funny)
I can just see it now:
Judge shuts down Microsoft for distributing software that allows you to violate copyrights by downloading images. Microsoft was shut down Monday for its popular browser Internet Exploder. A representative from the company said "We were shocked. I mean, we didn't really expect the software to work in the first place."
Of course we won't see such a headline, but still, turnabout is fair play.
Some sites are already doing this with cookies ... (Score:2)
I browse with most cookies filtered out by way of JunkBuster and have noticed that some sites will not let you view some of their images if you have cookies disabled. Enabling all cookies makes the problem go away. By requiring a cookie to be set, these sites are effectively disallowing web crawlers which ignore cookies from caching their images. Expect to see more of this, especially in sites full of copyrighted images or in sites which rely on advertising and where images are the main draw (i.e. the porn industry).
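On Apache with mod_rewrite, that cookie gate might be sketched like this (the cookie name seen_ads is invented for the example; the real sites presumably use their session cookies):

RewriteEngine On
# refuse image requests that don't carry our cookie
RewriteCond %{HTTP_COOKIE} !seen_ads=1 [NC]
RewriteRule \.(gif|jpe?g|png)$ - [F,NC]

A crawler that ignores Set-Cookie never presents the cookie, so it gets a 403 on every image while normal browsers sail through.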
Re:Some sites are already doing this with cookies (Score:2)
All they have to do is make the site think you're accepting its cookies.
Re:Some sites are already doing this with cookies (Score:2, Informative)
Re:Some sites are already doing this with cookies (Score:2)
Netscape also has this feature.
Of course, it's undocumented and in fact is a hack. Here's how it works:
Same basic principle. The net effect is that when Netscape exits, it will lose any cookies that are set.
It is also possible to set permanent cookies: for example, your /. login cookie. Simply logon to /. with Netscape prompting for each cookie that is set: only accept the login cookie, and then quit. The cookies.txt/MagicCookie file will remember it forever more. My Netscape has, I believe, three permanent cookies. However, I've mostly transitioned to IE5: partly because of its cookie powers, which are much better.
Sometime in the future... (Score:1, Funny)
Mother: Because he forgot to disable his browser cache, honey.
m$ breaking the DMCA with image toolbar? (Score:5, Funny)
Is m$ breaking the DMCA with their circumvention?
Thank god for Google Cache for Slashdotted sites! (Score:1)
robots.txt (Score:5, Insightful)
Re:robots.txt (Score:2)
BIG difference... (Score:2)
This seems so absurd to me.... I remember when the hottest programs were ones to get you higher-ranked on the search engines to drive traffic... has concern over ip really overwhelmed a desire for more visitors this much?
Re:BIG difference... (Score:2)
Re:BIG difference... (Score:2)
>accessed and indexed by search engines, so almost nobody complains
>about the need to add robots.txt to sites they don't want indexed.
No, that's not the main, nor even the important, difference.
There already exists an established culture, and indexing is part of this. It predates what we now call the web. By putting up a webpage, you are implicitly consenting to the rules of the culture. Spam is *not* the existing culture; it violates these rules, and there is no consent.
The question here is whether the implied consent, which cannot be explicitly revoked in other than the established manner, extends to the photographs, or only the text.
hawk, esq., but this is not legal advice. If you need legal advice, contact an attorney licensed in your jurisdiction.
Re:robots.txt (Score:2)
By putting your work on a web server, you clearly authorize some copying by your action. You do not, however, authorize somebody to change your work.
Re:robots.txt (Score:2)
I don't have any citation handy, but I think generally a file constitutes a work. This would be a good question for the cni-copyright list.
There is also a concept of a collection of works, which is different from a derivative work.
Google is clearly creating a derivative work by taking your images and combining them with the images of others.
I think a stronger argument is to say that by shrinking them to thumbnails you make a derivative work. That involves actually changing the original. Showing it alongside others is roughly akin to placing books next to each other on the bookshelf.
Shrinking would have to be justified by a fair use argument, I guess. It involves a substantial loss of image detail, and quantity copied is one of the fair use factors.
Re:robots.txt (Score:2)
googlebot does follow it though (Score:2)
Re:robots.txt (Score:2)
It's called copyright law, specifically 17 USC 106(1) [cornell.edu]. In practice, the need for explicit authorization would be conclusive.
You must have authorization to make a copy, (assuming it's not fair use). Clearly the authors of the robots.txt standard did not have the authority to make law, but contextual standards do have meaning. Any judge would start the analysis by placing the burden of proof on the copier to prove they had authorization. An explicit denial of permission in the standard place would require pretty strong counter-evidence. The mere act of placing it in a web server probably would not suffice, given the more granular meaning expressed by an entry in robots.txt.
You would get laughed out of court if you said you wanted to hold the copyright owner to contextual opt-in but not contextual opt-out.
The issue in this case is the converse. If no robots.txt is present, is recopy authorization granted from the fact that the file was placed in a webserver? I argue "yes", because a convention is required for every opt-out system. After the initial opt-in, it's fine to have an opt-out, and local tradition and convention *has* to define that.
Re:robots.txt (Score:2, Interesting)
I don't know whether or not Kelly is incompetent, but I don't see how this can be interpreted as a trick. He has an explicit terms of use statement on every page that bans reproduction, modification or storage of his images (along with about ten other possible uses).
If Kelly were complaining about misuse of a paper copy of his images, it would be clear that the copier had deliberately violated his copyright. However ditto.com is collecting, processing and republishing images without a real person looking at the bottom of the page for this copyright statement.
The real question here is who is responsible for preventing violations of a clearly expressed copyright. Is it Kelly, who will have to track all image-cataloging spiders and manually disallow them while still allowing text indexing if he wants to promote his site? Or is it ditto.com, who would have to instruct their spider to look for phrases like "Images copyright Bob Plaintiff 1999"?
Do you have any idea how robots.txt works? (Score:4, Informative)
User-agent: *
Disallow: /images/
Put all image files in the /images/ directory (the path here is just an example), or I would recommend for him:
User-agent: *
Disallow: /
- I don't think he has any 'right' to use the search sites to promote his site if he doesn't consent to them copying his data. Is HTML code protected by copyright? This would make all search sites illegal, and destroy the internet as a usable resource. So because the consequences would be untenable, we should answer no.
That's all. Meta tags, which you seem to be thinking of, are a pain in the ass, poorly supported, and only worth using if you don't control the domain and can't put up your own robots.txt file.
If I put 10 pizzas on a picnic table with a note saying 'please don't eat my pizza' and leave it there for 3 days, it will be eaten. If I do this ignoring the safe that's right there that I could use to lock them in, then I'm an idiot.
you missed my point (Score:2)
Every web page should, therefore, be granted copyright protection.
The inevitable result of this is that portals stop indexing the web and the web ceases to be a useful tool.
The web is a DIFFERENT media than all others before it. It shouldn't be surprising that 18th century laws don't apply to it well.
When Kelly posts an image on his web site he is implicitly GRANTING consent for people to do any of the things they can do with HTTP commands to access it: viewing the image in a browser, saving it to their hard drive, saving the location, giving the location to their friends (which is why Slashdot is allowed to crash other web sites willy-nilly), and, yes, being spidered. The web would be nothing if not for the popularity of the portals early on in making the web usable. He wouldn't bother posting his images if the search engines weren't doing what they're doing.
Nobody is SELLING his images in competition with him, so nobody is causing him financial loss. One of the reasons the movie companies lost Betamax was that the court held that someone MUST show actual financial loss to be able to request help from the copyright laws. Remember, copyright laws were aimed at publishing houses originally, to keep them from stealing from each other. At best Kelly could maybe get a judgment that ditto.com or whoever couldn't show ads on pages with images from other sites, so that those sites couldn't profit from others' copyrighted work. Although I would remind you again that all web sites could be considered copyrighted, and that this could be disastrous. Does Slashdot profit from others' copyrighted work? Is this illegal theft or online journalism?
If Kelly wants to use the web to post images and grant selective access, a variety of technical means exist to allow him to do this, from firewalls to passwords to the simplest robots.txt and meta tag exclusions. If the technical means exist to allow this, I would say that he is obliged to avail himself of those before he can seek remediation. The web was here before Kelly and will be here afterwards. If he doesn't want to play by its rules he can take his toys and go home.
robots.txt is easy and flexible (Score:2)
Yes, and that kind of functionality is very useful. Arguably, it falls under "fair use", whether or not Kelly likes it. But the web actually gives him a way of expressing his preferences in a machine-readable way that imposes no burden on him.
If natural-language statements like Kelly's are found to be sufficient to exclude indexing robots, the web would suffer greatly, and for no good reason whatsoever.
Is it Kelly, who will have to track all image cataloging spyders and manually disallow them while still allowing text indexing if he wants to promote his site?
Kelly has to do no such thing: the robots.txt mechanism is flexible enough that he can include and exclude parts of his site from indexing according to his preferences; he doesn't have to know what robot is used for what purpose.
Not that Kelly has any legal right to make such choices to begin with: text search engines are under no obligation to index part of his site (in fact, I think any self-respecting search engine should blacklist him). Giving him an all-or-nothing choice would be entirely sufficient. He should count himself lucky that the mechanism he actually has at his disposal is so flexible.
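That flexibility is just more lines in the same file. For instance (the directory names and the spider's User-agent token are placeholders for this sketch):

# let everyone index the text, but keep all bots out of the pictures
User-agent: *
Disallow: /images/
Disallow: /thumbs/

# and shut one specific image spider out of the whole site
User-agent: dittobot
Disallow: /

Kelly can mix and match per-robot and per-directory rules without knowing which robot feeds which search product.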
Re:robots.txt (Score:3, Informative)
Grep for "terms and conditions" in:
http://www.gigalaw.com/library/ticketmaster-tic
What about personal pages? (Score:2)
From using assorted mirroring software in the past and from what I recall of the robots.txt documentation I've seen, it needs to be at the root of the domain, not in a subdirectory. So, does that mean that only people with a domain of their own get to protect photos or artwork that they've created?
Yet another load of... (Score:2, Interesting)
How is posting a picture on a web site any different than putting out a table on the side of the road, with a pile of photographs and a sign that says "Free"?
Now, I'm totally in favor of artists' rights and all... but let's ease off on the pervasiveness and invasiveness of copyrights.
Re:Yet another load of... (Score:2)
Posting on the web is not the same thing as having a table on the side of the road with a sign that says "free." By posting on a website, you let people look at your stuff, make a copy for personal use/commentary, etc. You don't give up ownership (i.e. copyright.)
Re:Yet another load of... (Score:2, Insightful)
If Google makes money with banner ads, that doesn't really have anything to do with posting thumbnails of images. They AREN'T making money from the images. They are providing a FREE service pointing out where to find these images. If the artist doesn't want visitors to his site, I don't know why he has a web page. If he does want visitors, I don't know why he has a problem with a search engine pointing people his direction.
Re:Yet another load of... (Score:2)
Re:Yet another load of... (Score:2)
Re:Yet another load of... (Score:2)
This doesn't bother me. I like looking up pictures. But I am going to play devil's advocate. If we were to extend this into the future, we may find that sites no longer reduce the image size to a thumbnail. Let's say your search results only returned a few hits. No need for a thumbnail, right? So far, all is OK. The user is happy.
All is not ok, though. The person who created that image is left out completely. What if they wanted to know how many users were viewing their images to judge whether they should release it to a major magazine? The image could be generating a lot of hits but only through the search engine. The creator of the image never sees those users.
I liken it to the TMBG issue a while back. They Might Be Giants freely gives away music via their web page. But they do it to create a community. They didn't like napster because it stole directly from that community (I am going off of memory, I hope this is correct).
What they said in the article hits the nail on the head. Their picture has been reduced to that of clip-art.
I like the idea of everything being free, but if the creator doesn't want it to be, well... tough luck for us, I guess.
There is a very simple answer (Score:4, Insightful)
Re:There is a very simple answer (Score:2)
Bandwidth Protection is a Webmaster Responsibility (Score:2, Interesting)
Example:
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^http://www.yourdomain.com [NC]
RewriteCond %{HTTP_REFERER} !^http://yourdomain.com [NC]
RewriteRule \.(gif|jpe?g|png)$ - [F,NC]
Actually, the question is .. (Score:2)
How are images different than text? (Score:2, Insightful)
Authors are losing control over their works, which can be easily found and copied now that they're catalogued by search engines! Outrage!
But is it the Right Thing to ban or penalize this?
These are exactly the types of problems that we're coming up against now that copyright has been deemed a control mechanism. We've gone and screwed up the whole system to the point where it's going to be virtually unusable.
But personally, I just want to know who I can sue for "copying" text and images from my site when they visit it. I need the money.
Hello, robots.txt? (Score:2)
However, while I would suspect that Google does the Right Thing with this, I know several newer search engines that completely ignore robots.txt and grab everything without even checking for this file. In addition, those new to the website game don't know about this mechanism, and thus don't know how to take steps to 'protect' their work.
IMO, the robots.txt thing ought to be a standard in place in both search engine software and publicly offered site-mirroring software. Particularly in the latter case, most of these clients ignore robots.txt completely and grab all content including dynamic pages.
hmm (Score:2)
I've got news for you: if I look at your site, I am saving it digitally, in my RAM (and of course my cache).
Free Advertising (Score:3, Insightful)
Altavista.com - robots.txt etc. (Score:2, Interesting)
Many sites do this simply to get you to search from their site.
Altavista and Google (as I now notice) both make you visit the site to get to the pictures. Chances are, if you are complaining about someone stealing your pics, you are still getting the visitors and their banner ad views.
I know I've put a pic here and there on the net, my own works (mostly), but I would notice someone trying to pass off my work as their own. If the concern is over pr0n, what's the point? Your pics are already on alt.binaries.great.ass.paulina or some other such newsgroup. Pr0n pics are traded over IRC, Kazaa, Gnutella, whatever.
If someone has downloaded your company logo... what's the fuss? Either they are making a desktop background or something, and if you still don't like that... sue them! Not the search engine!
I have an Athlon boot screen for winblows 98 and I bet I broke a copyright law while making it! But why would AMD even think of getting mad when I'm advertising for them??
If someone is editing your logo and putting "sucks" under it or some such thing... you probably do suck, but have no time to sue anyone because you're doing The Next Horrible Thing(TM).
Case closed. People wanted publicity... don't give it to them. They just use the word Napster to get attention from John and Sally Newbite.
do not download (Score:2)
Oops, too late! I already downloaded it so I can look at it! It's in my cache! It's in my RAM! It's in my squid proxy! I guess I better go turn myself in to the Kopyright Kops, eh?
This is all just silly.
Next step.... (Score:4, Informative)
They sell access to these databases to their clients to search for illegal copies of their works, or to see any mention of them in an unfavorable light. Is this an infringement?
Re:Next step.... (Score:3, Insightful)
Bing! You have reinvented TV, but with online ordering capabilities. Having failed to create interactive television, Big Business is systematically destroying those elements of the web that made it better than interactive TV.
Your spoonfeeding, already in progress, will now resume.
We need a file copyright meta information standard (Score:2)
I know that there are arguments about how if you place information on the web then it's practically public domain, and there's some merit to that I guess. After all, how can you stop people from downloading it?
At the same time though, I think it's silly not to allow people to put their stories and their artwork and what is essentially their copyrighted material on the web where people can access it without the ability to tell people not to copy it.
The napster-like thing with many image search engines is a problem. Even when image search engines, including Google, can give a good indication of where an image is coming from, they often show complete versions of it (even if reduced in size) that people can download without seeing any copyright information, and save without going through the source site. For text searches there's a fair use argument because most search engines only display a couple of lines, though I've heard some people complain about the Google cache. It's also possible to put meta information on pages saying you don't want them indexed. With image searches it's much more difficult.
When Babylon 5 [babylon5.com] was around, and probably still, there was a policy that fan websites could use as many promotional images as they wanted to as long as they explicitly stated that they were copyright to the studio, and required everyone grabbing it to say the same.
Image search engines can't do this because they can't read things like watermarks. What we could do though is have a standard allowing for publishers to associate copyright information with files so that search engines know when and how they should be able to index and display other people's copyrighted work.
It would be a voluntary thing, and if search engines want to make legal judgements about whether copyright claims are going too far, that would be completely up to them. But it would allow image search engines to operate cleanly and make sure they don't go further than the copyright holder wants them to, at least not without a good legal argument.
Make it a text file in the same directory, or something. Requiring it to be in the top-level directory of a domain would mean that some publishers without access to that directory couldn't easily set meta information for their own material.
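The per-directory text file suggested above could be as simple as an INI-style listing that a crawler consults before indexing. This is a minimal sketch of the idea, assuming an invented filename and invented field names (`copyright.txt`, `holder`, `license`); no such standard actually exists.

```python
import configparser

# Hypothetical contents of a per-directory "copyright.txt" file.
# Each section names an image in the directory; the fields are invented.
SAMPLE = """\
[sunset.jpg]
holder = Jane Doe
license = no-index
contact = jane@example.com

[logo.png]
holder = Jane Doe
license = thumbnail-ok
"""

def load_copyright_metadata(text):
    """Parse per-file copyright entries into a dict keyed by filename."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {name: dict(parser[name]) for name in parser.sections()}

meta = load_copyright_metadata(SAMPLE)
# An image crawler could check this before deciding whether to
# show a thumbnail, show only a link, or skip the file entirely.
print(meta["sunset.jpg"]["license"])  # no-index
```

Because the scheme is voluntary, a crawler that ignores the file loses nothing technically; the point is only to make the copyright holder's wishes machine-readable, the same way robots.txt does for indexing.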
Re:We need a file copyright meta information stand (Score:2)
People can only view artwork, listen to music, etc. after downloading the data; i.e., they have to get a copy of it anyway. Is there any clearer way of rephrasing 'information wants to be free'?
It's like the hypothetical person with perfect memory and sufficient abilities who can reproduce a piece of music having once heard it. Is he committing a crime by having a good memory? If not, shouldn't normal mortals be allowed 'memory augmenting devices' like hard disk recording?
Information has no volition to be free or anything else, but its only natural state is that of freedom. But then again, as Nietzsche (or someone else, I can't remember) put it, you don't love information enough if you disclose it to others...
Meta information isn't restrictive (Score:2)
Meta information isn't restrictive. It's descriptive.
If you choose to ignore an author's copyright notices, then that's completely up to you, and this wouldn't stop you from copying it.
The main problem I have with the "information wants to be free" crowd is when someone takes my work and starts showing it off and taking credit for it as their own. Giving it away is one thing, but if it goes into the public domain I'd lose most of my incentive to create anything new.
I'm not in favour of totalitarian restrictive measures like CSS that work like a broadsword in blocking people's rights to fair use. But allowing authors to hold copyrights on their work is perfectly okay the way I see it.
rebroadcast is the issue (Score:2)
The issue is the rebroadcasting of the image by someone who doesn't hold the copyright on it.
As a photographer, if I put my hard work on the internet and suddenly some business plasters it all over their site without my consent, I would be pissed off. A google image cache with a searchable image database would be similar, and objecting to it is reasonable.
Corbis... 640x480 (Score:2)
It's pathetic, but Corbis actually sells extremely low rez 640x480 images [corbis.com] to "suckers."
I would argue that anything less than a print-quality TGA image is a sample image, analogous to a 128kbps MP3. I.e., it's free publicity in the eyes of real artists, and it's "copyright infringement" to greedy middlemen.
e.g. I happen to have a tangible print of this Pat Rawlings painting [novaspace.com] on my wall......and this [google.com] is called free advertising.
Anyway... Everyone benefits from abundance... except the selfish FEW who would like to profit from artificial scarcity.
browser cache? (Score:2)
Copyright is just getting totally nuts...
When is the next shuttle off this rock?
Isn't copying on the internet authorized? (Score:2)
I fail to see any meaningful difference between infinite copying for free from the original site and transitive copying from a search engine. Since "deep linking" has been held to not be infringement, the argument that you aren't forced to see the whole page is bogus, since an URL can target the individual image file.
You can explicitly unauthorize search engines by using robots.txt [robotstxt.org], right? Any splitting of hairs about the scope of copying authorized by the act of putting your file on a web server can be fully accommodated by using robots.txt. Since this standard is publicly available and well known, doesn't placing your files on the web without restricting access via this method constitute a grant of authority to everyone with access to your web server to copy? Now if these search engines ignore robots.txt, that is another matter, but I doubt that is the case.
When you opt-in to copying by placing your files in a web server, but fail to subsequently explicitly opt-out after that, you have authorized copying, so tough.
The photographers say they might have to leave the net. Not so, if they follow robots.txt. I don't generally think that forcing off people who won't learn how the net works is a bad thing. These groups are essentially trying to use the courts to create standards. The net already created its own standard for this in 1994. Perhaps we will have the first ruling that essentially says "RTFM".
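The opt-out mechanism described above is already implemented in Python's standard library, so a sketch is easy; the user-agent names and URLs here are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that tells one image-indexing bot to stay out of
# /photos/ while leaving the rest of the site open to everyone.
ROBOTS_TXT = """\
User-agent: ImageCrawler
Disallow: /photos/

User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The image bot is barred from the photo directory...
print(rp.can_fetch("ImageCrawler", "http://example.com/photos/sunset.jpg"))
# ...but other crawlers, and other paths, remain fair game.
print(rp.can_fetch("OtherBot", "http://example.com/photos/sunset.jpg"))
print(rp.can_fetch("ImageCrawler", "http://example.com/index.html"))
```

A compliant search engine performs exactly this check before fetching, which is why publishing without a robots.txt reads, to many people, as implied permission to index.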
Circumvention (Score:2)
I especially like those sites with Javascript pop-up message boxes that appear when you right-click so you can't select 'save as.' As if you couldn't just go into your browser's cache and copy it from there. Or, even easier, simply hold the right mouse button down while you hit the spacebar to clear the popup.
Deja vu (Score:2)
I could see a problem with sites such as Google that present a preview of the image found... Perhaps unfair-use claims could be avoided if the quality of the preview was lowered. A pixel-doubled image looks enough like the original that a human can make decisions based on it, but it's useless for anything a computer would want to do with it.
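The pixel-doubling idea can be sketched in a few lines of plain Python: average each 2x2 block of a grayscale image and replicate the average back over the block, so a human can still recognize the preview while fine detail is destroyed. This is only an illustration of the commenter's suggestion, not any search engine's actual thumbnailing method.

```python
def pixel_double(pixels):
    """Degrade a grayscale image (2D list of 0-255 ints, even dimensions)
    by averaging each 2x2 block and replicating the average."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            # Average the 2x2 block, then write it back to all four cells.
            avg = (pixels[y][x] + pixels[y][x + 1] +
                   pixels[y + 1][x] + pixels[y + 1][x + 1]) // 4
            for dy in (0, 1):
                for dx in (0, 1):
                    out[y + dy][x + dx] = avg
    return out

# A 2x2 checkerboard collapses into a uniform gray block: the overall
# brightness survives, but the per-pixel detail is gone.
original = [[0, 255], [255, 0]]
print(pixel_double(original))  # [[127, 127], [127, 127]]
```

The same trick generalizes: the larger the block you average over, the more useful the preview is to a human relative to a machine, which is the asymmetry the comment is after.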
Something we'll be seeing soon... (Score:2)
Sigh...
Hey, here are some images you can have for free:
http://www.furinkan.net/art/ [furinkan.net]
I'm the artist and owner, but maybe if my work gets around, people might be willing to pay me for large, high-resolution scans!
I don't understand (Score:2)
So how can sites say that an image may not be downloaded? If that were the case, then you wouldn't be able to see it.
Enforcement is already here (Score:2)
That's how.
Re:Enforcement is already here (Score:2)
Does anyone know of a Free media player that does a decent job of playing
I am guilty... (Score:2)
Re:Virgin problem (Score:1)
Now go buy a Marvin Gaye record, dork.
Re:Marked images (Score:1)
Perhaps we should encourage greater use of this - rather than just moaning from both sides of the fence.
(how about the search engines extracting this data and displaying it in the search results... that should keep everyone happy)
Re:What about other Caches? (Score:2)
fair use is an American copyright doctrine. while it undoubtedly applies in this case, particularly with things like Squid, it's irrelevant to the rest of the world.
what we're going to have to hope applies here is the principle of implied permission. basically, in copyright, under certain conditions I can by my conduct imply that I am granting permission for certain activities. for example, if I send a story to a magazine, I imply permission to publish. the magazine can publish my story without a separate copyright release, as long as they pay me their standard rate for it.
I would argue that by publishing a webpage, I imply permission to do quite a few things: I imply permission to index (unless I take reasonable technological protections to avoid being indexed, for example using a robots.txt file), and I imply permission to cache.
if this isn't the case, not only search engines will be hit: AOL's gonna get sued. they have a web proxy. :)
Re:All right, someone has to do it... (Score:2)
"In 19xx, images were previewing."
Search Engine Programmer: What happen?
Image Indexing Robot: Somebody set up us the law.
IIR: We get registered letter.
SEP: What?
IIR: Main reading lamp turn on.
Letter: How are you gentlemen???
Letter: All your indexed image are belong to us!!!
SEP: What you say???
Letter: You are on the way to liability. You have no chance to argue, make your time.
Letter: Ha ha ha!!!
SEP: Move email.
IIR: You know what you are doing?
SEP: For great publicity,
SEP: send out every email.