

Amazon Bots Cause Grief For Associate Web Sites 136
theodp writes "Amazon Associates and Web Services developers are crying foul over the hammering they're taking from ill-behaved bots that Amazon had subsidiary Alexa Internet dispatch to evaluate the 'quality and reliability' of their sites. Amazon fessed up and acknowledged problems exist, but points to recent Operating Agreement changes that not only give Amazon and any of its corporate affiliates the right to do so, but also to use unstated technical means to overcome any methods that are used to try to block or interfere with such crawling or monitoring. Interesting stance from the folks who called on the Senate to prosecute those who degrade the technical quality of service at web sites."
Redirection Limit for this URL exceeded. (Score:4, Interesting)
Funny to see that someone complaining about abuse links to pages that do not work with Webwasher filtering.
could be (Score:2, Interesting)
Re:could be (Score:4, Insightful)
So I guess I am not very informative about my habits - which I think is my freedom to do. And if a site doesn't work that way, the site owners clearly indicate that they are not willing to accept me as a visitor - which is their freedom.
At least
Re:could be (Score:1, Troll)
Your logic is wrong.
The owners of a website, especially if they are not aware of your habits, are not rejecting ('not accepting') you as a visitor / customer.
At the worst, they're not taking efforts to accommodate your nonstandard way of browsing the web. YOU were the one who chose to apply filters--hence, the active part in the exchange is you, not the website owner.
Re:could be (Score:2)
Explain to me: when exactly did cookies become something I MUST enable? When did Web bugs become standardized? At what point did it become ridiculous to have no referrer? And exactly when was it agreed that JavaScript is part of the HTML/XHTML/HTTP standard?
Exactly what is non-standard in my way of browsing the web? If you mean unusual I could agree, but non-standard is wrong.
Re:could be (Score:2, Insightful)
You're confusing technical uses of the word with colloquial ones. Web standards have _never_ been the normal case; there has always been some tweak or extension that makes the web usable in ways that a significant proportion of the decision-makers seem to like.
Your adherence to Standards is a non-standard act (an act against the norm, unusual, etc.), and as such it is an unforeseen action on your part and not an action on the part of the website owner & their developers to exclude you.
My point wasn't about standards; it was about whose action caused you to be unable to use the non-standard commercial websites. Since non-standard design is the norm for the industry, it's a case of the website failing to take extraordinary action (making their sites standards-compliant) to keep you as a visitor, not of them taking action to deny you as a visitor.
At best, they're ignorant and you're suffering the consequences of your choice. At worst, they're guilty of not wanting to expend the effort to accommodate you--but that only happens if they have the ability & opportunity to meet your needs. (i.e., only those websites that you contact with a request for a toned-down main or alternate version of their web page that you can visit.)
Re:could be (Score:2)
LOL. A website that simply works is "toned down"?
I can't tell you how many times I've come across a website that DOES work perfectly, except they ADDED code to redirect you to a basically blank page saying "this site requires Internet Explorer/cookies/javascript/whatever". I have sometimes disabled this and used the site.
My point wasn't about standards, it was about whose action caused you to be unable to use the non-standard commerical websites.
In many cases they take a perfectly good website and disable it with these tests. Having cookies/javascript/whatever off may cause specific features to not work, but you actually have to go out of your way to design the entire site to fail.
If I have cookies off and it doesn't remember my preferences, fine. If I have JavaScript off and help windows don't work, or the formatting is lousy, fine. But to fail to display ordinary text? That nearly has to be intentional; even gross incompetence is usually partially functional.
-
Boohoo!! (Score:5, Interesting)
One could argue something about watching out for who your bed-partners are! Bear in mind that a company that has such a disregard for even their affiliates has to have a pretty poor respect for anyone else out there! Caveat emptor!
Re:Boohoo!! (Score:5, Informative)
Re:Boohoo!! (Score:2)
ksuicide2k2
Re:Boohoo!! (Score:2)
Re:Boohoo!! (Score:1)
Looks like Amazon managed to get that domain.
The Alexa archiver -- you can stop that one. (Score:5, Informative)
User-agent: ia_archiver
Disallow: /
And if it's really annoying, bloody hell, just do an active firewall block, and put the sharks (lawyers) away with those goofy lawsuits before they start wasting our senators' time and taxpayer cash.
Re:The Alexa archiver -- you can stop that one. (Score:5, Informative)
The bot was scanning through some kind of CGI script, thus generating thousands of requests.
Re:The Alexa archiver -- you can stop that one. (Score:4, Interesting)
According to one of the posts here [prospero.com]:
Again, I don't get how my links can be broken since Amazon is delivering the content.
Re:The Alexa archiver -- you can stop that one. (Score:1)
What's wrong with a Marilyn Monroe themed amazon.com page? Otherwise, I have no problems with Amazon, but I only grossed $40.00 this year, with a commission of $2.00 :-)
------
It's here:
http://www.geocities.com/rapidweather/geo12.html
pwned :-) (Score:2)
Re:The Alexa archiver -- you can stop that one. (Score:2, Funny)
ia_archiver = wayback machine (Score:5, Informative)
I've been following that problem... (Score:5, Informative)
The problem is that the bots are way too diligent. They go to every single link on every single page, even if the page is dynamically generated. Many sites have an infinite combination of URLs, and as a result, the bots sit on them trying to download every single variation of query. That means that Joe Amazon Associate's web site is hammered with requests and his bandwidth fees go through the roof.
The simple solution would be to just stop Amazon from sucking up the bandwidth via a robots.txt file. But Amazon says that is not allowed. There's the dilemma.
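For the crawlers a site is permitted to exclude (Amazon later stated this applies to ia_archiver, though not to amzn_assoc), a robots.txt along these lines would fence off the script paths that generate the endless URL variations - the directory name here is only an example:

```
User-agent: ia_archiver
Disallow: /cgi-bin/
```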
Amazon.com has been silent on this issue for the last several days. My bet is that the bots won't come back without some heavy-duty tuning.
--Your Sex [tilegarden.com]
Re:I've been following that problem... (Score:3, Informative)
Remember when you could boost your ratings on Google by trapping the bots?
Re:I've been following that problem... (Score:2)
You just put an invisible link at the top of your page and assume anything that follows it must be a robot. But instead of trapping it, you switch to make all your normal links go nowhere.
Voilà! No more robot.
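The trap can be sketched in a few lines of server-side logic; everything here (the /trap path, the in-memory set) is hypothetical, not anyone's actual implementation:

```python
# Sketch of the invisible-link bot trap described above. A link that
# humans never see points at /trap; any client that fetches it is
# assumed to be a robot and is served dead links from then on.

flagged_bots = set()  # client addresses that followed the hidden link

def handle_request(client_ip, path):
    """Return a page body, degrading all links for flagged robots."""
    if path == "/trap":
        flagged_bots.add(client_ip)
        return "<html></html>"  # nothing of interest for the robot
    if client_ip in flagged_bots:
        return '<a href="#">home</a>'  # normal links go nowhere
    return '<a href="/home">home</a>'
```

A real version would key on session or IP plus User-Agent and expire entries after a while, but the idea is the same.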
Re:I've been following that problem... (Score:2)
Re:I've been following that problem... (Score:1, Informative)
Re:I've been following that problem... (Score:1)
Re:I've been following that problem... (Score:4, Insightful)
Amazon's web-bots are looking for outdated links to books that don't exist, etc.
Wouldn't a better solution be to modify the software at amazon.com, so that every time there was a book-not-found/out-of-date error, it logged the referring affiliate from the HTTP_REFERER request header?
I can't see why they would need bots and suchlike for such a simple procedure...
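Something like this server-side check is all it would take; the function and field names are made up for illustration:

```python
# Sketch of the suggestion above: instead of crawling affiliate
# sites, record the Referer header whenever someone follows a link
# to a product that no longer exists.

broken_links = []  # (missing_item, referring_page) pairs

def product_page(item_id, headers, catalog):
    """Serve a product page, logging the referrer on a dead link."""
    if item_id not in catalog:
        broken_links.append((item_id, headers.get("Referer", "unknown")))
        return "Sorry, that item is no longer available."
    return "Product page for " + catalog[item_id]
```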
Just my $0.02,
Michael
Re:I've been following that problem... (Score:1)
What would be a better idea is for Amazon to keep a DB of referrers and the books they link to. When a book goes out of print, it sends off an auto email to the referrers of the book, letting them know they have a dead link.
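As a sketch (all names hypothetical), the referrer DB and the notification step could be as simple as:

```python
# Sketch of the idea above: track which affiliates link to each book,
# and when a book goes out of print, produce the list of affiliates
# to email about the dead link.

referrers = {}  # item_id -> set of affiliate contact addresses

def record_referral(item_id, affiliate_email):
    """Remember that an affiliate links to this item."""
    referrers.setdefault(item_id, set()).add(affiliate_email)

def affiliates_to_notify(item_id):
    """Pop and return the affiliates to email when an item goes away."""
    return sorted(referrers.pop(item_id, set()))
```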
Just block it? (Score:2, Interesting)
Therefore, you agree that we and our corporate affiliates may take such actions and that you will not seek to block or otherwise interfere with such crawling or monitoring (and that we and our corporate affiliates may use technical means to overcome any methods used on your site to block or interfere with such crawling or monitoring).
As such, it doesn't say that you agree not to block them or that you're violating their license if you do block them. All you agree to is that they can monitor your site, but if you don't like how they do it, it doesn't state that you have to put up with their crawler. The only thing you do agree to is that they can use "technical means to overcome" your blocking. But so what? Let them waste money on attempting to monitor your site by modifying their crawler
Re:Just block it? (Score:1, Informative)
Actually, doesn't it say that you aren't allowed to block it, but if you do, they can try and get around it?
it says... (Score:2)
kneel down and I'll spank you, associates...?
Re:Just block it? (Score:5, Informative)
It says exactly that you agree not to block them!
"you agree
Clarification (Score:2, Interesting)
It's OK if they crawl/monitor my site using a bunch of people surfing my site all day long. I won't attempt to block that. Anything else, I might.
Re:Clarification (Score:3, Interesting)
Re:Clarification (Score:2)
Re:Clarification (Score:2)
Depending on exactly what "technical means" means, this sounds like:
All your sites are belong to us.
Re:Just block it? (Score:2)
If I was running a network being clobbered by Amazon, I would put up any barriers I felt like, such as dropping their packets, and there is not a damned thing they could say or do to stop me. I'm not an associate, and it's too bad for them that they can't see fit to play nice.
All they have to do (Score:4, Informative)
Most robots do something like that.
Of course - it takes a lot longer....
Re:All they have to do (Score:5, Interesting)
Robots have been around since the web started, and it surprises me that the designers of this robot haven't looked at previous designs and good practice.
If any of you Alexa numbskulls happen to be reading this, perhaps you could buy yourself a copy of HTTP: The Definitive Guide from O'Reilly, which has a tremendously clear explanation of what to think about to prevent your robots from destroying every site they visit that isn't sitting on a T3 and a Sun Fire w/ 64 CPUs and 64 GB of RAM.
Re:All they have to do (Score:1)
Just goes to show (Score:4, Insightful)
Re:Just goes to show (Score:3, Insightful)
Read before you sign. (Score:5, Informative)
They have a few options that I can see.
Terminate the agreement.
Bill for the bandwidth, or sue for damages.
Various technical measures (which are prohibited by the agreement)
Point out to your contacts at Amazon that this is pointless and dumb in such a manner they actually listen.
Make a mini site for the Amazon site/bot, but put the rest of your website in a second location (that they don't have access to).
Why deal with a company like this anyway? They're obviously inconsiderate pricks (at least); move on with your life.
Re:Read before you sign (and before you post) (Score:2, Informative)
Terminate the agreement.
Bill for the bandwidth, or sue for damages.
Various technical measures (which are prohibited by the agreement)
Point out to your contacts at Amazon that this is pointless and dumb in such a manner they actually listen.
Here's an idea.... How about politely posting a question or two about it in the appropriate forums? Who knows, something crazy might happen, like responsible people at Amazon might respond and turn the bot off while they investigate. Then, they might post a reasonable explanation and take reasonable steps to make sure they're not abusing associates' servers.
Here's another idea.... Try reading the pages that slashdot linked to. I know that's a lot of work, so I'll save you a bit of effort by posting each slashdot link, and a brief summary of what you would have found had you bothered to click on it and ACTUALLY READ it (before posting here with a subject advocating actually reading the terms and conditions).
This just isn't that sensational of a story. Yet another 'bot that needs some refinement, but it IS designed to avoid more than one hit every 2 seconds (and the evidence posted seems to be consistent with that). They at least did respond to people's concerns, and they took the bot off-line while they investigated it. Sounds pretty reasonable. It's not clear what might actually be done, and in part Amazon appears to be claiming the problem isn't so great... but clearly they are attempting to respond to people's concerns.
Amazon feels they have a right to check the links on associate sites, and they put it in the terms. Again, it's really not that unreasonable.
What is unreasonable is the inflammatory summary appearing on the main slashdot page. Yes, timothy and the other slashdot "editors" can claim it's all just editorial from "theodp", who submitted the summary. But what kind of editing is that?
The summary concludes with:
The link is to Amazon's position on DDOS attacks... there's really no similarity to a well-intentioned 'bot, which clearly identifies itself, limits itself to 0.5 Hz access rate, AND was responsibly taken off-line and reexamined when some people complained that it used too much bandwidth.
I did read (Score:2)
Amazon released a bot that negatively affected the affiliate websites.
This is at the very least inconsiderate.
I posted my opinion how this or similar activities COULD be handled.
You seem quite defensive about it, were you the one who wrote a buggy bot?
amazon... (Score:3, Insightful)
Re:amazon... (Score:3, Insightful)
Re:amazon... (Score:1)
Re:amazon... (Score:1)
Why is everyone so quick to cry monopoly? I'm one of the most anti-corporate types I know, but just because a company is big, has large market share, and deals with government agencies (especially ones that compete directly with private industry) does not make it a monopolist.
Re:amazon... (Score:1)
Re:amazon... (Score:2)
Which in turn means cheaper stamps for us to send mail with. I don't see anything wrong with Canada Post selling otherwise useless space on its trucks to Amazon. And the day you start shipping as much as Amazon does, don't worry, Canada Post will cut you a good deal too.
it's not running at the moment (Score:5, Informative)
OK, that is a post from the associates board in which Amazon states:
"Hello Associates.
Thank you for providing such valuable feedback. The Alexa crawl (id amzn_assoc) has ceased while we investigate the statements made in this post. We plan to address the following concerns:
1. The impact the crawler may have on bandwidth
2. The number of pages the crawler hits per second
3. How the Alexa crawler might identify and ignore AWS pages or links
Points of clarification:
1. Regarding Archive.org, Alexa has confirmed that material that is crawled by the 'amzn_assoc' crawler is not donated to the internet archive. It is used exclusively for the purposes of the Broken Link Reports.
2. The Alexa crawler 'amzn_assoc' differs from the 'ia_archiver' crawler. The 'ia_archiver' can be excluded by using a robots.txt file and will not violate the Amazon.com Associates operating agreement.
You should expect a response from us by COB Friday as it may take a few days to research your concerns. This issue is important to us and we will get it resolved. Thank you for your patience.
The Amazon.com Associates Program"
I participated in that conversation myself, though, and I don't think I saw one happy person with the way the agreement was made so that we have to let them crawl our sites as often as they like.
cj.com reports error links, but they do it from the server end. Amazon's system is just stupid, and it was only done to try and give their Alexa company some work to do.
So I guess it's just wait and see now until we know whether the bot starts back up again.
Re:it's not running at the moment (Score:1)
That's the link I was supposed to post; I kinda messed it up at the top, sorry.
1984? (Score:2, Funny)
Maybe (Score:3, Funny)
;-) had to be said (Score:5, Funny)
Amazon Amazon Amazon.... (Score:5, Funny)
Reminds me of that old horror movie where they try to drive away from a haunted house, but every road they take leads them back up the driveway to the place.
I wonder... (Score:3, Funny)
Way around Amazon's partner agreement... (Score:5, Insightful)
Just temporarily (perhaps 1 day) block ANY client's class C (not just that of Alexa's crawler) that starts generating more than X hits per second for longer than five minutes.
By doing so, you haven't taken steps to specifically thwart *Amazon's* activity; you have simply enacted a reasonable security measure to block DOS attacks. If Amazon actually dared to sue for blocking them, you'd have a HELL of a countersuit on the grounds that their 'bot triggered your DOS alarm.
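A sketch of that threshold logic, with illustrative numbers (the rate, window, and block duration are placeholders, not recommendations):

```python
import time
from collections import defaultdict, deque

# Flag any class C (/24) that sustains more than MAX_RATE hits per
# second over WINDOW seconds, and drop it for BLOCK_FOR seconds.

MAX_RATE = 10      # hits per second
WINDOW = 300       # five minutes
BLOCK_FOR = 86400  # one day

hits = defaultdict(deque)  # /24 prefix -> recent hit timestamps
blocked_until = {}         # /24 prefix -> time the block expires

def class_c(ip):
    """Reduce a dotted-quad address to its /24 prefix."""
    return ".".join(ip.split(".")[:3])

def allow(ip, now=None):
    """Record a hit; return False if the client's /24 is blocked."""
    now = time.time() if now is None else now
    net = class_c(ip)
    if blocked_until.get(net, 0) > now:
        return False
    q = hits[net]
    q.append(now)
    while q and q[0] < now - WINDOW:
        q.popleft()
    if len(q) > MAX_RATE * WINDOW:
        blocked_until[net] = now + BLOCK_FOR
        return False
    return True
```

In practice this bookkeeping would sit in a firewall or web-server module, but the point stands: because the rule fires on behavior rather than identity, it never singles out Amazon's crawler.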
Personally, I'd just block their bot and if they complain, tell them where they can stick their partner agreement. No self-respecting online retailer needs their own "partners" degrading their QOS. Anyway, when I want to buy something, I use either Google or a product-specific price-search engine (like PriceWatch). Amazon counts as my LAST choice for finding something (actually not quite true... if I need to use Google to find a product for sale, I often check Amazon first, just to get things like UPC or ISBN numbers to narrow my search).
Re:Way around Amazon's partner agreement... (Score:2, Insightful)
When I want to buy something, I use either Google, or a product-specific price-search engine (like PriceWatch). Amazon counts as my LAST choice for finding something (actually not quite true... If I need to use Google to find a product for sale, I often check Amazon first, just to get things like UPC or ISBN numbers to narrow my search).
This should be called the fundamental (slashdot) attribution error. Assuming that we are representative of the market.
Reminds me of a VC I know. They were sitting in a conference room back in 1998 hearing a pitch from an online bill presentment company. The partner's first objection was that obviously everyone already had online banking and bill payment. To prove it, he asked everyone in the room if they had online banking. Everyone did.
Out in the real world, fewer than 2% of people had online banking.
There is an easy fix for this!!!! (Score:3, Interesting)
http://www.yourwebsite.com/amazon.xml
http://www.somewebsite.com/~yoursite/amazon.xml
This is a stupid idea!!!! (Score:4, Interesting)
Your solution would work only for the intelligent, diligent, and lucky. There are many Amazon affiliates who are none of the three.
Ah, so that's what ia_archiver is... (Score:5, Informative)
Finally, I know what it is...
It was trying to crawl *every* available URL for the CGI script - and it appeared to be buggy because it got itself into an endless loop, changing from one mode to the other.
Before you disallow ia_archiver... (Score:1)
... take a look at this comment: http://yro.slashdot.org/comments.pl?sid=47296&cid=4843032 [slashdot.org]
Re:Before you disallow ia_archiver... (Score:2)
Blocking... (Score:2)
If you're a member company employing Amazon's services, then in my opinion you should be responsible for providing Amazon with the links you want Amazon to vend, rather than having Amazon crawl through your site for your pricing information...
Ironic (Score:1)
Goofballs.
Alternatives to Amazon! (Score:5, Informative)
http://www.booksense.com/affiliate/ [booksense.com]
Re:Alternatives to Amazon! (Score:3, Insightful)
But do they have a web services interface? That's the important part.
--Azaroth
Can someone clarify? (Score:4, Interesting)
Amazon is crawling these sites so that they can be featured on their website. When you search for an item, Amazon lists the prices and availability from the associates--everyone wins.
It seems that Amazon is searching a bit too often--combined with some affiliated sites that have very s-l-o-w dynamic pages, which is causing some problems. It's hardly a crime that Amazon is committing--after all, they want the most accurate, up-to-the-minute information on their website.
Re:Amazon Interview Experience (Score:1)
First of all, I do think this is relevant. If a company has poor-quality software, then they're probably hiring badly. And Amazon's over-aggressive spider, which gets caught in infinite loops on CGI-BIN directories, certainly is poor quality.
Just last month, I interviewed for two jobs (and got two offers!). I declined both opportunities, in part because neither asked me any technical questions during the interview process! A complete fraud who interviews well could have had the jobs (and one didn't even check references!). I wouldn't want to work for a company that doesn't interview well.
Timing? Christmas sales? (Score:5, Insightful)
I know that the people being DOS'ed by Amazon are defined as 'affiliates', but maybe Amazon perceives 'affiliates' the same way Microsoft perceives 'partners': people to use and then buy or destroy. How much you wanna bet that this problem goes away after Christmas? Of course, the claim will be that it was brought to their attention and it was fixed, but the timing of the whole thing is very suspicious. Perhaps this was the plan all along.
In these days of slim margins in business, maybe Amazon figures the average internet user is smart enough to figure that if their preferred site is slow, they will go directly to Amazon for their purchase, and Amazon would be able to avoid reimbursing their 'affiliate' for the sale.
Has this problem been going on but unnoticed for a while, or did it just start? I'm no conspiracy theorist, but the elements seem to be there for this to have been intentional, and the timing is very suspicious. Why couldn't they have done this last month, or the month before, if they're just checking for outdated links? Am I out in left field with this idea?
Anyway... just a different perspective and some food for thought.
Re:Timing? Christmas sales? (Score:2)
You just KNOW that when someone uses that line, you are in for a nice wacky conspiracy theory that doesn't stand up to more than half a second's scrutiny. And you just confirmed that.
Hint - IF Amazon were deliberately DOSing a site (as opposed to simply running a link-checking robot written by a clueless moron, as is the case here), THEN the site would be too slow for people to even GET to the Amazon links, and thus they would not think to go to Amazon directly (why would they go to Amazon if they don't know there is something being recommended in the first place?).
"I'm no conspiracy theorist, but it's all a conspiracy, I tell you!"
Re:Timing? Christmas sales? (Score:1)
Fundamental Rule #1 of Understanding Conspiracy Theories:
Never attribute to malice that which can be adequately explained by stupidity.
Re:Timing? Christmas sales? (Score:3, Insightful)
Like the timing of responding publicly quite promptly.
Like the timing where they disabled the 'bot soon after some people posted concerns about it?
If it really were some sinister plot to rob associates of their referral fees (which could be done much more easily by simply making accounting errors, Enron or RIAA style), don't you think they would have remained silent, or at least kept the 'bot running as a lengthy "thorough investigation" proceeded until the 26th?
stop whining :) (Score:2)
Powells Offers More (Score:1, Interesting)
Informed View (Score:5, Informative)
The Amazon Associates program has been around long enough for "page rot" to kick in, and I am sure there are many sites out there with links to non-existent products, such as old editions of books, etc. Historically, associates had to build static links (for the most part) by hand and embed them in more or less static pages.
The problem comes in with the recent introduction of their web services, where sites can build essentially unlimited pages based on dynamic real-time queries to Amazon. I don't believe their intent is to "thrash around" in these sites, which is what is occurring.
A few months ago, I asked to have the Alexa bot crawl my site (StarvingMind.net [starvingmind.net]); I was curious about the reports it was able to generate. The bot ended up in endless loops and had to be manually stopped by someone at Alexa. They spent an impressive amount of time trying to identify and fix the problem my site was creating for their bot. I don't know whether my specific problem was ever resolved, but I have the impression the bug was found and fixed. I also have the impression that the bot is very immature code and buggy.
Based on the personal and public responses I have seen from the Amazon and Alexa people involved, they actually do care about these issues very much and don't wish to cause harm by the bot's use. I believe their goal is to eliminate the link rot that has accumulated on associate sites over the years, many times with the site owner unaware of the problem.
Web services threw a curve into the mix, and that is where the major problems are occurring. The post I am replying to seems to imply Amazon may want to "use then throw out" the associates. I think that is pure speculation without any knowledge of the facts. Amazon has recently gone from what appeared to be no full-time staff to a team of people dedicated to supporting and running the associates program. I believe they consider it a very cost-effective way of advertising, and I expect it is doing quite well for them. Based on their recent actions, I believe they are trying to build a strong long-term relationship with the active ones of us, as we bring them a fair amount of business.
Another post has pointed out they have stopped the crawl while the issues talked about here are looked into. They realize they may have made a mistake and are trying to figure out how to address the problem. They have been responsive (with me, at least) in resolving problems like this in the past; they deserve a chance to resolve it this time as well. They have started down the right path by stopping the crawl.
-Pete
premature deployment (Score:1, Insightful)
What's with the pro-active solution... (Score:2)
Problem:
Some of our affiliates have out of date links.
Dumb Solution:
Create a stupid, high-bandwidth-consuming spider that endlessly crawls affiliate sites looking for out-of-date links;
or
Sensible Solution:
When an out of date link comes along to the website, display an apology screen to the visitor (whilst not letting up on any other sales opportunity) and email the affiliate telling them to get their site up to date.
Some people just don't fink.
Re:What's with the pro-active solution... (Score:2)
Re:What's with the pro-active solution... (Score:3, Insightful)
Amazon seems to be good at recommending items related to what you're searching for... Why not just force-feed another one of these "People who searched for this item also enjoyed these (totally unrelated, by the way) items."
That way you potentially save a sale (don't tell me that every single person who clicks on one of those Amazon links actually BUYS the product), and you manage to annoy the reader with some free ads, and potentially screw the associate out of a sale. Everyone wins. (OK, except perhaps the associate.)
Re:What's with the pro-active solution... (Score:1)
Man, I live in the New York City region, but it's like outer Mongolia for rave info.
-
Re:What's with the pro-active solution... (Score:1)
We're not aiming to stay local. We're really aiming to have local people help us cover their local rave scene. I'd really like to be partying in montreal, toronto, and NYC, but there's only one of ol'lil'me, and just so many hours in a week. If you want to take up the NYC section of the site, just drop me a line, and we'll make you some room. Even get on a bus and go party with you guys sometime.
Why the Alexa crawler is great and awful (Score:2, Interesting)
On the great side, their crawler can easily use an entire T3 with just a stock PC driving the requests.
On the terrible side, the crawler is stateless - it has NO IDEA WHAT IT'S RECENTLY DONE. It doesn't know when it has hit a particular site 1M times in the last hour.
So when they say "it only crawled each site on average every 4 seconds", that is on average. You know, take total URLs divided by total time. That doesn't say anything about how hard they hit aaa.com.
The problem is that the crawler is designed in the extreme to be efficient. Keeping site stats and blocking GETs is inefficient.
You generate a list of URLs for it to crawl. It blindly crawls this list in order. To prevent aaa.com from getting hit with the first 100k requests (assuming aaa.com has 100k URLs in the list), you randomize the list before crawling.
Problem is, the randomization isn't perfect, and any site with a high % of URLs in the list is still going to get hammered.
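That scheduling can be shown in a few lines; the URL names are invented, and (as the poster says) this is only a guess at how Alexa's crawler works:

```python
import random

def schedule(urls, seed=None):
    """Return a shuffled copy of the URL list, in crawl order."""
    order = list(urls)
    random.Random(seed).shuffle(order)
    return order

# A site contributing most of the list still dominates any window:
# with 1,000 aaa.example URLs and 100 bbb.example URLs, roughly 10
# of every 11 requests in any stretch of the crawl hit aaa.example.
```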
Now I don't know if this is the crawler Alexa used on the associates. But I wouldn't be too surprised.
Re:Why the Alexa crawler is great and awful (Score:1)
OK from here (Score:2)
Looking at user agents, the browser war is over. IE is #1, and Netscape often isn't even in the top 10; various indexer 'bots generate more traffic than Netscape.
Re:OK from here (Score:1, Informative)
Looking at user agents is incredibly foolish since most browsers' agent strings default to IE and most users don't change that default.
21 Dog years (Score:1, Interesting)
from Amazon's submission to Congress... (Score:2, Interesting)
Okay, note the line "...distributed denial of service attacks involve coordinated and criminal transmission of content over the Internet"
Criminal transmission of content? WTFF??
Note also how it goes on to say the FCC shouldn't get involved since "FCC involvement would require statutory changes..." In other words, let's not waste time with all this analysis and law-making business and just get straight to the enforcement of what we want.
So why are they crawling me? (Score:2)
Soon I might just block them....but I would like to know how I got on their list of sites to crawl to excess.
Instead of the spidering... (Score:2, Interesting)
Better Thread (Score:2, Insightful)
Here Amazon admits the issue and says they have stopped the bot until they can investigate it.
Amazon is actually very affiliate-friendly. They have banned scumware like wurldmedia, ebates, and others that try to hijack affiliate commissions. Unlike affiliate programs from overstock.com, buy.com, and others that are so desperate for short-term cash that they will screw over their current affiliates.
Considering buy.com is so deep in with the scumware people, I am surprised slashdot.org advertises them.
Disclaimer! (Score:2)
If someone is making a mistake at Alexa, Amazon.com cannot really be held responsible.
Re:Amazon sucks. (Score:3, Informative)
That sounds weird... Isn't the US the "Land of the Lawsuit"? I've read about people suing companies for sexual harassment, and winning. Now you get physical damage, assault and whatnot, and she has to quit? Wouldn't one of those late-nite 1-800-SUE-ME lawyers take this case? Seems pretty much open and shut to me.
Re:Amazon sucks. (Score:1)
Re:Amazon sucks. (Score:1)