Apache Gets Big Brother Award In Germany
nomis writes: "Our favourite webserver got one more award: the German edition of the Big Brother Award, which calls out privacy problems and tries to force a public discussion of the topic. The reason is the (from a technical point of view) unnecessary logging of various information (IP address, etc.). The reasons for the award are described in German here. Basically, the ability to log various information invites user tracking and abuse of the collected data. I think they have a point. At least the default configuration should log sparingly..." Interesting point.
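For what it's worth, sparser logging is a one-line change in Apache's mod_log_config. A sketch of a privacy-leaning variant of the stock Common Log Format, with the remote host (%h) replaced by a literal dash so no client address ever reaches the disk (the "sparse" nickname is ours, not a shipped default):

    # Common Log Format, minus the remote host: a literal "-" is
    # logged in place of %h, so requests cannot be tied to addresses.
    LogFormat "- %l %u %t \"%r\" %>s %b" sparse
    CustomLog logs/access_log sparse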
Logging is the site's right (Score:2)
Crappy translation. (Score:2)
The BigBrotherAward Germany was established to promote public discussion of privacy and data protection; it is meant to denounce the misuse of technology and information and thereby contribute to their acceptance.
Since 1998 such a prize has been awarded in various countries, and this year in Germany as well, to companies, organizations, and individuals that persistently and in exceptional ways impair people's privacy or make (personal) data about third parties accessible.
The name is taken from George Orwell's dystopia "1984", in which the author, at the end of the forties, sketched his vision of a future society under total surveillance.
The prize sculpture for the "BigBrotherAward Germany" was designed by the Oerlinghausen artist Peter Sommer. It shows a figure bound with lead tape, cut through by a glass pane on which a binary or hexadecimal code can be read: a passage from Huxley's "Brave New World".
The German BigBrotherAward is organized by the Bielefeld association FoeBuD e.V., founded in 1987 as a society for the promotion of public moving and unmoving data traffic. The society became known through its networking work in the "Zerberus" network, its mailbox "Bionic", the peace network "ZaMir", and its monthly event series "PUBLIC DOMAIN" on topics from the future and technology, science and politics, art and culture.
On these pages you will find all the information about this first presentation of the trophy in Germany. This year's nomination deadline expired on 25 September; we would welcome suggestions and/or criticism at the address in the margin.
Silly (Score:1)
Re:Logging is the site's right (Score:1)
I don't think anybody is questioning the legal right to keep logs. What is questionable is the repercussions. Somebody's .sig reads "Just because you can, doesn't mean you should.", and that seems an appropriate argument here...
My mom is not a Karma whore!
Helps to defeat DDOS attacks (Score:2)
When your webserver crashes under this sort of attack, it is useful to be able to examine your logs and block the associated addresses.
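For instance, a few lines of script can surface the heaviest hitters for blocking. A minimal sketch, assuming Apache's Common Log Format (remote address first on each line); the log path and the request threshold are assumptions for the example:

    # Count requests per client address and emit one firewall rule per
    # heavy hitter. Review the output before applying it.
    from collections import Counter

    LOG = "/var/log/apache/access_log"   # assumed location
    THRESHOLD = 1000                     # requests; tune to your traffic

    hits = Counter()
    with open(LOG) as f:
        for line in f:
            hits[line.split(" ", 1)[0]] += 1

    for addr, count in hits.most_common():
        if count < THRESHOLD:
            break
        print(f"iptables -A INPUT -s {addr} -j DROP  # {count} requests")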
Responsible Logging... how about /privacy.txt (Score:3)
In this day, singling out any single web server product that logs IP information by default -- when they all do -- carries the flavor of provocative shotgun whining. Picking on a product to call attention to a more general issue has a superior hype response payoff; your targeting of a popular product gains better news coverage and attracts more response traffic, as loyal customers speak out in its "defense."
Your server is your home and castle, your visitors are your guests. To get static pages and content they may only need to get past the moat; but if you run CGI, your front door is wide open and you must keep watch over them to make sure they stay out of the fridge and don't wander into the bedrooms.
If you put up an Internet web server, it is irresponsible not to log IP addresses. In server context, IP addresses are not people; they are merely "source vectors." Only when you serve and log cookies does that context approach the person level -- but even then you're still logging browsers, not people.
During a transaction, the IP address is always known. A log file is merely a form of persistent memory that extends beyond that moment. Therefore the real issue is not whether to log, but how long the log is retained.
If anonymity is declared as part of the service you are providing, it's easy to see that you start to cross the line if you write anything but summary stats to disk.
But for all other uses, it is good practice to keep logs around for at least one "blink cycle": twice the window of time in which you regularly attend to the server. For most of us this is the part of the day when we sleep; let's be conservative and declare it to be a full 24 hours. If you awake and discover a problem, you expect to have on hand enough information to identify what, how, and why, even if who does not matter.
Beyond the blink cycle, at issue is how often you rotate, how many rotations you keep -- and if you include logs in your regular system backups, the timespan until you scratch them.
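By way of illustration, retention past the blink cycle can be enforced mechanically. A minimal sketch; the directory layout, the file naming, and the 7-day window are all assumptions for the example:

    # Scratch rotated logs that have aged past the retention window.
    import os
    import time

    LOG_DIR = "/var/log/apache"   # assumed location of rotated logs
    RETENTION_DAYS = 7            # one week of rotations

    cutoff = time.time() - RETENTION_DAYS * 24 * 3600
    for name in os.listdir(LOG_DIR):
        if not name.startswith("access_log."):   # rotated files only
            continue
        path = os.path.join(LOG_DIR, name)
        if os.path.getmtime(path) < cutoff:
            os.remove(path)   # past the window: scratch it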
Internet activists regularly watch for legislation that unfairly targets the Internet medium for crimes that are already covered by common law. In that sense, the IP logging issue is already addressed by an emerging "Internet common law": the "privacy statement". The idea is not to clamp down absurdly on information-gathering practices that have real use and purpose, but to offer a convention where visitors are clearly informed about the information that is collected, so they can make their own judgement.
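A /privacy.txt, fetched like /robots.txt, might read something like this (the wording is entirely hypothetical):

    # /privacy.txt -- what this server records about your visit
    We log the IP address, requested URL, timestamp, and browser
    identification of every request. Logs are kept for 7 days, are
    read only for capacity planning and abuse investigation, and
    are never shared with third parties.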
Re:Silly (Score:2)
Seems sensible. You need to know who to send it to.
(so I can sell their address to junk mail corps).
But since this is a German site, European data protection laws apply.
Or maybe: If someone wants to make phone calls in our country then our government should know who they are and where they are calling.
The person you're calling is entitled to know who you are though.
Re:Responsible Logging... how about /privacy.txt (Score:1)
But as Anomalous Ovum says, "the real issue is not whether to log, but how long it is retained."
It is not just how long the information is retained, but how it is used. To make the case clearer, let's look at an example where logging can be more Big Brotherish.

I recall setting up the squid web proxy and cache [squid-cache.org] at a medium-sized university in 1995. Actually, at that time Squid was still Harvest. Anyway, once my co-admin and I got everything up beyond our own tests, we set the clients around the campus to use it. Naturally, we watched the cache-proxy logs go.
Well, as soon as we saw the URLs that were getting fetched, we immediately decided that "we shouldn't be watching this". We had the IP address of the client and we had other ways of finding out who was logged into that particular workstation. All of a sudden we had a way of tracking who at the university was reading what.
Of course we knew beforehand that we would have that information, but it was only after we ran tail -f on the log that we realized how much of an issue it was.
The first thing we decided was that if users were going to fetch lots of images, we wanted the material cached instead of making dozens of separate requests for the same image. So the cache was doing its job. But we puzzled over what to do about this very private information we suddenly had.
At that point in time, use of the cache was voluntary. One could opt out by changing the default browser settings. But we wanted as many people to use the cache as possible.
So we were left with a few options (a sketch of the log post-processing appears after this list):

1. Anonymize the logs, stripping or scrambling the client address. That way we would know what was being read, cached or not cached -- which is very useful for maintenance -- but have no way to trace the individual user. Current versions of squid have this as a configurable feature; back then we would have had to patch Harvest or post-process the logs.

2. Turn logging off entirely. We really needed the information to tune the proxy, so this was not an option we seriously considered.

3. Keep full logs and police ourselves. The two of us admins agreed to respect privacy, not to trace individual users, and to read the logs only when needed (mostly via summary stats). More importantly, we agreed that if some PHB in management ever asked whether we could trace who read what, we would lie and say it was impossible.
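As an illustration of the post-processing route, here is a minimal sketch that pseudonymizes a squid-style access log. It assumes squid's native log format, where the client address is the third whitespace-separated field; the keyed hash and the key itself are my additions, not part of the original setup:

    # Replace the client address with a keyed, truncated hash: per-URL
    # statistics survive, repeat visits still correlate within one log,
    # but no individual can be identified from the file.
    import hashlib
    import hmac
    import sys

    KEY = b"rotate-me-with-the-logs"   # hypothetical secret; keep it private

    for line in sys.stdin:
        fields = line.split()
        if len(fields) > 2:
            digest = hmac.new(KEY, fields[2].encode(), hashlib.sha256)
            fields[2] = digest.hexdigest()[:12]   # stable pseudonym, not an IP
        print(" ".join(fields))   # note: collapses the original spacing

Used as a filter (python scrub.py < access.log > access.anon), and with the key rotated each log cycle, pseudonyms cannot even be correlated across days.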
On the whole, I still worry about whether we made the right choice. It worked out well, but we effectively lied to users (by not letting them know that such information was logged), and would have lied to management the same way had it come up.
So back to the main point. Logging may be necessary for security and maintenance, but the real issue is what safeguards are in place against misuse of those logs. Typically, it is only the goodwill of the sysadmins.
But it's an open source Big Brother! (Score:1)
I admire their consistency in looking objectively at Apache (the darling of all anti-MS people out there, and the darling of all anti-big brother folks since it's open source) as a potential Big Brother tool.
Logging in and of itself wasn't invented by Apache, and the arguments for and against logging are obviously legion, but you can beat it with the Anonymizer proxy.
On the bright side you can see how this "Big Brother" works since the source code is right there in the open!
========================
63,000 bugs in the code, 63,000 bugs,
ya get 1 whacked with a service pack,