Verisign Sues ICANN Over SiteFinder 395
camusflage writes "Yahoo's running a story about VeriSign suing ICANN for holding up Sitefinder. Choice quote from VeriSign: 'This brazen attempt by ICANN to assume 'regulatory power' over VeriSign's business is a serious abuse of ICANN's technical coordination function.'"
What most see (Score:5, Informative)
Re:Problems like this are forseeable (Score:5, Informative)
CNET is also covering the story (Score:4, Informative)
Re:Wait a sec (Score:5, Informative)
Re:Problems like this are forseeable (Score:5, Informative)
Since it's NOT their page. foobar4575368389.com is no more Verisign's page than it is anyone else's, since the domain is not registered.
SiteFinder is not the problem. The problem is the default DNS entries which redirect connections to SiteFinder.
VeriSign used their access to the DNS they host *on behalf of ICANN* to gain visibility for their SiteFinder crap.
Apart from being highly unfair to competing search engines, and ethically wrong, it also creates a lot of technical issues for every protocol used on the Internet, of which HTTP is only one.
Re::rolleyes: (Score:5, Informative)
2) Public DNS names must be globally unique. This one isn't nearly as obvious as addressing, but it's still clear once you think about it, and it is even enshrined in one of the RFCs on the subject.
Given that we require uniqueness, someone has to manage the systems to check that uniqueness and dole out addresses (both IP and names). That task fell to ICANN, who have since sub-contracted that work out to other entities. But still, someone has to run the central database, or there'd be chaos.
Re:Why do we need Verisign? (Score:3, Informative)
What is relevant is that they also control the gTLD root servers for .com and .net.
They've even got a contract for it...
Re::rolleyes: (Score:5, Informative)
Re::rolleyes: (Score:5, Informative)
Your comment is otherwise excellent, but this line deserves correction. Verisign does *not* have control over the root servers*. ICANN does. This is an important distinction, because control over the root servers is what gives ICANN its authority. What Verisign DOES control are the so-called 'gTLD' servers, which serve the .com and .net zones.
*footnote: Verisign does, however, operate 2 of the root servers, A and J. In fact, Verisign operates them quite well, and in co-operation with the other root-server operators. But all root servers have the same data, provided by ICANN. The list of root servers (and who operates them) can be found here [root-servers.org].
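The root server identities follow a fixed naming scheme, which makes the footnote easy to illustrate. The operator attributions below reflect the era of this thread (A and J run by Verisign) and may differ today:

```python
# The thirteen root server identities are <letter>.root-servers.net,
# for letters a through m. All serve the same root zone data.
import string

ROOT_LETTERS = string.ascii_lowercase[:13]  # 'a' .. 'm'
root_servers = [f"{letter}.root-servers.net" for letter in ROOT_LETTERS]

# Operated by Verisign at the time of this thread (illustrative data):
verisign_operated = {"a.root-servers.net", "j.root-servers.net"}

print(len(root_servers))           # 13
print(sorted(verisign_operated))   # ['a.root-servers.net', 'j.root-servers.net']
```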
Re::rolleyes: (Score:4, Informative)
A lot of people don't know this, but Verisign's root servers aren't the only game in town; these root servers [wikipedia.org] offer many alternatives. If enough people make an end run around their monopoly, their authority will diminish, and with it any brazen behavior. If you need instructions on how to do this, OpenNIC [unrated.net] has detailed instructions.
Re:The solution (Score:3, Informative)
The solution is to alter a DNS server so it examines the results it gets back from its parents, and if it's a BS Verisign auto-search response, tell the requestor that the domain doesn't exist.
That was done in the early days of the VS BS. The ISC released a patched BIND that would do just that within a couple of days of the problem, although the ISC didn't particularly approve of it and released it only reluctantly.
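The core of that filtering idea can be sketched very simply. This is a hedged illustration, not how the patched BIND actually worked (BIND's approach was delegation-only checking rather than matching known addresses), and the address in the set is the widely reported SiteFinder IP:

```python
# Minimal sketch of answer filtering: if an upstream answer resolves to a
# known TLD-wildcard address, return an empty answer (treated as NXDOMAIN)
# instead of passing the synthesized record along to the requestor.

KNOWN_WILDCARD_IPS = {"64.94.110.11"}  # reported SiteFinder address

def filter_answer(name: str, answer_ips: list[str]) -> list[str]:
    """Drop answers that are really wildcard synthesis by the TLD."""
    if any(ip in KNOWN_WILDCARD_IPS for ip in answer_ips):
        return []  # empty answer: report the domain as nonexistent
    return answer_ips

print(filter_answer("real-site.com", ["93.184.216.34"]))    # ['93.184.216.34']
print(filter_answer("no-such-name.com", ["64.94.110.11"]))  # []
```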
Love that nibbled quote (Score:1, Informative)
Re:Dynamic configuration (Score:5, Informative)
Re:Wait a sec (Score:1, Informative)
Verisign runs [root-servers.org] the A and J root name servers.
The A root server historically held a special position above the other 12: the other 12 pulled the root zone (the list of TLD delegations) from A. While it's true that Verisign doesn't run ALL the root name servers, it runs 2 of the 13, including the most important one.
Re:Wait a sec (Score:5, Informative)
Yes, they can. And that's why, when ICANN threatened them back when SiteFinder was first turned on, Verisign listened. Because, yeah, ICANN controls the root, and all authority flows from the root. (The root servers, that is.)
As for your p2p root idea, well... To be blunt, it's a bit naive. First off, where does this p2p network get its data? Remember, one of the critical ideas behind DNS is that the view is always consistent; there are no conflicting records. As in, www.example.com ALWAYS points to the same place, no matter who you ask. There is only one correct answer. (Misconfigurations can prevent this, obviously, but that's the design of DNS.) So you have to worry about poisoning and authenticity; you have to trust this network. No current p2p network has my trust.
I could give more reasons, but basically, the DNS system is set up right now with 46 root servers [roots-servers.net] (count 'em). These are generally clusters of professionally managed servers, dedicated to a single, pretty simple task: serving the 2000-odd records in the root zone, or returning a failure. That's it. Any suggestion of a p2p network, for it to be accepted, would have to show that the proposed ad-hoc network could provide the same performance and reliability that the current system does. Not to mention rewriting all the software that assumes DNS functions in its current state.
To summarize, sure it SOUNDS like a good plan, but for it to actually be considered, it probably has to have actual technical details. And it wouldn't hurt if it came from someone more qualified than an Armchair Internet Architect like you or me.
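The "one correct answer" property described above can at least be checked mechanically: ask several independent resolvers and confirm they agree. A minimal sketch, with canned answers standing in for real queries:

```python
# Consistency check across resolvers: DNS is designed so that every
# resolver, asked for the same name, returns the same record set.

def consistent(answers: dict[str, set[str]]) -> bool:
    """True if every resolver returned the same record set."""
    record_sets = list(answers.values())
    return all(rs == record_sets[0] for rs in record_sets)

print(consistent({
    "resolver-1": {"93.184.216.34"},
    "resolver-2": {"93.184.216.34"},
}))  # True

print(consistent({
    "resolver-1": {"93.184.216.34"},
    "poisoned":   {"10.0.0.1"},      # a poisoned cache breaks consistency
}))  # False
```

A p2p root would have to guarantee this property without a central source of truth, which is exactly the hard part the comment is pointing at.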
Re:Wait a sec (Score:3, Informative)
correct link to www.root-servers.org [root-servers.org].
Re::rolleyes: (Score:4, Informative)
Every domain name server has a list of root server IP addresses; that's where it can find the IP address of the server that knows about 'org' and the other top-level domains.
The servers on that small list get a lot of traffic. Some are owned by the US military, others by universities, etc. It's infeasible for most for-profit organisations to fund such a machine (typically mainframe-class hardware) or even its Internet connection.
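The resolution walk described above (root hints, then a referral to the TLD server, then the authoritative server) can be sketched with a toy delegation table. All names here are made up for illustration:

```python
# Sketch of iterative resolution: start from a root hint, get a referral
# to the TLD servers, then to the zone's authoritative server.
# The delegation tree is toy data, not real DNS records.

DELEGATIONS = {
    ".":   {"org": "tld-server.example"},        # root knows who serves org
    "org": {"example.org": "ns1.example.org"},   # TLD knows the delegation
}

def find_authoritative(name: str) -> tuple[str, str]:
    """Return (tld_server, authoritative_server) by following referrals."""
    tld = name.rsplit(".", 1)[-1]
    tld_server = DELEGATIONS["."][tld]   # step 1: ask a root server
    auth_server = DELEGATIONS[tld][name] # step 2: ask the TLD server
    return tld_server, auth_server

print(find_authoritative("example.org"))
# ('tld-server.example', 'ns1.example.org')
```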
We do need a central authority to regulate the IP address ranges and enforce adherence to the RFCs, such as the ones on DNS at issue here, that form the backbone of the Internet, at least until we have something better.
In this case ICANN has done its job, thankfully. Perhaps it's not a completely lost cause after all.
Icannwatch has links to original complaint (Score:3, Informative)
Re:Working with... (Score:1, Informative)
You mean a simile. If it were a metaphor, he'd have said "working the Icann process, we're being nibbled to death by ducks. It takes forever, it doesn't make sense, and in the end we're still dead in the water."
Note the absence of the word "like", since "like" is what signals a simile.
Re:My prediction... (Score:2, Informative)
Dictionary.com [reference.com]
Google [google.com]
Re:The solution (Score:3, Informative)
This is also useful for other wildcarded TLDs like
Re:I don't get it (Score:3, Informative)
Mailing List (Score:3, Informative)
http://wwwapps.2mbit.com/mailman/listinfo/sitefin
Verisign is wrong - and here's why (Score:5, Informative)
They are in violation of the part of the
Re:Get rid of the dots (Score:3, Informative)
1) It allows for ownership and responsibility to be cleanly delegated to the appropriate parties.
2) It gets the load distributed.
DNS is not perfect, not by a long shot but I think you have no idea of the scale of the problem it solves.
The root servers handle almost 2000 queries per second, and that's for stuff that is normally cached for days! DNS Servers responsible for popular sites (e.g. Google, Yahoo, Microsoft, Dell, etc) routinely exceed 5000 queries per second JUST FOR THAT SITE. You want to centralize that? Ummmmk.
Re::rolleyes: (Score:5, Informative)
Incorrect. Addresses need not be unique at all,
Indeed one can make very good use of non-unique addresses. Quite a few of the IP addresses for the root DNS servers (eg those operated by ISC) are assigned to multiple different computers, diversely located geographically. Go google for "anycast". The 6to4 relay service also uses a public, non-unique address (ie anycast) for the 6to4 gateway.
Any stateless network service can be deployed using anycast addresses.
Forbes CEO Approval Rating (Score:3, Informative)
(Apologies if Redundant.)
Re:Wait a sec (Score:3, Informative)
Re::rolleyes: (Score:4, Informative)
Tim
Re::rolleyes: (Score:4, Informative)
Define a logical server? Providing a unique and coherent service? No, that isn't needed. You could use anycast for anything, such that you are directed to the topologically closest host (where "topologically closest" is defined by routing). E.g., you could set up an anycast address for "PGP public key server", or "web proxy", or "SMTP server", etc. Indeed, let me clarify my remark on statelessness: it is easiest to use anycast for stateless services, but one could use it for stateful services too, provided one had control over the stability of the topology. (E.g., a corporate, geographically diverse network where topology changes are infrequent could use anycast addresses to direct mobile users to the closest host providing a service.)
Two different servers (probably owned by different people) having the same address wouldn't work too well, how would you say which one you wanted to talk to?
You don't; that's the entire point of anycast. Instead the routing domain picks the best host for you.
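That selection can be sketched as a minimum over routing metrics. The instance names and hop counts here are toy data standing in for real routing tables (the naming nods to ISC's anycast F root mentioned above):

```python
# Anycast sketch: several hosts announce the same address, and routing
# delivers each packet to the topologically closest instance. Hop counts
# here stand in for real routing metrics.

ANYCAST_INSTANCES = {
    "f-root-palo-alto": 3,    # hops from this vantage point (illustrative)
    "f-root-madrid": 9,
    "f-root-hongkong": 14,
}

def pick_anycast_host(instances: dict[str, int]) -> str:
    """The routing domain, not the client, chooses the nearest instance."""
    return min(instances, key=instances.get)

print(pick_anycast_host(ANYCAST_INSTANCES))  # f-root-palo-alto
```

From a different vantage point the metrics differ, so a different instance wins; the client never expresses a preference.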
Re:Time to cast your votes for the Verisign CEO. (Score:5, Informative)
Forbes CEO Approval Ratings [forbes.com]