Patents

Multilingual DNS Patent Roadblock For IETF

Xanni writes "Intellectual property claims have blindsided the Internet Engineering Task Force and could derail the group's efforts to develop a common scheme for supporting foreign-language domain names across the Internet. NWFusion is carrying the story."
  • by Anonymous Coward
    It seems like this is kind of like what Rambus did -- except Walid doesn't appear to have been part of the IETF. I wonder... will standards bodies like the IETF have to start making participants sign no-compete clauses, NDAs, or the like in order to prevent fiascos like this from happening? And can that be done without fundamentally undermining the open discussion and review so vital to standards development? In a word, this sucks.
  • It will be painful at first. But in the end, a simple clean design will have lots of benefits.
  • by Anonymous Coward
    The article refers to Walid as a startup. Given the current market trend the IETF should only have to wait a month or less for them to fold.
  • Joke in subject line. Nothing here to see.
  • by Anonymous Coward
    Great point, actually! By having this patent, essentially the IETF has to innovate another solution. Exactly the kind of thing that patents were designed to do.
  • by Anonymous Coward
    Now we have two solutions, neither of them compatible. Now someone will have to come up with a third solution to make the two work. Exactly the kind of thing that patents were designed to do.
  • Domain names that can't be entered from a keyboard by everyone in the world are a bad idea anyway -- no one but spammers would want such a thing.
    My keyboard at home is Cyrillic, with Russian input configured; all the other keyboards I use, including the notebook I am posting this from, are ASCII-only; and my native language is Russian, for crying out loud.
    Unicode in itself is an attempt to make a completely artificial, huge charset mandatory for everything to support, including devices that can't even fit a Unicode font into their memory. There were some attempts to support multiple charsets in the same text, thus avoiding this requirement, but some backroom politics caused them to be stopped, and now the IETF's "official policy" is to demand Unicode and UTF-8, encodings that no one but a bunch of self-proclaimed unificators supports. Most of the Unicode-should-replace-everything support emanates from people who use the ISO 8859-1 encoding, which happens to be exactly the same as the first 256 code points of Unicode, so they don't have to modify anything in a non-trivial manner and can just cut their fonts to fit them everywhere. And last but not least, I have yet to see a pro-Unicode document that was not being actively pushed by someone named Martin Duerst, who seems to have made a career of coming to every internationalization-related group or mailing list to spew Unicode propaganda until all his opponents are exhausted rebutting it.
  • Character set, character encoding, and fonts are three separate issues. The reason why folks like UTF-8 so much is because it is easy to use. The software I code on used to support multiple character sets (European and CJK) internally. Now, we convert to UTF-8 on the way in and convert back to the desired character set on the way out. Our code is cleaner, smaller, easier to understand, and easier to debug.

    Precisely because all work on the standards, formats, and libraries that would do it for the programmers is stopped, to benefit the "Unicoders" who have taken over the standardization process.

    Supporting multiple character sets per document is a mess and completely unnecessary for most real world applications.

    If "real world applications" == "pretty display of text", you would be right, however Unicode loses all distinctions between languages used in the text, thus making impossible to do any complex processing that must know the language. So sooner or later applications will have to include the name of language used, or face the conversion of large amount of useful data into unprocessable junk, that is just as useful as, say, gif with text in them. Unicoders, of course, already declared a standard for adding language information back into Unicode text, and "allowed" to use language attribute in XML. However it's obvious that all Unicode-using code can't deal with stateful text stream (text + language as state) because the whole point of Unicode was to avoid any state, and XML processing programs have no requirement to preserve attributes i their internal processing, thus making the whole activity impractical.

  • Well, if your device "can't even fit Unicode font into their memory", then you (as you say later) cut the fonts into manageable subsets.

    But which subset should one support in any given situation? And what is the benefit of Unicode then, compared to text that just has charsets and language names embedded as state? I mean, other than having to look up symbols in a translation table, text files 1.5-4 times larger, and incompatibility with perfectly usable systems that already work with local charsets and could easily be modified to use multiple charsets if someone were able to standardize the metadata formats (by not being gagged by the "Unicoders" every time a suggestion of that kind is made in the standards bodies)?
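
    One way to picture the charset-plus-language scheme being argued for here: text as a list of runs, each tagged with its own language and charset. A minimal sketch in Python, with invented tags; nothing below is a published standard:

        # Each run carries (language, charset, raw bytes) -- the stateful
        # stream the poster wants, with charset as just one attribute.
        runs = [
            ("ru", "koi8-r", b"\xd0\xd2\xc9\xd7\xc5\xd4"),  # "привет"
            ("en", "us-ascii", b", world"),
        ]

        # A display-only consumer can flatten the runs to one string...
        flat = "".join(data.decode(charset) for _, charset, data in runs)

        # ...while a language-aware consumer (sorter, hyphenator, phonetic
        # matcher) still sees which language each piece of text is in.
        for lang, charset, data in runs:
            print(lang, data.decode(charset))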

    What I meant was that several friends of mine, when I was at uni, had Japanese-character keyboards. They needed some weird drivers just to write ASCII at all.

    It's just the opposite. Japanese keyboards all support ASCII, and need special software to be able to enter Japanese characters. Your friends probably had Japanese DOS on their computers, and had problems running English DOS software because of incompatibilities.
    It's known as ISO 2022. It's been around forever, and no one's stopping you from using it. It's used for COMPOUND_TEXT in X and MULE in Emacs. Most people don't like it because it's a state-heavy system. No one killed it by backroom politics - it just didn't go over very well.

    ISO 2022 is completely unacceptable for any practical use in a multilingual environment -- it is used only to manage a small set of charsets for display-only purposes.

    The real solution won't appear until it becomes easy to just place attributes that include the language and charset, by their full names, in the text, so that some simple interface (one to a program, not to a user) can be used to handle pieces of text according to their attributes, with charset being just one of them. By "handle" I mean everything that programs do with text -- sort, concatenate, input from a user, edit, format/hyphenate/..., fuzzy/phonetic match, and last and very, very least -- render with a given set of fonts on a given device.

    What is actually being patented is a system which attempts to replace the resolver on a machine with one which will automatically encode the local character set into an RFC 1035 compliant format. This patent specifically states that this is a mechanism to implement internationalized domain names without modifying the DNS servers.

    Now, if you are going to replace all the resolver libraries anyway, why not just extend the DNS specification to take straight UTF-8 to begin with?

    Why create a huge, ugly hack to preserve the DNS specification, when the DNS specification has changed many times over the years to support new features such as IPv6?

    The only benefit I see to not changing the specification is that client application developers don't have to change their calls to gethostbyname() to gethostbynameutf8(). This is an advantage, but honestly... does anyone really believe that this is any harder than what applications have to do to support IPv6 address lookups?

    Of course, that's just my opinion. I could be wrong.
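
    To make the comparison concrete, here is roughly what the patented-style client-side approach amounts to: a resolver shim that rewrites non-ASCII labels before an ordinary lookup. This is a hedged sketch, not Walid's actual algorithm; the "bq--" marker and the hex encoding are invented for illustration (the IDNA standard eventually settled on an "xn--" prefix with the more compact Punycode):

        import socket

        PREFIX = "bq--"  # hypothetical marker for encoded labels

        def encode_label(label):
            # ASCII labels pass through untouched; anything else becomes
            # an RFC 1035-safe letter/digit string (here, UTF-8 as hex).
            try:
                label.encode("ascii")
                return label
            except UnicodeEncodeError:
                return PREFIX + label.encode("utf-8").hex()

        def lookup(hostname):
            # The rewrite happens entirely in the resolver stub, so the
            # DNS servers themselves never see a non-ASCII octet.
            ascii_name = ".".join(encode_label(p) for p in hostname.split("."))
            return socket.gethostbyname(ascii_name)

        # lookup("bücher.example") actually queries
        # "bq--62c3bc63686572.example" over unmodified DNS.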
  • I don't see how this is so different from your basic uuencode function. I suppose you'll have to be a little stricter -- no underscores, etc. Also, you'll need a way to distinguish encoded names from ones that are not.
  • Again, the purpose of the IETF is to create standards, not to fight evil corporate types on behalf of Slashdotters. Getting involved with litigation is a good way to make enemies, and would likely undermine the industry support and participation needed for the standards to carry any weight in the real world.

    Better, imho, for them to route around the problem like the net does with censorship. Leave the legal battles to those who specialize in them, since that sort of activity would not compromise their primary mission.
  • (Otherwise I can't think of any good reason they don't take these people to court.)
    I can. The IETF doesn't have monetary assets, and is an engineering task force, not a legal one.

    Eg.,

    Participating in the efforts of the IETF
    The IETF is not a membership organization (no cards, no dues, no secret handshakes :-). The IETF is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. It is open to any interested individual.

    Anyway, it's not that the IETF would be legally prevented from recommending this as a standard because of the patent. It would just put anyone implementing that standard at risk of patent fees or litigation.

  • If I export from my registry under Win2K, I have no problem opening and viewing it in notepad.

    If I open it in Emacs, I can see that it has some kind of BOM, and then every other byte is 0x00. It's obviously Unicode, but notepad has no problem with it. Of course, if I reboot into Win9x, I'm sure notepad there is crap and won't be able to deal with it. But who cares?... Win9x is about to die the death it deserves.

    I think you'll find that most of those UNIX tools haven't been coded with Unicode in mind. MSFT have been pushing generic text macros for C/C++ for a very long time. If you use them, it's pretty easy to rebuild your app with Unicode support. Thus, just about anything that ships with NT can handle Unicode. Those Unix tools that you mentioned will require a lot more work... the people who coded them built the single/multi-byte limitation into them from the start.

    Internationalisation is something that I think MSFT does quite well, and it's really easy from a programmer's perspective. Wow! Did I just sing MSFT's praises?!

    What OS are you using that can't open its own registry dumps in notepad? It's got to be a newer version if it's not in the old registry format. Perhaps WinME does this... but I wouldn't know as I don't touch those Win9x OSes if I can help it!
  • I have trouble with Unicode with the NT port of GNU Emacs. I've seen mention of a package on the NT Emacs mailing list that supposedly can handle Unicode. It might be a language add-on, or something. I can't remember: I didn't need it enough to search it out and install it!
  • by Malc ( 1751 ) on Wednesday March 28, 2001 @12:08PM (#333221)
    "Every one talks about Unicode - even the seemingly obscure "new" registry format for Windows is Unicode - of cours Microsoft as usually prefers to leave it's users in the dark, to make them feel incompetent and relying on the clutches of windows - and fails to provide a file extension for unicode text files.
    But who is actually daring enough to "Go Unicode"(TM)(2001 RedLaggedTeut) ?


    The NT line of OSes has always been 100% Unicode.

    A file extension isn't needed: notepad utilises a BOM (byte order mark) to determine the file type. If you've done any work with XML, you will know that BOMs are used there too (although not mandatory). The BOM allows you to determine whether the file is UCS-2 or UCS-4, and also specifies the endian-ness.

    I don't think MSFT has left anybody in the dark.

    Developing NT only international products is actually very nice, especially when used on a system with NTFS. International stuff can interoperate so much more easily in this environment. Unfortunately, due to the limitations of Win9x and the need to support it, most builds are multi-byte only, not Unicode.

    FYI: The Win32 API under NT includes ANSI versions of virtually every system call, giving compatibility with Win9x binaries. These ANSI versions do multi-byte to wide translations, and then delegate to the Unicode version (yes: big performance hit.) What is bad is that under Win9x, the wide versions of the functions exist, but they are normally just empty stubs that do nothing! e.g. if you look at the exports from user32.dll, you will see that MessageBox consists of MessageBoxA and MessageBoxW.
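
    For the curious, the BOM sniffing described above takes only a few lines. A minimal sketch; UCS-2/UCS-4 were the terms of the day, and the corresponding modern codec names are UTF-16/UTF-32:

        def sniff_bom(data):
            # Order matters: test the 4-byte UCS-4/UTF-32 marks before
            # the 2-byte UCS-2/UTF-16 marks they happen to begin with.
            if data.startswith(b"\x00\x00\xfe\xff"):
                return "utf-32-be"
            if data.startswith(b"\xff\xfe\x00\x00"):
                return "utf-32-le"
            if data.startswith(b"\xfe\xff"):
                return "utf-16-be"
            if data.startswith(b"\xff\xfe"):
                return "utf-16-le"  # what NT-era notepad typically writes
            if data.startswith(b"\xef\xbb\xbf"):
                return "utf-8"
            return None  # no BOM; fall back to heuristics or a default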
  • I for one think this is the right solution. The article claims that this would be confusing to users, but I don't see how.
  • I can't hold any more - this patent mess in USA makes me sick.

    Why can't the IETF relocate their domain task force somewhere in Europe? Just because of some brain-damaged (sorry, but it is so) system in the USA, I can't have domain names spelled in the Russian language in my own country. Where's the logic? I don't get it.

    As of now, the USA controls the Internet. There is no reason why it should be so. Definitely the USA has the biggest chunk of the 'Net, but just as definitely, that doesn't mean the Internet belongs to them. Democracy, anyone?
  • Let's imagine the country of Belarus (~10 million people) passes a patent/law which prohibits the IETF from using its international domain technologies within that country... How much of a problem would that be for the IETF?

    See? Size matters here. Still, that lack of international domain support would prohibit me from using Cyrillic-character domains within my country (on webservers situated in my country), at least because of the lack of support on the browser side. I don't see too much sense in this.

    That's where the freedom of free software (FSF meaning) really matters. If free software gets the desktop, the userbase will hopefully be large enough to support international domain schemes at least within non-US countries... At the same time, people in the USA will get the sort of pain in the a%% that users all over the world were enjoying because of USA cryptography export laws in the past. Would you feel happy about it?

    Of course, that's all theory...
  • Man, you just confirmed what I was writing. Money rules the world, not the people. Money is what votes, and it is not your vote and your choice (unless you're financially powerful). I'm taking this from your own words. I don't have anything against healthy capitalism; I just don't like the overcommercialization of life.

    And... Can you explain to me what exactly the interests of the average USA business are in international domains? How many USA companies would be interested in www.банк.com? However, I bet they'd be interested in having control over their registration and in "leasing" them to those "poor-3rd-world" countries. Commerce and harsh reality. Exactly what you are speaking about.

    So, what should the IETF really care about: the other countries which want to have domains written in their own languages, or the USA businesses which want to profit from them? I guess it should be the first. But currently the opposite is the reality, and I don't like that it is so. Free software helps me struggle against that, by making standards widely available.

    You just completely ignored my main point.

    The Internet is becoming a life necessity, and I don't think it should ever be fully commercially controlled. I understand that it makes sense for a business not to really care about 80% of the planet's population because they don't have enough financial power, but as a person I do not accept that.

  • That's getting interesting. Did you hear me whining? That's what you imagined, and how you see other people. That's why other people have stereotypes about Americans, the same way you have stereotypes about Russians "who are always whining". I don't live in Russia and I'm quite successful. I like capitalism. However, I don't like it when millions of people suffer because of the commercial interests of a very small number of people. That has happened before, with very unfortunate consequences, and I wouldn't like it to happen again.

    I wouldn't like to repeat myself, read my other replies if you're interested..
  • Regarding all your posts above...
    What you all seem to be saying is that even if the IETF were in Europe, the USA would still be blocking progress... And that's EXACTLY what I'm pissed off about. Of course the IETF would not make any progress as long as the USA is in the way, even if everybody else is happy about it. Is that correct?

    What I was saying in my previous message is more a cry into the void than any practical suggestion...

    Let's look at it in general. The majority of the world doesn't live in the USA, but most of the money is in the USA. In pure capitalism that means the USA has control over the majority. Somehow I don't think that is fair... but nobody can do anything about it.

    I see abuses of USA power and I'm not happy about it.
  • by Mawbid ( 3993 ) on Wednesday March 28, 2001 @04:44PM (#333228)
    Is this a troll or are you really as English-centric as you think we think you are?

    When these people [suzukibilar.is], who sell Suzuki cars in Iceland, ventured onto the web, they naturally wanted to use the name of the company, "Suzuki bílar", as a domain name (without the space, of course): "suzukibílar.is". But they couldn't do it. DNS doesn't allow it. So they did what is usually done, and replaced the accented í with a regular i. This is kind of unfortunate, because "bilar" means "breaks down" or "malfunctions".

    But I guess you don't see that as a problem. I mean, why can't these people just standardise on English?

  • Possibly IETF drafts (which are publicly available) could be used here. For example:
    draft-ietf-idn-nameprep-00.txt (published July 3, 2000) and draft-ietf-idn-race-00.txt (don't know when this was published)
  • Unicode in itself is an attempt to make a completely artificial, huge charset mandatory for everything to support

    What the heck does "completely artificial" mean? All charsets are artificial. It's only about twice the size of BIG5 and SJIS, which are your alternatives for Asian support.

    including devices that can't even fit a Unicode font into their memory.

    What do you mean by "Unicode font"? No one expects most fonts to include more than a small subset of Unicode, and there's no reason why a Unicode font that contains ISO 8859-1 subset should be any larger than an ISO 8859-1 font.

    There were some attempts to support multiple charsets in the same text

    It's known as ISO 2022. It's been around forever, and no one's stopping you from using it. It's used for COMPOUND_TEXT in X and MULE in Emacs. Most people don't like it because it's a state-heavy system. No one killed it by backroom politics - it just didn't go over very well.

    Most of the Unicode-should-replace-everything support emanates from people who use the ISO 8859-1 encoding, which happens to be exactly the same as the first 256 code points of Unicode, so they don't have to modify anything in a non-trivial manner and can just cut their fonts to fit them everywhere.

    Huh? recode l1..utf-8 is as difficult as recode koi8r..utf-8. As for fonts... welcome to the 21st century. PostScript fonts label characters by name, and TrueType fonts have always been Unicode IIRC, so the only fonts that need recoding are BDF fonts. There are nice tools to do that automatically.

    From other posts:
    Precisely because all work on the standards, formats, and libraries that would do it for the programmers is stopped, to benefit the "Unicoders" who have taken over the standardization process.

    Woohoo! All your base are belong to us!

    Have you ever thought that you're being just a touch paranoid? Unicode fans are putting work into things to get Unicode to work. You're welcome to join the standards committees and put in your work. Or, alternately, create the tools you're talking about and make them a de facto standard.

    Unicode loses all distinctions between the languages used in the text

    So does every other character set in the world. ISO 8859-1 doesn't tell you what language the content is in; neither does SJIS. Frankly, I don't know where the information is coming from; I'm working on a multilingual webpage, and there's no way in heck you're going to get me to go back through a hundred-page document to put in language information. If HTML were ISO 2022 based, I'd use the ISO 8859-3 charset for the whole document.

    making it impossible to do any complex processing that must know the language.

    In the .1% of cases where you're dealing with multilingual text and you need to do complex processing on it, you're going to need to use some document specific language tags (XML tags, Unicode Plane 14 tags, whatever.)

    Most people aren't going to put in the tags anyway, and the computer can't tell whether you're typing French or English, so any system that requires tags is going to get a lot of mistagged documents.

    However, it's obvious that Unicode-using code can't deal with a stateful text stream (text + language as state), because the whole point of Unicode was to avoid any state,

    Part of the goal was to minimize state, yes. But state does exist in Unicode - BIDI, for example, which is necessary to use Hebrew and Arabic. And of course any code that wanted to use language tags would support language tag state!

    and XML processing programs have no requirement to preserve attributes in their internal processing,

    There's an English saying: You can lead a horse to water, but you can't make him drink. If programs want to discard tagging information, they will and there's nothing anyone can do to stop them. If they want the information, then it's there.

    But which subset should one support in any given situation?

    Whatever Unicode subset you need? If you need to support Europe, you can look at MES-1, -2 and -3, successively larger subsets of European characters. If you need Japanese, I'd suggest the subset of Unicode corresponding to JIS X 0213. And so on.

    How does this question differ from what charsets to support?

    You might be a little easier to take seriously if you stopped the paranoid rantings about the evils of Unicoders and rationally discussed the problems with Unicode.

  • by Ed Avis ( 5917 ) <ed@membled.com> on Wednesday March 28, 2001 @12:25PM (#333231) Homepage
    Which countries are the ones that want internationalized domain names? Probably those which don't use ASCII. Which country was the patent granted in?

    Couldn't the DNS servers be run in the countries where they are needed, where they won't be affected by the screwed-up US patent system?
  • Isn't the existence of sites like quepasa.com prior art for sites with foreign-language URLs? Sure, it's not Hindi or anything "exotic" like that, but it is a non-English-language URL.

    P.S. I call all Klingon named URLs :)
  • In Britain and most of the civilized world, there are no patents on software.

    In the 'States you get the M$ version of "The Freedom To Innovate."

    Just move 'head office' to any British partner or consortium member. Use the court paper to wrap fish and claim a different jurisdiction.

    If the instigators of this infringement suit want to pay to haul the crap all the way through the World Court for a decision that everybody else will be lobbying against, let 'em waste their money.
  • They will get just what they deserve: the IETF will NOT pay them, because they are smart; so if they don't give it up for free, they will get nothing. Nice try, though.

    Real nice setup for a future Darwin Award, though; that's a whole lot of people to piss off.

  • by The Dodger ( 10689 ) on Thursday March 29, 2001 @12:11AM (#333235) Homepage

    Fortunately, the United States of America, as well as allowing its citizens to patent stuff they shouldn't, also allows its citizens to carry weapons.

    Now, I would be the last to suggest that someone should find out who the majority shareholder in this 'Walid' company is, go up to him whilst he's walking down a dark alleyway, put a gun to his head and explain to him that he should really drop the whole patenting-DNS thing.

    That would be illegal. Tut tut...

    Having said that, of course, one could question the validity of laws which stifle the development of technologies such as the Internet and encryption, and, instead, line the pockets of corporations and lawyers.

    As Cicero said "Salus Populi Suprema Lex Esto" - The Good of the People is the Highest Law.

    But, of course, you Yanks went and elected a President who cares more about the US economy than combatting global warming [bbc.co.uk]. God forbid that American companies' profits should be threatened by refusing to allow them to continue to pump greenhouse gases into the atmosphere.

    George "Dubya" Bush. You know why he pronounces it "Dubya"? Because he can't say "doubleubleiminable"....

    Y'know, our motto used to be "Information wants to be free!"

    We might have to change it to say "Innovation and Ideas want to be free!"


    D.
    ..is for Don't fight the Chaos!

  • by mik ( 10986 ) on Wednesday March 28, 2001 @12:28PM (#333236)
    WALID's submissions to the IETF (example) [ietf.org] begin with a statement that "This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026. "

    But RFC2026 [ietf.org] section 10.3.1 makes it pretty clear that any conforming submissions must disclose "... the existence of any proprietary or intellectual property rights in the contribution ..."

    Is this merely intellectually dishonest, or is it fraud?

  • Well, guess who just registered su ra tsu shi u do tsu to . org [suratsushiudotsuto.org]!? I'm amazed it hadn't already been done... :-D

    <blink>ALL YOUR JAPANESE BASE ARE BELONG TO US!!</blink>

    I mean, what can you get for twenty-five bucks these days, anyway...?

    Moderators & kiddies of all ages: please note the irony in this posting! (and what a sad day it is that I feel I have to write this disclaimer)
  • My own understanding of the article is that the IETF is using strong-arm tactics positively. It's not a matter of who came up with what; both sides claim that they innovated this idea. Furthermore, the claims of prior art people here are making, based on the fact that HTTP can handle encoded values, are not worth much.

    The important piece to understand is that the IETF, which has been given such tasks as making the Internet usable worldwide by various means of communication, has done two good things this month.

    1. Telling SSH (the company) to basically F- Off. By naming the protocol first and not pursuing trademark name infringement, ssh (the protocol name) has gained common acceptance and a change of protocol name would be confusing.
    2. Telling Walid to basically F- Off. IETF is the group to persuade if you have an innovative solution to a problem and if you attempt to strong arm them, they'll just say "eh, we'll figure something else out, thanks."

    I think that this is a very good stance for an open standards body. It should choose open solutions that don't require licensing. But of course, the IETF fails in some regards by not actively enforcing the standards in all situations. That's merely an aside, though. I don't think the IETF should be tasked with usage, because that's the job of the Internet Police.

    And as a personal aside to all comments regarding the U.S.-centric Internet and how it shouldn't be forced that way, I have one simple word: ARPANet. That's right, all of you non-U.S. whiners. We dominate the Internet, because from the beginning it has always been our bitch. Our standards body is being nice and giving you a chance to fully enjoy the playground we built. If anything, you should stop whining and bow down to the gods responsible for the Internet, your PC (invented in the U.S.), and the rest of the things that we did to create the Internet.

  • I can code, and I think your comment is ridiculous. This is a pretty obvious idea. If I were thinking of a way to encode foreign-language names so DNS could deal with them, the solution they patented is probably the one I'd come up with too. The only work they've really done is run to the patent office and pay them money.

  • This has to be one of the most insane schemes I have ever heard of. I am pretty sure the Walid company did not develop this solution, but yet they want to make money off of it. Whatever happened to a work ethic?
  • Sorry. There's decades of prior art. Unless tricycles don't count.


  • Public publication would probably count here. But you might need an army of lawyers to prove it. But this is a good argument for publishing open source prototypes of all feasible applications, sort of like the W3C Amaya browser. It doesn't need to work well; it just needs to be a proof of concept and publicly available. Then you can fix the bugs, or not. It's been made public.

    WARNING: IANAL. I didn't think that one had to go this far, but that was the clear implication of one of the earlier posts.

  • Patents as a surrogate for "first post!". What a concept! :-)

  • There have long existed children's tricycles that had steering wheels. Usually they were for quite small children and were made of plastic. And I think that they were only intended for use inside the house. But they exist and existed.


  • Prior art requires hardcopy publication for patent purposes. It does not do to have just a sample program or some internal documents.



    --
    Leandro Guimarães Faria Corcete Dutra
    DBA, SysAdmin
  • by dillon_rinker ( 17944 ) on Wednesday March 28, 2001 @11:39AM (#333246) Homepage
    $10,000 is nowhere near the actual price of patenting something like this.

    I just went through the process of pricing patents. A patent on a mechanical device was estimated in the tens of thousands of dollars - the actual patent fees charged by the USPTO plus the patent attorney's fees. An international patent was in the realm of six figures.

    Challenging that patent will cost millions of dollars. Have you written your check to the IETF to help defray their legal expenses, or are you more of a "Let's you and him fight" kind of guy? Me, I'm more of a "What a bunch of IDIOTS!" kind of guy. Welcome to the desert of the real...
  • If the IETF has been working on this for a while, then they should be able to contest it via prior art. From what I have read, this is not the case, which means that unless someone in the IETF working group looked at the Walid patent, this implementation should not be considered in any way "novel", as it was developed independently so quickly.

    Proving that an IETF collaborator didn't copy the patented idea may be difficult to do, but I doubt anyone would have pursued the idea knowing it was patented.

    Anyway, it seems like a very logical step to decide to use an internationalization standard to solve the problem, and not very "novel" anyway.

  • Honestly, I hope this brings down the current DNS internationalization attempts. Inventing a new Unicode encoding just to fit the current DNS character set is the 'wrong' way to approach this problem. I'd rather see everyone push through a year or so of temporary headaches as all protocols are converted to UTF-8, in order to prevent the more permanent headaches of user agents using different encodings than the lower layers.

    Imagine. All protocols sharing the same encoding... What a thought....
  • Not quite... engineering notebooks, if properly numbered and dated, are valid evidence for proving 'prior art', even if the information in those notebooks isn't published.

    Yes, it's a tough one to push through a court, but it has been done.
  • Not necessarily true. V.42 (LAPM), the error-correcting part of most modem transmissions, is patented by a UK firm.

    Although, that doesn't stop it sucking badly when a patented process is transformed into a standard. It's a license to print money, or, more exactly, a license to EXTORT money.
  • Walid's patent was filed in July 1999; whether the IETF was working on this before that time is the key question.
  • Right. Let Walid's fate be that of the town the railroad bypassed.
  • patent (noun) pronounced with a short A

    Only if you're American :)

  • DNS allows only about 5 bits per character in its encoding: the 26 English letters, the ten digits, and the hyphen. 16-bit Unicode (UCS-2) needs to be converted into these 5-bit characters to represent all the languages of the world. There is already a scheme published as an Internet Draft (draft-duerst-dns-i18n-02) which describes this process, dated July 1998. There are also other ways to internationalize domain names (draft-skwan-utf8-dns-01 and iDNS, to name a few). iDNS has been functional for a few years now.

    I do not know what this patent is about exactly. But considering it is dated July 1999, there are most probably a lot of alternative ways to internationalize domain names, if not outright prior art.

    Search google for those draft texts.
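
    To make the arithmetic concrete: 26 letters, ten digits, and the hyphen give 37 symbols, a little over 5 bits per character. A back-of-the-envelope sketch of such a packing; this is not the RACE algorithm from the draft above, just the same idea using the stock base-32 alphabet of letters and digits:

        import base64

        def pack_ucs2(name):
            # Serialize the name's 16-bit code units, then respell them in
            # letters and digits: 16 bits at ~5 bits per symbol comes to
            # roughly 3.2 output characters per original character.
            raw = name.encode("utf-16-be")
            return base64.b32encode(raw).decode("ascii").rstrip("=").lower()

        # pack_ucs2("bücher") yields a 20-character string of letters and
        # digits -- a legal, if meaningless, DNS label.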
  • There may be some other prior art on escape sequences in general.
  • This is a great example.
  • Yes, verbal communication of URLs is a problem, even in English. Often they must be spelled out.
    Certainly learning one alphabet is easier than having to deal with the European character variations, not to mention the pictographic languages. Hopefully all sites with anything of interest to those outside their language region will have the foresight to register an ASCII equivalent, but I'm not sure this will happen if it is an additional expense, and lack of foresight will probably rule.
  • Is it just me? Is the whole idea of multilingual DNS names just completely dumb? The difficulty is in entering or communicating domain names that are not in one of your own languages. Rendering different languages is one thing, and IMO a good thing, but for domain names, where everyone sometimes needs to type one in, any way you cut it ASCII is the worst solution, except for all the others. I can just imagine, on a trip to China, having to explain the local Chinese domain name just to get someone to FTP me something.
  • A new toplevel domain for UNICODE domains could be added. This domain would be implicit when UNICODE domains were entered by the user and added to the end of the domain before lookup. The TLD domain servers would be the ones extended to support this domain.

    Each domain could also be assigned a 7-bit-clean representation of the UNICODE name by doing a simple 16-to-7-bit conversion with any necessary padding. This is different from the UTF-8 conversion method proposed. These (nonsensical) domains would then be used as the 'real' domain names looked up by the resolvers (such as SMTP servers).

    Just a thought ...
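
    A sketch of what that 16-to-7-bit idea might look like, entirely hypothetical; as the poster concedes, the resulting labels are nonsensical, and the "uni" TLD below is invented for illustration:

        IMPLICIT_TLD = "uni"  # invented stand-in for the proposed new TLD

        def pack_16_to_7(name):
            # Concatenate the 16-bit code units of a (BMP-only) name into
            # one bit string, then reslice it into 7-bit-clean bytes.
            bits = "".join(format(ord(ch), "016b") for ch in name)
            bits += "0" * (-len(bits) % 7)  # pad out the final 7-bit group
            packed = bytes(int(bits[i:i + 7], 2) for i in range(0, len(bits), 7))
            return packed, IMPLICIT_TLD

        # The packed bytes are 7-bit clean but not printable, which is
        # exactly why the resulting "real" domains would be nonsensical.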
  • The company's name is Walid [walid.com], for those of you too lazy to read the article. I mention this because I found their slogan very amusing:

    "Welcome to the real world."

    Exactly. When is America ("Land of the Free") going to realize that its IP laws are strangling the very ideals upon which the country was founded, namely Life, Liberty, and Property? IP (as currently instituted) is not property; it is tantamount to theft. When I have the same idea as you, you do not lose your idea or your ability to use that idea; but when you prevent me from using my own ideas (which you happen to have had first and independently), you are doing something very similar to stealing.

    An oft quoted but important statement from Thomas Jefferson:

    If nature has made any one thing less susceptible than all others of exclusive property, it is the action of the thinking power called an idea, which an individual may exclusively possess as long as he keeps it to himself; but the moment it is divulged, it forces itself into the possession of every one, and the receiver cannot dispossess himself of it. Its peculiar character, too, is that no one possesses the less, because every other possesses the whole of it. He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me. That ideas should freely spread from one to another over the globe, for the moral and mutual instruction of man, and improvement of his condition, seems to have been peculiarly and benevolently designed by nature, when she made them, like fire, expansible over all space, without lessening their density in any point, and like the air in which we breathe, move, and have our physical being, incapable of confinement or exclusive appropriation. Inventions then cannot, in nature, be a subject of property. Society may give an exclusive right to the profits arising from them, as an encouragement to men to pursue ideas which may produce utility, but this may or may not be done, according to the will and convenience of the society, without claim or complaint from any body. Accordingly, it is a fact, as far as I am informed, that England was, until we copied her, the only country on earth which ever, by a general law, gave a legal right to the exclusive use of an idea. In some other countries it is sometimes done, in a great case, and by a special and personal act, but, generally speaking, other nations have thought that these monopolies produce more embarrassment than advantage to society; and it may be observed that the nations which refuse monopolies of invention, are as fruitful as England in new and useful devices.

    -- Thomas Jefferson

  • su ra tsu shi u do tsu to

    Man, I had a hard time wrapping my brain around that until I realised that the <sub> tags were missing.

    If reports from my family members are correct, neither of my obaachan will touch computers with a ten-foot pole :-).

  • Can I finally register høtmãîl.cöm and gòåtsè.ç×?

  • Walid's patent should be condemned under eminent domain. Even if it is a true invention (which, from what I saw, it's not), why can't the United States just declare that this piece of "property" is of sufficient public interest to take it away from him?
  • OMG, have you seen their website [walid.com]? They're trademarking phrases from The Matrix...
  • by Greyfox ( 87712 ) on Wednesday March 28, 2001 @11:06AM (#333266) Homepage Journal
    Isn't attending standards group meetings and patenting the ideas presented at those meetings a patented business model owned by Rambus?
  • But nobody seems to be asking: WHAT IS THE COST OF ACCEPTING A SECOND-BEST SYSTEM?

    The various encoding schemes - which Walid has apparently patented - ARE the 2nd best systems! Their only redeeming feature is that they can be deployed quickly with minimal disruption to existing clients and servers. Other than that, they're an ugly hack.

    If you were designing an Internationalized Internet Protocol suite from scratch, you wouldn't go near anything like them with a 10 meter pole!

    Do it right, even though it will be a long process to convert: UTF-8 on the wire.

  • The various encoding hacks are just that: HACKS. Their only redeeming feature is that they interoperate with existing clients and servers. This allows them to be deployed more quickly than some of the more thorough and robust approaches. Adapting IP and the higher-layer protocols to Unicode would truly internationalize the Internet, not just the presentation layer, which for all practical purposes is just the browser.

    It's more painful and will take a lot longer, but it is the right thing to do. Let Walid wallow in their worthless patent. Three cheers for the IETF thumbing their noses at the patent Nazis.

    "Oh, you patented that, well I guess Internationalization will take a lot longer then, because we are not going to use patented technology, no way, now how, unless it is licensed freely for all to use."

  • "Secondly, the vast majority of the world do not use English."

    But every desktop computer on earth uses ASCII letters out of the box.
  • aw, heck there are more things to comment about...

    "However, I still have to disagree. If you are likely to want the contents of a non-English URL, you're going to need a non-English viewer of some kind, and if you already have that, then the input method is only a tiny step beyond."

    The premise that only people speaking the language of a website will be interested in it is narrow-minded in the extreme. Several examples immediately spring to mind:

    1. how do you explain the existence of babel.altavista.com [altavista.com]?
    2. have you ever gone searching for a driver update for your newfangled computer doodad on some Taiwanese web site? now imagine trying to do that when the website name is in Taiwanese...
    3. what about tourists (American or Chinese) trying to reserve a train ticket, online, in Germany, a classically "German only" affair by your reckoning? No more tickets for net-savvy tourists because they can't type Os with umlauts?!?
    4. what do you do if you are trilingual? Dual boot Windows Me?!?

    "Thirdly, there would be terrible political consequences to forcing everyone to use ASCII. It would very quickly be perverted into an East versus West issue."

    This one is ridiculous. ASCII letters are the tool to navigate the web. Just like the browser and its back button. It is not ideal but it is still just a tool. Saying it will turn into a religious war is just plain alarmist.

  • The distinction is irrelevant... Any desktop system intended for web use is ASCII compatible in some manner, out of the box. In other words, the user does not have to fiddle with his computer to get it to work. As such, from the user's point of view, every desktop computer is ASCII compatible.
  • Adding multilingual domain names is bound to fail in the long run and will only create confusion and incompatibilities as it crashes and burns.

    For example, your average Russian user will not be able to type in a Chinese domain name; the Chinese user in turn will be unable to access a Japanese domain name; and the Japanese user in turn will not know how to see a German web site with that pesky umlaut somewhere in its domain name. The only thing this silly idea will do well is effectively fragment DNS and, as a result, the net.

    May this patent help this idea disappear before it causes any real damage...

  • Er, do you know what the IETF is?

    They're a standards body; they set standards for other people to use. If they'd included something patented in DNS, for example, everyone would have to pay the patent holder to develop a DNS server.

    Standards have to be free from third party IP claims to be successful, so if the IETF were to include this in a standard, it would simply go unimplemented.

  • Let me get this straight. Some guy spends on the order of $10,000 to get a patent.

    Now the body that is going to shape how international domain name resolution happens is going to refuse to seriously look into challenging the patent.

    Sure, investigating the validity of the patent could cost tens of thousands of dollars. Sure, a patent lawsuit can cost a million dollars. But nobody seems to be asking: WHAT IS THE COST OF ACCEPTING A SECOND-BEST SYSTEM? Even the people in charge do not seem to care. Maybe they don't have the money to fight. A good question would be "Why not?"

    If geeks can't confront solving problems regarding DNS in rational, cost-effective ways, then who will? Whether that patent is a good one or a bad one, or whether the patent system itself is broken, is beside the point. The point is that the patent is there, and the problem should be dealt with in a way that leads to a good technical system.

  • Preparation of a mechanical application averages around $4,000-$6,000.

    Preparation of a software, EE, or biotech application averages in the $8,000-$20,000 range, but is more often on the low end.

    Inventors note: the better your disclosure, the easier it is to get the work done.

    Filing Fee: $710

    Hassling with the patent office: $3000 average. The closer the prior art you gave to your attorney at the beginning is to the true state of the prior art at the application phase, the lower this number will be.

    Professional drawings up to PTO standards: $500.

    Issue Fee: $1,240
    3.5 year maintenance fee: $850
    7.5 year maintenance fee: $1,950
    11.5 year maintenance fee: $2,990

    The maintenance fees need only be paid if you want to maintain the patent. If you can't commercialize it - don't pay the fees.

    I said on the "order of $10,000"; it might have been 15k. I doubt seriously that they paid 20k unless they were being reamed or self-screwing.

    International applications can be really expensive. Why?

    1) Hey, there are like 100 countries to get patents in. If you get a patent in each, each wants its issue fee.

    2) For some reason each country wants the patent in its native language. That means, yep, lots and lots of translations. For getting technical documents (like a patent) accurately translated, you can be looking at $100 a page.

  • sounds good to me. any ideas for a port number? let's get started, no use in sitting around. hey, while we're at it, why don't we come up with an open, standards-based naming registrar and do away with NSI and this patent squabble in one fell swoop? dibs on microsoft.com in the new DNS...
  • by startled ( 144833 ) on Wednesday March 28, 2001 @11:47AM (#333296)
    Yes, but the patent was granted earlier this year, which means it was applied for quite some time before.

    If they hope to challenge it, they'll have to go with obviousness, which the patent obviously is. We did exactly what they patented to slap our i18n info into our databases, which didn't accept double byte characters. It's a trick i18n firms have been using for years, and, as usual, it's astonishing that someone could get a patent on anything this obvious.

    Here's the heart of their patent: "The domain name is converted to a standard format which can represent all language character sets, such as UNICODE. The UNICODE string is then transformed to be in RFC1035 compliant format." Or, according to the article, "This solution involves converting foreign language characters into Unicode, a computer industry standard, and then encoding them in U.S. ASCII for transmission over the Internet."

    There's an obscene amount of prior art for this, applied to databases, DLLs, and pretty much anything you can think of, except DNS. Even the courts usually don't allow a patent to stand that's simply old hat applied to a slightly tweaked problem.
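
    The database trick alluded to here is easy to reproduce. A generic sketch of stuffing Unicode into an ASCII-only column; no particular firm's code, and the backslash-u escape is just a common convention:

        def escape_for_ascii_column(text):
            # Same shape as the patent claim: normalize to Unicode, then
            # re-express everything outside ASCII as an ASCII escape.
            out = []
            for ch in text:
                if ch == "\\":
                    out.append("\\\\")  # double it so decoding stays unambiguous
                elif ord(ch) < 128:
                    out.append(ch)
                else:
                    out.append("\\u%04x" % ord(ch))  # BMP assumed for %04x
            return "".join(out)

        # escape_for_ascii_column("bücher") -> the 11 ASCII characters
        # b \u00fc c h e r, safe for a single-byte database column.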
  • Because it breaks DNS as is. The filtering trick is the best retrofit because nobody back in the day thought being 16-bit clean would be particularly important...

    Wait a minute, it's the Y2K bug all over again! Run for your lives! Stockpile food! Imminent death of the Internet predicted! Get this freakin' straitjacket outta my *mmmfmwf*!

    I don't know... if you want to strike a happy medium, how about just making the whole mess *8-bit* clean and base everything off of ISO Latin-1? Okay, this particular proposal is a bit more complicated than that, but it might work and wouldn't require *major* retrofitting...

    Essentially all domain names are archived in romanized forms using a standard transliteration form. Domain names are issued with reference to the ISO Latin-1 standard, i.e. they could be using any character set but are treated as if they're Latin-1. (Granted this would look a little strange for, say, websites with Russian domain names, and the whole thing isn't too well thought out, but it's just a quickie sketch of an idea, not an RFC draft...)

    The actual domain name transliteration would most likely be done client-side or using an intermediate server that complements the existing DNS system.

    /Brian
  • by iritant ( 156271 ) <lear&ofcourseimright,com> on Wednesday March 28, 2001 @11:38AM (#333299) Homepage
    Prior to this patent coming to light it had been suggested that it was time to do something different with regard to DNS.

    John Klensin, the chairman of the IAB, presented to the IDNwg in December, at which time he dropped what I would describe as an intellectual bomb, by saying that the international train had already left the station, and that IDN shouldn't hurry to get a temporary solution or incremental fix out the door. Instead, he said that the IETF should strongly consider a real directory-based solution. He's probably right.

    While it is patently obvious how to encode domain names, what someone in a foreign country would see is gibberish.tld. That's not what you want someone to see. You would rather have people see your domain name translated into languages they understand.

    Of course, a real directory would be nice, but we've tried it several times before, and without much success. Remember about two years ago, directories were all the buzz? Now where are they?

    Also, as most here know, this is an area where politics and economics really make things ugly. So no matter what system we come up with, we'll see companies like Verisign do their best to win another land grab.
  • by Keighvin ( 166133 ) on Wednesday March 28, 2001 @10:43AM (#333303)
    This is starting to resemble the initial dot-com fever that swept corporate America. Anything that had those magical four characters in its name was new in an exciting way, and so must have been a sure shot. We know what happened there.

    The new craze is this IP (two NEW magic characters!) suit stupidity. Any possible patent is sought out and somehow granted, just to say "We were first, so there." It's not even so much about licensing fees as revenues, which most of these attempts have never generated in any significant form. This is about having control, being able to swing a legal club and act big.

    Won't it be nice when corporations have a clue again? I'm sure by then there will be another way of feeling all-powerful and amazing after this one wears off, so I guess that's still wishful thinking. Technology makes things different, and on many fronts easier - but no portion of it is miraculous. You still have to work.

    In other news today, stupidity is rampant on the internet.
  • If they can prove that they were working on this system before July 21, 1999, then yes, it would. I am assuming, from the fact that they say in the article that they are not going to test this in the courts, that they do not have really good proof of this. (Otherwise I can't think of any good reason they don't take these people to court.) And as much as I wish these people would FOAD, it looks like this one would hold up.
  • Every widely used language has a transliteration method to ASCII characters. They might be ugly, and they might be unpopular, but they work.

    ASCII is just easier.

    I take your point. Better to use a simple lowest common denominator that everyone can use, rather than a monstrosity that isolates people.

    However, I still have to disagree. If you are likely to want the contents of a non-English URL, you're going to need a non-English viewer of some kind, and if you already have that, then the input method is only a tiny step beyond.

    Secondly, the vast majority of the world do not use English. Granted, the English/Roman alphabet covers many more languages than just English, but I would still be surprised if the Roman alphabet is recognizable by a majority of the world.

    Thirdly, there would be terrible political consequences to forcing everyone to use ASCII. It would very quickly be perverted into an East versus West issue.

    Finally, if no standard for non-English languages is defined now, it will happen later nevertheless, except that each country will be setting up their own separate network for their own use. This would isolate people more, not less.


  • You have valid points, although I think you took my language more strongly than I intended. (I stopped short of saying there would be a religious war, for instance.)

    However, I've come to the conclusion that my final point (which, unfortunately, came to me last) is the most important. If we don't set up some kind of international standard now, each group of non-English speakers will go away and do their own (incompatible) thing.

    If we let this eventuality come to pass, the problems you pointed out in your post would be multiplied. In addition to your input problems, you might have to switch to a different OS, install different software, and maybe even connect to a different ISP.

    (Tho' not if you run Linux, of course. :-) Well, you might still have to use a different ISP.)

    If we establish a common technology, we can deal with the difficulties of language. But if we don't, we'll have to deal with both the difficulties of language, and technology.


  • by DeadVulcan ( 182139 ) <dead.vulcan@pob o x .com> on Wednesday March 28, 2001 @12:05PM (#333311)

    The big problem with your reasoning is that it's a double-edged sword. The difficulties you're predicting as an English speaker are exactly the difficulties that non-English speakers are trying to deal with right at this moment.

    For instance, I'm of Japanese descent. I can only imagine the difficulties of some older Japanese people who know no English, when they have to type in domain names.

    "Microsoft," for example, would be written phonetically, but it would come out like this in Japanese:

    ma i ku ro so fu to

    Even after having described the English alphabet, how do you explain that the mangled name above (it's the closest you can get in Japanese) is to be written like this:

    microsoft

    Want an even worse example? Try slashdot:

    su ra sshu do tto

    Actually, that's too phonetic. If I wrote the equivalents of the actual Japanese letters, it would come out:

    su ra tsu shi u do tsu to

    Don't ask. It's a quirk of Japanese spelling.


  • by Decado ( 207907 ) on Wednesday March 28, 2001 @10:48AM (#333317)

    They patented a method to convert text from an arbitrary charset to UNICODE, and then convert this to plain text as U+XXXX, where XXXX is the hex value of the Unicode character. That's a basic first-year programming assignment and definitely not an innovation. The first part is obviously crap, as converting from one charset to another is not a new idea (hell, Java has done it for nearly every charset in existence for years). As for the rest, does outputting a Unicode character in hex count as an innovation? Is there anyone here who works with Unicode and hasn't done the same thing as part of debugging an application?

    What's next, patenting a method for converting from an arbitrary number base to decimal and then outputting it in hex? The stupidity of the USPTO boggles the mind.
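
    For scale, here is the entire "invention" as described, with Python's codec machinery standing in for whatever conversion tables are involved. A deliberately trivial sketch:

        def to_u_notation(raw, charset):
            # Step 1: arbitrary charset -> Unicode. Step 2: each character
            # -> its "U+XXXX" hex spelling, in plain ASCII.
            return " ".join("U+%04X" % ord(ch) for ch in raw.decode(charset))

        # to_u_notation(b"\xc2\xc1\xce\xcb", "koi8-r")
        #   -> "U+0431 U+0430 U+043D U+043A"  (Cyrillic "банк")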

  • Couldn't this be shot down by almost any URL encoding scheme, including CGI? It seems that the debacle is about encoding Unicode in ASCII. How is this any different from encoding special ASCII characters like ' ' into '+'? They are just encoding slightly different input data.
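
    For the parallel being drawn here, this is CGI-style escaping with the standard library; the mechanics are the same in both cases, awkward characters re-expressed as safe ASCII:

        from urllib.parse import quote, unquote

        # ' ' -> '+' is the form-encoding variant; the general CGI rule
        # turns each UTF-8 byte into a %XX triplet.
        encoded = quote("bücher")    # -> 'b%C3%BCcher'
        original = unquote(encoded)  # -> 'bücher'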
  • by RedLaggedTeut ( 216304 ) on Wednesday March 28, 2001 @11:41AM (#333325) Homepage Journal
    Delphion link: US 6182148 [delphion.com]

    As usual, the patent is also available at the US Patent Office [uspto.gov]

  • Brain: All I have to do is patent their method of translating funny squiggles, and everybody will have to pay me in order to use the Internet! Hah-ha-ha!

    Pinky:Yeah. You'll be rich.

    Brain: The IETF never violates patents! I'll make gazillions!

    Pinky: It says here that they never pay patent fees either. This says they will work on an alternative solution, which doesn't use your patents. What does that mean, Brain?

    Brain: What? Let me see that! Curses! Foiled again!

    The only people who are really and truly screwed are Walid's investors, or other poor bozos who never looked at the IETF's track record. Simple answer: refuse to hire the WALID folks when they go bust. Tell every company that the WALID folks are the reason the Internet will be harder to use over the coming years. It's not about getting mad. It's about showing the chickens where to roost.

  • Uh, no. It is expressed entirely in the English character set, and that's what this is all about: using non-English character sets in URLs. RTFA.

  • by SVDave ( 231875 ) on Wednesday March 28, 2001 @11:28AM (#333330)
    So, why don't we let them have their bullshit patent and design DNS 2.0, which uses Unicode throughout? Just throw it on a different port and supersede RFC 1035. Since the whole patent hinges on that particular RFC, a new system that's not backwards compatible with the old one would not violate the patent.

    Good idea! I'm glad I thought of it. Now, if you'll excuse me, I need to make a quick trip to the Patent Office...

    (and while I'm there, maybe I'll trademark "DNS 2.0")
  • Both rely on patented technology, and yet both have managed to become de facto standards on the Web.

    Trolls throughout history:

  • OK, let's put aside any patentability issues and say that they have patented a method of converting arbitrary domain names to US-ASCII domain names (gee, I thought MIME did something very similar with headers; see the sketch below. I must be mistaken).

    What they're missing is that for their patent to become valuable, that particular conversion method must become a standard. Now, you can have two kinds of standards: de facto (like Microsoft's) or de jure (like the IETF's). No self-respecting standards body is going to standardize a patented method, so their only hope lies in selling the patent to Microsoft. And I don't think even Microsoft has enough clout to make a new de facto DNS standard...
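
    The MIME comparison is apt, for what it's worth. Here is a quick sketch using Python's standard email library (an illustration of RFC 2047 encoded-words only, not of Walid's method):

        from email.header import Header

        # An RFC 2047 encoded-word: non-ASCII header text rendered entirely
        # in US-ASCII; conceptually the same trick as an ASCII-compatible
        # encoding of a domain name.
        print(Header("ドメイン名", charset="utf-8").encode())
        # -> =?utf-8?b?44OJ44Oh44Kk44Oz5ZCN?=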
  • And on another note, who exactly would pay the licensing fees that Walid is asking for?

    That would be makers of DNS servers. That's why the IETF will not standardize on a patented process.

    Couldn't they just cite their own research as prior art and have Walid's patent tossed out?

    It would most likely have to have been "reduced to practice" (more or less implemented) before Walid's patent was granted. An exception to this rule is when the product cannot be reasonably implemented before the patent is granted, and the creator can show that it can work. I hope that the IETF has a hacked-up nslookup or something that predates the Walid patent.
  • Universal??? What's universal about supporting multiple languages? In case someone wants to accuse me of being English-centric: my point isn't which language, but that there should be one standard language for the Internet.

    I guess I just advocate standardization. And while we're at it, why do we need more TLDs before we have a directory service to replace DNS? Here's a challenge: find the URL for National Semiconductor, Inc. Now, how easy would that be with DNS if you had twice as many TLDs? Adding TLDs without first replacing DNS is putting the cart before the horse, making the Internet more difficult to use and LESS universal... just like having international URLs.

    ...and no, I don't like the idea of patenting software any more than you do; I just don't think 'universal' and multilingual (international) correlate. Rather, internationalization is antithetical to universal standardization.

  • First, some of the documents that Walid submitted to the IETF working group on internationalized domain names are here [ietf.org] and here [ietf.org].

    Now, I won't comment on the contents of these documents, except to note that one of them was submitted to the working group after the patent was granted, without any mention of the patent itself.

    From the looks of things, the IETF only started publishing work on internationalization of domain names in 2000, so the prior-art argument looks to be moot: Walid's patent application was filed in July 1999.

    I have to agree with the IETF's stance. Pushing patented technology (no matter how dubious the patent) as a standard is akin to forcing Internet users to pay a tax. I applaud the IETF's statement: license the technology for free, or we'll just use something else.

    -----

    patent (noun) pronounced with a short A: A grant made by a government that confers upon the creator of an invention the sole right to make, use, and sell that invention for a set period of time.

    patent (adjective) pronounced with a long A: Obvious; plain; synonym of apparent.

  • Granted. However, it would seem that the IETF's minutes may contain information that could be construed as prior art. This is assuming, of course, that the IETF publicly released those documents.

    Dancin Santa
  • by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Wednesday March 28, 2001 @10:35AM (#333349) Journal
    If the IETF has been working on this solution for so many months, they must have some evidence to show for it. Couldn't they just cite their own research as prior art and have Walid's patent tossed out?

    And on another note, who exactly would pay the licensing fees that Walid is asking for?

    Dancin Santa
  • So where was Walid during all the time the international community has been asking for this? It sounds to me like they fleshed out the technology after the standards committee started working, then demanded cash for an idea so obvious it was independently created.

    Unfortunately, this is a problem with open-source and standards-based development: the developers spend their time solving the problem well instead of filing patents, and then get scooped by idea stealers. Something should be done. Personally, I'm in favor of giving obvious-patent holders the finger.

    cryptochrome
  • Sounds to me like WALID was trying to ride the money train.

    http://www.google.com/search?q=cache:ncdnhc.peacenet.or.kr/2000911/0252.html+walid&hl=en

    --
    J. Douglas Hawkins, of WALID, Inc., Elected to Inaugural Multilingual Internet Names Consortium Board
    [Business Wire]

    ANN ARBOR, Mich.--(BUSINESS WIRE) via NewsEdge Corporation -- J. Douglas Hawkins, Director of Business Development at WALID, Inc., has been elected to the Inaugural Board of the Multilingual Internet Names Consortium (MINC). The Inaugural Board, which includes members from industry and academia, will promote the implementation of multilingual Internet names, including domain names and keywords. The Board will work together with the Internet Engineering Task Force (IETF), the Internet Corporation for Assigned Names and Numbers (ICANN) and other international organizations to encourage and direct the transition to a truly multilingual Internet.

    Mr. Hawkins brings to the Board his extensive experience overseeing the development and internationalization of products used in over 70 countries and in seven different languages. The Board also benefits from Mr. Hawkins' many years of experience working successfully with United Nations' Specialized Agencies, national governments and regional organizations.

    About WALID: WALID, Inc., headquartered in Ann Arbor, Michigan, provides a complete patent-pending solution for registering and resolving international multilingual Internet domain names. Through the registration of non-English domain names, WALID's technology enables businesses, organizations, and individuals to establish a unique presence in any language from which to communicate and conduct global commerce via the Internet.

    WALID, Inc., 2245 S. State St., Ann Arbor, MI 48104, USA

    About MINC: The Multilingual Internet Names Consortium (MINC) is a newly formed international consortium created by industry, academia, research, government and non-governmental organizations, with the common goal of advancing the development of the Internet through the coordination of the technology and technical deployment of multilingual Internet names. MINC originated from a Chairman's commission on internationalized Domain Names (iDNS) by the Asia Pacific Networking Group (APNG), Asia's oldest Internet organization, partially supported by the National University of Singapore (NUS) and the International Development Research Centre (IDRC) of Canada. For more information about MINC, please see or send an email to sec@minc.org.

    CONTACT: WALID, Inc. | Bryan Cash, 734/822-2028 | Fax: 734/822-2021 | bcash@walid.com

    Here is the link to the patent:

    http://www.delphion.com/details?pn=WO00056035A1

  • This patent looks like an ugly one, at the very least (but then again, software patents aren't the kind of thing that wins beauty contests). So, why don't we let them have their bullshit patent and design DNS 2.0, which uses Unicode throughout? Just throw it on a different port and supersede RFC 1035. Since the whole patent hinges on that particular RFC, a new system that's not backwards compatible with the old one would not violate the patent.
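
    A minimal sketch of what such a label might look like on the wire (entirely hypothetical; it assumes labels stay length-prefixed as in RFC 1035, just carrying raw UTF-8 with no ASCII conversion step for a patent to read on):

        # Hypothetical "DNS 2.0" label: a length octet plus raw UTF-8 bytes.
        label = "ドメイン"
        raw = label.encode("utf-8")
        assert len(raw) <= 63  # keep RFC 1035's 63-octet label limit
        wire = bytes([len(raw)]) + raw
        print(wire.hex())  # 0ce38389e383a1e382a4e383b3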
  • by nate1138 ( 325593 ) on Wednesday March 28, 2001 @10:35AM (#333364)
    This is a perfect example of why you should not be allowed to patent software: here's a group trying to create a more universal Internet, and they get knocked back by a software patent that stifles innovation. I wonder how much sooner internationalization would have been complete if not for this roadblock.

"Why should we subsidize intellectual curiosity?" -Ronald Reagan
