IBM stamping ID's into new PC's
IBM may not have grasped the lesson of Intel's failure here. Attention IBM: I have been a religious Intel owner. Just the other day I bought several computers with AMD chips instead of Intel P-III's, because I don't want to be tracked - so as long as Intel wants to track me and there's anybody else in the chip-making business, Intel won't be getting my business. You just don't realize that people take their computers seriously - they don't want them ratting on them to every website they visit, they don't want them informing on them behind their back, they don't want Clipper chips performing insecure e-commerce "encryption" for them. It sounds (and of course IBM is releasing this tomorrow, so this is preliminary) like IBM has created a proprietary, closed system, which very probably includes a back door for U.S. law-enforcement access, because otherwise IBM would have trouble exporting it worldwide. Only pointy-haired bosses are going to want to purchase such things. -- michael
Good intentions, bad solutions... (Score:1)
And I'm not worried about a privacy invasion from IBM. First of all, I don't use an IBM machine. Secondly, I know that software can always circumvent this type of stuff. Third: I think IBM actually had *good* intentions, but made a few mistakes in carrying out their intentions.
I don't think IBM's major motivation is to spy on users or create an invasion of privacy. I think that they want to motivate online purchasing, etc. And, hey, they're trying to get people to use crypto-- which isn't *all* bad, is it? So the intentions are good...
But the solution is bone-headed at best. Embedding a chip in the computer that will perform digital signature and encryption operations is a really inefficient and stupid way to go about encouraging the use of crypto.
First of all, why hardware? It's just as easy to implement the crypto in software. And software encryption can be much more flexible, handling larger key sizes for the ultra-paranoid, or forty-bit keys for the clueless.
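For illustration, a toy software cipher (not cryptographically vetted; purely a sketch with made-up names) shows how software imposes no fixed key length:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data against a SHA-256-derived keystream.
    NOT cryptographically vetted -- the point is only that software
    imposes no fixed key size."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

msg = b"attack at dawn"
short_key = b"12345"      # 40 bits, for the clueless
long_key = b"x" * 512     # 4096 bits, for the ultra-paranoid
# The same code handles both; decryption is the same XOR operation.
assert keystream_xor(short_key, keystream_xor(short_key, msg)) == msg
assert keystream_xor(long_key, keystream_xor(long_key, msg)) == msg
```

A hardware implementation, by contrast, ships with whatever key length was etched in at the factory.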
Second of all, why integrate it into the computer? Okay, so you want to do it in hardware (in spite of its lack of flexibility). Why not distribute PC's with a dongle that plugs into a USB, parallel, serial, or Firewire port? That way, those of us who don't trust the damn thing can at least get rid of it.
Finally, why the hell would you do this when there was so much controversy over the PIII ID? I would figure that IBM has some good PR and advertising folks-- how did this one slip out the door?
Really, let's not jump on IBM. Applaud them for trying to encourage the use of crypto amongst the masses. Then scold them for raising the alarmist ire, and for not quite thinking the whole thing through.
Re:Stupid! (heh) (Score:1)
Once again, privacy freaks go nuts.
The price of freedom is eternal vigilance
Yes, it's small, and yes, it seems innocuous. Maybe it even is. But I'd rather have "privacy freaks" raise a stink now than risk waking up one fine day and wondering what happened.
Calm down (Score:2)
I can't believe that the original poster is talking about back doors and closed systems based on nothing but wild-eyed speculation.
I realize it's a radical thought for some people around here, but let's get our facts straight first before we start deciding What It All Means, OK?
What this sounds like... (Score:2)
signing. This could either be:

Very good if it uses standard, verifiable hashes and encryption algorithms. If it does indeed do encryption faster, this is a good thing. Especially if IBM gets export licences for stronger keys.

Very bad if it uses proprietary, unverifiable algorithms, perhaps ones that don't fully use the key information, so as to make it easier to crack your important e-mail.
The article is pretty vague.
Question about reading chip ID's: Are these
privileged or un-privileged operations?
-- cary
Doesn't matter who is spying (Score:2)
Let's leave watermarking out of computers.
----
But if We Control The Software... (Score:5)
Details on precisely what instructions are involved would presumably be necessary; if one is running Linux, then actually using the instructions requires that someone convinces you to install software compiled with the "Evil Privacy-Killing Instructions."
This will fall high on the list of Things Ulrich Drepper Won't Add to GLIBC; it is equally likely to represent Instructions Unlikely To Be Added To the GCC Code Generator.
Note that this furthermore represents Instructions That Aren't on PPC which would encourage the purchase of PPC-based systems or Alpha-based systems...
Re:Stupid! (Score:1)
"The number of suckers born each minute doubles every 18 months."
Re:Uhh.. there's no such thing (Score:1)
If you sign up for a Slashdot account, the default posting behavior is saved for you. We even have the ability nowadays to post anonymously via a checkbox...
Re:Shortsighted: Viruses? Trojans? Spoofers? (Score:2)
You can't, for precisely the reason you indicate. Anyone considering this information to be an authentic ID is smoking crack.
Fortunately, this chip isn't about sending your "ID" all over the 'Net. It's about cryptography and digital signatures, which are a bit harder to forge than a simple ID.
Along the trojan/virus thread, why in the world would somebody write such a virus? The only data this chip would attempt to make available is perhaps the public encryption key, which is designed to be put out into the public anyway. I don't see the big privacy problem here. A legitimate example of a privacy-invading virus would be one that watches the system and constantly reports where the machine's user is browsing, what they're doing, what documents they have, etc., but this can be done with or without a cryptography chip such as this.
I suppose a trojan could use the chip to digitally "sign" something the user didn't intend to sign, but re-read the article: a user PIN (password) is allegedly required to activate this chip. *shrug*..
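The PIN-gating described in the article might be modeled in a minimal sketch (class and method names are hypothetical; the "signature" here is a stand-in HMAC, not whatever algorithm the real chip uses):

```python
import hashlib
import hmac

class PinGatedSigner:
    """Hypothetical sketch of a PIN-gated signing chip: signing is
    refused until the correct PIN has been presented by the user."""

    def __init__(self, pin: str, key_material: bytes):
        self._pin_hash = hashlib.sha256(pin.encode()).digest()
        self._key = key_material          # never leaves the "chip"
        self._unlocked = False

    def unlock(self, pin: str) -> bool:
        attempt = hashlib.sha256(pin.encode()).digest()
        self._unlocked = hmac.compare_digest(attempt, self._pin_hash)
        return self._unlocked

    def sign(self, message: bytes) -> bytes:
        if not self._unlocked:
            raise PermissionError("PIN required before signing")
        # Stand-in for a real signature scheme: an HMAC over the message.
        return hmac.new(self._key, message, hashlib.sha256).digest()

chip = PinGatedSigner("1234", b"secret key material")
try:
    chip.sign(b"pay $100")        # a trojan calling this gets refused
except PermissionError:
    pass
chip.unlock("1234")               # explicit user action
signature = chip.sign(b"pay $100")
```

The open question, as noted, is whether a trojan could capture or replay the PIN entry itself.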
Re:oh goody! proprietary encryption! (Score:2)
I think it's a pretty safe bet they're using existing cryptographic systems. An earlier post said they were using RSA algorithms, but I haven't been able to verify that myself.
Uhh.. there's no such thing (Score:2)
Sure, this solution is secure, but it's not *as* secure as other, unexportable alternatives. In ten years, "real security" will mean something entirely different. The original poster was using the term "real security" by saying the key sizes allowed by this chip were inadequate for truly sensitive data. I was simply saying that IBM is not marketing this mechanism for people that regularly make use of truly sensitive data.
Read the article if you haven't already. This is all discussed there.
Re:The Irony, and Lifespan of a Chip? (Score:2)
I'm confused. The only thing this chip does is provide encryption and digital signature services to applications. You will need a software-based PIN/password to access these features. I don't see how this allows IBM and its "evil" minions to "get at" your hardware. Am I missing something?
On another note. Isn't an embedded security device likely to go obsolete pretty rapidly? Then what, we have to buy a whole new motherboard instead of just installing the latest version of the software? That sucks.
All hardware-based cryptography products will be "obsolete" in short order. Can they be upgraded? Not without changes in US export laws.
It's certainly possible this chip is replaceable as cryptography improves in the future.
How easy would it be to pry the sucker off?
Hey, suit yourself. It's just hardware-based encryption and digital signatures. The same sort of stuff I'm doing with PGP in software today. The only data that can be made public via this chip is your public key, which is something I make an *effort* to make public while I'm using PGP. I really don't see what all of the fuss is about. If you don't want to use it, just don't use it. If you feel like you don't want to buy from them, fine.
Re:What this sounds like... (Score:2)
You're right, though: the article is pretty vague. But surely they're using a cryptographic standard.
Question about reading chip ID's: Are these
privileged or un-privileged operations?
What "ID's" are you talking about? Do you mean the public key? Does this really matter? The whole point about public/private key cryptography is to make the public key as widely known as you need it to be.
The article explicitly mentions you'd need a software-based PIN/password to access features of this chip, so I don't imagine these services will be available to any application unless you explicitly authorize it.
What "ID" is everyone talking about? (Score:2)
If someone wants to write an evil privacy-invading trojan program that secretly tracks your every move, it's probably in their best interests to use any of the other ID mechanisms already on your machine, like the MAC address, Windows registration codes, e-mail addresses in your e-mail clients, etc., etc.
Besides, the article explicitly states that you'd need to enter a PIN/password of some form to use features of this chip. Now, I have no idea if it's possible to circumvent this, but you'd think IBM would have done a bit of thinking and planning prior to now, yes? *shrug*..
In short, the potential for privacy abuse is virtually nil, and it's effectively zero compared with other methods for identifying and tracking you that already exist in software and hardware. I don't see any viruses, trojans or rogue software companies out there making use of those, do you?
Re:Less of a privacy issue than a security issue (Score:2)
So everything made on a computer can be traced to that computer.
This isn't correct at all. The digital signing/encryption process requires the user to enter a PIN/password. The user must *explicitly* make the effort to digitally sign a document or to encrypt data. This isn't something that can just be hidden in the background for malicious or rogue software companies to take advantage of.
Though to be fair, it's certainly possible that this PIN requirement could be bypassed by a trojan/malicious coder. I'd be interested to hear how IBM plans to keep that from happening.
Furthermore, what happens when 128-bit keys are no longer secure enough and you need to move to 256-bit keys?
I believe a previous poster mentioned that this chip was capable of 256-bit encryption and digital signatures up to 1024 bits. Granted, it will be obsolete in several years, but it's more than sufficient for items not of a super-sensitive nature. The article explicitly states that it should be adequate for around 80% of their customers. The remaining 20% apparently have needs for stronger encryption and either won't use this hardware chip, or will use it in conjunction with something else (as the article states).
Nobody's *requiring* this chip to be used. The whole idea is that the hardware chip completely hides the private key, making it impossible to recover by software (thus exposing data encrypted with it). Yes, it will be obsolete in time. So will existing software solutions. If you don't want to use hardware cryptography, don't. If you don't want to use software cryptography, don't.
As far as tracking users goes, I can think of much better ways to construct evil programs and trojans to do this job much more effectively, none of which require that the user have a motherboard with one of these chips. Privacy and security issues here are minimal at best.
Re:CPU-based identity intrinsically flawed (Score:2)
the software can be used to track people wherever they go
A PIN/password is required to activate features of this encryption chip. Thus, encrypting or digitally signing something requires explicit user intervention.
There is no "ID" that is sent out by evil software. The only thing I can think of that might work in this fashion would be the public key, which is meant to be distributed anyway. If I were writing a trojan or an evil program to track users, I can think of a few better ways of doing this than relying on something only a small percentage of consumers is going to have available (like, say using the MAC address, Windows registration codes, e-mail addresses, etc., etc.)
Re:oh goody! proprietary encryption! (Score:2)
By placing the private key in *hardware*, it is no longer accessible to software. It is impossible to recover a hardware-based private key via software.
The only way a hardware-based key can be discovered is if it's cracked. Seeing how distributed.net has been working on cracking the latest 64-bit RC5 key since the latter part of 1997, I don't think we have to worry about these hardware keys being cracked any time soon.
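Some back-of-the-envelope arithmetic behind that claim (the keys-per-second rate is a hypothetical assumption for illustration):

```python
# Back-of-the-envelope cost of brute-forcing a key, assuming an
# attacker who can test 100 billion keys per second (a generous,
# hypothetical rate).
RATE = 100e9
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_search(bits: int) -> float:
    """Years needed to try every key in a keyspace of the given size."""
    return 2 ** bits / RATE / SECONDS_PER_YEAR

# Each extra bit doubles the work; 64 bits is already years of effort,
# and 128 bits is out of reach for any conceivable hardware.
assert years_to_search(64) > 5
assert years_to_search(128) > 1e19
```

This is why distributed.net's multi-year grind on a single 64-bit RC5 key is telling: every additional bit doubles the search.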
Re:My $.02 (Score:2)
What makes you think this is possible? By storing the private key in hardware, it becomes impossible to access via software.
The only way the key could be discovered is by a cracking effort. At 256-bits (as one poster indicated for encryption, and 1024-bits for digital signatures), it's going to take a long time for that to happen.
How are you going to be able to communicate to the powers that be that your key has changed, and not only that, you could just change your key and all your new transmissions would be unreadable...
Uhh, the same way that people do it today with software encryption products (like PGP). Just pass out your new public key and stop using the old key pair.
Better yet, J. Smith over here invents a utility to reflash the chip with an arbitrary "identifier" and people can now pose as you
You assume that this chip can be "upgraded". It's quite likely that this chip is entirely hardware-based. No "flash" upgrade at all. That would leave it open to the attack you mentioned. The whole idea is to keep the chip completely isolated from software.
Re:OSS doesn't help here... (Score:2)
To suggest that web site owners will start requiring people to use these keys is totally absurd. Why in the world would web site owners voluntarily reduce their client base to less than 1% of its current base (those that have machines with these chips)?
People are using the same arguments they used against the Intel PIII CPU ID thing, when really the two situations aren't alike at all.
If you don't want to use the encryption offered by the hardware, DON'T. Stick with PGP or whatever other software-based solution you're using today. The only difference is that in the hardware implementation, it becomes impossible for trojans/viruses/malicious programs to steal your private PGP key.
Why? (Score:2)
The only reason this sort of thing worked with older mega-server architectures in the past is because those platforms didn't have the upgrade rate of today's PC's. Plus, even if an upgrade *was* performed, all you usually had to do was contact the software vendor and let them know. A new software key was re-issued in short order.
With the upgrade rates of today's systems, I can't imagine a software company volunteering to create a staff of people set up to handle the enormous volume of requests for new keys as people upgrade hardware.
Re:Good intentions, bad solutions... (Score:2)
The whole point behind using hardware crypto is that it's impossible for software to recover private keys that are stored in hardware. With software-based crypto, there's always the (small) chance a trojan/virus will discover and recover your private encryption keys.
Finally, why the hell would you do this when there was so much controversy over the PIII ID? I would figure that IBM has some good PR and advertising folks-- how did this one slip out the door?
Because this crypto chip has nothing to do with ID's. All it does is provide encryption and digital signing services. To use these services, you must provide a PIN to software, which enables the features. It becomes an explicit user-initiated process, not something that can be maliciously hidden in the background.
The whole point is to allow you to digitally sign and encrypt data. What's the point in building a hardware system if malicious code could digitally sign stuff on its own, without your approval?
Re:Hrm.. (Score:2)
So long as the remote end has *some* public key that represents your system, they can verify your messages and validate your signatures.
The difference between this hardware scheme and existing software schemes is that it's theoretically possible for a malicious program to obtain your private keys stored on your system. It's not possible to do this if these keys are stored in hardware.
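The verification half of this can be sketched with a toy RSA signature (textbook-sized numbers, purely illustrative; a real key has hundreds of digits):

```python
# Toy RSA signature. The private exponent d signs; anyone holding the
# public pair (e, n) can verify, which is all the remote end needs.
p, q = 61, 53
n = p * q            # 3233, the public modulus
e = 17               # public exponent
d = 2753             # private exponent: (e * d) % lcm(p-1, q-1) == 1

def sign(m: int) -> int:
    return pow(m, d, n)          # done privately; d never leaves the signer

def verify(m: int, sig: int) -> bool:
    return pow(sig, e, n) == m   # anyone can do this with (e, n)

msg = 42
sig = sign(msg)
assert verify(msg, sig)          # genuine signature checks out
assert not verify(msg + 1, sig)  # a tampered message fails
```

The hardware-versus-software question only concerns where `d` lives; verification at the remote end is unchanged.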
You want to disable ethernet MAC addresses? (Score:2)
These things are necessary for networks to function.
As far as the hardware encryption chip goes, do a bit more reading and you'll discover that this really isn't something that *needs* to be disabled. The whole "it's another attempt to brand our computers with an ID" argument is just silly. The only thing that this chip does is hardware-based encryption/decryption of data, much like an MPEG decoder card. The only difference is that, for this chip to work, you'd want to publish the public encryption key so people can send you encrypted messages and you can send others encrypted/signed messages of your own. It's NO different than using a software-based encryption solution, except that with hardware, it's impossible for someone to "steal" your private key.
Re:Uhh.. there's no such thing (Score:2)
A previous poster noted that this chip uses a 256-bit key for data encryption. A typical HTTPS/SSL connection uses a 40-bit key (with 128-bits implemented in "unexportable high-security" browsers).
I don't know about you, but this is more than adequate. IBM says as much and indicates if users need something more secure, they're free to augment this system with things like smart cards and the like.
If you're so concerned that people are getting a false sense of security with this device, you should be working to warn them about the dangers of secure web sites, which are significantly less "secure".
If you want to decrypt someone else's encrypted data nowadays, it's to your advantage to somehow gain access to their system and find a way to steal their private key instead of trying to "crack" it. By doing this in hardware, this becomes impossible. This is why hardware encryption schemes are so much more secure than equivalent software ones.
Re:Uhh.. there's no such thing (Score:2)
Do you have any idea how hardware-based encryption products work? The whole reasoning behind hardware-based encryption is *because* it's not possible for software to retrieve the private keys! That's the whole purpose behind doing it. If private keys could somehow be retrieved, there'd be no point in doing it in hardware at all, because it would have no advantage whatsoever over software solutions. This is a fundamental design requirement, and is EASY to assure.
I think you're being very rude here. The only reason I was saying 256 bits is plenty for me is because current encryption products use FAR LESS SECURE keys than what this chip is providing. I wasn't suggesting that you be a good boy and be content to use it. If you feel you have data that requires more security, USE A MORE SECURE PRODUCT.
Re:My $.02 (Score:2)
The reason private keys can't be "pried" from hardware products is because these hardware products provide no mechanism to retrieve the private keys.
It's the same reason you can't write a program to command an Intel CPU to change colors. The chip simply isn't capable of doing it.
When constructing something like a cryptographic chip, you just build into the chip the functions that you need. You don't want the private key to be exposed, so don't create a "return_private_key" opcode when designing the chip. There are probably things like "return_public_key", "encrypt_text_at_this_memory_address", etc. Unlike software, you can't just write a program to examine the details or inner workings of a piece of hardware. The hardware has to be explicitly programmed to volunteer that data.
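That "no such opcode" idea can be modeled in a sketch (all names hypothetical; note that a software object can always be introspected, so the analogy covers only the instruction set, not real hardware isolation):

```python
class CryptoChip:
    """Hypothetical model of the chip's instruction set: the only
    operations that exist are the ones fixed at fabrication time.
    (A Python object can still be introspected; the analogy covers
    only the instruction set, not real hardware isolation.)"""

    OPCODES = {"return_public_key", "encrypt", "sign"}

    def __init__(self, public_key: int, private_key: int, modulus: int):
        self._public_key = public_key
        self._private_key = private_key   # used internally, never returned
        self._modulus = modulus

    def execute(self, opcode: str, operand: int = 0) -> int:
        if opcode not in self.OPCODES:
            raise ValueError("chip has no such instruction: " + opcode)
        if opcode == "return_public_key":
            return self._public_key
        # "encrypt"/"sign" use the private key but only return the result.
        return pow(operand, self._private_key, self._modulus)

chip = CryptoChip(public_key=17, private_key=2753, modulus=3233)
assert chip.execute("return_public_key") == 17
try:
    chip.execute("return_private_key")   # the opcode simply doesn't exist
except ValueError:
    pass
```

In silicon there is no reflection at all: an instruction that was never wired in cannot be issued.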
Hardware data encryption has been something pushed for quite a while now. It's not that it's faster or more convenient than software solutions, and *certainly* it's not because the hardware is more adaptable. It's because the hardware version is incapable of allowing the private key to be discovered. Whenever you use software, the public and private keys are stored somewhere on the hard drive in a not-so-cryptographically-secure form. This means it can be found and stolen by a malicious program. That simply isn't possible with hardware solutions.
Re:OSS doesn't help here... (Score:2)
Without this manual step, it would become possible for malicious programs to digitally sign/encrypt things you didn't intend to sign/encrypt.
blame the submitter/author (Score:2)
It's because of the submitter's "summary" and "michael"'s subsequent editorial. It's obvious he didn't read the article. He just saw the "CPU ID" phrase and went ballistic, like so many uninformed privacy nuts that post here regularly.
I really wish the Slashdot authors would try to be a little less biased when it comes to the articles they post here. Slashdot has become MUCH too editorialized, which wouldn't necessarily be *all* that bad, except THEY DON'T DO A GOOD JOB EVEN AT THAT. They base their editorial comments and slurs on stupid/uninformed assumptions based on little/no information. As much as I love Slashdot, it will never be a true journalistic site until it can replace its poorer "authors."
Re:Why? (Score:2)
So long as there's competition in the marketplace, it would be insane for a company to start tying all of their software to specific systems just so they can force their customers to keep buying licenses as their hardware is upgraded. There will always be competition.
This simply doesn't make good business sense for typical consumer software.
Even for high-end software that, even today, is written and tied to specific systems, usually all you have to do is call them up and say "We're moving this software to another system, so we need another license key." The main reason they use those hardware dependencies today isn't to foil their customers. Rather, it's to keep copies of the software from appearing on warez newsgroups and being distributed to friends and sold illegally overseas, that sort of thing. It's usually quite easy to get a replacement license in these cases if/when you move the software to another piece of hardware.
Re:Uhh.. there's no such thing (Score:3)
HTTPS (SSL) predominantly uses 40-bit encryption. "High security" versions of the same thing run at 128-bits. The last I checked, the "default" PGP key length wasn't anywhere *near* 1024-bits, which this chip supports.
Again, it's all a matter of *degree*. True, there is software out there that uses key lengths a lot longer than what this chip offers, but you won't find that software in mainstream browsers and e-mail clients, which means it's useless to normal people.
Additionally, you seem to forget the whole purpose of moving encryption into hardware: It's impossible to recover the private key via software. Today it's theoretically possible for a trojan or other malicious programs to snoop around your hard drive, find your software-based PGP private key ring, and from there, somehow recover the private key. This is not possible with hardware-based encryption, hence its attractiveness.
Re:Wait a minute here... (Score:4)
*IF* there is a backdoor. Somehow I doubt that such a back door exists. There's always the possibility that a back door will be discovered (and, given enough time, discovery is almost certain). If one is found, IBM will be nailed with lawsuits up the ass, criminal proceedings, you name it.
It doesn't make good business sense.
You know, it's certainly possible (I mean technologically, obviously) for the government to sneak in a hidden backdoor in Microsoft Windows. Does that mean we should ban and legislate Windows into extinction? It's also possible that they've secretly placed a backdoor in the operating systems that run on our Internet's routers. Quick! Ban the Internet!
Yes, each chip has a public key. If you don't want that public key given out, don't use software that makes use of it. Period.
I occasionally make use of a software-based PGP implementation, but you don't see me scrambling to hide my public key from people.
Remember: Multi-user systems are pretty commonplace nowadays (NT, Unix, even Windows-based workstations). It makes absolutely NO sense whatsoever to suddenly convert all programs so that they use this hardware-based encryption scheme over a user-defined one.
Less of a privacy issue than a security issue (Score:2)
``People from outside (of your organization) can get at your software,'' said Anne Gardner, general manager of desktop systems for IBM. ``People from the outside can't get to your hardware.''
So there will probably not be a software flash-upgrade for this chip or anything like that: after all, if it can be software-upgraded, it can be cracked: witness the recent virus (forget its name) that wiped your BIOS chip if you had a Flash-BIOS capable motherboard and chip. So the only way to upgrade this thing will be to replace the chip -- and it'll likely be soldered onto the motherboard.
``We want this to become an industry standard,'' IBM's Gardner said. ``We want this on as many desktops as possible.''
Which means that if they get their wish, people who build <buzzword>E-commerce</buzzword> sites will start to rely on their customers having PC's with the chip installed.
The features of the security chip include key encryption, which encodes text messages,
What key length? Is it upgradeable? Considering the "can't get at it with software" statement above, probably not. So either it will have export-grade encryption (weak and insufficient, as most /. readers well know) or the U.S. government will restrict its export from the U.S. Furthermore, what happens when 128-bit keys are no longer secure enough and you need to move to 256-bit keys? Whoops, sorry, can't just get a software upgrade, you need a new computer. More lock-the-consumer-into-the-upgrade-cycle stuff here, even if it's not intentional (and it very well may be intentional).
and ``digital signatures,'' which act as unique ``watermarks'' that identify the sender of the document.
So everything made on a computer can be traced to that computer. Just like typewriters in the olden days (I seem to recall a few detective stories based on that fact). Great -- could be useful in some circumstances; law enforcement would love that, for example. This is where the privacy issues (which I'm not discussing here) come in. BUT this just identifies machines and is useless for identifying people. It will almost certainly, however, be misused for identifying people by what computer they use.
What happens when (not if) Joe L. User sits down at one of the public-access PCs at his local library to surf the web, sees a cool "web shopping" site and registers as a customer? Assuming the site uses the chip ID the way IBM seems to be suggesting here, it will send Joe's computer (which is actually the library's) a digital certificate for Joe to make it "easier" for him to shop there, since next time he won't even have to log in. Joe likes this, of course: it makes things easier for him. So Joe orders a few things and leaves. (Log out? What's dead trees got to do with things, anyway?) Now Carl Cracker comes along, uses the same computer at the library, and checks the Netscape history to see what he can find. He finds Joe's recent visit to the <buzzword>E-commerce</buzzword> site, checks it out, and sure enough, Joe didn't log out. So he visits the site and their software thinks he's Joe. He orders a bunch of stuff and charges it all to Joe.
Plausible scenario? You bet. Could <buzzword>E-commerce</buzzword> site designers be so clueless as to use a mechanism designed for computer identification to identify people? No doubt about it.
The real solution to the <buzzword>E-commerce</buzzword> security issue is software. Ubiquitous, open-source, peer-reviewed software. Like, say, PGP (International version) [pgpi.org], or GNU Privacy Guard [gnupg.org], or SSLeay [uq.edu.au]. The hard part is that "ubiquitous" bit. You want real security? Here's how: Convince your boss to go open-source on the security aspects of the company's new <buzzword>E-commerce</buzzword> site. Read the Linux Advocacy mini-HOWTO [linuxdoc.org] first, then point out the advantages of using PGP or GnuPG or SSLeay rather than a proprietary solution. It'll be a hard sell, but stick with it. If everyone works at this, we'll eventually achieve the "ubiquitous" part.
The solution is out there, folks. Let's go implement it.
-----
New E-mail address! If I'm in your address book, please update it.
Re:It could be good as theft protection (Score:1)
Yeah, but you know it will end up being handled by the Bureau of Alcohol, Tobacco, PCs, and Firearms.
There are already private companies that do this - if the criminal doesn't have the sense to blank the hard drive, a little program will phone in to the central office the first time he goes online. It's a voluntary system for those who wish to trade off a bit of privacy (and a bit of cash) for an improved chance of recovering stolen property. It only works because most criminals don't know about it yet.
Re:All the people fussing are simply ignorant. (Score:1)
The MAC address is used for node-locking certain types of software (the kind of software that costs more than your computer did, and where the salescreature gets a free trip to Hawaii if you buy the "gold" support package).
The noise about the Intel ID was not that it existed, but that Intel planned to use it in a very silly and dangerous manner.
OSS doesn't help here... (Score:1)
The thing about this is that if it works and becomes ubiquitous, having the source to your OS won't help. You'll start noticing that web sites require you to submit this ID, and that software has access to it in order to take advantage of certain "features". So, in order to make sure that linux/oss software can take advantage of these "features", support for this ID will be programmed in. Sure, you can choose not to use it, but when everybody else is using it, it could quickly become impossible to get by without it.
You might be able to spoof it, but people that write the web pages (or whatever) that use it will find ways around this. They could restrict page views to 100x per ID, for instance, so people couldn't all use the same ID. (I know, so make it random -- that might work. But then things devolve into a hack war, like the aol/m$ instant messaging war.)
Re:OSS doesn't help here... (Score:1)
Don't get me wrong, you may be right -- I just don't know.
Wrong... (Score:1)
If you don't like ID's on CPU's then I hope you avoid SPARCs. AFAIK most server oriented processors have ID's. Not for tracking on the net (which is just a moronic and insecure thing to do), but for node-locking an application.
Think of it, an application on a web server asks for your CPUID. It gets the answer across the net - how does it know where it came from?!
Sheesh! Give it a rest.
Re:No factories? (Score:1)
of San Thomas here in Silicon Valley?
Yeah right, no factories - sure.....
Re:Why? (Score:1)
That's because you're not someone selling the software that needs to be replaced after an upgrade...
... (Score:3)
How arrogant of IBM to assume the subversive element of our society won't abuse this new privacy-invading 'feature'. What's worse.. they're actually encouraging the very thing this ID feature was supposed to stop - fraud!
To use an old, but good, example: if you don't have a secure channel with another person, you probably aren't going to be tempted to communicate sensitive information to them. But if you *think* you have a secure channel with another party, you may be more willing to divulge sensitive information. The key word here is think. If that channel isn't secure, you're exposing yourself to more risk than if it didn't exist at all! It defeats the very reason it was created: security. The use of this chip holds a similar analogy: if it is used for verification, then anybody who can defeat it can masquerade as anybody relying on it as a method of authentication. In short, the barn door is wide open.
So privacy nuts... I suggest you adopt this approach instead - crack this scheme as fast as you can! Defeat it before people start relying on it - and issue a joint statement on why this is such a bad idea.
--
End-to-End Security (Score:1)
This is not so new... (Score:1)
Start the paranoia machine again... (Score:1)
-----
hey!! (Score:1)
...That's my customer ID!
I think we ought to wait... (Score:1)
That having been said, there is at least one issue - if the encryption/decryption is handled in firmware, will it mandate a limited key length? While I don't want to sound like a whacko conspiracy theorist, having an ability for limited encryption built into a system targeted at the mass market could give the government most of the control it needs over encrypted material.
Think for a second (Score:1)
It's no secret that public/private key operations are slow, even today. Without special hardware, you can't get an SSL web server to keep up with a very heavy load at all. If you imagine a future where even clients may be doing dozens of these operations a second, then having such a chip in every pc would be useful.
Unfortunately, there currently isn't enough information to really know what is going on with this chip, so let's at least not jump to conclusions and burn IBM at the stake now...
(side note: as others have said, if you don't want a unique ID in your computer, you better get rid of that ethernet card...)
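The slowness the parent post mentions is easy to demonstrate: a private-key operation is dominated by one big modular exponentiation. A rough, pure-Python sketch follows; the key sizes and trial counts are arbitrary illustrative choices, the operands are random stand-ins rather than a real RSA key, and absolute numbers depend entirely on the machine - the growth with key size is the point.

```python
import random
import time

def modexp_timing(bits: int, trials: int = 20) -> float:
    """Average seconds for one modular exponentiation at the given size.

    Toy stand-ins only: a random odd full-width "modulus", a random
    "private exponent", and a random "message" -- not a real RSA key.
    """
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
    d = random.getrandbits(bits) | (1 << (bits - 1))
    m = random.getrandbits(bits - 1)
    start = time.perf_counter()
    for _ in range(trials):
        pow(m, d, n)  # the core cost of signing/decrypting
    return (time.perf_counter() - start) / trials

if __name__ == "__main__":
    for bits in (512, 1024, 2048):
        print(f"{bits:4d}-bit: {modexp_timing(bits) * 1000:.2f} ms/op")
```

Doubling the key size costs far more than double the time, which is why dedicated crypto hardware on busy SSL servers was attractive long before anyone proposed putting it in a PC.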
CPU-based identity intrinsically flawed (Score:2)
Hardware-based authentication and security tokens should be based on something portable, and that portable device needs to have enough compute power to implement something like zero-knowledge proofs. SmartCards fit the bill, and they are cheap. Keyboards should have SmartCard readers, and standard cryptographic methods allow secure transactions to be executed with SmartCards even over untrusted machines.
At best, the computer itself could benefit from hardware encryption that doesn't carry a key, in order to speed up throughput for encrypted data streams. But in the current political climate, putting hardware-based encryption into a PC is futile, since, according to US laws, it cannot be secure anyway.
Of course, e-commerce companies don't like SmartCards because, oh my, the consumer can remove them when they don't want to buy anything and don't want to get tracked. ID chips tied to the CPU or motherboard are great: the kids can order, the software can be used to track people wherever they go, and there is little most people can do about it if they run standard software like Windows.
If IBM wants to drive secure e-commerce, they should be shipping computers with SmartCard enabled keyboards.
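To make the idea concrete, here is a toy challenge-response sketch in Python. Everything here is an illustrative assumption - a shared-secret HMAC scheme rather than the asymmetric or zero-knowledge protocols real cards use, and all class/function names are invented - but the shape is the point: the secret never leaves the card, and the untrusted PC merely relays challenges and responses.

```python
import hashlib
import hmac
import os

class ToyCard:
    """Stands in for a SmartCard: holds a secret it never exports."""
    def __init__(self, secret: bytes):
        self._secret = secret
    def respond(self, challenge: bytes) -> bytes:
        # The only operation the card exposes: answer a challenge.
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

def server_verifies(card: ToyCard, secret_on_file: bytes) -> bool:
    """The server's side: fresh nonce out, response back, compare."""
    challenge = os.urandom(16)          # fresh each time, so replays fail
    response = card.respond(challenge)  # relayed through the untrusted PC
    expected = hmac.new(secret_on_file, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

card = ToyCard(b"never-leaves-the-card")
print(server_verifies(card, b"never-leaves-the-card"))  # True
print(server_verifies(card, b"some-other-secret"))      # False
```

Pull the card out of the reader and the PC has nothing left to leak - which is exactly the property a motherboard-soldered ID chip cannot offer.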
Maybe (Score:2)
If the chip is a new ID, it's a huge waste of effort now that every Intel CPU has an ID, every ethernet adapter has a MAC address, and every PC sold (through "legal" means) has a unique Windows serial number (i know i know i know, use linux... just as soon as (fill in the blank) is ported!).
Responding to a comment above: I don't have a link of my own handy, but do you have a link to where it says this chip implements 256-bit RSA??? I find it very hard to believe that IBM would be shortsighted enough to use that.
Philosophy conflict (Score:3)
The geeks seem to hold fast to the belief that: You can not expect differing results from the same behaviour. We've seen the Intel precedent, and the result, and so we're expecting (reasonably) that the same actions by IBM (X) will have the same outcome (Y).. Next time, when a new value of X is fed into the function, the same value of Y will pop out the other end.
On the other hand, it looks like the corporations see it as: The squeaky wheel gets the grease. Intel took the brunt of the opposition to the concept. Now IBM has picked up the gauntlet and is trying to run with it. Public opinion has been tested, and now the news is old. There is less likely to be as much opposition to the idea now, since it's not 'sexy' anymore. And if enough large companies reach consensus on this, the customer is likely to simply believe, or give in assuming they can't win. Intel, IBM, any X, will keep chipping away at the issue until the wall gives way.
Eventually, what this will become is a matter of will. We have already made clear the reasons why this is not a good idea. We see it as a solved problem - how many times can you run through the same process until it becomes too tedious, and we move on? Intel was shown to be wrong and has backed down (a little). Now IBM has put a new spin on an old hat. Eventually, one side will get tired, and it's likely to be the side that has less PR money.
Eventually we will get tired of voicing the same objections. The customers and the public-at-large will get tired of hearing the same arguments. The right legislator will get greased, and it will come into being.
Re:Hey, do you have an Ethernet NIC? (Score:1)
Do any of you read the articles? (Score:3)
I'm amazed at how many posters on this thread are running on the "it's another CPU ID" gripe when that has no basis in reality. Besides, these PC's will probably ship with P-III's, so why reinvent the wheel?
To quote from the C|Net story about this:
------quote on--------
Big Blue, taking a lesson from Intel's blunder, worked with privacy groups, such as the Center for Democracy and Technology, on implementing the security chip.
"We found we could create a solution that does not create additional privacy concern, but built on a good security base and lets the user be the ultimate decision-maker," said Hester.
------quote off-----------
While it's true that the devil is in the details, and we don't know a lot about how this will be implemented, I have a hard time seeing how this a bad thing. Unlike the PIII ID feature, which provides no security at all for the user, this has the potential to provide a lot of security for the user. The reality is that encryption based digital signature techniques, which this chip will help enable, are the only way to protect people from identity theft online.
The big question is how available the documentation is going to be. If it will be possible to write linux drivers and (say for example) allow GPG to perform RSA using licensed hardware, that seems like it could be a good thing. Depending on what the API looks like for this thing, it may be possible to turn around the "strong" signature capability and turn it into a "strong" encryption engine. Now that would be cool...
Re:How a unique ID can be private AND useful (Score:1)
However, it is *possible* to use it to prove that your computer, specifically, is a party to a transaction.
And right there's where this number breaks down completely for e-commerce. When I run a transaction, I need to prove to the other end that I am involved. If I go to another computer, I want to authenticate as me. If someone else sits down at my computer, I do not want the computer to authenticate as if I were sitting there.
And if you think being at a different machine isn't a problem, bear in mind that right now I regularly use 4 different machines. 1 of those is used by several other people when I'm not using it, and another is used by about 75 other people simultaneously.
Re:How a unique ID can be private AND useful (Score:1)
Here's what I'm saying: your bank should verify both *your* identity, and *your machine's* identity, before acknowledging requests to access your account.
Do they need to verify which phone in your house you're using to place a telephone order once they've verified that you are really you? I don't see why, and I don't see why they should need to verify which computer I'm using if they can securely verify my identity independent of which computer I'm at. If the machine I'm using is irrelevant to who I am, then it shouldn't be checked. If my identity is subject to forgery, then improve the method used to verify my identity and close the hole rather than trying to limit the number of places someone can exploit the hole.
Re:How a unique ID can be private AND useful (Score:1)
You're being naive. How do they know how to verify your identity? Well, right now, you enter a username and a password on a web page.
You're talking on-line banking. I'm talking about over-the-phone ordering in the real world.
Applying the principle of least privilege to computers on the net, it is clear to see that only the ones you use should be able to issue orders regarding your bank accounts. A computer that I use and that you don't use shouldn't have that capability.
You haven't used least-privilege, though. Least privilege would mean that the machine I'm sitting at at the moment and no other should be able to issue orders to my bank. If I use the machine and am not sitting at it, that I use it is irrelevant. If I have never used a machine before but am sitting at it, that I have never used it before is irrelevant. Access should follow me and not the machine in any way, so the check should be against me and not the machine in any way.
My opinion: if a check doesn't actually add to security, it should not be done. The identity of the machine is completely unrelated to my identity, so when trying to verify that it's me running a transaction you should not be concerned about the identity of the machine. If your method of identifying me is weak enough to need additional verification based on my being where I'm expected to be, then your scheme is too weak and needs to be improved, not papered over with irrelevant checks.
If you're reading this, you may have a MAC address (Score:1)
I don't recall ever being *without* some sort of ID.
And honestly, I've given away so much to online registrations at this point that there's really not much point trying to hide now. I like my nickname too much to change it and re-do all my accounts, so I guess until I next shift houses and don't forward my snail mail, or drop my email address and get a new one, I'm skunked.
Having seen some of the ins and outs of the legal system, I can say I'd *rather* be tracked when doing something *legal*, than *anonymous* when doing something *illegal*.
Where did this "Privacy Is The Be All And End All" mindset come from? My mom and dad used to be able to hear me with my girlfriends at night... they had the good taste not to mention anything. I'm sure most people *don't* snoop.
mindslip
Re:Wait a minute here... (Score:1)
There's a serious security issue here but I don't think it's the one you're describing.
The chip obviously needs to have access to your private keys. This means we have no proof that it isn't burning every private key it sees into flash memory for future recovery. You might be better off using an open source crypto implementation you can investigate with a processor that doesn't really know which bits of data are keys (your CPU).
However, the problem of unique chip identification seems to me to be a non-issue... What makes you think the chip will use factory hardcoded keys? Don't you think it's more likely that it will use user supplied keys issued by public certificate authorities like everything else does (except PGP, which doesn't use CAs but still doesn't use hardcoded keys)?
Ubiquitous? What about the "outlaws"? (Score:1)
So what if (I'll use my fave vendor in this example) FIC refuses to put this "ubiquitous" chip in their motherboards. What if VIA thinks this is a stupid idea from so-called industry leader IBM, and declines to support it in the chipsets. Then what?
The chip dies and nobody's going to care!! Why? Since when have you seen anywhere that the PSN is required to complete a transaction? Nowhere! Retailers aren't stupid; they know that by supporting legacy hardware they get more customers. If just one slightly big-name vendor refuses to support the chip, the whole system goes under. As the system propagates over the years, FIC/VIA's motherboards make up a huge userbase of people who don't have the chip. So by the time it gets to be somewhat reasonable to assume people have the chip, you have a couple hundred thousand or million users who will be left out. Will retailers lose that many customers? Heck no! They aren't going to tell potential e-commerce customers "To use this site, you must replace your motherboard". That would be a HUGE turnoff.
Until there is a universally accepted (by EVERY vendor) standard for unique IDs (I pray to god that doesn't happen, but MAC addresses are already here...), this idea will never fly.
And don't forget, it's not illegal to make your own programmable ID chip, is it? If it was, this topic would be moot.
Re:Wait a minute here... (Score:2)
Well, perhaps you are an idiot, but I do know how asymmetric cryptography works.
Did it ever occur to you that this chip may implement the algorithms for key generation, message signing, and encryption, while the keys themselves get stored on disk, and fed to the chip using device drivers?
As I said, like "PGP on a chip". Did you read my post at all?
No, I do not know how this chip from IBM works, but neither do you, as far as I can tell. Meanwhile, you and a bunch of other people are doing a headless-chicken-scene, which never helps.
Wait a minute here... (Score:5)
The features of the security chip include key encryption, which encodes text messages, and "digital signatures", which act as unique "watermarks" that identify the sender of the document.
Where in that sentence does it say there is a unique ID embedded in each and every chip? To me, it sounds more like IBM is marketing a hardware-driven security engine, a "PGP on a chip", if you will. I do not see how this translates to a unique serial number on each and every chip.
(Whether you want to trust IBM's security implementation is another matter entirely.)
What does this have to do with My Rights Online? If every hardware crypto product on the market is a violation of the First Amendment to the US Constitution, Slashdot is going to become awful darn cluttered.
When I first read about YRO, I thought it seemed like a good idea. The Internet is a new medium in many ways, and I do not want the government panicking and trying to restrict it. However, YRO seems less about keeping a sensible eye on things and more about paranoid sensationalism, written by anarchists who think that all laws must be bad, all corporations must be bad, everything not invented here must be bad, ahhhhhhhhhhh!
Even if there is a unique ID embedded in this chip, so what? A Unique ID for each computer can be a useful thing. For example, if you are trying to implement property control in a large organization, an electronic serial number would be a Godsend.
The problem with Intel's serial number was twofold: First, they were marketing it for "secure online transactions", something which it is not appropriate for, and second, they tried to smuggle it into every system made, turned on by default. That is not good at all. But there is zero evidence that this scenario is even possible with IBM's chip, let alone going to happen.
Please. Keep your head. Do not react first and then stop to think, or you are just as guilty as the government for panicking when something new comes along.
(And before you tell me "Nobody is forcing you to read YRO": There is this thing called feedback...)
Inevitability (Score:3)
For my fellow paranoids (we know who you are!): keep in mind that all ethernet devices, including the NIC in your machine, already have a globally unique identifier -- the MAC address.
Kaa
Why is this in the computer? (Score:1)
Wouldn't a better way be a smart card and thumbprint reader in one that hands this off to the software? That prevents theft of the card (at least without taking the thumb).
So, again, why?
--jeff
The optimist thinks this is the best of all possible worlds.
The pessimist fears it is true.
--Robert Oppenheimer
Re:Use your brains, people. (Score:1)
Re:... (Score:2)
Well, it depends on what you call `reporting', but modern drives CAN produce their serial numbers on request:
[r00t@yourbox]# hdparm -i /dev/hda
/dev/hda:
Model=Maxtor 90840D6, FwRev=WAS8283C, SerialNo=K60AV2XA   <--- serial number
[stuff deleted...]
Re:No Increase In Threat (Score:1)
already has it. (Score:1)
Re:All the people fussing are simply ignorant. (Score:1)
Re:Use your brains, people. (Score:1)
Re:What this sounds like... (Score:1)
"real security" (Score:2)
Why shouldn't a customer expect a "'real' security solution" to be "adequate"? Put another way - why bother with security if it is, in fact, not "real" security?
This "solution" just leads to a false sense of security. Furthermore, it leads to confusion and sensationalism when that false security is shattered by a compromise.
Re:Uhh.. there's no such thing (Score:2)
The "cost" difference between exportable and "real" security is pretty close to nothing in functionality. That is, the two implementations do not differ in cost to produce or functionality. The "cost" is US law. So what we end up with is an inferior system pushed out to the public as a "solution" for their data. The trouble is, it shouldn't matter if that data is trade secrets, credit card data, or Aunt Nellie's secret chocolate chip cookie recipe. The solution IBM provides should be the best possible. This isn't it.
Instead of providing "real" security, IBM is providing a false sense of security. Your Average Joe doesn't understand encryption. They'll read about this "secure" solution IBM is providing them and they'll use it. They'll feel secure. They aren't. If the worst happens and their data is compromised, they'll feel shocked, violated, and vulnerable. Those evil hackers have managed to defeat even IBM! Even if the worst doesn't happen, Average Joe will skip along happy and "secure" and the demand on the US Gov't to drop their artificial anti-export laws will never manifest in the general public.
IBM is not providing a solution. They're providing marketing fluff.
Re:If you're reading this, you may have a MAC addr (Score:2)
Information is power. Today, that statement is more true than ever before. Entire companies are built on information. No other products; no widgets, no foodstuffs... just information. Therefore, anything and everything a company can record about you is worth money... to someone. And they will record it. Even if it has no use today, tomorrow it might be invaluable. And every step is an invasion into your privacy.
I'm sure you can trust your parents. And I'm sure there are a lot of other considerate, non-snooping people out there. However, I can't say the same for corporations. If there is value, they will snoop. And information warehouses have already shown a complete disregard for privacy and safeguarding the information they sell.
Identity theft was science fiction in the past. Now it's a real problem. If databases of personal information didn't exist, or were at least better guarded, the problem wouldn't exist. But it does. And many advances in data technology simply add to the ease of generating these databases. This is why we SHOULD be aware of our privacy.
Where did this "Privacy Is The Be All And End All" mindset come from? It's a sign of the times.
Re:... (Score:2)
In these cases, that serial number is stamped physically on the case of the item. AFAIK, your average HD isn't able to report its SN. Or can it? Maybe I've missed some advancement in the hardware industry here....
unlike hardware... (Score:1)
Now, the suits *may* ask, "Why do we need that pricey firewall when IBM's got this hardware security solution? We could standardize on that."
Hopefully, the smarter sysadmins will respond to the sentiment with, "Well, yes, I suppose we could dump the firewall and rely only on an untested hardware chip and our desktop operating system's inherent security."
Please. Well tuned firewalls, carefully administered networks and attentive sys admins are going to do a lot more than any ID chip.
As for identifying users and protecting digital documents, the pre-existing software solutions are well tested. GPG and PGP are the best examples. What's more, they've already got the support of nearly everyone.
End rant
Sheesh! (Score:1)
This kind of paranoia only matters if you're using a browser/app that will send back that identifier on request. I'm going to doubt that Netscape will, I'd be pretty assured that MSIE will, and I'm *positive* people will come up with ActiveX tools to get that host identifier. And the BIGGEST thing is that it will probably only affect Windows users adversely, because they can't get source code to their OS...
Re:Wait a minute here... (Score:2)
Digital signatures, on the whole, use public and private keys. These public and private keys are unique numbers, somewhere on the order of a few hundred digits long (usually). In order for a packet to be signed, the signer must have access to the private key. In order for the signature to be verified, the receiver must have access to the public key.
Now, think about what's been said:
Now, the number isn't a true "serial number", simply because it doesn't count up in order (in fact, for reasons not worth going into here, it cannot count up).
Instead, we have something even better: a unique, cryptographically secure (supposedly) identifier attached to each and every computer which Big Blue sells. If there is a backdoor in these chips, then the government will now have a way of tracking, and reading, everything which gets encrypted/signed by these chips.
Can you see the problem yet?
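For anyone who wants to see those sign/verify mechanics in miniature, here is textbook RSA with deliberately tiny numbers - an illustration only: no padding, no hashing, and a key any attacker would factor instantly, unlike the few-hundred-digit keys described above.

```python
# Textbook RSA with toy numbers -- illustration only.
p, q = 61, 53
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (2753); Python 3.8+

def sign(message: int) -> int:
    return pow(message, d, n)               # needs the private key

def verify(message: int, signature: int) -> bool:
    return pow(signature, e, n) == message  # needs only the public key

sig = sign(42)
print(verify(42, sig))   # True
print(verify(43, sig))   # False
```

Whoever holds d can sign as "you"; anyone with (e, n) can check - which is exactly why it matters who controls the chip that holds d.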
What's the big deal... (Score:1)
It's not like any punk can use a demon dialer and magically obtain your number, right?
Re: Security not Privacy (Score:1)
Re:Uhh.. there's no such thing (Score:1)
Re:No Increase In Threat (Score:1)
It could be good as theft protection (Score:1)
The problem of big brother-ism is real, but not unsolvable. You need this to be handled by a trusted and independent non-government organization that is chartered with the sole purpose of retrieving stolen PCs, nothing else.
If Swiss banks can keep a secret, others can too!
Yeh, but it gets unsellable (Score:1)
I mean, how do you sell a PC and make sure it will not get online...?
Hey, do you have an Ethernet NIC? (Score:2)
1. Unique serial numbers have been with us for a long time. (The MAC address of your Ethernet card is unique to your computer. Moreover, the tools are already in place to track your computer using this identifier, i.e. arp.)
2. Unique IDs have many useful functions besides violating your already non-existent privacy. (Just to start with, tracking is not necessarily bad. Anybody who has had a laptop stolen from them probably knows what I am talking about.)
3. The real threat is not that we can be tracked; it is that it may be done without our consent and in secrecy. (There are more than enough trojan Java and ActiveX applets already out there that will track every web site you visit AND record your passwords.)
Don't fight the technology, demand a better implementation. Anytime something like this comes up, just make sure the implementation is open and well documented.
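Point 1 needs no exotic tooling at all; Python's standard library will hand the identifier straight back. A minimal sketch (note the documented fallback: `uuid.getnode()` returns a random 48-bit number with the multicast bit set when no hardware address can be found):

```python
import uuid

node = uuid.getnode()  # 48-bit MAC address of a local interface, if found
# Format the integer as the familiar colon-separated hex pairs.
mac = ":".join(f"{(node >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))
print(mac)  # e.g. 00:a0:c9:14:c8:29
```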
convenience is the great enemy (Score:3)
-konstant
How a unique ID can be private AND useful (Score:1)
This is a practical and useful thing to have. The unique (secret) number could be a private key; the corresponding public key could be widely published by the manufacturer (and be related to e.g. the serial number).
Now, because there is software between you and the bits that comprise the private key, nothing says that you have to do anything with it. However, it is *possible* to use it to prove that your computer, specifically, is a party to a transaction.
For example, right now identity theft makes it trivial for a crook to access your bank account over the web, if you have electronic banking enabled. He just needs some info, like SSN, mother's maiden name, etc. However, suppose you gave the bank (in person) the serial number of your computer. Then the bank could verify the identity of the machine that tried to access your account, using a zero-knowledge proof (give it something to sign, and verify the signature).
This doesn't make your security iron-clad, but I think it does help. Of course, there will be places that demand that you authenticate even for transactions that don't need authentication (e.g. the New York Times). Again, though, the fact that those bits exist in your machine doesn't mean that you are under any obligation.
It will be interesting to see how the "default software" will be set up on MS platforms; if IE authenticates everywhere without asking, then Joe Windows User is as badly off as he would have been with the Pentium ID, in terms of his activities on the 'net being traced. Maybe IBM should manufacture a separate hardware switch that can disengage the chip, so the user can do an end run around Redmond shenanigans (sp?).
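The "give it something to sign, and verify the signature" step can be sketched as follows - a hedged Python toy, with a tiny textbook-RSA keypair standing in for whatever key material the chip would actually hold, and SHA-256 standing in for whatever digest the real protocol would use.

```python
import hashlib
import os

# Toy keypair -- purely illustrative; real keys are far larger and padded.
p, q = 61, 53
n = p * q                          # public modulus, published by the maker
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # stands in for the key inside the chip

def machine_sign(challenge: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)                # only the machine holds d

def bank_verify(challenge: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest   # bank needs only (e, n)

challenge = os.urandom(16)       # fresh nonce each time, so replays fail
sig = machine_sign(challenge)
print(bank_verify(challenge, sig))            # True
print(bank_verify(challenge, (sig + 1) % n))  # False: tampered signature
```

The bank never learns the private key; it only confirms that whoever answered the challenge holds it - the zero-knowledge-style property the parent post is after.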
Re:Hrm.. (Score:1)
This could be a useful security aid, and is certainly not "total security in a box". Those two statements are not exclusive.
Re:How a unique ID can be private AND useful (Score:1)
And if you think being at a different machine isn't a problem, bear in mind that right now I regularly use 4 different machines. 1 of those is used by several other people when I'm not using it, and another is used by about 75 other people simultaneously.
Here's what I'm saying: your bank should verify both *your* identity, and *your machine's* identity, before acknowledging requests to access your account. You use any of four different machines? Well then, tell the bank to accept transactions from any of those four. You still are eliminating lots of potential attackers. Some of those machines are shared? Well then, make sure the permissions on your *personal* private key aren't world readable (and consider how much you trust 'root' on those machines)!
Being able to identify a particular computer is useful. I don't claim that this means of doing so is bulletproof, or that the ability to do so represents a security 'magic bullet'. However, it can offer additional security *in conjunction with* your PGP or other software-based crypto system.
Re:How a unique ID can be private AND useful (Score:1)
You're being naive. How do they know how to verify your identity? Well, right now, you enter a username and a password on a web page. Many banks use your account number or your SSN as your username, and neither of these is especially secret. In fact, most banks have their web banking set up by default to save your username in a cookie so that your browser remembers it.
If you know my username, SSN, and Mother's maiden name (also not hard to learn), you can call up the bank and convince them that you're me! You can say that you forgot your (er, my) password, and can they assign a new temporary one (that of course you will change five minutes later). Alternatively, if my bank offers web banking and I haven't signed up for it, you can sign me up over the phone (with the same ID information).
In the days of phone banking, this wasn't too big a risk, because all you could do was transfer money between your own accounts. However, now web banking allows you to write electronic checks to anyone, making the risk much greater. Worse yet, don't bet that your money is insured if it gets stolen this way -- how can you prove that you didn't authorize the funds transfer?!? The transaction required your authentication information...
Even if the banks switch to public key cryptography, there's a problem, because there is no public key infrastructure (PKI). Without this, it is difficult (on a large scale) to correctly associate an individual with his correct public key; identity theft is still entirely possible.
One way to tighten the security is for you to give the bank a list of computers from which to accept orders ostensibly from you. This really isn't unreasonable, especially as the trend from desktops to laptops and palmtops continues. Indeed, there will probably be 'smart cards' or some such that you use for electronic cash or electronic voting, anyway, that live in your wallet.
In the security field, there is something called, "the principle of least privilege". What this means is that the proper way to approach security is to take away all privileges, and then selectively grant them back. "Selectively" means that a subject should be given enough privileges to be able to do what it needs to do, but no privileges beyond that. Applying the principle of least privilege to computers on the net, it is clear to see that only the ones you use should be able to issue orders regarding your bank accounts. A computer that I use and that you don't use shouldn't have that capability.
At any rate, if you don't want to use the added security I propose (and I still call you naive in that case), then don't! Note that in my original post I also proposed an additional hardware switch (say, right next to the power button) that would disengage the chip. My reason for this was that it becomes possible for others to trace your activities if your machine authenticates everywhere on the web, not just at sites where you benefit from the added security.
My point is, if there is a unique ID on the machine, it doesn't have to be a privacy violation, and it can be useful -- both at the same time. Note that I don't claim that hardware identification is bulletproof, just potentially useful if it's "hard enough" to break. You can claim that you don't think it's useful for your purposes, but I argue that it's still useful for lots of other people. And, even if you claim it's not useful at all, that doesn't constitute a proof that it violates your privacy.
So where's your beef?
Re:How a unique ID can be private AND useful (Score:1)
My example is legitimate, whether or not yours is. If hardware ID is useful in the case I describe, then it is useful. For you to show that it is not useful in general, you must do more than argue that it is not useful in a particular case. Furthermore, my example is a real-world example, and nontrivial.
You haven't used least-privilege, though. Least privilege would mean that the machine I'm sitting at at the moment and no other should be able to issue orders to my bank. If I use the machine and am not sitting at it, that I use it is irrelevant. If I have never used a machine before but am sitting at it, that I have never used it before is irrelevant. Access should follow me and not the machine in any way, so the check should be against me and not the machine in any way.
You want all computers to be able to issue orders regarding your account, which is farther from least privilege than my proposal. Also note that when least privilege is applied to *people*, instead of computers, the statement is that only you should be able to issue orders regarding your account. So, you need to authenticate yourself, as well.
My opinion: if a check doesn't actually add to security, it should not be done. The identity of the machine is completely
The proper way to approach these problems is with the principle of least privilege in mind; be as restrictive as possible while allowing subjects to be able to do what they need to do. My proposal comprises a restriction that does not prevent you from doing what you need to do.
If your method of identifying me is weak enough to need additional verification based on my being where I'm expected to be, then your scheme is too weak and needs to be improved, not papered over with irrelevant checks.
You're still being naive. Authentication is a problem that is getting worse, not better. The lack of a PKI (and the difficulty of implementing one) implies that identity theft is actually easier on the 'net than it has ever been before; all you need to do is fool somebody into believing that your public key is really your victim's public key. Right now, in the real world, though, the security is even worse, because anybody can claim to be you by looking up easily accessible information about you. In the real world, it is trivial to steal your identity (this is even talked about in the mainstream press). You claim that this is the problem that should be fixed -- but the fix likely will involve public key cryptography, which is only slightly better (even in principle, let alone in practice).
There *is no* completely satisfactory solution to the authentication problem, where the check is not done in person.
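To make the "authenticate the person, not the machine" argument concrete, here's a minimal challenge-response sketch. Everything in it is illustrative (the secret, the function names, the bank framing are mine, not any real banking protocol): the check is against a secret the *person* holds, so it works identically from any machine they happen to be sitting at.

```python
import hashlib
import hmac
import os

# Secret held by the *person* (e.g. derived from a passphrase or carried
# on a smartcard) -- not burned into any particular machine's hardware.
user_secret = b"correct horse battery staple"

def issue_challenge():
    # The bank sends a fresh random nonce so old responses can't be replayed.
    return os.urandom(16)

def respond(secret, challenge):
    # Computed on whatever machine the user is sitting at right now.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(secret, challenge, response):
    # Bank-side check, using a constant-time comparison.
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
print(verify(user_secret, challenge, respond(user_secret, challenge)))   # True
print(verify(user_secret, challenge, respond(b"wrong secret", challenge)))  # False
```

Note that nothing here identifies the computer: stealing the machine gains an attacker nothing, and the legitimate user loses nothing by switching machines.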
The Irony, and Lifespan of a Chip? (Score:2)
``People from outside (of your organization) can get at your software,'' said Anne Gardner, general manager of desktop systems for IBM. ``People from the outside can't get to your hardware.''
The funny thing is, anyone _can_ get to my software, including me. It's open source. But only IBM, or their designated manufacturers, or people who send a signal to my computer to get my "digital signature" can get at my hardware, excluding me. I like systems I can control a bit more.
On another note: isn't an embedded security device likely to go obsolete pretty rapidly? Then what, we have to buy a whole new motherboard instead of just installing the latest version of the software? That sucks.
Hmm, the article just says that the chip is embedded in the hardware, somewhere. I wonder where? How easy would it be to pry the sucker off? ;) Or, I could just not buy an IBM. Yeah, that's the ticket.
So it's in your computer.... (Score:2)
Besides, if you have an Ethernet card, you already have a unique ID in your computer hardware. It's called your MAC address. Microsoft uses it to uniquely stamp your Word documents. (That's how they tracked down the Melissa virus author.) The misuse of it is all at the software level. I can't imagine anybody writing free software that will use IDs like this. I'll keep away from MS, thank you.
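The Word-document stamping works because version-1 GUIDs embed the generating machine's MAC address right in the identifier. You can see the same thing from any scripting language; here's a quick sketch using Python's standard `uuid` module (Python is just for illustration, not anything Microsoft uses):

```python
import uuid

# uuid1() builds an RFC 4122 version-1 UUID, which stamps the generator's
# 48-bit node ID -- normally the MAC address of an installed network
# card -- directly into the identifier.
u = uuid.uuid1()

print(hex(u.node))               # the machine's MAC-derived node ID
print(u.node == uuid.getnode())  # True: the GUID leaks the hardware address
```

So a "unique document ID" quietly doubles as a hardware serial number, which is exactly how a saved file can be traced back to the machine that saved it.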
There is no anonymity on the net. (Score:2)
People who don't do things they shouldn't have no fear of "privacy invasion". But with porn being the true fulfillment of e-commerce on the web, and the occasional illicit mp3 download, it's a safe bet that a sizable percentage of the internet-going public has justifiable reasons for not wanting to be tracked.
What everyone seems to forget is there is no anonymity on the net thanks to a little thing called an IP address.
Did you download a song from alt.binaries.sounds.mp3? Or maybe that latest nude in alt.binaries.pictures.erotica.*? Your ISP knows exactly who you are. Your IP address is logged along with your user name and password. Your user name is in their billing records - complete with your name, address, phone number, and probably your credit card number.
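The join from IP address to real name is trivial on the ISP's side. Here's a sketch of the idea; every record, field name, and value below is made up for illustration:

```python
# Hypothetical ISP data: an access log keyed by IP and timestamp,
# and a billing table keyed by the subscriber's login.
access_log = [
    {"ip": "10.0.7.42", "time": "1999-04-27T23:14", "user": "jdoe"},
]

billing = {
    "jdoe": {"name": "J. Doe", "address": "123 Main St", "phone": "555-0100"},
}

# Turning an IP address seen at a given time into a person is one lookup.
for entry in access_log:
    subscriber = billing[entry["user"]]
    print(entry["ip"], entry["time"], "->", subscriber["name"], subscriber["phone"])
```

That's the whole "anonymity": one dictionary lookup away from your billing record.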
They also couldn't care less unless someone calls to say you broke something, or you spammed Slashdot or something.
Maybe you've visited an FTP site and downloaded a movie. If the FTP site was a sting operation, then they've got your number and can force your ISP to turn that number into a name. The same is true for web sites. If you downloaded a movie you're probably on broadband and have a greater chance of having a fixed IP address, in which case you already have a serial number even if you use AMD.
Having run a large and successful website I can state absolutely that after 100 unique visitors a day people stop being people and start being demographics. The real life corollary is that everyone has a driver's license and a social security number (and credit card numbers and all that) but even though it's possible to do, you have a better chance of winning the lottery than having someone piece together your every move. So the only true privacy we have is safety in numbers.
Privacy is and always has been an illusion, and never more than on the web. The people who want to embed serial numbers in your computers realize this. Shoot -- every slashdot reader should know that given time, determination, and lots of search warrants, anyone can be tracked down. The elite slashdotters can do it without warrants, and the best of the best can probably do it without using a single z to describe the process. So if there's no anonymity, then the good of serial numbers far outweighs the "bad" (mainly giving you a false sense of security, which is bad in its own right).
A similar fuss was made over the introduction of caller ID. Caller ID still went through, and guess what? I haven't gotten a prank phone call since it was introduced. Like caller ID, this too is going to happen. There are too many good reasons for it not to. Forging, changing, or blocking the serial number will also be very easy. The program to do it will probably have a z in it, though. "SerialZ no more" or something. Look for it at that zero-day-warez site near you.
Whoa! (Score:2)
Plus, we can be sure that the possible 'Big Brother' applications of it will only be included in MS products, so we're all safe! Right?