IBM stamping IDs into new PCs

Twid writes "Reuters is reporting that IBM is following Intel and the Pentium III by stamping their new PCs with a "watermark" chip to allow for "secure transactions". Just like Intel, no mention is made of how to turn the feature off or how to ensure consumer privacy."

IBM may not have grasped Intel's failure here. Attention IBM: I have been a religious Intel owner. Just the other day I bought several computers with AMD chips instead of Intel P-III's, because I don't want to be tracked - so as long as Intel wants to track me and there's anybody else in the chip-making business, Intel won't be getting my business. You just don't realize that people take their computers seriously - they don't want it ratting on them to every website they visit, they don't want it informing on them behind their back, they don't want Clipper chips performing insecure e-commerce "encryption" for them. It sounds (and of course IBM is releasing this tomorrow, so this is preliminary) like IBM has created a proprietary, closed system, which very probably includes a back-door in it for U.S. law-enforcement access, because otherwise IBM would have trouble exporting it worldwide. Only pointy-haired bosses are going to want to purchase such things. -- michael

  • by Anonymous Coward
    When the PIII thing came out, I wasn't worried. Mainly because I knew that the *real* reason behind the ID was to stop the distribution of stolen processors. The whole "online ID" thing was a fanciful piece of horseshit dreamed up by a marketing newbie.

    And I'm not worried about a privacy invasion from IBM. First of all, I don't use an IBM machine. Secondly, I know that software can always circumvent this type of stuff. Third: I think IBM actually had *good* intentions, but made a few mistakes in carrying out their intentions.

    I don't think IBM's major motivation is to spy on users or create an invasion of privacy. I think that they want to motivate online purchasing, etc. And, hey, they're trying to get people to use crypto-- which isn't *all* bad, is it? So the intentions are good...

    But the solution is bone-headed at best. Embedding a chip in the computer that will perform digital signature and encryption operations is a really inefficient and stupid way to go about encouraging the use of crypto.

    First of all, why hardware? It's just as easy to implement the crypto in software. And software encryption can be much more flexible, handling larger key sizes for the ultra-paranoid, or forty-bit keys for the clueless.
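
    For instance, here's a sketch of that flexibility in Python (assuming the third-party "cryptography" package; the profile names and size table are made up for illustration):

        # Software crypto can pick its key size at run time; hardware can't.
        from cryptography.hazmat.primitives.asymmetric import rsa

        def make_key(profile):
            # hypothetical profiles: bigger keys for the paranoid, smaller
            # for the constrained -- no new silicon required either way
            sizes = {"ultra-paranoid": 4096, "default": 2048, "legacy": 1024}
            return rsa.generate_private_key(public_exponent=65537,
                                            key_size=sizes[profile])

        key = make_key("ultra-paranoid")
        print(key.key_size)  # 4096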

    Second of all, why integrate it into the computer? Okay, so you want to do it in hardware (in spite of its lack of flexibility). Why not distribute PC's with a dongle that plugs into a USB, parallel, serial, or Firewire port? That way, those of us who don't trust the damn thing can at least get rid of it.

    Finally, why the hell would you do this when there was so much controversy over the PIII ID? I would figure that IBM has some good PR and advertising folks-- how did this one slip out the door?

    Really, let's not jump on IBM. Applaud them for trying to encourage the use of crypto amongst the masses. Then scold them for raising the alarmist ire, and for not quite thinking the whole thing through.
  • by Anonymous Coward

    Once again, privacy freaks go nuts.

    The price of freedom is eternal vigilance

    Yes, it's small, and yes, it seems innocuous. Maybe it even is. But I'd rather have "privacy freaks" raise a stink now than risk waking up one fine day wondering what happened.

  • by Anonymous Coward
    Can we please stop with the hysterics, at least until we know if there's something going on here that's worth getting hysterical over?

    I can't believe that the original poster is talking about back doors and closed systems based on nothing but wild-eyed speculation.

    I realize it's a radical thought for some people around here, but let's get our facts straight first before we start deciding What It All Means, OK?

  • by Anonymous Coward
    It's just a chip that does encryption and signing. This could be either:

    Very good, if it uses standard, verifiable hashes and encryption algorithms. If it does indeed do encryption faster, then this is a good thing, especially if IBM gets export licences for stronger keys.

    Very bad, if it uses proprietary, unverifiable algorithms, perhaps ones that don't fully use the key information, so as to make it easier to crack your important e-mail.

    The article is pretty vague.

    Question about reading chip ID's: Are these privileged or un-privileged operations?

    -- cary
  • by Anonymous Coward
    I read that this chip implements RSA public key crypto -- but with maximum key lengths of 256 bits for messages and 1024 bits for signatures. We all know that 256-bit RSA is woefully inadequate for any real security. IBM is not about to piss off the US government by providing good or even mediocre encryption to the masses.
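
    To put numbers on "woefully inadequate", here's a toy sketch (the primes are tiny stand-ins of my own, nothing to do with the actual chip): a small modulus gives up its factors -- and with them the private key -- almost instantly.

        # Toy illustration of why short RSA moduli are weak (uses sympy;
        # this "modulus" is only about 42 bits, far below even 256).
        from sympy import factorint

        n = 1299709 * 2750159        # product of two known small primes
        print(factorint(n))          # {1299709: 1, 2750159: 1} -- key recovered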
  • by Anonymous Coward on Monday September 27, 1999 @06:53AM (#1656430)
    IBM actually will put an encryption chip on all their PCs in the future, enhancing personal security, not hindering it. See The Register for more info: http://www.theregister.co.uk/990927-000012.html
  • Does it really make a difference? The government spies because it wants to know if you're a subversive or terrorist. The corporations spy because they want to know if it's worth their while to try to sell you soda, or a new computer, or whatever. I don't think either is worse than the other. They're both bad, and I want my tools to encourage neither and discourage both.

    Let's leave watermarking out of computers.

    ----
  • If all new PCs eventually have serial numbers, it's only a matter of time before node-locked software becomes commonplace (in the Microsoft world, at least).
  • Only if the serial numbers are read via a standard API call. If they are all different, it will slow things down (and provide an opening for serial number forging software).
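
    Something like this, hypothetically (a Python sketch using the MAC address via uuid.getnode() as the stand-in machine identifier; the license constant is invented):

        # License check tied to a machine identifier read via a standard API.
        import uuid

        LICENSED_NODE = 0x0A1B2C3D4E5F   # hypothetical value issued with the license

        def license_valid():
            return uuid.getnode() == LICENSED_NODE   # 48-bit MAC as an int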


    ...phil
  • Intel has a big factory in Arizona; I don't think they make Pentium IIIs there, however. An Arizona state legislator did try to introduce a bill that would have banned the sale of the P3 and similarly equipped chips in the state. The bill wouldn't "ban Intel", just the sale of the P3.
  • The only way that this feature gets to communicate with the bad guy on the other side is if the software is written to do so.

    Details on precisely what instructions are involved would presumably be necessary; if one is running Linux, then actually using the instructions requires that someone convinces you to install software compiled with the "Evil Privacy-Killing Instructions."

    This will fall high on the list of Things Ulrich Drepper Won't Add to GLIBC; it is equally likely to represent Instructions Unlikely To Be Added To the GCC Code Generator.

    Note that this furthermore represents Instructions That Aren't on PPC which would encourage the purchase of PPC-based systems or Alpha-based systems...

  • By the way, DAV is not a proprietary spec... go to webdav.org and read.
  • Hey, if the NSA can get Microsoft and Netscape to insert the NSAKey, then certainly a flanking attack on the CPU side (Intel) or Mobo side (IBM) isn't out of the question either.

    "The number of suckers born each minute doubles every 18 months."
  • [I HATE the way posting seems to default to HTML now and strips out all the carriage returns.]

    If you sign up for a Slashdot account, the default posting behavior is saved for you. We even have the ability nowadays to post anonymously via a checkbox...
  • The real issue is: How secure is it to trust the identity of the user based on his CPU/board ID when someone else could so easily "pretend" to be me by sending my CPU ID all over the net?

    You can't, for precisely the reason you indicate. Anyone considering this information to be an authentic ID is smoking crack.

    Fortunately, this chip isn't about sending your "ID" all over the 'Net. It's about cryptography and digital signatures, which are a bit harder to forge than a simple ID.

    Along the trojan/virus thread, why in the world would somebody write such a virus? The only data this chip would attempt to make available is perhaps the public encryption key, which is designed to be put out into the public anyways. I don't see the big privacy problem here. A legitimate example of a privacy-invading virus would be one that watches the system and constantly reports where the current machine is browsing, what they're doing, what documents they have, etc., but this can be done with or without a cryptography chip such as this.

    I suppose a trojan could use the chip to digitally "sign" something the user didn't intend to sign, but re-read the article: a user PIN (password) is allegedly required to activate this chip. *shrug*..
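
    For the curious, the ID-versus-signature difference is easy to sketch: a bare ID can be replayed, while a signature over a fresh challenge can't. A toy model in Python (assuming the third-party "cryptography" package; this is my own sketch, not IBM's protocol):

        # Verifier sends a random nonce; only the private-key holder can sign it.
        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        nonce = os.urandom(16)                       # fresh challenge per attempt
        sig = key.sign(nonce, padding.PKCS1v15(), hashes.SHA256())
        # verify() raises InvalidSignature if the proof doesn't check out
        key.public_key().verify(sig, nonce, padding.PKCS1v15(), hashes.SHA256())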
  • Why do you think this is proprietary? Don't you think that kind of limits the usefulness of such a chip? I mean what good is a digital signature or encrypted data if only people using an IBM machine with one of these chips can use/decrypt it?

    I think it's a pretty safe bet they're using existing cryptographic systems. An earlier post said they were using RSA algorithms, but I haven't been able to verify that myself.
  • It's not possible to be 100% secure with your data. Period. It's all a matter of "degree". How "secure" do you want to be?

    Sure, this solution is secure, but it's not *as* secure as other, unexportable alternatives. In ten years, "real security" will mean something entirely different. The original poster used the term "real security" to say the key sizes allowed by this chip were inadequate for truly sensitive data. I was simply saying that IBM is not marketing this mechanism to people who regularly handle truly sensitive data.

    Read the article if you haven't already. This is all discussed there.
  • But only IBM, or their designated manufacturers, or people who send a signal to my computer to get my "digital signature", can get at my hardware, excluding me.

    I'm confused. The only thing this chip does is provide encryption and digital signature services to applications. You will need a software-based PIN/password to access these features. I don't see how this allows IBM and its "evil" minions to "get at" your hardware. Am I missing something?

    On another note. Isn't an embedded security device likely to go obsolete pretty rapidly? Then what, we have to buy a whole new motherboard instead of just installing the latest version of the software? That sucks.

    All hardware-based cryptography products will be "obsolete" in short order. Does that mean they can be upgraded? Not without changes in US export laws.

    It's certainly possible this chip is replaceable as cryptography improves in the future.

    How easy would it be to pry the sucker off? ;) Or, I could just not buy an IBM. Yeah, that's the ticket.

    Hey, suit yourself. It's just hardware-based encryption and digital signatures. The same sort of stuff I'm doing with PGP in software today. The only data that can be made public via this chip is your public key, which is something I make an *effort* to make public while I'm using PGP. I really don't see what all of the fuss is about. If you don't want to use it, just don't use it. If you feel like you don't want to buy from them, fine.
  • Guys, if the digital signatures and encryption are done in a proprietary fashion, that will make them incompatible with everything out there that makes use of public/private key cryptography. Not exactly the road to public acceptance, if you ask me.

    Though you're right -- the article is pretty vague, but surely they're using a cryptographic standard.

    Question about reading chip ID's: Are these privileged or un-privileged operations?


    What "ID's" are you talking about? Do you mean the public key? Does this really matter? The whole point about public/private key cryptography is to make the public key as widely known as you need it to be.

    The article explicitly mentions you'd need a software-based PIN/password to access features of this chip, so I don't imagine these services will be available to any application unless you explicitly authorize it.
  • The only thing this chip ever makes available would probably be your public key. The whole concept behind public/private key cryptography is to make the public key publicly available to those you want to communicate with.

    If someone wants to write an evil privacy-invading trojan program that secretly tracks your every move, it's probably in their best interests to use any of the other ID mechanisms already on your machine, like the MAC address, Windows registration codes, e-mail addresses in your e-mail clients, etc., etc.

    Besides, the article explicitly states that you'd need to enter a PIN/password of some form to use features of this chip. Now, I have no idea if it's possible to circumvent this, but you'd think IBM would have done a bit of thinking and planning prior to now, yes? *shrug*..

    In short, the potential for privacy abuse is virtually nil, and it's comparatively zero when held up against other methods for identifying and tracking you that already exist in software and hardware. I don't see any viruses, trojans or rogue software companies out there making use of them, do you?
  • I don't think you quite understand how this chip is supposed to work.

    So everything made on a computer can be traced to that computer.

    This isn't correct at all. The digital signing/encryption process requires the user to enter a PIN/password. The user must *explicitly* make the effort to digitally sign a document or to encrypt data. This isn't something that can just be hidden in the background for malicious or rogue software companies to take advantage of.

    Though to be fair, it's certainly possible that this PIN requirement could be bypassed by a trojan/malicious coder. I'd be interested to hear how IBM plans to keep that from happening.

    Furthermore, what happens when 128-bit keys are no longer secure enough and you need to move to 256-bit keys?

    I believe a previous poster mentioned that this chip was capable of 256-bit encryption and digital signatures up to 1024 bits. Granted, it will be obsolete in several years, but it's more than sufficient for items not of a super-sensitive nature. The article explicitly states that it should be adequate for around 80% of their customers. The remaining 20% apparently have needs for stronger encryption and either won't use this hardware chip, or will use it in conjunction with something else (as the article states).

    Nobody's *requiring* this chip to be used. The whole idea is that the hardware chip completely hides the private key, making it impossible to recover by software (thus exposing data encrypted with it). Yes, it will be obsolete in time. So will existing software solutions. If you don't want to use hardware cryptography, don't. If you don't want to use software cryptography, don't.

    As far as tracking users goes, I can think of much better ways to construct evil programs and trojans that do this job much more effectively and don't require the user to have a motherboard with one of these chips. Privacy and security issues here are minimal at best.
  • Damn I feel like a broken record here..

    the software can be used to track people wherever they go

    A PIN/password is required to activate features of this encryption chip. Thus, encrypting or digitally signing something requires explicit user intervention.

    There is no "ID" that is sent out by evil software. The only thing I can think of that might work in this fashion would be the public key, which is meant to be distributed anyway. If I were writing a trojan or an evil program to track users, I can think of a few better ways of doing this than relying on something only a small percentage of consumers is going to have available (like, say using the MAC address, Windows registration codes, e-mail addresses, etc., etc.)
  • If you had read and understood the article, you would know:

    By placing the private key in *hardware*, it is no longer accessible by software. It is impossible to recover a hardware-based private key via software.

    The only way a hardware-based key can be discovered is if it's cracked. Seeing how distributed.net has been working on cracking the latest 64-bit RC5 key since the latter part of 1997, I don't think we have to worry about these hardware keys being cracked any time soon.
  • What happens if your key gets compromised??

    What makes you think this is possible? By storing the private key in hardware, it becomes impossible to access via software.

    The only way the key could be discovered is by a cracking effort. At 256-bits (as one poster indicated for encryption, and 1024-bits for digital signatures), it's going to take a long time for that to happen.

    How are you going to be able to communicate to the powers that be that your key has changed, and not only that, you could just change your key and all your new transmissions would be unreadable...

    Uhh, the same way that people do it today with software encryption products (like PGP). Just pass out your new public key and stop using the old key pair.

    Better yet, J. Smith over here invents a utility to reflash the chip with an arbitrary "identifier" and people can now pose as you :(...

    You assume that this chip can be "upgraded". It's quite likely that this chip is entirely hardware-based. No "flash" upgrade at all. That would leave it open to the attack you mentioned. The whole idea is to keep the chip completely isolated from software.
  • This is *not* a "CPU ID" chip. It is designed to do hardware-based public/private key encryption. To use these hardware features, you must supply a PIN/password to enable access to your key pairs. Thus, it is an explicit user-initiated process.

    To suggest that web site owners will start requiring people to use these keys is totally absurd. Why in the world would web site owners voluntarily reduce their client base to less than 1% of its current base (those that have machines with these chips)?

    People are using the same arguments they used against the Intel PIII CPU ID thing, when really the two situations aren't alike at all.

    If you don't want to use the encryption offered by the hardware, DON'T. Stick with PGP or whatever other software-based solution you're using today. The only difference is that in the hardware implementation, it becomes impossible for trojans/viruses/malicious programs to steal your private key, the way they could a software PGP key.
  • by Fastolfe ( 1470 )
    It's rather commonplace for people to upgrade their desktop PC's every few years. CPU's change, motherboards change, hard drives change. To tie software to any of these components seems rather stupid to me.

    The only reason this sort of thing worked with older mega-server architectures in the past is because those platforms didn't have the upgrade rate of today's PC's. Plus, even if an upgrade *was* performed, all you usually had to do was contact the software vendor and let them know. A new software key was re-issued in short order.

    With the upgrade rates of today's systems, I can't imagine a software company volunteering to create a staff of people set up to handle the enormous volume of requests for new keys as people upgrade hardware.
  • First of all, why hardware? It's just as easy to implement the crypto in software. And software encryption can be much more flexible, handling larger key sizes for the ultra-paranoid, or forty-bit keys for the clueless.

    The whole point behind using hardware crypto is that it's impossible for software to recover private keys that are stored in hardware. With software-based crypto, there's always the (small) chance a trojan/virus will discover and recover your private encryption keys.

    Finally, why the hell would you do this when there was so much controversy over the PIII ID? I would figure that IBM has some good PR and advertising folks-- how did this one slip out the door?

    Because this crypto chip has nothing to do with ID's. All it does is provide encryption and digital signing services. To use these services, you must provide a PIN to software, which enables the features. It becomes an explicit user-initiated process, not something that can be maliciously hidden in the background.

    The whole point is to allow you to digitally sign and encrypt data. What's the point in building a hardware system if malicious code could digitally sign stuff on its own, without your approval?
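
    Here's a toy software model of that PIN-gated flow as I read the article (my own sketch, certainly not IBM's firmware):

        # No PIN, no signature: signing only works after an explicit unlock.
        import hmac, hashlib

        class PinGatedSigner:
            def __init__(self, pin, key):
                self._pin_hash = hashlib.sha256(pin.encode()).digest()
                self._key = key
                self._unlocked = False

            def unlock(self, pin):
                supplied = hashlib.sha256(pin.encode()).digest()
                if not hmac.compare_digest(supplied, self._pin_hash):
                    raise PermissionError("bad PIN")
                self._unlocked = True

            def sign(self, message):
                if not self._unlocked:
                    raise PermissionError("explicit user unlock required")
                return hmac.new(self._key, message, hashlib.sha256).digest()

        signer = PinGatedSigner("1234", b"device-key")
        signer.unlock("1234")                  # the explicit, user-initiated step
        print(signer.sign(b"order #42").hex())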
  • You assume that these encryption keys are associated with you personally. What you don't realize is that it's very common for secure HTTP sessions and SSH connections to generate new keys all the time.

    So long as the remote end has *some* public key that represents your system, they can verify your messages and validate your signatures.

    The difference between this hardware scheme and existing software schemes is that it's theoretically possible for a malicious program to obtain your private keys stored on your system. It's not possible to do this if these keys are stored in hardware.
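
    The per-session point is easy to demonstrate; here's a sketch using X25519 key agreement from Python's third-party "cryptography" package (a modern stand-in for the ephemeral exchanges SSL/SSH perform):

        # Each connection mints fresh key pairs; no long-lived machine key needed.
        from cryptography.hazmat.primitives.asymmetric import x25519

        client_eph = x25519.X25519PrivateKey.generate()   # new this session
        server_eph = x25519.X25519PrivateKey.generate()

        assert (client_eph.exchange(server_eph.public_key())
                == server_eph.exchange(client_eph.public_key()))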
  • Hell, why not outlaw IP addresses while you're at it.

    These things are necessary for networks to function.

    As far as the hardware encryption chip goes, do a bit more reading and you'll discover that this really isn't something that *needs* to be disabled. The whole "it's another attempt to brand our computers with an ID" argument is just silly. The only thing that this chip does is hardware-based encryption/decryption of data, much like an MPEG decoder card. The only difference is that, for this chip to work, you'd want to publish the public encryption key so people can send you encrypted messages and you can send others encrypted/signed messages of your own. It's NO different than using a software-based encryption solution, except that with hardware, it's impossible for someone to "steal" your private key.
  • The solution IBM provides should be the best possible. This isn't it.

    A previous poster noted that this chip uses a 256-bit key for data encryption. A typical HTTPS/SSL connection uses a 40-bit key (with 128-bits implemented in "unexportable high-security" browsers).

    I don't know about you, but this is more than adequate. IBM says as much and indicates if users need something more secure, they're free to augment this system with things like smart cards and the like.

    If you're so concerned that people are getting a false sense of security with this device, you should be working to warn them about the dangers of secure web sites, which are significantly less "secure".

    If you want to decrypt someone else's encrypted data nowadays, it's to your advantage to somehow gain access to their system and find a way to steal their private key instead of trying to "crack" it. By doing this in hardware, this becomes impossible. This is why hardware encryption schemes are so much more secure than equivalent software ones.
  • Huh?

    Do you have any idea how hardware based encryption products work? The whole reasoning behind hardware-based encryption is *because* it's not possible for software to retrieve the private keys! That's the whole purpose behind doing it. If private keys were somehow able to be retrieved, there'd be no point in doing it with hardware at all, because it has no advantage whatsoever over software solutions. This is a fundamental design requirement, and is EASY to assure.

    I think you're being very rude here. The only reason I was saying 256 bits is plenty for me is because current encryption products use FAR LESS SECURE keys than what this chip is providing. I wasn't suggesting that you be a good boy and be content to use it. If you feel you have data that requires more security, USE A MORE SECURE PRODUCT.
  • I will state that all things (this is stated liberally, I am sure that I am wrong in certain cases) that have to do with hardware can be discerned/extracted using software.

    The reason private keys can't be "pried" from hardware products is because these hardware products provide no mechanism to retrieve the private keys.

    It's the same reason you can't write a program to command an Intel CPU to change colors. The chip simply isn't capable of doing it.

    When constructing something like a cryptographic chip, just build the functions into the chip that you need. You don't want the private key to be exposed, so don't create a "return_private_key" opcode when designing the chip. There are probably things like "return_public_key", "encrypt_text_at_this_memory_address", etc. Unlike software, you can't just write a program to examine the details or inner workings of a piece of hardware. The hardware has to be explicitly programmed to volunteer that data.

    Hardware data encryption has been something pushed for quite a while now. It's not that it's faster or more convenient than software solutions, and *certainly* it's not because the hardware is more adaptable. It's because the hardware version is incapable of allowing the private key to be discovered. Whenever you use software, the public and private keys are stored somewhere on the hard drive in a not-so-cryptographically-secure form. This means it can be found and stolen by a malicious program. That simply isn't possible with hardware solutions.
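
    In code terms, the idea looks something like this (a deliberately simple Python model reusing the hypothetical opcode names above; real hardware enforces the boundary physically, which software alone can't):

        # The only operations that exist are the ones the designer built in.
        class CryptoChip:
            def __init__(self, public_key, private_key):
                self._public = public_key
                self._private = private_key   # no method below ever returns this

            def return_public_key(self):
                return self._public

            def encrypt_text_at_this_memory_address(self, text):
                # stand-in cipher (XOR) just to show the shape of the interface
                return bytes(b ^ self._private[i % len(self._private)]
                             for i, b in enumerate(text))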
  • The keys are stored in the hardware. The article states that in order to access these hardware features, you will have to provide a PIN to the software to gain access to the keys.

    Without this manual step, it would become possible for malicious programs to digitally sign/encrypt things you didn't intend to sign/encrypt.
  • I'm amazed at how many posters on this thread are running on the "it's another CPU ID" gripe when that has no basis in reality.

    It's because of the submitter's "summary" and "michael"'s subsequent editorial. It's obvious he didn't read the article. He just saw the "CPU ID" phrase and went ballistic, like so many uninformed privacy nuts that post here regularly.

    I really wish the Slashdot authors would try to be a little less biased when it comes to the articles they post here. Slashdot has become MUCH too editorialized, which wouldn't necessarily be *all* that bad, except THEY DON'T DO A GOOD JOB EVEN AT THAT. They base their editorial comments and slurs on stupid/uninformed assumptions based on little/no information. As much as I love Slashdot, it will never be a true journalistic site until it can replace its poorer "authors."
  • The reason businesses adopting this model will not succeed is precisely the same reason we don't see copies of a program functionally equivalent to "MS Paint" being sold for $1000.

    So long as there's competition in the marketplace, it would be insane for a company to start tying all of their software to specific systems just so they can force their customers to keep buying licenses as their hardware is upgraded. There will always be competition.

    This simply doesn't make good business sense for typical consumer software.

    Even for high-end software that, even today, is written and tied to specific systems, usually all you have to do is call them up and say "We're moving this software to another system, so we need another license key." The main reason they use those hardware dependencies today isn't to foil their customers. Rather, it's to keep copies of the software from appearing on warez newsgroups and being distributed to friends and sold illegally overseas, that sort of thing. It's usually quite easy to get a replacement license in these cases if/when you move the software to another piece of hardware.
  • With a hardware encryption scheme (note: *not* ID), it's *impossible* for someone to "steal" your key and use your identity. That's the whole point behind hardware encryption. The private key is stored in hardware and is totally inaccessible to software. This is the most fundamental reason hardware encryption exists.
  • by Fastolfe ( 1470 ) on Monday September 27, 1999 @08:36AM (#1656461)
    This chip isn't being marketed at all as any "real" security solution. The article explicitly states this. In the event a consumer needs a more secure solution, IBM has add-ons and other products to suit them. The cryptography, they say, should be adequate for 80% of their customers. I agree.
  • by Fastolfe ( 1470 ) on Monday September 27, 1999 @10:15AM (#1656462)
    Dude, I don't know what sort of top secret information you're planning on distributing, but 256 bits is plenty for me. If this encryption is honestly inadequate for your needs, I'd seriously suggest that you lock your computer in a safe someplace and never ever connect it to any form of computer network. Hell, you might want to dip the hard drive into some molten lead and throw it into the middle of the Atlantic if you're that worried.

    HTTPS (SSL) predominantly uses 40-bit encryption. "High security" versions of the same thing run at 128 bits. The last I checked, the "default" PGP key length wasn't anywhere *near* the 1024 bits which this chip supports.

    Again, it's all a matter of *degree*. True, there is software out there that uses key lengths a lot longer than what this chip offers, but you won't find that software in mainstream browsers and e-mail clients, which means it's useless to normal people.

    Additionally, you seem to forget the whole purpose of moving encryption into hardware: It's impossible to recover the private key via software. Today it's theoretically possible for a trojan or other malicious programs to snoop around your hard drive, find your software-based PGP private key ring, and from there, somehow recover the private key. This is not possible with hardware-based encryption, hence its attractiveness.
  • by Fastolfe ( 1470 ) on Monday September 27, 1999 @08:33AM (#1656463)
    If there is a backdoor...

    *IF* there is a backdoor. Somehow I doubt that such a back door exists. There's always the possibility that a back door will be discovered (and it's almost a guaranteed certainty, given enough time). If one is found, IBM will be nailed with lawsuits up the ass, criminal proceedings, you name it.

    It doesn't make good business sense.

    You know, it's certainly possible (I mean technologically, obviously) for the government to sneak in a hidden backdoor in Microsoft Windows. Does that mean we should ban and legislate Windows into extinction? It's also possible that they've secretly placed a backdoor in the operating systems that run on our Internet's routers. Quick! Ban the Internet!

    Yes, each chip has a public key. If you don't want that public key given out, don't use software that makes use of it. Period.

    I occasionally make use of a software-based PGP implementation, but you don't see me scrambling to hide my public key from people.

    Remember: Multi-user systems are pretty commonplace nowadays (NT, Unix, even Windows-based workstations). It makes absolutely NO sense whatsoever to suddenly convert all programs so that they use this hardware-based encryption scheme over a user-defined one.
  • Although this obviously has many privacy concerns, I'm more interested in the security aspects of it. Based on the comments by Ms. Gardner, the IBM rep interviewed, that appears to be their main focus, too: they're interested in making <buzzword>E-commerce</buzzword> more secure. But they're going about it the wrong way (IMHO): see below.

    ``People from outside (of your organization) can get at your software,'' said Anne Gardner, general manager of desktop systems for IBM. ``People from the outside can't get to your hardware.''

    So there will probably not be a software flash-upgrade for this chip or anything like that: after all, if it can be software-upgraded, it can be cracked: witness the recent virus (forget its name) that wiped your BIOS chip if you had a Flash-BIOS capable motherboard and chip. So the only way to upgrade this thing will be to replace the chip -- and it'll likely be soldered onto the motherboard.

    ``We want this to become an industry standard,'' IBM's Gardner said. ``We want this on as many desktops as possible.''

    Which means that if they get their wish, people who build <buzzword>E-commerce</buzzword> sites will start to rely on their customers having PC's with the chip installed.

    The features of the security chip include key encryption, which encodes text messages,

    What key length? Is it upgradeable? Considering the "can't get at it with software" statement above, probably not. So either it will have export-grade encryption (weak and insufficient, as most /. readers well know) or the U.S. government will restrict its export from the U.S. Furthermore, what happens when 128-bit keys are no longer secure enough and you need to move to 256-bit keys? Whoops, sorry, can't just get a software upgrade, you need a new computer. More lock-the-consumer-into-the-upgrade-cycle stuff here, even if it's not intentional (and it very well may be intentional).

    and ``digital signatures,'' which act as unique ``watermarks'' that identify the sender of the document.

    So everything made on a computer can be traced to that computer. Just like typewriters in the olden days (I seem to recall a few detective stories based on that fact). Great -- could be useful in some circumstances; law enforcement would love that, for example. This is where the privacy issues (which I'm not discussing here) come in. BUT this just identifies machines and is useless for identifying people. It will almost certainly, however, be misused for identifying people by what computer they use. What happens when (not if) Joe L. User sits down at one of the public-access PCs at his local library to surf the web, sees a cool "web shopping" site and registers as a customer? Assuming the site uses the chip ID the way IBM seems to be suggesting here, it will send Joe's computer (which is actually the library's) a digital certificate for Joe to make it "easier" for him to shop there since next time he won't even have to log in. Joe likes this, of course: it makes things easier for him. So Joe orders a few things and leaves. (Log out? What's dead trees got to do with things, anyway?) Now Carl Cracker comes along, uses the same computer at the library, and checks the Netscape history to see what he can find. He finds Joe's recent visit to the <buzzword>E-commerce</buzzword> site, checks it out, and sure enough, Joe didn't log out. So he visits the site and their software thinks he's Joe. He orders a bunch of stuff and charges it all to Joe.

    Plausible scenario? You bet. Could <buzzword>E-commerce</buzzword> site designers be so clueless as to use a mechanism designed for computer identification to identify people? No doubt about it.

    The real solution to the <buzzword>E-commerce</buzzword> security issue is software. Ubiquitous, open-source, peer-reviewed software. Like, say, PGP (International version) [pgpi.org], or GNU Privacy Guard [gnupg.org], or SSLeay [uq.edu.au]. The hard part is that "ubiquitous" bit. You want real security? Here's how: Convince your boss to go open-source on the security aspects of the company's new <buzzword>E-commerce</buzzword> site. Read the Linux Advocacy mini-HOWTO [linuxdoc.org] first, then point out the advantages of using PGP or GnuPG or SSLeay rather than a proprietary solution. It'll be a hard sell, but stick with it. If everyone works at this, we'll eventually achieve the "ubiquitous" part.
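
    In practice that's just a handful of commands; for example, driving GnuPG from a script (filenames and the recipient address here are placeholders):

        # Encrypt, sign, and verify with GnuPG via subprocess (illustrative).
        import subprocess

        subprocess.run(["gpg", "--encrypt", "--recipient", "alice@example.com",
                        "order.txt"], check=True)
        subprocess.run(["gpg", "--detach-sign", "order.txt"], check=True)   # order.txt.sig
        subprocess.run(["gpg", "--verify", "order.txt.sig", "order.txt"], check=True)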

    The solution is out there, folks. Let's go implement it.
    -----
    New E-mail address! If I'm in your address book, please update it.

  • You need this to be handled by a trusted and independent non-government organization that is chartered with the sole purpose of retrieving stolen PCs, nothing else.

    Yeah, but you know it will end up being handled by the Bureau of Alcohol, Tobacco, PCs, and Firearms.

    There are already private companies that do this - if the criminal doesn't have the sense to blank the hard drive, a little program will phone in to the central office the first time he goes online. It's a voluntary system for those who wish to trade off a bit of privacy (and a bit of cash) for an improved chance of recovering stolen property. It only works because most criminals don't know about it yet.
  • But I can unplug my ethernet card and install another one, or maybe (depending on model) re-program the MAC in my existing card. Everyone knows this, so nobody makes any claim that my MAC address identifies *ME* personally in any business transaction across the net.

    The MAC address is used for node-locking certain types of software (the kind of software that costs more than your computer did, and where the salescreature gets a free trip to Hawaii if you buy the "gold" support package).

    The noise about the Intel ID was not that it existed, but that Intel planned to use it in a very silly and dangerous manner.
  • First of all, I found the article to be a bit scant on detail, so for all I know we're all misinterpreting this. But assuming that there is a unique id sort of thing in the hardware, as with the pIII:

    The thing about this is that if it works and becomes ubiquitous, having the source to your OS won't help. You'll start noticing that web sites require you to submit this ID, and that software has access to it in order to take advantage of certain "features". So, in order to make sure that linux/oss software can take advantage of these "features", support for this ID will be programmed in. Sure, you can choose not to use it, but when everybody else is using it, it could quickly become impossible to get by without it.

    You might be able to spoof it, but the people that write the web pages (or whatever) that use it will find ways around this. They could restrict page views to 100x per ID, for instance, so people couldn't all use the same ID. (I know, so make it random -- that might work. But then things devolve into a hack war, like the aol/m$ instant messaging war.)
  • Where do you get that a user has to submit a key? My understanding was that there would be a key defined in the chip that would be automatically used. If I were a big, consumer-oriented corporation, that's the way I'd do it -- anything else would involve a lot of things like user education & convincing programmers to go to a lot more work to implement it (e.g., instead of calling a routine to grab the public key or encrypt something, call a widget to grab key info from the user). Frankly, if this is in fact public-key stuff, then the public key itself serves as a unique identifier no matter how you slice it.

    Don't get me wrong, you may be right -- I just don't know.
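
    Either way, the identifier point is easy to make concrete: a public key, or a hash of it (a fingerprint), is stable and unique whether or not anyone calls it an "ID". A Python sketch:

        # Same key in -> same fingerprint out, every time, for anyone.
        import hashlib

        public_key_bytes = b"...the key, however it was obtained..."
        print(hashlib.sha1(public_key_bytes).hexdigest())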
  • IBM's thing sounds weird and worrisome, but give it a rest with the Pentium ID thing. Intel is a 600lb gorilla, and I'm not a fan of theirs in any way (I own an AMD box, and am planning on a Cyrix palmtop). But this Pentium ID thing is just a goofy windmill to go tilting at.

    If you don't like ID's on CPU's then I hope you avoid SPARCs. AFAIK most server oriented processors have ID's. Not for tracking on the net (which is just a moronic and insecure thing to do), but for node-locking an application.

    Think of it, an application on a web server asks for your CPUID. It gets the answer across the net - how does it know where it came from?!

    Sheesh! Give it a rest.
  • Then what do you call that thing over off of San Thomas here in Silicon Valley?

    Yeah right, no factories - sure.....
  • by ajf ( 7321 )
    To tie software to any of these components seems rather stupid to me.

    That's because you're not someone selling the software that needs to be replaced after an upgrade...

  • by Signal 11 ( 7608 ) on Monday September 27, 1999 @06:52AM (#1656472)
    You know, I'm reminded of a quote "Anything done by a man can be undone by a man". Witness software piracy.. witness the crypto community... witness our own [ Free software | open source ] communities reverse-engineering proprietary and highly guarded Microsoft protocols (Samba, DAV, etc).

    How arrogant of IBM to assume the subversive element of our society won't abuse this new privacy-invading 'feature'. What's worse.. they're actually encouraging the very thing this ID feature was supposed to stop - fraud!

    To use an old, but good, example - if you don't have a secure channel with another person, you probably aren't going to be tempted to communicate sensitive information over it. But.. if you think you have a secure channel with another party.. you may be more willing to divulge sensitive information. The key word here is think. If that channel isn't secure.. you're exposing yourself to more risk than if it didn't exist at all! It defeated the very reason it was created - security. The use of this chip holds a similar analogy - if it is used for verification, then anybody who can defeat it can masquerade as anybody relying on it as a method of authentication. In short.. the barn door is wide open.

    So privacy nuts... I suggest you adopt this approach instead - crack this scheme as fast as you can! Defeat it before people start relying on it - and issue a joint statement on why this is such a bad idea.

    --

  • This is going to be a huge, huge, market. The music companies are the first ones to experience this. Hence, hardware companies are building end-to-end security. All of this is nicely outlined in the whitepaper that Entrust [entrust.com] (investment from NatWest and everyone else in the money business) used to have online (they've investorified all their documentation into PDF). They say that you need a TCB (trusted computing base) to process the containers and my guess is that IBM is doing just that. It will be possible to hack this and Entrust even says so, but they don't worry, because legal steps are taken to make things "safe for business".
  • Way back when... I worked with some commercial software (Jane's Ships...) that was licensed to the ROM on the Sun system it was running on. Have to change that mainboard? Put it on another system? It stops working...
  • It's already started. Hundreds of Slashdot readers are already becoming even more paranoid when there is no threat. The PIII's ID can be disabled, and someone will disable IBM's if they don't allow it as an option. Or you can simply not buy an IBM machine with this feature. Or you can wake up and realise that until something like this is standardized for all computers, no one will really care. I highly doubt every web site and other servers are going to start tracking PIII ID's anytime soon. Why? Because probably 10% or so of the computers out there on the internet may have a PIII. And a fraction of those may have the ID on. So why go to all the trouble to try to track a nonstandard ID? So all those paranoid readers preparing to boycott IBM just might want to sit back down.

    -----
  • by cswiii ( 11061 )
    from the welcome-to-our-website-customer-ID-128723598756 dept.

    ...That's my customer ID!
  • One of the reasons the CPU ID was such a hassle for Intel was that Intel made a complete fool of itself by asserting that the CPU ID feature could be easily controlled by the user. This article, however, makes no mention of a CPU ID, but rather a specialized chip for encryption/decryption and the handling of digital signatures. It doesn't sound like the same issue, but then again, the comments from IBM could be nothing more than a carefully-constructed PR piece.

    That having been said, there is at least one issue - if the encryption/decryption is handled in firmware, will it mandate a limited key length? While I don't want to sound like a whacko conspiracy theorist, having an ability for limited encryption built into a system targeted at the mass market could give the government most of the control it needs over encrypted material.

  • Read the two articles for yourself. No mention is made of them having the keys in them - for all we know they could just be hardware implementations of RSA/DH/whatever.

    It's no secret that public/private key operations are slow, even today. Without special hardware, you can't get an SSL web server to keep up with a very heavy load at all. If you imagine a future where even clients may be doing dozens of these operations a second, then having such a chip in every PC would be useful.
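
    A rough way to see the cost for yourself (Python with the third-party "cryptography" package; numbers vary wildly by machine, so this is purely illustrative):

        # Private-key operations are the expensive part a server repeats per handshake.
        import time
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        start = time.perf_counter()
        for _ in range(100):
            key.sign(b"x", padding.PKCS1v15(), hashes.SHA256())
        print(f"{100 / (time.perf_counter() - start):.0f} sign ops/sec in software")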

    Unfortunately, there currently isn't enough information to really know what is going on with this chip, so at least let's not jump to conclusions and burn IBM at the stake now...

    (side note: as others have said, if you don't want a unique ID in your computer, you better get rid of that ethernet card...)

  • The notion of basing security on some piece of hardware associated with the CPU or motherboard is intrinsically flawed. I don't want my machine to authenticate as me: the identity of my machine doesn't matter. The machine may get sold or stolen or used by someone else.

    Hardware-based authentication and security tokens should be based on something portable, and that portable device needs to have enough compute power to implement something like zero-knowledge proofs. SmartCards fit the bill, and they are cheap. Keyboards should have SmartCard readers, and standard cryptographic methods allow secure transactions to be executed with SmartCards even over untrusted machines.

    At best, the computer itself could benefit from hardware encryption that doesn't carry a key, in order to speed up throughput for encrypted data streams. But in the current political climate, putting hardware-based encryption into a PC is futile, since, according to US laws, it cannot be secure anyway.

    Of course, e-commerce companies don't like SmartCards because, oh my, the consumer can remove them when they don't want to buy anything and don't want to get tracked. ID chips tied to the CPU or motherboard are great: the kids can order, the software can be used to track people wherever they go, and there is little most people can do about it if they run standard software like Windows.

    If IBM wants to drive secure e-commerce, they should be shipping computers with SmartCard enabled keyboards.

  • I thought I read somewhere (no, I can't remember where) that this chip was just a random number generator in hardware. Which would theoretically be much more secure than one in software, because it could incorporate environmental variables that software can't access... If that's the case, then it's a good thing, so long as it's free for others to implement.
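
    From the consumer side the plumbing would look the same either way; a hardware source would just feed better raw entropy into the pool that something like this draws from (Python sketch):

        # The OS entropy pool already mixes environmental noise; a hardware
        # RNG improves the raw input to it.
        import os
        print(os.urandom(32).hex())   # 256 bits of keying material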

    If the chip is a new ID, it's a huge waste of effort now that every Intel CPU has an ID, every ethernet adapter has a MAC address, and every PC sold (through "legal" means) has a unique windows serial number (i know i know i know, use linux... just as soon as (fill in the blank) is ported! :) IBM predominantly uses Intel chips, so what justification could they give for making a new ID?

    Responding to a comment above, I know I don't know where my link is, but do you have a link to where it says this chip implements 256-bit RSA??? I find it very hard to believe that IBM would be shortsighted enough to use that.
  • by jabber ( 13196 ) on Monday September 27, 1999 @07:15AM (#1656481) Homepage
    What I think we're seeing here is the difference between two philosophies.

    The geeks seem to hold fast to the belief that you cannot expect differing results from the same behaviour. We've seen the Intel precedent, and the result, and so we're expecting (reasonably) that the same actions by IBM (X) will have the same outcome (Y). Next time, when a new value of X is fed into the function, the same value of Y will pop out the other end.

    On the other hand, it looks like the corporations see it as: the squeaky wheel gets the grease. Intel took the brunt of the opposition to the concept. Now IBM has picked up the gauntlet and is trying to run with it. Public opinion has been tested, and now the news is old. There is less likely to be as much opposition to the idea now, since it's not 'sexy' anymore. And if enough large companies reach consensus on this, the customer is likely to simply believe, or give in assuming they can't win. Intel, IBM, any X, will keep chipping away at the issue until the wall gives way.

    Eventually, what this will become is a matter of will. We have already made clear the reasons why this is not a good idea. We see it as a solved problem - how many times can you run through the same process until it becomes too tedious, and we move on? Intel was shown to be wrong and has backed down (a little). Now IBM put a new spin on an old hat. Eventually, one side will get tired, and it's likely to be the side that has less PR money.

    Eventually we will get tired of voicing the same objections. The customers and the public-at-large will get tired of hearing the same arguments. The right legislator will get greased, and it will come into being.
  • Yes, but if I were to steal someone's computer, all I would have to do is throw out the NIC and buy my own (for 15 bucks or less, whatever). Putting a trackable serial number on the processor makes my machine that much more secure against theft, because sure, they can take out my CPU and throw it away and buy a new one, but the CPU is the main cost of the computer in the first place...
  • by BeBoxer ( 14448 ) on Monday September 27, 1999 @07:37AM (#1656483)
    The linked article never mentions a serial number à la Pentium III. Never. Not once. What it does say is that the IBM PC's will include a chip which performs some public-key encryption routines. Specifically, it will perform digital signatures. Now, how exactly is that an invasion of your privacy?

    I'm amazed at how many posters on this thread are running on the "it's another CPU ID" gripe when that has no basis in reality. Besides, these PC's will probably ship with P-III's, and why reinvent the wheel ;-)>

    To quote from the C|Net story about this:
    ------quote on--------
    Big Blue, taking a lesson from Intel's blunder, worked with privacy groups, such as the Center for Democracy and Technology, on implementing the security chip.

    "We found we could create a solution that does not create additional privacy concern, but built on a good security base and lets the user be the ultimate decision-maker," said Hester.
    ------quote off-----------

    While it's true that the devil is in the details, and we don't know a lot about how this will be implemented, I have a hard time seeing how this is a bad thing. Unlike the PIII ID feature, which provides no security at all for the user, this has the potential to provide a lot of security for the user. The reality is that encryption-based digital signature techniques, which this chip will help enable, are the only way to protect people from identity theft online.

    The big question is how available the documentation is going to be. If it will be possible to write linux drivers and (say for example) allow GPG to perform RSA using licensed hardware, that seems like it could be a good thing. Depending on what the API looks like for this thing, it may be possible to turn around the "strong" signature capability into a "strong" encryption engine. Now that would be cool...
  • However, it is *possible* to use it to prove that your computer, specifically, is a party to a transaction.

    And right there's where this number breaks down completely for e-commerce. When I run a transaction, I need to prove to the other end that I am involved. If I go to another computer, I want to authenticate as me. If someone else sits down at my computer, I do not want the computer to authenticate as if I were sitting there.

    And if you think being at a different machine isn't a problem, bear in mind that right now I regularly use 4 different machines. 1 of those is used by several other people when I'm not using it, and another is used by about 75 other people simultaneously.

  • Here's what I'm saying: your bank should verify both *your* identity, and *your machine's* identity, before acknowledging requests to access your account.

    Do they need to verify which phone in your house you're using to place a telephone order once they've verified that you are really you? I don't see why, and I don't see why they should need to verify which computer I'm using if they can securely verify my identity independent of which computer I'm at. If the machine I'm using is irrelevant to who I am, then it shouldn't be checked. If my identity is subject to forgery, then improve the method used to verify my identity and close the hole rather than trying to limit the number of places someone can exploit the hole.

  • You're being naive. How do they know how to verify your identity? Well, right now, you enter a username and a password on a web page.

    You're talking on-line banking. I'm talking about over-the-phone ordering in the real world.

    Applying the principle of least privilege to computers on the net, it is clear to see that only the ones you use should be able to issue orders regarding your bank accounts. A computer that I use and that you don't use shouldn't have that capability.

    You haven't used least-privilege, though. Least privilege would mean that the machine I'm sitting at at the moment and no other should be able to issue orders to my bank. If I use the machine and am not sitting at it, that I use it is irrelevant. If I have never used a machine before but am sitting at it, that I have never used it before is irrelevant. Access should follow me and not the machine in any way, so the check should be against me and not the machine in any way.

    My opinion: if a check doesn't actually add to security, it should not be done. The identity of the machine is completely unrelated to my identity, so when trying to verify that it's me running a transaction you should not be concerned about the identity of the machine. If your method of identifying me is weak enough to need additional verification based on my being where I'm expected to be, then your scheme is too weak and needs improving, not papering over with irrelevant checks.

  • Ok, modems are exempt. Still...

    I don't recall ever being *without* some sort of ID.

    And honestly, I've given away so much to online registrations at this point that there's really not much point trying to hide now. I like my nickname too much to change it and re-do all my accounts, so I guess until I next shift houses and don't forward my snail mail, or drop my email address and get a new one, I'm skunked.

    Having seen some of the ins and outs of the legal system, I can say I'd *rather* be tracked when doing something *legal*, than *anonymous* when doing something *illegal*.

    Where did this "Privacy Is The Be All And End All" mind set come from? My mom and dad used to be able to hear me with my girlfriends at night... they had the good taste not to mention anything. I'm sure most people *don't* snoop.

    mindslip

  • There's a serious security issue here but I don't think it's the one you're describing.

    The chip obviously needs to have access to your private keys. This means we have no proof that it isn't burning every private key it sees into flash memory for future recovery. You might be better off using an open source crypto implementation you can investigate with a processor that doesn't really know which bits of data are keys (your CPU).

    However, the problem of unique chip identification seems to me to be a non-issue... What makes you think the chip will use factory hardcoded keys? Don't you think it's more likely that it will use user supplied keys issued by public certificate authorities like everything else does (except PGP, which doesn't use CAs but still doesn't use hardcoded keys)?
  • The article stated that IBM probably would want the system to be "ubiquitous" and therefore slapped on every motherboard in existence. Yeah right. There were a number of hardware vendors willing to dyke out the PSN from the BIOS when the P3 squabble came about. There were even ones that were completely against the idea in the first place.

    So what if (I'll use my fave vendor in this example) FIC refuses to put this "ubiquitous" chip in their motherboards. What if VIA thinks this is a stupid idea from so-called industry leader IBM, and declines to support it in the chipsets. Then what?

    The chip dies and nobody's going to care!! Why? Since when have you seen anywhere that the PSN is required to complete a transaction? Nowhere! Retailers aren't stupid; they know that by supporting legacy hardware they get more customers. If just one slightly big-name vendor refuses to support the chip, the whole system goes under. As the system propagates over the years, FIC/VIA's motherboards come to make up a huge userbase of people who don't have the chip. So when it gets to be somewhat reasonable to assume people have the chip, you have a couple hundred thousand or million users who will be left out. Will retailers lose that many customers? Heck no! They aren't going to tell potential e-commerce customers "To use this site, you must replace your motherboard". That would be a HUGE turnoff.

    Until there is a universally accepted (by EVERY vendor) standard for unique IDs (I pray to God that doesn't happen, but MAC addresses are already here...), this idea will never fly.

    And don't forget, it's not illegal to make your own programmable ID chip, is it? If it was, this topic would be moot.

  • Ya know, I can only believe this is pure flamebait, so I'm an idiot for responding, but your description shows zero knowledge of how digital signatures work.

    Well, perhaps you are an idiot, but I do know how asymmetric cryptography works.

    Did it ever occur to you that this chip may implement the algorithms for key generation, message signing, and encryption, while the keys themselves get stored on disk, and fed to the chip using device drivers?

    As I said, like "PGP on a chip". Did you read my post at all?

    No, I do not know how this chip from IBM works, but neither do you, as far as I can tell. Meanwhile, you and a bunch of other people are doing the headless-chicken scene, which never helps.
  • by DragonHawk ( 21256 ) on Monday September 27, 1999 @07:04AM (#1656491) Homepage Journal
    Has anybody tried reading the article?

    The features of the security chip include key encryption, which encodes text messages, and "digital signatures", which act as unique "watermarks" that identify the sender of the document.

    Where in that sentence does it say there is a unique ID embedded in each and every chip? To me, it sounds more like IBM is marketing a hardware-driven security engine, a "PGP on a chip", if you will. I do not see how this translates to a unique serial number on each and every chip.

    (Whether you want to trust IBM's security implementation is another matter entirely.)

    What does this have to do with My Rights Online? If every hardware crypto product on the market is a violation of the First Amendment to the US Constitution, Slashdot is going to become awful darn cluttered.

    When I first read about YRO, I thought it seemed like a good idea. The Internet is a new medium in many ways, and I do not want the government panicking and trying to restrict it. However, YRO seems less about keeping a sensible eye on things and more about paranoid sensationalism, written by anarchists who think that all laws must be bad, all corporations must be bad, everything not invented here must be bad, ahhhhhhhhhhh!

    Even if there is a unique ID embedded in this chip, so what? A Unique ID for each computer can be a useful thing. For example, if you are trying to implement property control in a large organization, an electronic serial number would be a Godsend.

    The problem with Intel's serial number was twofold: First, they were marketing it for "secure online transactions", something which it is not appropriate for, and second, they tried to smuggle it into every system made, turned on by default. That is not good at all. But there is zero evidence that this scenario is even possible with IBM's chip, let alone going to happen.

    Please. Keep your head. Do not react first and then stop to think, or you are just as guilty as the government for panicking when something new comes along.

    (And before you tell me "Nobody is forcing you to read YRO": There is this thing called feedback...)


  • by Kaa ( 21510 ) on Monday September 27, 1999 @07:12AM (#1656492) Homepage
    I don't like this idea at all, and if one of my future computers has such a chip inside, I'll take major measures (soldering iron included) to make it not perform as intended. However, I'm not blind and can see the writing on the wall. Hardware authentication makes too much sense to be ignored. Given all the security scares (real and imagined), the government and corporations will want reassurances of security, and a hardware solution will appeal (with reason) to them. Besides, I don't really object to hardware authentication on, say, my office box. Not that it can successfully pretend it is something else anyway... :> But as to my home machine: not bloody likely I'll install this thing willingly.

    For my fellow paranoids (we know who you are!): keep in mind that all Ethernet devices, including the NIC in your machine, already have a globally unique identifier -- the MAC address.

    Kaa
  • If this is for digital signatures, what happens when I replace a machine? When I am at another machine?
    Wouldn't a better way be a smart card and thumbprint reader in one that hands this off to the software? That prevents theft of the card (at least without taking the thumb).
    So, again, why?

    --jeff

    The optimist thinks this is the best of all possible worlds.
    The pessimist fears it is true.
    --Robert Oppenheimer
  • The MAC address does not get sent out over the internet, just over your Ethernet LAN. The only MAC address the web server is going to receive is the MAC address of the router that gave it the packet from the internet.
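
    (A quick way to convince yourself of this, assuming a Unix box with tcpdump installed: the -e flag prints the link-level headers, and every packet arriving from the outside world will show your router's MAC as the source address, never the remote host's.)

    # tcpdump -e -n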
  • by knarf ( 34928 )
    In these cases, that serial number is stamped physically on the case of the item. AFAIK, your average HD isn't able to report its SN. Or can it? Maybe I've missed some advancement in the hardware industry here....

    Well, it depends on what you call `reporting', but modern drives CAN produce their serial numbers on request:

    [r00t@yourbox]# hdparm -i /dev/hda

    /dev/hda:

    Model=Maxtor 90840D6, FwRev=WAS8283C, SerialNo=K60AV2XA  <-- serial number
    [stuff deleted...]

  • Unless Microsoft embeds it into every Word document you write. (For those who use Word, it does in fact do this. You can download a utility from Microsoft to remove the marks in the files on your disk, but I believe it still adds the MAC address to all new files.)
  • I've seen IBM Aptiva computers which have unique serial numbers hardcoded into the BIOS of the system. In addition, they have the ability to burn the system password into the motherboard (no, you can't remove it... "permanently burned" is more like it). I imagine this will simply allow you to interrogate the number over the web rather than locally, as it is now.
  • You can spoof MAC addresses, and they're not unique. My MAC address is actually changed directly via the Linux /proc filesystem; see linux.com for full details on how to do this.
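
    (A sketch of the usual route, assuming Linux with net-tools and a NIC driver that allows it; the address below is invented:)

    # ifconfig eth0 down
    # ifconfig eth0 hw ether 00:12:34:56:78:9A
    # ifconfig eth0 up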
  • No, you dumb shit. MAC addresses are NOT UNIQUE. YOU CAN CHANGE THEM WITH SOFTWARE. At least READ, people... there are a million posts above telling you the same thing.
  • I agree this sounds like an encryption chip, but I have to ask then, "What's the point?" I mean, where is the benefit of a hardware implementation versus a run-of-the-mill software implementation?

  • This chip isn't being marketed at all as any "real" security solution. The article explicitly states this. In the event a consumer needs a more secure solution, IBM has add-ons and other products to suit them. The cryptography, they say, should be adequate for 80% of their customers. I agree.

    Why shouldn't a customer expect a "'real' security solution" to be "adequate"? Put another way - why bother with security if it is, in fact, not "real" security?

    This "solution" just leads to a false sense of security. Furthermore, it leads to confusion and sensationalism when that false security is shattered by a compromise.

  • It's not possible to be 100% secure with your data. Period. It's all a matter of "degree". How "secure" do you want to be?
    Agreed. Security is more an exercise in risk management than an absolute. The goal is to lower the risk as much as possible within reasonable constraints as set by the environment. That environment may involve, for example, security vs. functionality or security vs. cost.

    Sure, this solution is secure, but it's not *as* secure as other, unexportable alternatives. In ten years, "real security" will mean something entirely different. The original poster used the term "real security" to say that the key sizes allowed by this chip were inadequate for truly sensitive data. I was simply saying that IBM is not marketing this mechanism to people who regularly make use of truly sensitive data.

    The "cost" difference between exportable and "real" security is pretty close to nothing in functionality. That is, the two implementations do not differ in cost to produce or in functionality. The "cost" is US law. So what we end up with is an inferior system pushed out to the public as a "solution" for their data. The trouble is, it shouldn't matter if that data is trade secrets, credit card data, or Aunt Nellie's secret chocolate chip cookie recipe. The solution IBM provides should be the best possible. This isn't it.

    Instead of providing "real" security, IBM is providing a false sense of security. Your Average Joe doesn't understand encryption. They'll read about this "secure" solution IBM is providing them and they'll use it. They'll feel secure. They aren't. If the worst happens and their data is compromised, they'll feel shocked, violated, and vulnerable. Those evil hackers have managed to defeat even IBM! Even if the worst doesn't happen, Average Joe will skip along happy and "secure", and the demand on the US Gov't to drop their artificial anti-export laws will never manifest in the general public.

    IBM is not providing a solution. They're providing marketing fluff.

  • Where did this "Privacy Is The Be All And End All" mind set come from? My mom and dad used to be able to hear me with my girlfriends at night... they had the good taste not to mention anything. I'm sure most people *don't* snoop.

    Information is power. Today, that statement is more true than ever before. Entire companies are built on information. No other products; no widgets, no foodstuffs... just information. Therefore, anything and everything a company can record about you is worth money... to someone. And they will record it. Even if it has no use today, tomorrow it might be invaluable. And every step is an invasion of your privacy.

    I'm sure you can trust your parents. And I'm sure there are a lot of other considerate, non-snooping people out there. However, I can't say the same for corporations. If there is value, they will snoop. And information warehouses have already shown a complete disregard for privacy and for safeguarding the information they sell.

    Identity theft was science fiction in the past. Now it's a real problem. If databases of personal information didn't exist, or were at least better guarded, the problem wouldn't exist. But it does. And many advances in data technology simply add to the ease of generating these databases. This is why we SHOULD be aware of our privacy.

    Where did this "Privacy Is The Be All And End All" mind set come from? It's a sign of the times.

  • So, next time you buy something, realize that that serial number is being tied to your sales receipt, which is tied to an invoice for a distributor, which is tied to the manufacturer.

    In these cases, that serial number is stamped physically on the case of the item. AFAIK, your average HD isn't able to report its SN. Or can it? Maybe I've missed some advancement in the hardware industry here....

  • The implication here scares me:

    Unlike previous security measures that rely on software ''firewalls'' that filter out unauthorized users of information, IBM has developed a security chip embedded within the computer hardware, which, it says, adds additional levels of security.

    Now, the suits *may* ask, "Why do we need that pricey firewall when IBM's got this hardware security solution? We could standardize on that."

    Hopefully, the smarter sysadmins will respond to the sentiment with, "Well, yes, I suppose we could dump the firewall and rely only on an untested hardware chip and our desktop operating system's inherent security."

    Please. Well-tuned firewalls, carefully administered networks, and attentive sysadmins are going to do a lot more than any ID chip.

    As for identifying users and protecting digital documents, the pre-existing software solutions are well tested. GPG and PGP are the best examples. What's more, they've already got the support of nearly everyone.

    End rant

  • All of you are being ridiculous. Ever used a Sun box? *GASP* they have a hardwired host identifier built into them!!! AND OHMYGOD YOU CAN TYPE A COMMAND AND SEE THE IDENTIFIER!!!! OHMYGOD YOUR ETHERNET HAS A HARDWARE IDENTIFIER TOO!!!!
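
    (On Solaris, and on Linux for that matter, it is literally one command; the output below is invented:)

    $ hostid
    72a8c0f1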

    This kind of paranoia only matters if you're using a browser/app that will send back that identifier on request. I'm going to doubt that Netscape will, I'd be pretty assured that MSIE will, and I'm *positive* people will come up with ActiveX tools to get that host identifier. And the BIGGEST thing is that it will probably only adversely affect Windows users, because they can't get source code to their OS...
  • Ya know, I can only believe this is pure flamebait, so I'm an idiot for responding, but your description shows zero knowledge of how digital signatures work.


    Digital signatures, on the whole, use public and private keys. These public and private keys are unique numbers, somewhere on the order of a few hundred digits long (usually). In order for a packet to be signed, the signer must have access to the private key. In order for a packet to be verified, the receiver must have access to the public key.

    Now, think about what's been said:

    1. Keys are unique numbers, several hundred digits long.
    2. Both private and public keys are required for signatures to work (signing and verification).
    3. It follows, then, that the chip must have both the public key and the private key on it at the same time.
    4. Backtracking through definitions, we see that the private key is a unique number, and it must be embedded into the chip.


    Now, the number isn't a true "serial number", simply because it doesn't count up in order (in fact, for reasons not worth going into here, it cannot).

    Instead, we have something even better: a unique, cryptographically secure (supposedly) identifier attached to each and every computer which Big Blue sells. If there is a back door in these chips, then the government will now have a way of tracking, and reading, everything which gets encrypted/signed by these chips.

    Can you see the problem yet?
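
    (To make the "unique number" concrete: for any RSA key pair, the modulus alone is enough to fingerprint it. A sketch with the OpenSSL command line; the file name and output are invented:)

    $ openssl rsa -in priv.pem -noout -modulus
    Modulus=BEA7C81E...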

  • I still don't get what the big deal is about these unique numbers. First of all, you, as a user, would have to AGREE to physically run the software that is accessing this stuff. This is no different than handling unknown binaries. I don't see how, short of running custom software on the client side, any malicious party could obtain your number. If you're running their software, then you have either tacitly acknowledged its authenticity, or it's your fault for being stupid and running code of unknown origin, right? I envision a plug-in, perhaps itself authenticated by a certificate, which a site may require you to download. You download it, and then any time you want to purchase something from that site, the plug-in runs and uses your unique hardware key. You still are AWARE of what's going on. You still have to acknowledge the execution of the software.

    It's not like any punk can use a demon dialer and magically obtain your number, right?
  • This is supposedly for security of online transactions. I can't see how some trusted party could invade your privacy by simply knowing a unique ID. It's just used for encryption while data is on the wire, right? If you don't already trust the company, you aren't going to be running their software or transacting with them in the first place.
  • Different type of encryption. 128 bits is plenty for a symmetric cipher, but it would be woefully inadequate for RSA; I imagine you could factor a 128-bit modulus on your PC in about 5 hours.
  • Except that you are forgetting that the MAC address does not leave the subnet that it is on, and can be changed on a lot of NIC's out there.
  • Now that all PCs are on the internet, it would be pretty cool if each one transmitted its ID now and then to some registry of stolen PCs. If it was done right, it would make stolen PCs almost useless, and we would have a better world.

    The problem of big-brotherism is real, but not insolvable. You need this to be handled by a trusted and independent non-government organization that is chartered with the sole purpose of retrieving stolen PCs, nothing else.

    If Swiss banks can keep a secret, others can too!
  • Sure, but if a stolen PC could not be safely used on the internet, the market for it almost vanishes.

    I mean, how do you sell a PC and make sure it will not get online...?
  • So let's get a few things straight:

    1. Unique serial numbers have been with us for a long time. (The MAC address of your Ethernet card is unique to your computer. Moreover, the tools are already in place to track your computer using this identifier, i.e. arp -- see the example below.)

    2. Unique ID's have many useful functions besides violating your (already non-existent) privacy. (Just to start with, tracking is not necessarily bad. Anybody who has had a laptop stolen from them probably knows what I am talking about.)

    3. The real threat is not that we can be tracked, it is that it may be done without our consent and in secrecy. (There are more than enough Trojan Java and ActiveX applets already out there that will track every web site you visit AND record your passwords.)

    Don't fight the technology, demand a better implementation. Anytime something like this comes up, just make sure the implementation is open and well documented.
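
    (For the curious, assuming a Unix box with net-tools, the ARP cache is one command away; the addresses below are invented:)

    $ /sbin/arp -n
    Address        HWtype  HWaddress          Flags Mask  Iface
    192.168.1.1    ether   00:A0:C9:12:34:56  C           eth0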
  • by konstant ( 63560 ) on Monday September 27, 1999 @07:03AM (#1656515)
    Convenience is the great enemy of privacy. Corporations like IBM, Intel, Microsoft, and Sun will always be able to justify (or perhaps legitimately believe) that the convenience of ID stamping or data broadcasting for their latest nifty upgrade-inducing "feature" outweighs the small decrease in consumer privacy. And because most of us are lazy - yes, even you, noble Slashdotter - we will ultimately accept these small intrusions in the name of preserving our free time and sanity. Can you imagine living life in America without an SSN? It is legal, I believe, and it would indeed greatly enhance your personal privacy, but it is incredibly inconvenient. What about eschewing license plates, and therefore cars? Possible. Not convenient. The process will continue as long as we are blinded by our love of "progress", as defined by the availability of neat new gadgets everywhere we go. Real progress is social change that enhances lives, not merely technology that makes life more ornate. Fat chance of changing our culture, though.
    -konstant
  • Suppose there is a unique number embedded in your computer. Now suppose that it is never shown to anyone (not even you).

    This is a practical and useful thing to have. The unique (secret) number could be a private key; the corresponding public key could be widely published by the manufacturer (and be related to e.g. the serial number).

    Now, because there is software between you and the bits that comprise the private key, nothing says that you have to do anything with it. However, it is *possible* to use it to prove that your computer, specifically, is a party to a transaction.

    For example, right now identity theft makes it trivial for a crook to access your bank account over the web, if you have electronic banking enabled. He just needs some info, like SSN, mother's maiden name, etc. However, suppose you gave the bank (in person) the serial number of your computer. Then the bank could verify the identity of the machine that tried to access your account, using a zero-knowledge proof (give it something to sign, and verify the signature).
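
    (The whole exchange can be sketched with stock OpenSSL commands; the key and file names are invented. The bank keeps machine_pub.pem on file from your in-person visit:)

    $ openssl rand -hex 16 > challenge.txt
    $ openssl dgst -sha1 -sign machine_priv.pem -out challenge.sig challenge.txt
    $ openssl dgst -sha1 -verify machine_pub.pem -signature challenge.sig challenge.txt
    Verified OK

    If the signature doesn't check out against the public key on file, the bank refuses the transaction.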

    This doesn't make your security iron-clad, but I think it does help. Of course, there will be places that demand that you authenticate even for transactions that don't need authentication (e.g. the New York Times). Again, though, the fact that those bits exist in your machine doesn't mean that you are under any obligation.

    It will be interesting to see how the "default software" will be set up on MS platforms; if IE authenticates everywhere without asking, then Joe Windows User is as badly off as he would have been with the Pentium ID, in terms of his activities on the 'net being traced. Maybe IBM should manufacture a separate hardware switch that can disengage the chip, so the user can do an end run around Redmond shenanigans (sp?).
  • When you sell your machine (or it is stolen), you tell your bank not to trust that machine regarding your account anymore. As far as chip switching, that can be made "hard enough" to suit many purposes, even though it of course cannot be impossible.

    This could be a useful security aid, and is certainly not "total security in a box". Those two statements are not exclusive.
  • And right there's where this number breaks down completely for e-commerce. When I run a transaction, I need to prove to the other end that I am involved. If I go to another computer, I want to authenticate as me. If someone else sits down at my computer, I do not want the computer to authenticate as if I were sitting there.

    And if you think being at a different machine isn't a problem, bear in mind that right now I regularly use 4 different machines. 1 of those is used by several other people when I'm not using it, and another is used by about 75 other people simultaneously.


    Here's what I'm saying: your bank should verify both *your* identity, and *your machine's* identity, before acknowledging requests to access your account. You use any of four different machines? Well then, tell the bank to accept transactions from any of those four. You still are eliminating lots of potential attackers. Some of those machines are shared? Well then, make sure the permissions on your *personal* private key aren't world readable (and consider how much you trust 'root' on those machines)!

    Being able to identify a particular computer is useful. I don't claim that this means of doing so is bulletproof, or that the ability to do so represents a security 'magic bullet'. However, it can offer additional security *in conjunction with* your PGP or other software-based crypto system.
  • ...computer I'm at. If the machine I'm using is irrelevant to who I am, then it shouldn't be checked. If my identity is subject to forgery, then improve the method used to verify my identity and close the hole rather than trying to limit the number of places someone can exploit the hole.

    You're being naive. How do they know how to verify your identity? Well, right now, you enter a username and a password on a web page. Many banks use your account number or your SSN as your username, and neither of these is especially secret. In fact, most banks have their web banking set up by default to save your username in a cookie so that your browser remembers it.

    If you know my username, SSN, and Mother's maiden name (also not hard to learn), you can call up the bank and convince them that you're me! You can say that you forgot your (er, my) password, and can they assign a new temporary one (that of course you will change five minutes later). Alternatively, if my bank offers web banking and I haven't signed up for it, you can sign me up over the phone (with the same ID information).

    In the days of phone banking, this wasn't too big a risk, because all you could do was transfer money between your own accounts. However, now web banking allows you to write electronic checks to anyone, making the risk much greater. Worse yet, don't bet that your money is insured if it gets stolen this way -- how can you prove that you didn't authorize the funds transfer?!? The transaction required your authentication information...

    Even if the banks switch to public key cryptography, there's a problem, because there is no public key infrastructure (PKI). Without this, it is difficult (on a large scale) to correctly associate an individual with his correct public key; identity theft is still entirely possible.

    One way to tighten the security is for you to give the bank a list of computers from which to accept orders ostensibly from you. This really isn't unreasonable, especially as the trend from desktops to laptops and palmtops continues. Indeed, there will probably be 'smart cards' or some such that you use for electronic cash or electronic voting, anyway, that live in your wallet.

    In the security field, there is something called, "the principle of least privilege". What this means is that the proper way to approach security is to take away all privileges, and then selectively grant them back. "Selectively" means that a subject should be given enough privileges to be able to do what it needs to do, but no privileges beyond that. Applying the principle of least privilege to computers on the net, it is clear to see that only the ones you use should be able to issue orders regarding your bank accounts. A computer that I use and that you don't use shouldn't have that capability.

    At any rate, if you don't want to use the added security I propose (and I still call you naive in that case), then don't! Note that in my original post I also proposed an additional hardware switch (say, right next to the power button) that would disengage the chip. My reason for this was that it becomes possible for others to trace your activities if your machine authenticates everywhere on the web, not just at sites where you benefit from the added security.

    My point is, if there is a unique ID on the machine, it doesn't have to be a privacy violation, and it can be useful -- both at the same time. Note that I don't claim that hardware identification is bulletproof, just potentially useful if it's "hard enough" to break. You can claim that you don't think it's useful for your purposes, but I argue that it's still useful for lots of other people. And, even if you claim it's not useful at all, that doesn't constitute a proof that it violates your privacy.

    So where's your beef?
  • You're talking on-line banking. I'm talking about over-the-phone ordering in the real world.

    My example is legitimate, whether or not yours is. If hardware ID is useful in the case I describe, then it is useful. For you to show that it is not useful in general, you must do more than argue that it is not useful in a particular case. Furthermore, my example is a real-world example, and nontrivial.

    You haven't used least-privilege, though. Least privilege would mean that the machine I'm sitting at at the moment and no other should be able to issue orders to my bank. If I use the machine and am not sitting at it, that I use it is irrelevant. If I have never used a machine before but am sitting at it, that I have never used it before is irrelevant. Access should follow me and not the machine in any way, so the check should be against me and not the machine in any way.

    You want all computers to be able to issue orders regarding your account, which is farther from least privilege than my proposal. Also note that when least privilege is applied to *people*, instead of computers, the statement is that only you should be able to issue orders regarding your account. So, you need to authenticate yourself, as well.

    My opinion: if a check doesn't actually add to security, it should not be done. The identity of the machine is completely unrelated to my identity, so when trying to verify that it's me running a transaction you should not be concerned about the identity of the machine.

    The proper way to approach these problems is with the principle of least privilege in mind; be as restrictive as possible while allowing subjects to be able to do what they need to do. My proposal comprises a restriction that does not prevent you from doing what you need to do.

    If your method of identifying me is weak enough to need additional verification based on my being where I'm expected to be, then your scheme is too weak and needs to be improved, not papered over with irrelevant checks.

    You're still being naive. Authentication is a problem that is getting worse, not better. The lack of a PKI (and the difficulty of implementing one) implies that identity theft is actually easier on the 'net than it has ever been before; all you need to do is fool somebody into believing that your public key is really your victim's public key. Right now, in the real world, though, the security is even worse, because anybody can claim to be you by looking up easily accessible information about you. In the real world, it is trivial to steal your identity (this is even talked about in the mainstream press). You claim that this is the problem that should be fixed -- but the fix likely will involve public key cryptography, which is only slightly better (even in principle, let alone in practice).

    There *is no* completely satisfactory solution to the authentication problem where the check is not done in person.
  • ``People from outside (of your organization) can get at your software,'' said Anne Gardner, general manager of desktop systems for IBM. ``People from the outside can't get to your hardware.''

    The funny thing is, anyone _can_ get to my software, including me. It's open source. But only IBM, or their designated manufacturers, or people who send a signal to my computer to get my "digital signature", can get at my hardware, excluding me. I like systems I can control a bit more.

    On another note. Isn't an embedded security device likely to go obsolete pretty rapidly? Then what, we have to buy a whole new motherboard instead of just installing the latest version of the software? That sucks.

    Hmm, the article just says that the chip is embedded in the hardware, somewhere. I wonder where? How easy would it be to pry the sucker off? ;) Or, I could just not buy an IBM. Yeah, that's the ticket.

  • Big deal. For it actually to be used against you, it would have to be transmitted to the commerce site, or whatever it is that wants it. It seems to me that it's your web browser where this feature should be controlled. Given that web browsers allow you to turn cookies off, they should allow you to not transmit ID#s as well. Heck, you can always go back to Netscape 2.0 if you want; it won't know anything about the IDs.

    Besides, if you have an Ethernet card, you already have a unique ID in your computer hardware. It's called your MAC address. Microsoft uses it to uniquely stamp your Word documents. (That's how they tracked down the Melissa virus author.) The misuse of it is all at the software level. I can't imagine anybody writing free software that will use IDs like this. I'll keep away from MS, thank you.

  • Looks like we're in for yet another privacy invasion "debate". I use the term debate very tongue in cheek because everyone will point to the serial number and scream big brother.

    People who don't do things they shouldn't have no fear of "privacy invasion". But with porn being the true fulfillment of e-commerce on the web, and the occasional illicit mp3 download, it's a safe bet that a sizable percentage of the internet-going public have justifiable reasons for not being tracked.

    What everyone seems to forget is there is no anonymity on the net thanks to a little thing called an IP address.

    Did you download a song from alt.binaries.sounds.mp3? Or maybe that latest nude in alt.binaries.pictures.erotica.*? Your ISP knows exactly who you are. Your IP address is logged along with your user name and password. Your user name is in their billing records - complete with your name, address, phone number, and probably your credit card number.

    They also couldn't care less, unless someone is calling to say you broke something or you spammed Slashdot or something.

    Maybe you've visited an ftp site and downloaded a movie. If the ftp site was a sting operation, then they've got your number and can force your ISP to turn that number into a name. The same is true for web sites. If you downloaded a movie, you're probably on broadband and have a greater chance of having a fixed IP address, in which case you already have a serial number even if you use AMD.

    Having run a large and successful website, I can state absolutely that after 100 unique visitors a day, people stop being people and start being demographics. The real-life analogue is that everyone has a driver's license and a social security number (and credit card numbers and all that), but even though it's possible to do, you have a better chance of winning the lottery than of having someone piece together your every move. So the only true privacy we have is safety in numbers.

    Privacy is and always has been an illusion, and never more so than on the web. The people who want to embed serial numbers in your computers realize this. Shoot -- every Slashdot reader should know that given time, determination, and lots of search warrants, anyone can be tracked down. The elite Slashdotters can do it without warrants, and the best of the best can probably do it without using a single z to describe the process. So if there's no anonymity, then the good of serial numbers far outweighs the "bad" (mainly giving you a false sense of security, which is bad in its own right).

    A similar fuss was made over the introduction of caller ID. Caller ID still went through, and guess what? I haven't gotten a prank phone call since it was introduced. Like caller ID, this too is going to happen. There are too many good reasons for it not to. Forging, changing, or blocking the serial number will also be very easy. The program to do it will probably have a z in it, though. "SerialZ no more" or something. Look for it at that zero-day-warez site near you. :-)
  • This sounds more like an e-commerce marketing ploy than an evil plot to spy on us. IBM is simply marketing to the same audience that buys into 'Blue': AOL users. They're selling it to the uninformed. For the rest of us, the chip is useless. It may provide some limited form of encryption (read: small keys), but which of us would actually trust it over GPG? Watermarking? Not as reliable as a Verisign-registered key, I'm sure.
    Plus, we can be sure that the possible 'Big Brother' applications of it will only be included in MS products, so we're all safe! Right?
