Government Privacy Security

US Senators To Introduce Bill To Secure 'Internet of Things' (reuters.com) 138

Dustin Volz, reporting for Reuters: A bipartisan group of U.S. senators on Tuesday plans to introduce legislation seeking to address vulnerabilities in computing devices embedded in everyday objects -- known in the tech industry as the "internet of things" -- which experts have long warned poses a threat to global cyber security. The new bill would require vendors that provide internet-connected equipment to the U.S. government to ensure their products are patchable and conform to industry security standards. It would also prohibit vendors from supplying devices that have unchangeable passwords or possess known security vulnerabilities. Republicans Cory Gardner and Steve Daines and Democrats Mark Warner and Ron Wyden are sponsoring the legislation, which was drafted with input from technology experts at the Atlantic Council and Harvard University. A Senate aide who helped write the bill said that companion legislation in the House was expected soon.
This discussion has been archived. No new comments can be posted.

  • by Lord_Byron ( 13168 ) on Tuesday August 01, 2017 @11:48AM (#54920267)

    It's good they're trying to do something, but the devil is in the details. For example, define "vulnerability". Anyone who's tried to remediate Nessus findings knows what I mean - those Low findings that just. won't. go. away. And do they mean internal or external vulns? etc, etc. And these things won't be determined in the law, they'll be decided by the bureaucrats implementing it. Pray that they're smart.

    • $100K/year per site + $1K/year per device to HPE to manage it. Labor costs extra.

    • It reminds me of that court case vs. Toyota where they were citing (iirc) strcpy as proof that there was a braking issue. Not the usage of strcpy, no... just the presence of it. Not reproducing a bug... just showing that strcpy was being used.

      The hand-waving, loose definitions non-coders use when they talk about code are amazing. While I think some regulation might be good, it would be a pretty jarring shift from how fast-and-loose the majority of the industry has become with code these days if people actually beca

    • by ctilsie242 ( 4841247 ) on Tuesday August 01, 2017 @12:35PM (#54920583)

      My question is about "securing": is this more about securing the device against the user wanting to do stuff with it (anti-jailbreak), or securing it against remote bad guys? I worry every time I see anything government-based going into security, because I expect more DMCA-type stuff, and not stuff that actually keeps the bad guys out.

      • This is about devices sold to the US Government, so it has nothing to do with jailbreaking consumer devices.

    • An Idea (Score:5, Interesting)

      by DickBreath ( 207180 ) on Tuesday August 01, 2017 @12:41PM (#54920625) Homepage
      I have suggested this plan before.

      Make the manufacturer (not seller) of an IoT device liable for any actual damages that are caused by their IoT device getting hacked.

      That's it. No government standards. No registration. No certification. No mandated testing facilities. (But the market could create certifications and testing facilities on its own.)

      What this would do is change the perverse incentives that currently exist to the correct incentives. Suddenly manufacturers would be all about security! It would be Job #1. Manufacturers might standardize and cooperate on secure Linux distributions upon which they base their products. They might cooperate to improve everyone's security.

      Suddenly manufacturers might consider whether they should have an update mechanism (or not).

      Manufacturers might consider whether certain things should even be connected to the cloud at all! Do we really need a cloud-connected toy teddy bear?

      Oh, yes. The retail price of some IoT devices might increase due to the manufacturer's cost of security measures. But that is as it should be. Right now the perverse incentive puts the costs on innocent parties that get DDOS'ed (or worse) by hacked IoT devices. This would fix that.
      • Aren't IoT manufacturers already liable for damages caused by their products? Are there any special exemptions from normal tort law for IoT devices?
        • They may be liable. But it would cost more time and trouble to pursue it than it is worth -- unless you had huge damages.
          • Right, so I don't see how making them liable for damages they are already liable for would help. It would seem that insecure IoT devices don't actually cause that much damage (yet), and maybe it's not really worth the added cost of regulation until there is more damage.
      • I have a few problems with that idea.

        One, as written, you'd have startups unable to afford the risk of whatever the end user does with the product/service.

        For example, I have a WiFi-connected power switch. Really, for me, it doesn't need any security at all. Worst case scenario, someone could turn on/off the nightlight attached to it.

        However, someone else could take that same switch and connect it to something where, if the power went out as a result of it getting hacked, the loss could be millions or more

        • by mvdwege ( 243851 )

          One, as written, you'd have startups unable to afford the risk of whatever the end user does with the product/service.

          And this is a problem how? We also make it impossible for startups to afford industrial development by just dumping their waste in the nearest convenient stream. This is just the digital equivalent.

        • Why would startups be unable to afford the risk? If a startup makes a new $500 cloud toaster, I have the same expectation of it not burning my house down while I'm gone as if I bought a $12 toaster at Target from a name brand.
      • by mentil ( 1748130 )

        The manufacturer could put a (difficult to access) update mechanism on the device, release a minor update shortly after each manufacturing run, and then claim the owner is responsible since they didn't update and their device is therefore unsupported and the shrinkwrap fine print made the owner liable anyhow.
        Alternatively, they'd just increase the retail cost by the cost of liability insurance to cover that unit being hacked, which would almost certainly be cheaper than hiring competent devs and giving them

        • by Bengie ( 1121981 )
          Liability insurance has refused to pay out in cases where the client did not hire competent employees. Insurance is in the business of minimizing the amount they have to pay by legally protecting themselves and accurately assessing risk. Incompetent devs are high risk, and if the client does not convey this, they're in breach of contract: zero payout. This has happened more than once in high-profile cases.
      • by AmiMoJo ( 196126 )

        You are on the right path but I can see a few issues.

        The liability could potentially be huge, and in some circumstances not entirely fair. For example, say the manufacturer used a well-respected open source library, did the right thing by not trying to roll their own security, and followed the best available advice and practice... but then someone finds a bug in it and starts exploiting their devices.

        Maybe they can patch it, assuming that the exploit doesn't disable the update mechanism. Even if they can, it takes

      • Make the manufacturer (not seller) of an IoT device liable for any actual damages that are caused by their IoT device getting hacked. ... What this would do is change the perverse incentives that currently exist to the correct incentives. Suddenly manufacturers would be all about security! It would be Job #1. Manufacturers might standardize and cooperate on secure Linux distributions upon which they base their products. They might cooperate to improve everyone's security.

        Might do some good in some businesses that would be doing the proper thing anyway, but more than likely you'd just see something similar to what you see in the movie industry or with home contractors. You buy from Company X, but when things go wrong, they're just the brander and seller, and the liable company is Company Y. Company Y went out of business pretty much before the product hit the market because they went bankrupt, because they licensed from Company Z, who owns Company X, and the licensing killed them. Meanwhile, ther

      • I have suggested this plan before.

        My plan would be to make an actual certified computer engineer trade, and then require them to look at the code and sign off on it. It wouldn't make them liable for everything, but it would require that they have reviewed the code for a bare minimum of security diligence as dictated by the standards of a central authority.

      • I have suggested this plan before.

        Make the manufacturer (not seller) of an IoT device liable for any actual damages that are caused by their IoT device getting hacked.

        But "the manufacturer" is some outfit somewhere in China that can't easily be served with a lawsuit from the US. As soon any serious effort is made, it disappears, and some company with a different names starts selling the same thing.

    • For example, define "vulnerability". Anyone who's tried to remediate Nessus findings knows what I mean - those Low findings that just. won't. go. away.

      And all this is being decided by people who have no clue what Nessus is or what it does, and who wouldn't understand it if you explained it to them. Oh yeah, this is going to turn out well.

    • by sjames ( 1099 )

      That, and the findings that amount to "the user can do an odd thing that screws up his own login" (equivalent to clicking logout, but much harder). But because that's not very exciting, we'll say, with extreme handwaving, that a hurricane could blow up and allow some minor security violation we haven't even imagined yet, much less actually managed to pull off.

    • For example, define "vulnerability".

      That isn't a big problem, and it isn't something where bureaucrats need to be particularly smart.

      This is only about devices sold to the US government; this means that if the government thinks the device you sold them continues to have known security problems, they have a code to write down that means no agency can buy that device from you anymore, until they're happier with your response.

      This just gives the government the needed purchasing responsiveness to deal with this, without relying on each purchasing

  • Instead of encouraging robust perimeter security and a well thought out security model, let's just require expensive and ineffective security on every single little thing.

    • by Bengie ( 1121981 )
      Most professionals are too incompetent to properly implement perimeter security, so what makes you think a typical end user can? There was a discussion in a firewall forum about how someone purchased some VOIP device for their business because it's standard in their industry, and the official support said they needed to forward TCP and UDP ports 4,000-60,000. Why not just drop it in the DMZ while we're at it? This user has no choice but to use this device; otherwise they alienate all of their customers.
    • You can't legislate perimeter security. IoT devices run on home networks too, and no sane person is going to start arresting people for misconfiguring their wifi routers.

      If each device implements basic hardening and gets security updates, we eliminate 99% of the current problems. Since manufacturers will probably ship the same firmware to home users, that unregulated wasteland will get a little better over time.

      This bill only applies to equipment that vendors intend to sell to the US government. More sweepi

      • But you can legislate greater security for wifi routers. You're right that you can't punish people for not doing something they shouldn't need to know how to do. They can even be marketed as 'more secure wifi routers' and I bet people will buy them. Plus cheap IoT devices to plug into them.

        Securing at the endpoint drives up the price of said endpoint devices. It creates a regulatory environment with a high barrier to entry for new businesses.

        • They can even be marketed as 'more secure wifi routers' and I bet people will buy them.

          I highly doubt that will happen if they cost any more than the insecure ones. Most non-techie people I know use the router that their cable company bundles with their modem. They connect to free wifi everywhere, no matter how dodgy it is. I watched someone the other day having issues with the "Starbucks" free wireless in a coffee shop which was not Starbucks, and nowhere near a Starbucks.

          Most people are terrible at interneting, and cheap as hell. They're going to buy the cheapest router, and they'

        • But you can legislate greater security for wifi routers.

          As long as you don't secure devices from US law enforcement and TLAs. Gotta have a "secure backdoor" for law enforcement and the TLAs because somebody might smoke a joint, start organizing political opposition to an incumbent, or even become a whistle-blower and expose illegal government actions without good old Uncle Stal...err...Uncle Sam...monitoring everything. /s

          Strat

      • What about redundant systems with rolling updates, so you can update a router without taking the site down?

    • by bluefoxlucid ( 723572 ) on Tuesday August 01, 2017 @12:48PM (#54920687) Homepage Journal

      I've actually thought a lot about IoT security, as well as independence from service providers.

      I had at some point started this [github.com] but uh. Was ... diverted.

      The idea was to have an IoT hub that acts as the gateway to your IoT device. An IoT device or client would connect to an IoT hub via some system (e.g. Bluetooth) that's not flat-out open (e.g. you have to push a button and confirm pairing). The IoT hub uses a self-signed TLS certificate and exchanges it with a newly-generated certificate on the device or client. Voila: identity.

      It works with self-signed certificates because you have to be physically present to exchange them: you've verified face-to-face with the issuer, so the certificate is valid. Because of this trust, the IoT hub can sign extra certificates, acting as a CA.

      The IoT hub can get itself an IPv6 Internet address. If so, it can send that address to your client (e.g. phone, Yubikey) or IoT device (which might now be in another building, communicating over the Internet to your hub!). Now your devices know how to talk to the hub, and can tell it their address if they so desire when they're somewhere off in another network or on the local LAN.

      When your phone, computer, or anything else tries to talk to the IoT Hub, the HTTPS connection initiates over TLS using the exchanged keys: each device authenticates the other by validating certificates first. Your entire attack surface is the kernel's network stack and facilities it uses; the code paths in the Web server that handle the request; and the code paths in your encryption library that validate e.g. Curve25519 ECC (TLS 1.3 required). If you have an exploitable vulnerability and it's not in that set of code, then your IoT Hub and your IoT devices are patently unhackable, period.

      Let's face it: You can't hack what you can't access. The surface I describe above is equivalent to the air gap when you unplug a network cable, except this air gap might be hackable. If you can't hack that air gap, you can't hack what's behind it.
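
      For concreteness, here is a minimal sketch of the hub side of that mutually-authenticated TLS scheme (in Go; the file names, port, and handler are illustrative placeholders, and the pairing step that produces the certificates is assumed rather than shown):

      ```go
      package main

      import (
          "crypto/tls"
          "crypto/x509"
          "log"
          "net/http"
          "os"
      )

      func main() {
          // Trust pool containing only the hub's own pairing CA. Anything that
          // wasn't signed during the physical pairing step is rejected before a
          // single byte of application code runs.
          caPEM, err := os.ReadFile("hub-ca.pem")
          if err != nil {
              log.Fatal(err)
          }
          pool := x509.NewCertPool()
          pool.AppendCertsFromPEM(caPEM)

          srv := &http.Server{
              Addr: ":8443",
              TLSConfig: &tls.Config{
                  ClientAuth: tls.RequireAndVerifyClientCert, // mutual TLS: a paired client cert is mandatory
                  ClientCAs:  pool,
                  MinVersion: tls.VersionTLS13, // TLS 1.3, as described above
              },
              Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                  // Only devices and clients that completed pairing ever reach this point.
                  w.Write([]byte("hello, paired device\n"))
              }),
          }
          log.Fatal(srv.ListenAndServeTLS("hub-cert.pem", "hub-key.pem"))
      }
      ```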

      That leaves you one big, important piece of security: key management. You have to keep those private keys on the client devices away from malicious actors (hackers, worms, trojans). Pass-through to a Yubikey U2F would be great, but ...tricky. The only way to use a hardware security key is to validate the certificate, then do a U2F validation, enlarging the attack surface. In theory, the client software could send a challenge to the Yubikey, get a response, and send a signed session key down the pipe encrypted with the IoT Hub's public key; but you can't use the Yubikey to decrypt something sent encrypted by the certificate, so it's a no-go.

      This is actually app-to-app 2-factor if you're doing it by TLS exchange, then U2F: the app "knows" (permanently stores) its TLS key, and it "has" (is running on a machine physically capable of accessing) the Yubikey.

      So, yeah. Unhackable IoT proxy, for some reasonable definition of "unhackable" (that being the reduction of probability of hackability by restricting the portion of running code in which vulnerabilities will enable a successful exploit).

      The other part was to provide service, either in proxy or right on the IoT hub, packaged as Docker containers. You'd have to provide authentication per-app, validated by IoT device identity (i.e. your Nest Cams each have a separate key, and those keys identify them, and those devices are given access only to the Nest Cam service) or by Client identity (each client application would have a separate key) both at the front-end Web server and by the service itself. Services may be clients of each other.

      So what have we got?

      You can access your IoT devices through your own public IP, rather than bouncing through a cloud service.

      You may be able to disconnect your IoT from the cloud. Google has a lot of stuff with the Ne

      • Your idea has promise, but it isn't the lack of ideas that is causing problems. It's vendors leaving their telnet port open without a password. They are not following best practices from decades ago; how can we expect them to implement a modern standard?
        • Implementing a modern standard is easier than implementing general security. The vendors aren't leaving telnet ports open (most of them, anyway); they're implementing Web applications with shitty validation, listening and processing unauthenticated requests from anywhere and then simply not taking action because the (now-fully-processed) request wants to access a resource that requires authentication. You find a validation bug, you get fun stuff to happen.

          The vendors are implementing standards. Poorly.
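
          To make the contrast concrete, here is a tiny sketch (in Go; the handler, path, and token check are invented placeholders) of the opposite pattern: authenticate first, so an unauthenticated request is never parsed or acted on at all:

          ```go
          package main

          import (
              "log"
              "net/http"
          )

          // authFirst rejects a request before any body parsing or application
          // logic runs; checkToken stands in for whatever credential validation
          // the device actually uses.
          func authFirst(next http.Handler, checkToken func(string) bool) http.Handler {
              return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                  if !checkToken(r.Header.Get("Authorization")) {
                      http.Error(w, "unauthorized", http.StatusUnauthorized)
                      return // unauthenticated input never reaches the handler below
                  }
                  next.ServeHTTP(w, r)
              })
          }

          func main() {
              settings := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                  w.Write([]byte("device settings\n")) // only runs for authenticated callers
              })
              http.Handle("/settings", authFirst(settings, func(tok string) bool {
                  return tok == "secret-token" // placeholder credential check
              }))
              log.Fatal(http.ListenAndServe(":8080", nil))
          }
          ```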

          • The vendors aren't leaving telnet ports open

            A huge chunk of them are. There was a talk at defcon last week (titled "All Your Things Are Belong To Us") where they showed exploits for a couple dozen devices. A good number of them had ports open. The Mirai botnet spread through devices that not only had telnet open, but also had them connected to the internet (which is where your idea would be helpful). You can see the source code and a list of passwords used starting on line 124 [github.com].

            • Fair enough. I haven't run into that as an issue, although I tend to not buy arbitrary IoT stuff. I know it was a problem with printers for a while--FTP and HTTP, not telnet--and otherwise only heard of routers having Telnet open.

              Still. The strategy I describe would put the IoT hub on the Internet, but not the IoT devices. Any such vulnerability would be ... rapidly exploited by the first laptop you used to browse Internet Web sites, ad-blocker or not, because of course it would.

              It's still a good fi

      • by ctilsie242 ( 4841247 ) on Tuesday August 01, 2017 @01:46PM (#54921071)

        This. I'd like IoT devices to communicate with a hardened hub (or perhaps hubs for redundancy), and have the hubs do the work. Each device would have a manifest of what servers it talks to as well, so any communication outside the pre-arranged list gets blocked.

        This is so simple, it is just crazy that this hasn't been made into a standard.
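
        As a rough sketch of how simple the manifest check could be (in Go; the device names and hostnames are invented for illustration, not taken from any real product):

        ```go
        package main

        import "fmt"

        // Per-device allowlist: the only hosts each device is permitted to contact.
        var manifest = map[string][]string{
            "camera-01":  {"api.vendor.example.com"},
            "thermostat": {"api.vendor.example.com", "weather.example.com"},
        }

        // allowed reports whether a device may open a connection to host;
        // anything not in its manifest gets blocked by the hub.
        func allowed(device, host string) bool {
            for _, h := range manifest[device] {
                if h == host {
                    return true
                }
            }
            return false
        }

        func main() {
            fmt.Println(allowed("camera-01", "api.vendor.example.com")) // true
            fmt.Println(allowed("camera-01", "telemetry.evil.example")) // false
        }
        ```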

    • by Anonymous Coward

      Instead of encouraging robust perimeter security and a well thought out security model, let's just require expensive and ineffective security on every single little thing.

      Relying on a secure perimeter to protect your devices/network is a guaranteed failure in the making.

      Defense in depth. It is not a new concept.

  • by Anonymous Coward

    Not requiring standards for the entire industry, but for vendors to the Federal government which, if they work, will then propagate as de facto requirements for the private sector.

  • by thegreatbob ( 693104 ) on Tuesday August 01, 2017 @11:53AM (#54920291) Journal
    Not holding my breath, but hopefully this will result in something resembling sanity. Tired of the pollution of the internet with crap configurations that would have smelled funny even in 1997. In addition to regulations for manufacturers, the end user REALLY needs to become educated about the dangers of connecting stuff all willy-nilly.
  • ...legislators get busy solving technical problems they don't understand. We all know they will direct us toward more superfluous complexity that we need to work around, but at least that produces more job security for me. So, in a way, those popular people are the experts at creating security!
  • by MobyDisk ( 75490 ) on Tuesday August 01, 2017 @11:55AM (#54920311) Homepage

    He added that the legislation was intended to remedy an "obvious market failure" that has left device manufacturers with little incentive to build with security in mind.

    This guy gets it. But I was hoping for a market solution. The government could start by requiring vendors of US government products to meet certain guidelines. States could require that police and government tech meets a standard. That typically makes civilian companies jump on board and require similar guidelines. Then individuals start to say "Oh, I only buy IP cameras that meet FIPS-12345 standards."

    This approach is nice because it is flexible, and allows the market to decide what standards to apply. I fear Senators trying to write tech legislation.

    • Yep, "light touch" approach usually works out in practice better. If the Goverment would do that with net neutrality, we'd be much better off too ("Sorry, we only buy bandwidth from vendors that don't shape the traffic beyond reasonable measures accordings to FIPS-11231")
    • The government could start by requiring vendors of US government products to meet certain guidelines.

      But-but-but-that makes things cost more! Stupid government spending $10,000 for a device that I can buy for $100. The government is wasting taxpayer dollars, etc., etc.

    • by e r ( 2847683 )
      If people aren't willing to pay extra money for "secure" IoT devices then in what meaningful sense can you say that the people really need it?

      I often hear of the notion that sometimes things aren't all black-and-white, sometimes there's a gradation. Well, that's what price is: a gradation of how valuable something is.

      If people don't want to pay for it, or don't want to pay extra for it, then they literally don't find it that valuable.

      Folks may find it much more important in the future, but let's cross
      • by MobyDisk ( 75490 )

        If people don't want to pay for it, or don't want to pay extra for it, then they literally don't find it that valuable.

        True. I think they just don't know.

        They see the headlines about how some ISP went down for 2 days causing a massive outage, but they don't know that it was because a million Americans bought compromised IOT devices. But if we had labels and guidelines, they could at least know. Retailers might refuse to carry devices that don't meet the guidelines. Companies like Microsoft or Sony or Nintendo or Google would tout how their devices meet the guidelines. The public would become aware. We just need a bran

        • by e r ( 2847683 )
          So your solution is to mandate a label of some kind to indicate how secure a product is?

          I believe this would improve awareness, but overall would be a weak solution. I would prefer a stronger one.
          • by MobyDisk ( 75490 )

            Yes, that is a key part of my suggestion. Consider this: it works for electronics. Almost every product sold in the US has the "UL" mark of Underwriters Laboratories. In Europe and Japan they require the "CE" mark. It's really an industry thing. Retailers won't sell electronics that don't have the mark. Homeowners insurance won't pay for your burnt-down house if the fire was caused by a non-UL-approved device. I have that issue because my 3D printer was a kit, so it isn't UL approved.

            • by mentil ( 1748130 )

              My universal constructor is UL approved, but the identical copy isn't. Good luck proving which is which!

      • The people with the compromised products don't care, of course. That's why we need a law to prevent people from buying IoT botnets that spend their days attacking the rest of the world.

      • The problem is that those devices are actively harming others on the internet. No one would care as much if it was just inconveniencing the people that bought the shitty devices.

        • by e r ( 2847683 )
          How about this: if my device that I bought and that I have control over is attacking other people's property (i.e. their servers) then I should pay a fine.
          The key here is that it must be proved that I have control over my device, i.e. I built the device myself and have full access to the works, or it's an open source project and I have the ability to configure it, etc.
          If I don't have control over the device (i.e. it's closed source or locked in some way so that I can't configure it) then the entity which ha
          • by MobyDisk ( 75490 )

            then the entity which has control of (i.e. the creator of the device) must be held responsible and must pay the fine.

            So what if that entity is in China? Or Russia?

          • It's impractical to prosecute or fine individuals with IoT devices, because all there is is an IP address that's flooding the internet with crap. It's not trivial to prove an IP address belongs to an individual, and the investigatory burden you're imposing with your proposed scheme would be too much for any investigatory system to bear.

            I don't think home-built or open-source devices are really an issue. Moreover, this sort of regulation is typically only applied to devices that are *sold* commercially. A

            • by e r ( 2847683 )
              I want to agree with you, because regulation seems like such an obvious and straightforward solution... if it works.

              But government regulation didn't actually make VW diesel cars comply with emissions regulations. Instead, VW sneaked around it for quite some time before being caught. Who knows how many other car manufacturers are doing the same thing?

              Also, how do we keep the security regulations up to date? The arms race between security and malicious attackers is pretty fast paced. I have a hard time b
              • Yeah, I don't mean to make it sound like regulation is some panacea. Legislation can be just like anything else, good or bad. There's a risk you take in trusting legislation, because it can actually make things *worse* if it's bad, and yes, we've seen a lot of bad legislation. Still, in the case of VW (and colluding partners, allegedly), regulation was only temporarily thwarted, remember.

                Generally speaking, I would propose that the best type of legislation is somewhat vague, declaring intent instead of s

    • Re: (Score:3, Insightful)

      by JoePete ( 4714549 )
      The market will correct once the courts start permitting businesses and individuals to sue on the grounds of negligence. This remedied problems in the auto industry fairly quickly. If an automaker is liable for the safety errors in its vehicles, then software manufacturers should be liable for their shoddy programming.
      • So we should just wait for enough people to be violated and then let the courts handle it? Based on the current laws, the courts will require you to prove that you personally were harmed which means that you can't sue unless you've personally suffered damage as a result of their negligence. Instead of being reactive, let's be proactive. There are many obvious flaws in the products that make up the Internet of Shit. These can be resolved by doing things such as specifying a minimum period of time that pr
      • by MobyDisk ( 75490 )

        Many of those businesses are located in China. We won't be able to sue them.

  • by kurkosdr ( 2378710 ) on Tuesday August 01, 2017 @11:57AM (#54920317)
    Please... please... if some deity is listening, make it so this becomes a law. It's quite sad seeing my perfectly serviceable Nexus 4 and 5 not receive basic security patching, and this has already spread to TVs, and soon vacuum cleaners and smoke detectors will follow.
    • It's quite sad seeing my perfectly serviceable Nexus 4 and 5 not receive basic security patching, and this has already spread to TVs....

      I just bought a new TV for my bedroom. I specifically avoided getting a "smart" TV for exactly this reason.

    • The batteries on older phones die after like 3 years. Google doesn't want to spend money supporting older devices that few people still own and use. Maybe Google is purposefully making them fail after a few years, but I've had like 3 Nexus 5s and a Nexus 4 that are all broken now.

      I don't really have that much of a problem buying a new phone every couple of years, and I'd rather Google focus their time and money on current devices.

      I don't think I would appreciate this strategy for all devices (i.e. de

      • I don't really have that much of a problem buying a new phone every couple of years

        There are a lot of people who are not as rich as you.

        • There certainly are. There are also a lot of people who are much poorer than me, spending more money on smartphones than I do.
      • The batteries on older phones die after like 3 years. Google doesn't want to spend money supporting older devices that few people still own and use.

        It's not even just older phones that run years old Android versions. It's brand new low-end phones. Which are the majority of phones on the market.

        • Why do low-end phones run on older versions of Android? Because it's cheaper. If we start requiring phones to be secure (i.e. running on versions of Android that are not obsolete, either by forcing manufacturers to support newer Android versions or forcing Google to support older versions of Android), then those phones will not be so cheap anymore.

          Maybe it's a good idea to force people to buy more expensive phones by forcing them to pay the cost for better security. People don't always know what they wan

      • > The batteries on older phones die after like 3 years.

        Do you not check whether a phone has a user-replaceable battery before you buy it?

        > I don't really have that much of a problem buying a new phone every couple of years,

        I bet the greedy MBAs love you. Tell me, do you buy a new car every couple of years?

        > I don't think I would appreciate this strategy for all
        > devices (i.e. desktop computer components, routers, etc),

        My desktop is a 9-year-old Core 2 Duo with 3 gigs of RAM, running Linux, and stil

          Do you not check whether a phone has a user-replaceable battery before you buy it?

          No, but it's because I've never had the problem of being unable to replace a battery. My problem is usually that around the time the battery dies, something else also ends up breaking (e.g. the GPS, the Cell radio, etc).

          I'm actually thinking about replacing the battery in my Nexus 6P as we speak, but I might also just buy a new Pixel 2 when those come out.

          I bet the greedy MBAs love you. Tell me, do you buy a new car every couple of years?

          I probably would if they were 100x cheaper, and every time I got a new one, it was twice as good as my last one. Wouldn't you? It's all about weighing

  • Spinning wheels (Score:4, Insightful)

    by ilsaloving ( 1534307 ) on Tuesday August 01, 2017 @12:13PM (#54920425)

    It's more important to APPEAR that you're doing something, than it is to ACTUALLY do something.

    How exactly do they propose to secure a marketing term? Cause that's all "Internet Of Things" is. It means absolutely nothing. While they're at it, why don't they also try to secure Big Data, The Cloud, and Web 2.0?

    • by e r ( 2847683 )
      On that point, it isn't the legislators or their legislation that will secure squat.
      It's the hard-working engineers and developers of the world who'll do it-- in spite of the legislators and their virtue-signalling bills.
    • How exactly do they propose to secure a marketing term? Cause that's all "Internet Of Things" is.

      According to TFS, which is just the normal copy from TFA, the legislation says nothing about IoT. It deals with internet-connected devices purchased by the US Government. That's a vastly larger collection of things than just "IoT" (but includes IoT), and it doesn't require the vendor to say anything about IoT in any marketing material. Your device has an internet port that uses internet protocols to communicate? Tag, this law's for you.

      • Okay, fair enough. But unless the legislation introduces something to the procurement process that isn't already there, it's still pointless legislation. AFAIK the gov't is supposed to be *already* evaluating such things when they do their purchasing. It's still up to the buyer to verify that the vendor didn't cheat somehow.

        Furthermore, this still leaves out the consumers, who are still getting shafted with shoddy, insecure hardware.

        • AFAIK the gov't is supposed to be *already* evaluating such things when they do their purchasing.

          It is a good bet that they do not.

          Furthermore, this still leaves out the consumers, who are still getting shafted with shoddy, insecure hardware.

          Legislating technical standards for production of consumer products often, if not always, has unanticipated side effects. Some of those side effects are good, some are bad. EMI standards, for example, are a good idea in general, but often result in radiation above standards when a consumer uses a product in any way other than what was tested. Or the product costs more and radiates anyway. Or the second and subsequent production runs have what is considered to be a minor en

  • If the end game of IoT is to be ubiquitous then there is no way that you can rely on manual intervention to keep things up to date and secure. So how will this all be enforced?

  • Any device with internet connectivity needs to have a reasonable support window during which the manufacturer provides known security updates. The unfortunate side effect is that support has large costs and will either drive down profit or drive up price. People will care less (and buy less) when the magical IoT keyfob/light controller/toaster oven costs $500 instead of $99.
    • The big manufacturers who can afford the expensive 'Regulatory Affairs' staff will be delighted to be able to produce $139 toasters instead of $129 toasters if it makes it impossible for any upstarts to get into the market. In fact, I bet they would happily form a Trade Association to sit on the project and keep meddlesome startups out.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      The support window should be based on IP rights. As long as a company owns the IP, they should be responsible for patching it.
      If they don't want the responsibility to support their property for the duration of their copyright (however many decades that is now), then they should have the option to make it public domain by making their source code public domain.

  • this bill, as proposed, will not keep up.
  • Comment removed based on user account deletion
    • by mikael ( 484 )

      They would just put you onto an automated answering system and lead you down a maze of different questions, before reading out a disclaimer, the latest news, then putting you on hold. They would claim that was customer support.

  • Without laws that actually protect consumer information, privacy, etc., no law like this will mean anything. First because it doesn't cover all losses; second because if someone has information but wasn't the person that hacked you, then they're not in the "wrong." It must be made illegal to have the information in the first place. Start with something like the EU "right to be forgotten" and go from there.
  • Self-driving cars: buy a new car every 4-5 years, since updates stop after about 4 years, or you need a high-labor-cost (at the dealer) computer swap or upgrade, plus the markup. Think Dell/HP-style 300-400% markups on HDDs and RAM, before dealer labor charges.

  • While I believe it's well intentioned, this has zero chance of effecting any change. So a few in Congress are finally aware, but the majority of both houses are clueless when it comes to tech (think: "a series of tubes" clueless). Either it will fail entirely (most likely, IMO) or it will be broken re something basic regarding the Constitution. It's almost as if half in Congress have never read the document.

    Broken also covers the clear misses that are very likely regarding trying to lock down security for reason X
  • Like home automation equipment isn't expensive enough already. Now we get to add on government red tape and delays to market while some inept bureaucracy looks over a new device. Your $50 light switch just doubled in price.
  • The new bill would require vendors that provide internet-connected equipment to the U.S. government to ensure their products are patchable and conform to industry security standards. It would also prohibit vendors from supplying devices that have unchangeable passwords or possess known security vulnerabilities.

    "We can't be bothered to do our own research on what we buy, so you aren't allowed to sell us those. And to make sure you comply, we'll do our own research on what you sell us!"

    Why not just mandate that the government can't buy those? That would be a lot simpler and accomplish the exact same thing. Neither addresses the real issue, but that's obviously not what this is about.

  • There are many IoT companies that market a product, sell it, design it, then die; in that order, too.

    A light switch can last decades. I'm going to use that as a hypothetical IoT device.

    Let's say there is an orphaned brand of light switch that was installed in many places decades ago. It might have been "secure" when it was released, but encryption and systems security are only as good as the next few generations of computers. At some point in the future, everything will need to be patched.

    All else being e
  • These standards are pretty worthless. Unless you significantly exceed them, your products will suck at security.

  • lol yeah right. we notice your system sucks here is a little lib from the nsa so we can help you keep it "up to date" looooollllololol noooope
  • In all the mil tests we found there was always an IoT backdoor.

    Always.

    Without exception.

    It's the nature of forgetfulness. "Honey, did you remember to update the toaster and the fridge?"

  • Requiring updates is good and all, but for how long?
    Even Microsoft was reluctant to patch XP. Someone, somewhere is still running Windows 95. Are they entitled to patches?

    Say I buy a $50 IP camera for home security. Do I still demand patches in 2 years? 5 years? 10 years?

    This might lead to a "kill switch" for cheap IoT devices once they go EOL. New model next year, no budget to patch both, so let's kill last year's. Otherwise, support costs will bring down any company.

    For a car analogy, there's a time limi

    • Don't worry. This only applies if they want to sell to the government. If they only want to sell you their junk, they're fine.

  • Have you read TFS? They don't make it mandatory to make IoT devices patchable, or even at least secure the moment they get shipped; all they do is say that if you want to sell your crap to the government, it has to be.

    So no, the Intelligently Designed Internet Of Things Systems can still be sold to their acronym.
