Government Security

How Do US Government Agencies Verify Security Software from Private Contractors? (politico.com) 117

A recent article at Politico argues that the U.S. government "doesn't do much to verify the security of software from private contractors. And that's how suspected Russian hackers got in." The federal government conducts only cursory security inspections of the software it buys from private companies for a wide range of activities, from managing databases to operating internal chat applications. That created the blind spot that suspected Russian hackers exploited to breach the Treasury Department, the Department of Homeland Security, the National Institutes of Health and other agencies...

Attacks on vendors in the software supply chain represent a known issue that needs to be prioritized, said Rep. Jim Langevin (D-R.I.), the co-founder of the Congressional Cybersecurity Caucus. "The SolarWinds incident... underscores that supply chain security is a topic that needs to be front and center," Langevin said....

He said Congress needs to "incentivize" the companies to make their software more secure, which could require expensive changes. Some others are calling for regulation.

Private companies regularly deploy software with undiscovered bugs because developers lack the time, skill or incentive to fully inspect them.

Long-time open source advocate Steven J. Vaughan-Nichols argues another issue is the closed-source nature of SolarWinds' software: Proprietary software — a black box where you can never know what's really going on — is now, always has been, and always will be more of a security problem. I would no more trust anything mission critical to proprietary software than I would drive a car at night without lights or a fastened seat belt... A fundamental open source principle is that by bringing many eyeballs to programs more errors will be caught. That doesn't mean all errors are caught, just a lot more than those by a single proprietary company... Just consider the sheer number of serious Windows bugs — does a month go by without one? — compared to those of Linux...

In short, proprietary software companies, like SolarWinds, are still making huge security blunders, which are hidden from users until the damage is done.

  • How? (Score:4, Insightful)

    by nospam007 ( 722110 ) * on Monday December 21, 2020 @07:37AM (#60853212)

    Not at all.
    Next question?

    • This is the first headline I've seen that falls under Betteridge's law of headlines [wikipedia.org] but begins with "how".
  • by gweihir ( 88907 ) on Monday December 21, 2020 @07:50AM (#60853222)

    Inspecting software for backdoors is significantly more difficult and expensive than writing that software in the first place, at least if the backdoors are placed with some level of sophistication. If you do not trust your software maker, the only sane option is to write the software yourself instead. Obviously there is a lack of skills/funds for them to do it themselves. How on earth would they be capable of inspecting software for backdoors successfully in that situation?

    • Well, they are paying orders of magnitude more too.

      It's the original reason. The rule goes:
      If you are working for a corporation, add one zero.
      For a government, add two zeroes.
      And for the military, add three zeroes. :)
      It's just that when you operate on a "profit over literally everything else because 'shareholders demand it'" model, you will find that you can skip the additional work that you are being paid more for, as long as the victim can't tell.

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      Yes, this is the core of the matter right here.

      Let's consider that Apple cert validation flaw from some years back. I don't think that was a deliberately placed backdoor, and to my knowledge it was not widely exploited before it was reported, but it's exactly the kind of thing a malicious state actor wanting to slip in a backdoor could do, perhaps would do.

      If memory serves it was a duplicated goto line or similar after a branch statement, where someone did not use braces and the duplicate line essentially made the

      • by gweihir ( 88907 )

        Yes, pretty much. Also remember that there is a vast wealth of insecure code out there where the problem has been found and that can serve as inspiration. There is code that looks right, feels right and still does something completely different than expected when some specific conditions are met. No code security review is going to find something like that reliably.
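        A minimal sketch of that kind of bug, in the spirit of the "goto fail" incident the parent describes (simplified for illustration only; the function names are made up, this is not any vendor's real code). An unbraced if followed by a duplicated goto quietly skips the final check, and the function still reports success:

            #include <stdio.h>

            /* Illustration only: a simplified "goto fail"-style bug. */
            static int check_expiry(void)    { return 0; }   /* 0 = OK */
            static int check_signature(void) { return 0; }

            static int verify_certificate(void)
            {
                int err = 0;

                if ((err = check_expiry()) != 0)
                    goto fail;
                    goto fail;                       /* duplicated line: always taken */
                if ((err = check_signature()) != 0)  /* never reached */
                    goto fail;

            fail:
                /* err is still 0 from check_expiry(), so a caller that treats 0 as
                 * "verified" accepts the certificate without the signature check. */
                return err;
            }

            int main(void)
            {
                printf("verify_certificate() = %d\n", verify_certificate());  /* prints 0 */
                return 0;
            }

        It reads fine at a glance and behaves correctly in every test where an earlier check legitimately fails; a reviewer has to notice one extra, visually identical line to catch it.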

    • The government generally does not have the technical expertise, either to inspect or to write software. And that's ok - we don't want the government doing everything itself. It also doesn't build buildings itself, or highways, or lots of other things that require specialized knowledge. The question may better be: under what circumstances should the government require and pay for an independent software inspection?

      I see the bigger problem as this: What competent company are you going to hire? One of the prob

      • by gweihir ( 88907 )

        Well, that would mean that governments cannot get reliably secure software. But it may well be the truth.

        There is a high price to pay for organizational dysfunctionality. And, let's be honest, if they were enterprises, basically all governments would have vanished into bankruptcy a long time ago. As it is, they can just bill the tax-payer (in money, in time and in blood) for the results of their abysmal performance.

    • by Jaime2 ( 824950 )

      You don't need to inspect the actual software. Actually, you probably don't want to rely on inspection, because it's terribly expensive. The open source "many eyes" mantra is all about spreading that burden. However, if you have to make sure your software is secure, you can't rely on the rest of the world to do it for you in most cases.

      So, here's what you do: first, get all the free benefits you can by always installing security updates very soon after they are published. Put whatever funds are necessary to

    • at least if the backdoors are placed with some level of sophistication

      There are a lot of companies selling products on the GSA schedule that have support and development teams in India and China. They are also allowed to hire people from highly problematic (from a counter-intel POV) countries like Russia, China, India, Vietnam and Israel. The pipeline to get spies into those teams is relatively short, as the recent doxxing of the Chinese Communist Party [skynews.com.au] shows. The CCP has operational cells all across Western

    • by vyvepe ( 809573 )

      Inspecting software for backdoors is significantly more difficult and expensive than writing that software in the first place, at least if the backdoors are placed with some level of sophistication.

      Not true. A careful review is about 4 times quicker than writing code from scratch. Now, you can claim that placing a backdoor can be done in a very sophisticated way and finding this in a review would be hard. But so is placing such a sophisticated backdoor there. That increases the development time a lot so more review time is justified as well.

      The bigger problem with customers inspecting vendor code is whether the vendor is willing to provide all the source code, design documents and the build environment.

      • by gweihir ( 88907 )

        Inspecting software for backdoors is significantly more difficult and expensive than writing that software in the first place, at least if the backdoors are placed with some level of sophistication.

        Not true. A careful review is about 4 times quicker than writing code from scratch. Now, you can claim that placing a backdoor can be done in a very sophisticated way and finding this in a review would be hard. But so is placing such a sophisticated backdoor there. That increases the development time a lot so more review time is justified as well.

        The bigger problem with customers inspecting vendor code is whether the vendor is willing to provide all the source code, design documents and the build environment.

        Actually, placing sophisticated backdoors is not very hard for qualified people. Finding them is next to impossible. This does assume a significant skill gap between the person designing and placing the backdoor and the person doing the review. That gap is usually there in a competent attack. Also, you are probably still thinking of some explicit backdoor that has its own network socket or the like. That is low amateur level.

    • Given the current military budget and how important "national defense" is to government spending, it ought to be simple to justify moving literally everything in-house. But there's nothing flashy about that vs. a new fighter jet.

      • There's absolutely no reason to believe that software developed by people who had to settle for a government job at 1/3rd the pay are going to produce secure, reliable software.

        • So you've identified the real problem. You know they are paying way more than that to outside contractors.

          • > So you've identified the real problem. You know they are paying way more than that to outside contractors.

            And paying more has caused them to get secure, robust software?

            I work in software security for a living, reading and writing policies for secure software development and reading research on the subject. There are dozens of parameters / indicators you can use for software development security. I think the policy I'm writing for our enterprise has about a dozen things that I'm requiring for security

            • by gweihir ( 88907 )

              It is not that simple. In-house you get more control over the process. Whether you are able to use that control to your advantage or not is a different question. A key problem that makes people vulnerable to being corrupted by outside agents is low pay and low job satisfaction. A secure coding policy should _start_ with requiring that the people writing the code need to be satisfied with their work situation and generally like what they do. Because there is no valid replacement for loyalty and that you get

              • One thing I've found helpful with both devs and management is to recognize and repeatedly emphasize a particular fact that is only obvious once it's been stated.

                A secure system is, of course, one that continues to function reliably even when it's under attack. (Morris's definition).
                That requires that the system must function reliably when it's NOT under attack. Let's look at our CIA triad.

                Starting with availability. You might think of a DOS attack.
                Availability means a system continues to run, to be available fo

    • And at a practical level, writing it yourself poses more or less the SAME RISKS.
      Any reasonably cogent organization might just plant someone on your coding staff (either as an employee or contractor) or even a bloody janitor with the right set of keys on his keyring and a single USB stick - voila, you're penetrated.

      The fact is we operate with a lot of trust in a world that deserves none.

      • by DarkOx ( 621550 )

        The thing is, writing it yourself means you can avoid a lot of the complexity. What part of the Windows code base do you think is a direct or indirect requirement for backend systems at the US DOE, for example?

        The smaller you make the software footprint, the less opportunity there is to place something malicious there and the more likely it is to stand out.

        Look at what pentesters did with powershell versions 2.0 and prior. It gave them access to the entire .Net framework without having to actually compile anything o

        • by gweihir ( 88907 )

          Complexity is the enemy of security. No question about that. Even if most of the current crop of developers does not seem to know that.

        • Completely agree. But on the other side of that is expertise and competence as well.

          Again, I basically agree with your points, but ultimately what's going to be a better (to say nothing of more secure) piece of software: something you build yourself with in-house 'talent' (who has this, without having to hire contractors anyway, really? And in that case, are you really getting any security benefit?) or something built by people whose SPECIFIC expertise and talent have spent a decade or more on this exact

      • by gweihir ( 88907 )

        The trick is to write it yourself with people that are highly qualified, highly motivated, very well paid and very well treated. Also, people that have gone through a real security background check. With that, you need deep-cover agents to get in and they will likely not be able to write code on that level anyway.

        The funny thing is that due to the productivity of people in this class, producing the code will be much cheaper overall and the result will be much better in other aspects as well. But it requires

  • by BAReFO0t ( 6240524 ) on Monday December 21, 2020 @07:59AM (#60853230)

    The problem is that you need to be skilled to tell who is skilled.
    Otherwise you will always have to trust whoever looks the most trustworthy. (Like SolarWinds.) Which forces "argument from authority". A logical fallacy. And the only way around it is experience. By you, personally.

    Open source is all good and well, and yes, closed source is literally buying a pig in a poke, but there is a difference between "everybody *can* look" and "somebody *does* look", as OpenSSL did prove to OpenBSD. :)

    Especially with hostile entities deliberately injecting underhanded backdoors [wikipedia.org], that not even an above-average audit can catch.

    But if you are operating on the level of the Mossad, USA, Russia or China, you're probably injecting dopant-level hardware trojans [startpage.com] from your own fab by now. So that's all moot anyway.

    • by gweihir ( 88907 )

      The problem is that you need to be skilled to tell who is skilled.
      Otherwise you will always have to trust whoever looks the most trustworthy. (Like SolarWinds.) Which forces "argument from authority". A logical fallacy. And the only way around it is experience. By you, personally.

      Exactly. I keep telling that to people, but I think so many are now Dunning-Kruger examples that they simply do not understand that being able to look at glossy marketing material does not constitute the skill to evaluate a company.

  • how many times...

    BUY REDHAT/IBM

    don't get fired/hacked

  • by sphealey ( 2855 ) on Monday December 21, 2020 @08:18AM (#60853252)

    This was the big push, started in the 1990s under Clinton and of course accelerated by Cheney/Bush, to convert US government systems (Federal and state) from internally developed and/or bespoke to "COTS" (commercial off the shelf) provided by commercial contractors, and then ultimately to have all systems and software operations and maintenance outsourced too. There were cries of anguish here on this site from e.g. military systems operations people who were watching their systems built up, modified, and optimized over 40 years (in software, and often 100 years on pencil and punch card before that) ripped out and replaced with shiny new commercial systems that didn't even come close to meeting the overall requirements. And who were then moved to other jobs or just RIF'd.

    What did we expect would happen?

    • by gweihir ( 88907 )

      What did we expect would happen?

      Well, I did expect exactly this. There are too many Dunning-Kruger cases around that simply follow the hype without any understanding at all. It is utterly pathetic.

    • I've seen the same thing happen in the private sector (healthcare), where a well-running legacy system with a mix of custom code and a small amount of COTS was replaced with a completely COTS system. It was a disaster of course.

      But isn't there a counter-factual argument about this kind of scenario? Where you reach the end of the line for a platform and suddenly you've got a vast amount of legacy code which needs to be ported if not refactored entirely for a new platform? Or you need to introduce a new pl

      • I think the actual cost of refactoring is greatly exaggerated by developers ... because it's such an awfully annoying thing to have to do.

        • It's just a Pandora's box: once you open it, everyone wants their pet change made and it winds up being a complete rewrite.

      • One of the great profundities of life is that the optimal solution is not always a good solution.

    • Re:COTS (Score:4, Informative)

      by gtall ( 79522 ) on Monday December 21, 2020 @09:51AM (#60853372)

      Actually it was started under Reagan. The feeling was that the federal government should be privatized. It was the New Management equivalent of peeing on the Previous Management's policies. It made the new management feel good that they could point to bullet points on slides.

    • I expected the people who pushed for COTS would receive rewards for cost saving.

      (And no, I don't believe that was necessarily the right solution. I once got a cost saving award for...well, I won't explain it here, but suffice to say it was not warranted.)

    • Developing everything internally adds some security but is by no means a silver bullet. I've worked software security audits before and regardless of whether or not it was COTS or an in house product, security was always an afterthought. Making the software secure was always the last thing to be considered, and usually after the product is essentially finished. They'll look through the list of known vulnerabilities that were found to exist in their product, and maybe pick a few easy ones to actually fix, an

  • by Entrope ( 68843 ) on Monday December 21, 2020 @08:25AM (#60853260) Homepage

    There are fairly well-understood, although often expensive and not so commonly used, ways to audit software security. The DoD's upcoming CMMC also encourages contractors to make the security of delivered products part of their institutional practices. For secure (airgapped) networks, all code is supposed to be scanned for malware before being released, and other security controls are strongly recommended to check for obvious security problems -- virus scanners are never going to be perfect. But not even the government wants to spend the money and time to do in-depth checks against all that code all the time.

    However, the SolarWinds hack isn't really addressed by those mechanisms. This was straight up a case of letting a vendor deploy new code straight to your network without checking that code. That's a simple supply chain vulnerability, exacerbated by not having a good way to audit or authorize those changes before they are used.
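    One concrete, if partial, way to check such code before it goes live is to refuse to stage an update unless its digest matches a value obtained out of band from the vendor. A rough sketch of that idea using OpenSSL's EVP API (the file name and expected digest below are placeholders for illustration, not anything SolarWinds publishes):

        #include <openssl/evp.h>
        #include <stdio.h>
        #include <string.h>

        /* Hash a file with SHA-256; returns 0 on success. */
        static int sha256_file(const char *path, unsigned char out[32])
        {
            FILE *f = fopen(path, "rb");
            if (!f) return -1;

            EVP_MD_CTX *ctx = EVP_MD_CTX_new();
            if (!ctx) { fclose(f); return -1; }
            EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

            unsigned char buf[4096];
            size_t n;
            while ((n = fread(buf, 1, sizeof buf, f)) > 0)
                EVP_DigestUpdate(ctx, buf, n);

            unsigned int len = 0;
            EVP_DigestFinal_ex(ctx, out, &len);
            EVP_MD_CTX_free(ctx);
            fclose(f);
            return len == 32 ? 0 : -1;
        }

        int main(void)
        {
            /* Placeholder values: the expected digest would come from a signed
             * advisory or some channel separate from the download itself. */
            const char *update = "vendor-update.msi";
            const char *expected =
                "0000000000000000000000000000000000000000000000000000000000000000";

            unsigned char digest[32];
            if (sha256_file(update, digest) != 0) {
                fprintf(stderr, "could not hash %s\n", update);
                return 1;
            }

            char hex[65];
            for (int i = 0; i < 32; i++)
                sprintf(hex + 2 * i, "%02x", digest[i]);

            if (strcmp(hex, expected) != 0) {
                fprintf(stderr, "digest mismatch: refusing to stage the update\n");
                return 1;
            }
            puts("digest matches: OK to stage for testing");
            return 0;
        }

    Worth noting: a check like this only catches tampering after the vendor's build. In the SolarWinds case the malicious code was compiled and signed inside the vendor's own pipeline, so a vendor-published digest would have matched anyway; it addresses the "straight to your network without checking" problem, not a compromised build chain.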

    • The DoD's upcoming CMMC also encourages contractors to make the security of delivered products part of their institutional practices.

      Encourages? How about requiring it? That's what our MoD and larger corporations already do: they don't just audit the code (when possible); they also audit the supplier and their "institutional practices". Who works there? have they been vetted (for MoD work, employees at a supplier may need a security clearance)? How does the supplier audit their own code? Do they rigorously apply the 4 eyes principle before allowing commits? How is the integrity of the repository monitored? And so on. There are e

      • by Entrope ( 68843 )

        Those are already covered by other standards, yes. CMMC goes a lot farther and is much more security-centric.

    • by endus ( 698588 )

      Any company in a regulated industry is going to be looking for their suppliers to have an application security program. Barrier to entry. Very very common.

      The quality of outsourcers' third party risk programs and their ability to track and follow up on whatever level of assurance they require from the vendor varies, but auditing software security at some level is very basic.

      • by Entrope ( 68843 )

        Any company in a regulated industry is going to be looking for their suppliers to have an application security program. Barrier to entry. Very very common.

        The quality of outsourcers' third party risk programs and their ability to track and follow up on whatever level of assurance they require from the vendor varies, but auditing software security at some level is very basic.

        That kind of auditing is about auditing the supplier and their processes, not the software they deliver. It attempts to prevent exploi

        • by endus ( 698588 )

          "Application Security Program" refers specifically to application level pen testing and secure code review. Those two types of testing are usually called out specifically in any contract with a large entity. Annual testing is the standard requirement. Most regulated outsourcers will want some level of reporting from that program as well.

          Follow up on the reporting varies more, but most third party risk programs run by large directly regulated entities are going to ask for it during the annual audit/risk a

          • by Entrope ( 68843 )

            You are missing the point entirely. Those kinds of programs are good for what they do, but they address points in the supply chain earlier than what was exploited in the SolarWinds case. When a software update server is compromised, it almost doesn't matter whether the developers have strong peer/code review or use SAST tools or do pen testing in an integration environment or whatever else -- the bad code gets in after those.

            There are two simple ways to catch this: First, scan and test software when it co

            • by endus ( 698588 )

              Got it, I see what you're saying now.

              Second method is definitely much more realistic from...well...just about every perspective. You could do it with stable open source software but outside of that the problems start to mount pretty quickly.

              I would bet on increased due diligence targeted at the security of update servers in the wake of this as well. Not necessarily applicable to what I work on now, but I would be aware if there was a lot of focus on that specifically, and there is not. The bar for what i

  • It used to be that there were multiple layers. You had intrusion detection systems at the firewalls, so that the security office could watch for abnormal traffic flows.

    But then someone at the GSA thought it was a good idea to encrypt everything, and so we had to move away from HTTP, FTP, and other protocols where we could easily monitor the traffic.

    HTTPS-Only might not've been so bad if they had used proxies to decrypt the traffic, then inspected the data between the proxy and actual endpoint. But that ba

    • by sinij ( 911942 )
      Monitoring illicit information flows is much harder now that everyone knows to mask command and control traffic as HTTPS. The available solutions - an endpoint agent that monitors traffic, or forced MITM interception with decryption - all introduce their own set of problems. For example, if I have an agent that controls firewall rules on each endpoint, that agent needs to connect to a central server to get its policy. This means that there now exists a new attack surface, where the agent, central server or communicati
    • Your observations regarding the relative impenetrability of SSL-encrypted traffic might have been valid some time ago, but the world has moved on from there.

      First, there is widespread use of SSL Interception technology. Many large companies routinely inspect all SSL that crosses their network perimeter via this means - and those organizations have specific clauses in their contracts of employment to allow them to do it. See here [zdnet.com] for an overview of how it works.

      Next, there is basic meta-data. Companies
  • Just stop exempting commercial software products from the laws that apply to other products. If you sell software that says it does "X," then by God it should do "X" reliably and to specification. If it doesn't, and you won't fix it at your expense, you should be civilly liable for selling a product that is defective and not "fit for purpose." No new regulations are needed. We just need to put the fear of God into the pointy haired bosses.

    • by bws111 ( 1216812 )

      What laws, exactly, are you supposing software is 'exempt' from? What (non-software) products are you supposing are required, by law, to protect you from criminal activity? Do you think it is a 'defect', which must be fixed at the manufacturer's expense, that cars do not come with bullet-proof glass and armor plating?

      • What laws, exactly, are you supposing software is 'exempt' from?

        Contract law. A EULA only agreed to by one party and offering the other party nothing they haven't already paid for should be unenforceable. And it's that EULA that gets them off the liability hook.

        • by bws111 ( 1216812 )

          A EULA is not a contract. You cannot exempt yourself from the law. If there are laws that 'apply to other products', as the OP states, they also apply to software. So what are these mythical laws that software is exempt from?

          • A EULA is not a contract.

            wat [wikipedia.org]

            The EULA is, in fact, a contract. And it is a contract that specifically exempts the seller from all liability and usually states that they aren't responsible for anything involving the software, up to and including the software completely failing to perform.
    • by sinij ( 911942 )
      Let's look at how civil engineering handles this, as they are tightly regulated and are supposed to be responsible for all flaws for decades after construction. They take out insurance and design to the code. If something is up to the code but still crashes and burns, nobody cares - they are protected from liability. Now, do you think any kind of "software building code" could exist? What is it going to say on page 1, don't use C because it is too dangerous?
    • by vyvepe ( 809573 )

      If you sell software that says it does "X," then by God it should do "X" reliably and to specification. If it doesn't, and you won't fix it at your expense, you should be civilly liable for selling a product that is defective and not "fit for purpose."

      What specification? Do you have any idea how huge a detailed specification is? You are only moving the problem from "check the program against the specification" to "check the specification against the desired reality". Anyway, finding good sets of specifications for some recurring use cases could be a good research area.

  • by Meneth ( 872868 ) on Monday December 21, 2020 @08:49AM (#60853290)

    Cars are perhaps not mission-critical, but with self-driving systems they are increasingly life-critical for all who brave the road.

    Now I wonder: what parliament, anywhere in the world, is most likely to pass a law like "you may not operate a self-driving automobile on public roads unless its software is Free"?

    • Probably none. Free software developers don't have deep pockets to line those of parliamentarians. Their "adversaries" do. The votes follow the money. I'm sure you're shocked, just shocked, to hear that.

      Best,

    • None. Your statement is actually "you may not operate a self-driving automobile on public roads unless you give said software away to your rivals for free and thus destroy any competitive advantage and destroy the company's investment in self-driving cars".
  • There are several technical problems here, including (1) producing better code and (2) better verification. We have techniques for both of these, but they are both expensive and unpopular. Too many developers would rather just hack away to meet schedule than to actually do the much harder work to design better code and spend more effort in verification.

    So the solution involves business decisions, getting companies to adopt practices that produce better code. I'm not sure about 'carrots', but I know the bi

    • Too many developers would rather just hack away to meet schedule than to actually do the much harder work to design better code and spend more effort in verification.

      "Hey boss, can I spend the time to verify this code for security?"

      "No."

      Now what? Is that the developer's fault?

      • Actually, yes, at least in part. If you were a P.E. at a structural engineering firm, you would still be liable even if your boss told you to cut corners.

        • Actually, yes, at least in part. If you were a P.E. at a structural engineering firm, you would still be liable even if your boss told you to cut corners.

          If the river were whiskey, all the fish would die. But we're here in the actual, real world, where a software engineer isn't liable even if their boss tells them to cut corners.

          These choices aren't up to engineers. The structural engineer is legally obligated to do those things, so their boss reasonably has to let them do them.

          We either need to bring the same logic to software, or we need to accept that the engineer isn't at fault. They have to pay their bills. Without rational controls, capitalism always g

          • by bws111 ( 1216812 )

            Ah yes, it is capitalism's fault. Because in your little non-capitalist dream world there are no time or resource pressures. Every project can just drag on and on, as long as there is some developer who isn't satisfied. Yeah, right.

            • Ah yes, it is capitalism's fault.

              Is it capitalism's fault you can't read? What I said was that capitalism always needs to be carefully controlled, or it goes wrong. Kind of like you need to be carefully coached, or you ignore what was written.

          • by alexo ( 9335 )

            Actually, yes, at least in part. If you were a P.E. at a structural engineering firm, you would still be liable even if your boss told you to cut corners.

            If the river were whiskey, all the fish would die. But we're here in the actual, real world, where a software engineer isn't liable even if their boss tells them to cut corners.

            These choices aren't up to engineers. The structural engineer is legally obligated to do those things, so their boss reasonably has to let them do them.

            We either need to bring the same logic to software, or we need to accept that the engineer isn't at fault. They have to pay their bills. Without rational controls, capitalism always goes wrong. Always.

            The same logic already applies to software. A Licensed/Chartered/Professional Software Engineer [uhcl.edu] is legally liable for the work they sign off on. Although the fact that in many jurisdictions unlicensed software developers are allowed to call themselves "Software Engineers" -- without the Licensed/Professional/Chartered (as applicable) designation -- can be confusing.

            • Software Engineering licensing -and assumption of liability- (The software equivalent of a civil engineer sealing the drawings) are both required to fix this problem.

              It would be interesting to know how Texas handles 'assumption of liability' and whether there have been any liability judgements associated with that.

              • I'm not sure about liability, but due to the way the regulations work there aren't very many software PEs in Texas. In order to get your PE, you have to be vouched for by an existing PE you've worked with for several years. Which creates a chicken and egg problem. You can't have another software PE vouch for you when there are no software PEs. So you have to work with a civil PE or something for a few years and get them to vouch for you. It's going to be a slow ramp-up process.

                • The transition to 'professional liability' won't happen overnight. This includes establishing qualifications including licensing requirements for Software PEs, contractual requirements between providers and consumers for the delivery of appropriately certified/accredited/whatever-ified software (what is the delivery mechanism? What are the exact assumption and limitations on liability?)

                  This requires active engagement between the engineering community (IEEE/ACM?), the licensing authority (each state has pr

  • Trusted Systems (Score:5, Insightful)

    by chill ( 34294 ) on Monday December 21, 2020 @09:11AM (#60853316) Journal

    Trusted systems [wikipedia.org] can provide a great deal of security, but they are very expensive and slow to change.

    Today's environment of "agile" and frequent updates to software makes things like trusted systems impossible. Updates and new versions bring new revenue from existing customers. Once software is done, you have a hard time getting people to keep paying for support, and thus staying in business.

    As major projects move away from semantic versioning [semver.org] to pure build numbers or incremental release numbers, you no longer have the option of rejecting new features and complexity if you just want the predictability, stability and reliability that come with pure bug fixes. (A toy sketch of that distinction follows below.)

    The Unix philosophy of "do one thing and do it well" is a main reason the *nix systems (and their like-minded brethren) were the only real choices for mission critical government, military, and industrial systems. Complexity breeds insecurity and instability. Time for that lesson to be learned yet again.
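    A toy sketch of that distinction (the version strings are invented for the example): with MAJOR.MINOR.PATCH a deployment can mechanically accept pure bug-fix releases and reject anything that changes features, which a bare build number does not allow.

        #include <stdio.h>

        /* Parse "MAJOR.MINOR.PATCH"; returns 0 on success. */
        static int parse_semver(const char *s, int *maj, int *min, int *pat)
        {
            return sscanf(s, "%d.%d.%d", maj, min, pat) == 3 ? 0 : -1;
        }

        /* Accept only updates that keep MAJOR.MINOR fixed and raise PATCH. */
        static int is_patch_only_update(const char *cur, const char *next)
        {
            int cm, cn, cp, nm, nn, np;
            if (parse_semver(cur, &cm, &cn, &cp) != 0 ||
                parse_semver(next, &nm, &nn, &np) != 0)
                return 0;
            return nm == cm && nn == cn && np > cp;
        }

        int main(void)
        {
            printf("%d\n", is_patch_only_update("2.4.1", "2.4.3"));    /* 1: bug fixes only */
            printf("%d\n", is_patch_only_update("2.4.1", "2.5.0"));    /* 0: new features */
            printf("%d\n", is_patch_only_update("2.4.1", "20201221")); /* 0: a bare build number says nothing */
            return 0;
        }

    Once everything is just an opaque build number, that gate disappears and you take whatever the vendor pushes.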

    • by sinij ( 911942 )

      Trusted systems [wikipedia.org] can provide a great deal of security, but they are very expensive and slow to change.

      "Very expensive" is understatement of the year. This approach requires formal design process... We are talking 100x to 1000x times more expensive, to the point that even government can't afford the expense. It is also much, much slower way to develop software.

      Unless there is breakthrough in automation to speed up development and reduce costs, this is as much of a solution as using typewriters for everything [bbc.com].

      • Trusted systems have varying degrees of trust that are appropriate for different users. Certified operating systems include:

        Apple Mac OS X 10.6 (Rated EAL 3+[2])
        HP-UX 11i v3 (Rated EAL 4+)
        Some Linux distributions (Rated up to EAL 4+)
        Microsoft Windows 7 and Microsoft Server 2008 R2 *in a particular configuration* (Rated EAL 4+)
        AIX 5L with PitBull Foundation (Rated EAL 4+[4])
        Trusted Solaris
        Trusted UNICOS 8.0 (Rated B1[5])
        XTS-400 (Rated EAL5+[6])
        IBM VM (SP, BSE, HPO, XA, ESA, etc.) with RACF

        Others probably me

    • by hey! ( 33014 )

      There isn't a single right way to do software that fits every circumstance and works even if you totally lack common sense. If business priorities change faster than you can deliver software, you simply have to find a way to be more agile. If a piece of software is going to be a fundamental part of your network operations for years to come, you need to push back harder against arbitrary deadlines.

      The problem with security is that you can't evaluate it with a functional test. Management quickly learns that

    • Once software is done

      That's a rather idealized view. In the real world, in most cases, software is never done. Not because the requirements aren't met (though they often aren't) but because the requirements change, because the context changes.

  • by kaur ( 1948056 ) on Monday December 21, 2020 @10:06AM (#60853408)

    You can have all the controls and requirements in the world defined, applied, tested and passed with flying colours.
    Operational security will eventually trump - or defeat - them all.

    How could an agency test whether access control to the servers in SolarWinds' update chain was secure?
    Not by "verifying" the software or service, for sure.
    Perhaps by auditing SolarWinds?
    But I am sure they had all the necessary bits in place - required MFA for admin accounts, network separation, maybe a four-eyes principle on important updates.
    All this blew to pieces because of lacking operational controls.

  • Once a contractor shows that they've dotted all the i's and crossed all the t's and have the q's and p's aligned correctly...that's it. They're approved. Nobody ever goes back through and verifies either the software or the update process. EVER. No matter how many iterations or updates are produced, or when, or why, or what's going on in the world at the time.

    So, instead of the Reagan-era "Trust, but verify", we now have "Verify, then trust unconditionally". And that is a major factor of what led to this incide

    • by endus ( 698588 )

      I'm not sure what industry you're in, but companies go back and verify the software, and the update process, and the security around development of the software, and the security around the IT environment the software is developed in, and the contractor's personnel security, and...and...and...and...all the time.

      All. The. time.

      For smaller companies, running a third party risk program is definitely more difficult, but every big player in every regulated industry is doing this enthusiastically right now. The

  • The main government-focused certification process is DoD APL [disa.mil]. It includes functional testing and compliance checklists. Part of these checklists is to show FIPS [wikipedia.org] and Common Criteria [wikipedia.org] certifications.

    These programs are good at preventing known flaws (e.g. your AES-CBC implementation is bad) and known vulnerabilities (e.g. OpenSSL CVEs). These programs are also good at enforcing basic security hygiene like audit, access controls, use of secure channels.

    These certification programs do nothing to prevent unknow
    • To quote from your link:

      " the language with the least use of open-source libraries is also the one with the most bugs"

      • Which is kind of damning with faint praise. Seriously, the guy is saying "This wouldn't happen with open source because many eyes" when that is proven false by an article on Slashdot yesterday.
  • A non-military, Executive branch agency is required to follow FISMA. This means following NIST SP 800-53, latest revision, for control guidance and getting explicit authorization to operate (ATO) for an information system.

    However, in FISMA parlance, SolarWinds is not an information system unto itself. It would be a component on the underlying General Support System (GSS). That is the underlying infrastructure for IT in general.

    In practicality it means deployment of SolarWinds would require changing default

    • by PPH ( 736903 )

      Again, none of this really addresses the situation where a vendor's build process is compromised.

      An audit of a vendor's build process might have helped. All those supply chain risk assessments are fine. And I would expect that the security of the build process would be included. But if nobody actually takes that assessment and says "password on a public facing update server is weak", why even bother doing the assessment?

      • by chill ( 34294 )

        Except "password on a public facing update server is weak" wasn't the issue. That was something separate from almost a year ago. Internal build servers -- the ones that do the compile and sign -- aren't public facing.

        That may have been the initial entry way back, but once they were in they weren't detected. That brings us back to "nuke it from orbit is the only way to be sure". But rebuild from scratch every time a mistake is caught? Ugh.

  • People want to make sure the government doesn't spend too much money.
    To accomplish this, a government job often pays less than what you would get commercially.
    Good employees will leave government jobs and go to commercial industry where there is better pay. *
    Mediocre employees or employees who failed to succeed in commercial industry get jobs in government. *
    This creates lackluster results.
    Lackluster results force governments to hire expensive contractors to do the necessary work.
    or
    Lackluster results creates p

  • How quick we are to forget:
    https://en.wikipedia.org/wiki/... [wikipedia.org]

    Sorry, while open source has a good story, it's just as plausible that an open source update has something nasty. If you think that just because it's open source someone is actually vetting code changes before they run binaries (of otherwise trusted applications), you are crazy. Even if I did read the code, am I able to fully understand it and all potential implications? https://www.ioccc.org/ [ioccc.org]

    While open source could help here, it's no sa

  • The post highlights auditing, regulating, and inspecting software.

    This would not have helped.

    Others point out testing, the problems with automatic updates, virus scanning, inspecting for back doors, and auditing vendor's infrastructure.

    These would not have helped either. (Well, maybe disabling automatic updates - which trades for other problems.)

    The problem that SolarWinds had was LEAKED CREDENTIALS*. Their legitimate software was _replaced_ by a customized version by a group with the resources of a nation,

  • Managing supply chain risk is not a new problem, and there are tons of established best practices. No doubt, this breach will cause many companies to hire a bunch of consultants to try and reinvent the third party risk management wheel by ignoring all of these best practices and placing unreasonable and unrealistic demands on suppliers, but it doesn't need to be that way at all.

    Requiring suppliers to have an application security program is basic stuff. There are service providers out there doing ap

  • Once the kickback checks clear, the software is secure.

  • The GSA should be held accountable for the solarwinds123 fiasco. They have sat on their hands for years spending billions (trillions?) and not really taking their supply chain seriously. Also worthy of mention is the NTIA's Software Transparency initiative:
      https://www.ntia.doc.gov/Softw... [doc.gov]

  • Well, with all the right-wing phony deficit hawks, they've cut off hiring new federal employees*, so you hire a federal contractor to review the security on software from a federal contractor.

    Right?

    And, since you're cutting their budgets, they don't have enough time, so they'll use the vendor's "evidence" as proof of security.

    * Speaking as someone who spent '09 through '19, when I retired, as a federal contractor, civilian sector, sr. Linux sysadmin. You think I had time to review the stuff, when I had my j

  • I'd bring up an alternative to what Vaughan-Nichols says: that open source brings your eyeballs to the program. Even if nobody else is looking at the code, the entity that wants to use it can look at it. They can scan it for security problems and make a decision based on actual data about the code's vulnerabilities. By comparison, proprietary software can't typically be examined the same way because the source code isn't available or is only available in obfuscated form. The government could negotiate acces
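    As a toy-level sketch of what "scan it for security problems" can mean once you do have the source (real reviews would use proper static analyzers; the default file name and the function list here are just placeholders): flag calls that are classically easy to misuse and have a human look at every hit.

        #include <stdio.h>
        #include <string.h>

        /* Illustrative list of classically risky C calls to flag for review. */
        static const char *risky[] = { "gets(", "strcpy(", "sprintf(", "system(", NULL };

        int main(int argc, char **argv)
        {
            const char *path = argc > 1 ? argv[1] : "vendor_module.c";  /* placeholder */
            FILE *f = fopen(path, "r");
            if (!f) { perror(path); return 1; }

            char line[1024];
            int lineno = 0, findings = 0;
            while (fgets(line, sizeof line, f)) {
                lineno++;
                for (int i = 0; risky[i]; i++) {
                    if (strstr(line, risky[i])) {
                        printf("%s:%d: matches \"%s\"\n", path, lineno, risky[i]);
                        findings++;
                    }
                }
            }
            fclose(f);
            printf("%d potential issue(s) flagged for review\n", findings);
            return findings ? 2 : 0;
        }

    The point isn't that a grep-level pass like this finds backdoors (it won't); it's that with source in hand the customer can gather at least some evidence of their own instead of taking the vendor's word for it.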

  • This bearded old man has looked at the discussions of the Solar Winds hack.

    The points that leap out at me are:

    1) Solar Winds was exempt from scanning & monitoring by other security software.

    2) *.dll files are discussed. Microsoft servers & clients are the prime targets (?)

    3) remote access servers.

    So

"If it ain't broke, don't fix it." - Bert Lantz

Working...