Dutch Gov't Offers Guidance For Responsible Disclosure Practices
An anonymous reader sends this quote from an IDG News report:
"The Dutch government's cyber security center has published guidelines (in Dutch) that it hopes will encourage ethical hackers to disclose security vulnerabilities in a responsible way. The person who discovers the vulnerability should report it directly, as soon as possible, and in a confidential manner to the owner of the system, so the leak cannot be abused by others. Furthermore, the ethical hacker should not use social engineering techniques, nor install a backdoor, nor copy, modify or delete data from the system, the NCSC specified. Alternatively, a hacker could make a directory listing of the system, the guidelines said. Hackers should also refrain from altering the system and should not repeatedly access it. Using brute-force techniques to access a system is also discouraged, the NCSC said. The ethical hacker further has to agree that vulnerabilities will only be disclosed after they are fixed, and only with the consent of the involved organization. The parties can also decide to inform the broader IT community if the vulnerability is new or if it is suspected that more systems have the same vulnerability, the NCSC said."
Re: (Score:1)
Fewer dope heads than in the US!
speaking of responsibility... (Score:5, Interesting)
"Responsible disclosure" is nice. But what about holding banks and businesses responsible for the harm they are causing when their security practices fail? What about the worry and wasted time they cause to customers? What about compensating the victims of identity theft due to sloppy security practices? Businesses seem to be able to screw up arbitrarily without a lot of consequences right now.
Re: (Score:2)
The ethical hacker further has to agree that vulnerabilities will only be disclosed after they are fixed and only with consent of the involved organization.
This sounds like NEVER for the disclosure date, because (i) the vulnerability won't get fixed if it's not known to the public, and (ii) even if it does get fixed, why would a company agree to expose the fact that it screwed up?
Re: (Score:2)
Been Done (Score:5, Insightful)
Re:Been Done (Score:5, Informative)
Being a native Dutch speaker, I read the entire guidelines in Dutch. They include disclosure terms that encourage companies to fix issues rapidly (within 60 days) and to make agreements with the discloser about the disclosure.
This is already common and rather well-accepted practice. So, in essence, the document encourages public disclosure. Any company that wishes to ignore a vulnerability will have their asses handed to them anyway, so this guideline actually helps: security researchers can use it to show companies that they are acting in good faith, as long as companies play by the same rules.
So personally, I highly encourage governments to do something like this.
This Dutch variant is interesting in the sense that it creates a possible middleman that can mediate and monitor the disclosure. This protects disclosers and puts more pressure on companies to abide by these standards, not the other way around.
Re: (Score:2)
There are only two things I hate (Score:3, Funny)
There are only two things I hate in this world:
People who are intolerant of other people's cultures... and the Dutch.
Re: (Score:2)
Two thoughts on your message:
1) you must hate yourself.
2) the Dutch will still love you. :^D
Re: (Score:2)
Wooooossssshhhhh
http://www.imdb.com/title/tt0295178/quotes [imdb.com]
Nigel Powers: All right Goldmember. Don't play the laughing boy. There's only two things I hate in this world. People who are intolerant of other people's cultures and the Dutch.
Goldmember: What? Take the fahza away! Dutch hater! And now, it is time to say goodbye. Dr. Evil's orders. Which, for you, is bad news bears,
[talks in a deep voice]
Goldmember: Walter Matthau.
Disclosure only with consent? (Score:4, Insightful)
Seriously? Who's going to consent?
Also, where's the responsibility on the part of the organization to show that they *HAVE* a secure coding practice, that they don't simply outsource to $2 coders, and that they have a program in place to review security issues?
Re: (Score:3)
The documents create a neutral middle-man organization that can mediate between companies refusing to cooperate and disclosers. It effectively puts irresponsible companies directly in the line of sight of the government and thus legal action. What's not to like?
Re: (Score:2)
We've seen time and again what happens when "responsible disclosure" is abused to allow security holes to go unfixed and exploited. This is a big step backwards.
Re: (Score:2)
The problem is that enforcing public disclosure by the organization itself is equivalent to self-incrimination. Think about that for a second. Do you really want to put that in law? In the US, it would be thrown out immediately as unconstitutional.
Re: (Score:2)
The 5th Amendment (which, for those who may not know, is the US Constitutional amendment forbidding the government from forcing individuals to self-incriminate) only applies to people.
Oh, wait. Corporations are "people" now, too. It's official. The world is insane. I think I shall have to build an asylum to keep it in and prevent it from hurting itself.
How do I change my /. username to Wonko?
Re: (Score:1)
All big Dutch telcos have such a policy in place already. I've spoken to quite a few organisations that are interested in adopting a responsible disclosure policy, including banks and a security company. In my experience, most companies that value security want this.
Re:Sounds fairly reasonable. (Score:5, Interesting)
The guidelines (Dutch PDF) have a whole chapter outlining the responsibilities of the organization receiving a disclosure. These include guidelines for solving the issues (60 days for software, 6 months for hardware), reporting progress back to the discloser, and allowing a discloser to report the vulnerability to a larger audience through the NCSC (a government body). Combined, these guidelines are an effective tool for security researchers to play by the rules and, together with others, put pressure on companies.
Researchers are encouraged to disclose to the NCSC as well, which means many security experts will be able to put pressure on companies that don't fix vulnerabilities according to these rules.
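As an aside, those fixed timelines are simple enough to reason about mechanically. A minimal sketch (purely illustrative; the function name and dates are my own and not taken from the NCSC document, and "6 months" is approximated as 182 days):

```python
from datetime import date, timedelta

def disclosure_deadline(report_date, kind):
    """Illustrative deadline calculation based on the guideline's
    stated timelines: 60 days for software, ~6 months for hardware.
    This helper is a hypothetical sketch, not part of the guidelines."""
    if kind == "software":
        return report_date + timedelta(days=60)
    elif kind == "hardware":
        return report_date + timedelta(days=182)  # roughly 6 months
    raise ValueError("kind must be 'software' or 'hardware'")

# Example with an arbitrary report date:
reported = date(2013, 1, 4)
print(disclosure_deadline(reported, "software"))  # 2013-03-05
print(disclosure_deadline(reported, "hardware"))  # 2013-07-05
```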
Re: (Score:2)
What happens if the organization doesn't follow those guidelines? Is the hacker then permitted to disclose (in which case the Slashdot summary is inaccurate) or is the hacker still required to stay quiet?
Re: (Score:3)
Most likely scenario for Security, Dick:
1) Criminality. Failure to ensure funding from reputable companies forces these folks into blackmail or abuse of disclosure process. Eventually, they end up behind bars.
2) Corrective collective: Companies never give out freebies, but well-behaved security researchers have far more fun not being chased by police and get all the chicks. This creates a role model. You should see Bruce Schneier at rave parties.
Time limit (Score:3)
The only way I'd accept the "only disclose to the owner" condition is if it included a time limit within which the owner must either fix the vulnerability or disclose it to the public; if the owner fails to meet the limit, the confidentiality is lifted and the hacker is free to disclose the vulnerability himself. If software makers want their mistakes kept confidential, then the cost is a binding commitment to fixing those mistakes, and the penalty for failing to meet that commitment is that the hackers are freed from theirs.
Re:Time limit (Score:4, Informative)
As I posted before, the guidelines mention explicit timelines that should be followed. 60 days for software, 6 months for hardware.
define software, define hardware (Score:2)
And what about a discovered social engineering vulnerability?
Directive ethical hacking solves nothing (Score:4, Informative)
IT journalist Brenno de Winter calls the guidance useless. "If hackers first have to report the vulnerability, they lose their anonymity without having a guarantee that they will not be prosecuted. And even if a company promises that it will not press charges, the Public Prosecutions Department can start a case." Link here: http://www.trouw.nl/tr/nl/5133/Media-technologie/article/detail/3372108/2013/01/04/Richtlijn-ethisch-hacken-lost-niets-op.dhtml [trouw.nl] (in Dutch).
Re: (Score:2)
The problem is that the definition for hacking is overly broad.
It is clearly advised by the published guidelines that an organisation should define for themselves what they consider acceptable and what's not acceptable. An organisation might, for example, rule out social engineering attacks or DDoS.
IT journalist Brenno de Winter calls the guidance useless. "If hackers first have to report the vulnerability, they lose their anonymity without having a guarantee that they will not be prosecuted. And even if a company promises that it will not press charges, the Public Prosecutions Department can start a case."
A published responsible disclosure policy is a legally binding document. If an organisation states that it finds certain behavior acceptable, and even clearly states that it won't take legal action against people holding themselves to that document, then it has to follow that promise.
NO GUARANTEE you will not be prosecuted! (Score:1)
That's the only thing that really counts, and it's missing. The Public Prosecutions Department has absolutely no obligation to follow these guidelines.
Only with consent = ENDANGERING CUSTOMERS! (Score:1)
Companies will never give consent, and will only use that to hide from customers how shitty their software is! In other words: FRAUD!
And I want to know that there is a security hole, and close it, *right when it is found*! Not half a year later, when the fix is out, after some Russian kid used it for his botnet for *months*! That is *deliberately* endangering me! Another CRIME. (And on top of it, assistance in the botnet operator's crimes!)
NOT ACCEPTABLE! From the security standpoint of the customer.
hacker behavior (Score:1)
The article lists a number of actions that the hacker shall not do. Most are to be expected, such as not modifying the system, not bringing it down, and not exposing private information. The first and last points in the list are strange, though:
Eh? Why are these not valid attack vectors?