Security Researchers Threatened With US Cybercrime Laws
An anonymous reader writes "The Guardian reports that many of the security industry's top researchers are being threatened by lawyers and law enforcement over their efforts to track down vulnerabilities in internet infrastructure. 'HD Moore, creator of the ethical hacking tool Metasploit and chief research officer of security consultancy Rapid7, told the Guardian he had been warned by U.S. law enforcement last year over a scanning project called Critical.IO, which he started in 2012. The initiative sought to find widespread vulnerabilities using automated computer programs to uncover the weaknesses across the entire internet. ... Zach Lanier, senior security researcher at Duo Security, said many of his team had "run into possible CFAA issues before in the course of research over the last decade." Lanier said that after finding severe vulnerabilities in an unnamed "embedded device marketed towards children" and reporting them to the manufacturer, he received calls from lawyers threatening him with action."
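For readers wondering what "scanning the entire internet" actually involves: the core building block such projects automate at enormous scale is a simple banner grab. Below is a minimal, hypothetical Python sketch (not Critical.IO's actual code); the hostname is a placeholder, and it should only be pointed at machines you are authorized to scan.

import socket

def grab_banner(host, port, timeout=3.0):
    # Return whatever the service volunteers on connect, or None if the
    # port is closed, filtered, or silent.
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return sock.recv(1024).decode("ascii", errors="replace").strip()
    except OSError:
        return None

# Probe a few services that announce themselves on connect (FTP, SSH,
# telnet, SMTP). "scanme.example.org" is a stand-in for a host you own.
for port in (21, 22, 23, 25):
    print(port, repr(grab_banner("scanme.example.org", port)))

A scanning project stores millions of such banners and searches them later for software versions with known holes; the legal controversy is over whether even this unauthenticated connect-and-read constitutes unauthorized access.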
This is what happens... (Score:5, Insightful)
...when ill-thought-out laws are passed.
In the UK, it is a crime (under the Computer Misuse Act) to test a third-party system for vulnerabilities.
The Heartbleed incident caused a lot of people to break the law by testing whether websites were affected.
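For context, "testing" mostly meant one of two things: passively checking the OpenSSL version a server advertises, or actively sending a malformed TLS heartbeat and seeing whether extra memory comes back. It is the active probe that raises Computer Misuse Act questions. A rough, hypothetical Python sketch of the passive check (which proves nothing when a server hides its version tokens):

import re
import urllib.request

# OpenSSL 1.0.1 through 1.0.1f were vulnerable; 1.0.1g shipped the fix.
VULNERABLE = re.compile(r"OpenSSL/1\.0\.1[a-f]?\b")

def advertised_openssl_vulnerable(url):
    # Some httpds (e.g. Apache with ServerTokens Full) include their
    # OpenSSL version in the Server response header.
    with urllib.request.urlopen(url, timeout=5) as resp:
        server = resp.headers.get("Server", "")
    return bool(VULNERABLE.search(server)), server

# www.example.com is a placeholder for a site you administer.
print(advertised_openssl_vulnerable("https://www.example.com/"))

The people who "broke the law" went further and sent the crafted heartbeat itself, which is exactly the sort of uninvited poking at a third-party system the Act arguably criminalizes.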
Re:This is what happens... (Score:5, Insightful)
When your infrastructure spams me, or gets zombied into DDoSing me, you will be held responsible for spamming and DDoSing me.
Now, would you like to reconsider your position?
Re: (Score:2)
Look who you replied to. YHBT. HTH.
Re: (Score:1)
You are an unethical "I Got Mine" tea begging Libertarian, obviously. Fuck You.
Please, can I have my tea? I need my tea! Pleeeeaaaaase?!
Re:This is what happens... (Score:5, Insightful)
So security researchers and/or security reporters in the UK cannot warn about a lot of unpatched webpages in the UK, but hackers all over the globe can hack and abuse them.
Yeah, makes a damn lot of sense.
Re: (Score:2)
There is plenty of justification, such as Heartbleed and the other vulnerabilities that pop up all the time. It is your job, so you are biased against these people from the start because it takes POTENTIAL revenue from your company, but to claim it is not their job is a load of BS.
I would be willing to bet that when you penetration test, you use known vulnerabilities and not zero-day vulnerabilities; after all, it is your job to test, not to research. And that right there would be why your statement is flawed. If they wa
Re: (Score:2)
Wonder what that license would be like. Think my CISSP cert would do as a stand-in?
Gagging people has never really been the solution to anything, especially not in a world where your local laws mean jack. Unless you can not only get every government on the planet to agree on some kind of law concerning the internet AND get them to actually care to enforce it (good luck trying to get a malware server shut down somewhere in east Asia...), whatever law you conjure is pointless and will ONLY affect and limit
Re: (Score:2)
Well, let's see, what else have we got... OPST? CISA? CASP? Oh, I know, CEH/CNDA! That should do it.
Then again, it doesn't really matter which one you require. Why? Because EVERYONE has them all. Because to keep them, you have to collect more and more certs.
I can't help but ponder whether a certain well known Cult that also enjoys selling very expensive courses to its members acts as the role model for the whole crap...
Re: (Score:2)
There is no justification for scanning the internet for vulnerabilities on systems you have no authorization for.
Other than to see what hackers are trying to do.
Or see how secure your personal data is on someone else's site.
Or curiosity.
Or learning.
Or lots of other reasonable justifications.
Re: (Score:1)
These muppets will end up having us licensed. There is no justification for scanning the internet for vulnerabilities on systems you have no authorization for. It is not their job. They are NOT the internet police!
By the same token doesn't that call into question the legality of honey-pots to assess current attack trends? Surely that's entrapment? One day (I'm ever hopeful) people will realise that words on a piece of paper do not make things either right or wrong. Moral judgements trump laws every day all over the world, yet the persecution continues - I wonder why (well, I don't wonder; I know why, and so do you).
Re: (Score:2)
And if your grandma had wheels you could ride her to school.
Sadly the world doesn't work on would've and should've. The internet IS NOT secure and vulnerabilities ARE NOT patched in a timely manner. So we have to find a solution for this imperfect world.
Re: (Score:2)
As it should be. You have no right to hack systems that don't belong to you unless you are asked to do so by the owner.
And what happens if that system has some of your personal information from a previous order or interaction?
I guess we should just throw all of these "security researchers" in jail and anytime an internet vulnerability is announced everyone should just get new logins, new credit cards and just reinvent themselves online. That sounds like the best plan.
Re: (Score:2)
> As it should be. You have no right to hack systems that don't belong to you unless you are asked to do so by the owner.
Sure you do. You have a right to ensure your own safety. You have a right to know whether a device is likely to harm you. Doesn't matter if this is a physical thing or something mostly governed by software.
This includes things "hosted in the cloud".
Re: (Score:2)
So if a hack gives reputation to a security researcher while embarrassing the website owners, how is this not exploitation for the researcher's gain, to the website owners' detriment? You go there and pull off an I-am-smart-and-you-are-a-moron on these folks who are trying to make a living. How is that different from being an asshole?
The argument that security researchers are actually doing good is just an unsubstantiated
Re: (Score:2)
It's probably not.
But in this case, the asshole is doing it to tell you that someone who is going to play much less nicely could also do it.
However, the problem is, any sufficiently advanced form of "just looking" is indistinguishable from "I'm in your interwebs, and I'm stealing your data".
See... (Score:2)
Re: (Score:1)
This is why we can't have nice things. Companies won't audit themselves, and they get bent out of shape if others do it for them...
You discovered my credit card number. You think it is now OK to run a program to try random PINs until you find the correct one?
Why would you do that?
Re: (Score:2, Insightful)
I would say that this is more like:
You leave your credit card on a table under a wet napkin. I look at the napkin and think I can read the number. I look closer and can indeed read the number and exp date. I tell you that your credit card is easily readable, and you should probably do something about it. You then report me to the police for stealing your credit card number.
Re: (Score:1)
If you are reading my credit card number while it is under a wet napkin on a table inside my house then you can be sure I will call the police.
Re: See... (Score:2)
That's a really bad analogy. Peering at someone's credit card - even if it is under a napkin - is quite obviously very bad manners indeed. If you're saying unauthorised penetration testing is like peering at someone's credit card, then it's clearly wrong.
And speaking as someone who has his own little toy server out in the cloud, I'd very much prefer to do my own damn penetration testing, thank you.
Re: See... (Score:5, Insightful)
That's a really bad analogy.
It is. It's more like the wet napkin has retained an imprint of the credit card and you have left the napkin behind on the bar. Someone then takes the napkin, hands it to you and says "you want to be careful with these wet napkins, look". You call the police because someone you don't know has your credit card details.
Re: (Score:2)
And we have a proper analogy.
Somebody give this guy some points.
Re: (Score:2)
More like you left a note on my kitchen table that I shouldn't leave my key under the doormat.
Fuck America (Score:1)
In America, any good intentions are met by defensive idiots.
Fuck them. Don't even try to help them anymore; use your research to secure the rest of the world and let them rot in the festering cesspool they created.
Re: (Score:2)
And that's not just the government!
NSA (Score:5, Insightful)
The NSA and other security services will not want security researchers to find and fix vulnerabilities the security services are exploiting.
Re: (Score:2)
National ScrewYou Agency would be better because the acronym would remain the same.
Company Assets (Score:4, Informative)
Yeah, how dare they ask these companies to take their heads out of the sand and do something about their customers' security/privacy!
I'm appalled at the amount of "Good, they broke the law" comments in this thread...
Re: (Score:2)
Many of these people are essentially trying to get the company with the vulnerabilities to pay for the service of fixing them. People who've gotten the sorts of emails that say "I broke into your system and I can help you fix it" probably don't end up as fans of these drive-by services.
Re: (Score:2)
Consider who is issuing the posts. Or just assume that they come from astroturfers...you won't be far wrong.
Good bye US, hello Russia! (Score:3)
Odd as it may sound, for security research, you have WAY more liberties there.
No good deed (Score:3)
Re: (Score:1)
There must be an asshole gene that natural selection has yet to make dormant.
It must be closely related to the 'have lots of money and power' gene.
Good (Score:5, Funny)
Everything is going according to plan.
There are no white hats (Score:4, Interesting)
And it's about time the so-called "ethical security researchers" got off their high horses and realized that. There are far too many laws for there to be white hats. If you want to do useful research into vulnerabilities other than those of the company you are a security researcher for, you're going to have to put on the black hat.
Re:There are no white hats (Score:4, Interesting)
Which is the technical equivalent of allowing only researchers in the employ of the tobacco industry to research the risks of smoking.
the way it should be for TOFC type stuff (Score:2)
Mock up a few copies and then dare folks to hack them (sorted by remote and physical access type hacks).
When you get something that can stand up to a decent number of hacks (remote hacks that require you to be on the same subnet, on a blue moon, with a Big$ tool, between the hours of 22:00 and 23:59, with the product in mode X, don't count, and neither do physical hacks that would be obvious), then, as a last check, you put up a BIG$ bounty on hacks.
Then you can release a cyber product targeting children.
In an unrelated story, (Score:5, Funny)
the mayors of several crime-plagued cities release a joint announcement that reporting apparent crimes in progress to police would result in the arrest and summary punishment of the person making the police report.
"If you losers would stop reporting crimes, we wouldn't have so much crime," one prominent mayor stated to this reporter. "We're going to push down crime rates the only way that works: make it impossible to report a crime."
When asked for a comment, the aforementioned mayor's Chief of Police muttered "Whaddyawant, I'm busy here" through a mouthful of donut while pocketing a thickly-stuffed brown paper envelope proffered by an unidentified man flanked by several apparent bodyguards.
Re: (Score:2)
Sounds like a Chief Wiggum move.
Re: (Score:2)
I was thinking that sounded like these mayors are the result of interbreeding between Vogons and Ravenous Bugblatter Beasts.
Lawyer point of view (Score:3)
I suppose lawyers don't have locks on their homes because there are laws about illegal entry.
Solution is Transparency? (Score:1)
Seems like there could be a law that tries to differentiate "Research Hacking" by setting requirements to qualify as a researcher. They must provide full transparency to prove they have no malicious intent. They inform law enforcement authorities of their activities before and after
Re: (Score:2)
But how do you trust people? Someone sending you a threatening email that they found a vulnerability that their consulting company can fix for you is not the sort of person likely to be trusted. Just saying "I'm one of the good guys" isn't good enough, as the mafia uses the same argument when selling protection services.
Re: (Score:2)
Actually, where I work we do have a security firm auditing all the code. It's being paid for by a customer, though (one unknown to the devs): we sell products and services to the customer for a very large amount of money over a period of time, the customer demands that we have top-notch security in the products, and as a condition of sale we agree to be audited, with the results shared between us and the customer only.
The approach seems reasonable: we win because we're not paying but still get be
Finding and reporting vulnerability is one thing (Score:1)
Re: (Score:1)
Tools to exploit vulnerabilities should only be available to licensed researchers. Stop handing over tools to the criminals and stupid teens. That is IMO
Fair enough, but it's not particularly achievable is it? How would you go about stopping people getting hold of the software or, heaven forbid, from writing their own?
Naturally (Score:2)
Of course security researchers are being targeted by US cybercrime laws.
Who do you think they were designed to stop? Security researchers, whistleblowers, and anyone who wants to see the nation's security apparatus held accountable were always the intended targets of these laws, along with anyone who believes the Internet should be free and that research that impacts the public welfare should be readily available to all.
You didn't think these laws were about Estonian hackers, did you?
License researchers like investigators (Score:4, Insightful)
I work for a company that does a lot of forensics work, including collections activities and incident response. The company has to be licensed as a "private investigator" in all of the states that our employees do collections in.
It seems like a similar licensing regime would be a good place to start for computer security researchers.
It might also be worth considering making the researchers or their employer carry a bond as collateral against any potential damage that they might inadvertently cause.
It has been my experience that when people and organizations have something to lose (like forfeiture of a bond or loss of a license / ability to do business), they tend to act in a more predictable manner, and within well established guidelines.
There might also be some lessons to be learned from maritime law. In a way, researchers are sort of like privateers on the digital oceans. (So yes, once again, pirates ARE better than ninjas. Just in case there was ever any doubt.)
Re: (Score:2)
But you must first establish a business relationship and get permission in order to be ethical. You don't need to say when your test will be, only that you will be doing it. There are legitimate security companies that do this.
Well that's ironic... (Score:1)
I'm a student at the Naval Postgraduate School, and every single "cyber" security course taught here could be renamed "How to use Metasploit to [blank]". All of a half dozen of the CS students here came from any kind of background involving coding, making it necessary to dumb things down to "How to be a script kiddie".
So the makers of the primary tool taught to service members from all branches (Air Force, Marines, etc all attend there), many of which are absolutely dependent upon it, are als
What's the issue here? (Score:2)
Law enforcement doesn't want researchers uncovering their backdoors put into consumer products? Or some sleazy manufacturer with defective crap getting a buddy in the FBI to lean on people who might go public?
Basically (Score:2)
They are finding features.
Times change. (Score:3, Funny)
1990 - 2000 - "Script Kiddie"
2014 - "Security Researcher"
Re:OK, Whatever... (Score:5, Insightful)
They're very effective. To paraphrase Futurama:
Documentary Narrator: Fortunately, our most expensive lawyers sued the security researchers and shut them up. Of course, the security holes are still there, we just sue anyone who talks about them. Thus solving the problem once and for all.
Suzie: But...
Documentary Narrator: Once and for all!
Sadly, too many companies don't see this as a joke, but as a valid security vulnerability response strategy.
Re: (Score:2)
And by companies, you mean the US gov't in this case.
Re: (Score:2)
When you get an email saying you have an exploitable bug in your web site, it becomes extremely difficult to tell whether that is someone genuinely caring about your site in a free and altruistic manner versus someone shaking you down for money or trying to drum up business. If it's a "security researcher", then presentation of credentials will help (i.e., the name of the university they work at plus peer-reviewed papers, not the name of a consulting company).
Re: (Score:2)
Second, the act of publishing is problematic, maybe even the act of downloading, but not the act of accessing your system as a proof of concept.
Third, if someone trying to report a problem to your organization does not have an easy way to do so, then that is yet another failure that you should address.
Re: (Score:2)
Remember the old days, when motive was a substantial part of a court's consideration of an alleged illegal act?
But that was in the days before lawyers became gods on earth.
Re:OK, Whatever... (Score:4, Interesting)
It's OK, it's for the children!
Re: (Score:1)
These are all business decisions. The fact of the matter is that every business owner needs to make a calculated decision on whether or not to fix a known security problem (or any bug, for that matter) based on cost/benefit. They may decide that the likelihood of being attacked, the cost of damage, the value of the data that could be stolen, or whatever else is simply too low in comparison to the cost of fixing the issue. This may or may not be true, but any ethical "security researcher" should allow that company to make
Re:OK, Whatever... (Score:4, Insightful)
For example, say a SCADA system that your organization maintains got compromised. Fixing such a system vulnerability will inevitably be expensive, while simply sending out a technician to reset it would generate billable hours. Your business interest is to ignore this problem, but imagine if this system is part of the water treatment system for a large residential neighborhood.
Worship of business needs is a flavor of the "market will fix it" fallacy. It only works if all players are forced into making moral decisions.
Re: (Score:1)
Your business interest is to ignore this problem, but imagine if this system is part of the water treatment system for a large residential neighborhood.
This was exactly my point. It is a business decision of cost/benefit. If that SCADA system is just part of your office building's HVAC control, you would probably be wise to leave it be, since the likelihood of anyone attacking your air conditioning is low and any fallout cost would be relatively low. If it's controlling a nuclear power plant, that's another story. It is the responsibility of the business to make that call.
Let me put it another way. If you tell a homeowner that their front door lock i
Re: (Score:3)
The flaw in your examples and analysis is that you view each individual networked system in isolation. That is not how the Internet works. Every compromised system makes it less safe for the rest of us.
Fix it or take it offline.
Re: (Score:2)
Let me put it another way. If you tell a homeowner that their front door lock is unusually vulnerable to being picked, first of all they should sock you in the face for trying to pick their lock (before they call the police), and second you should not go publishing that information if they choose to not fix it.
Who says you actually tried to pick their lock? There is a decent chance that your house has the same make and model of lock that theirs does, and that when you accidentally locked yourself out, you discovered how easy that particular lock was to pick. Wouldn't warning them about the risk be the right thing to do?
Re: (Score:2)
Once third parties can be damaged, it is no longer the business' call. Sure, it's their right to ignore the risk that their A/C could get shut off, or that their corporate bank account could be hoovered. However, if the hack could flood neighborhoods with sewage, it is no longer their call, it's up to the people who might get flooded.
Re: (Score:3)
Let me put it another way. If you tell a homeowner that their front door lock is unusually vulnerable to being picked, first of all they should sock you in the face for trying to pick their lock (before they call the police), and second you should not go publishing that information if they choose to not fix it.
How about if I owned the lock and found it was easy as pie to pick, then went to your place and said "oh hey, this is easy to pick, see", pulled my front door out of my pocket and demonstrated to you how easy it was to pick.
Would you still punch me in the face and call the police on me?
And how about I then tell the lock maker, give them six months to fix their locks so people have an alternative to upgrade to and then publish my paper I was writing for university (I was doing a thesis on how shitty locks on
Re: (Score:2)
Consider that lovely phrase cost/benefit. We're talking *perceived* cost / *perceived* benefit.
As far as TEPCO executives were concerned, the cost of protecting Fukushima Daiichi was enormous, while they could pooh-pooh the possibility of an earthquake that might need such protection.
Such costs can be reasonably estimated, so perceived cost closely equals actual cost. However, earthquake probabilities are much easier to dismiss, so it is easy to have perceived benefit MUCH lower than actual benefit when the earth
Re: (Score:2)
That depends on who it costs if the security is breached. If it is JUST the company that stands to lose, fine. But if their customers also stand to lose (for example, credit card info, medical records, etc), then no. It is no longer the company's risk to take and their customers have every right to know how poorly their data is being guarded.
The latter is more common than the former.
Re: (Score:2)
Were I to accept that argument, I would be accepting it as a valid argument for assassinating business owners whenever a life threatening problem was discovered. Is that the argument that you want to be making?
Re:OK, Whatever... (Score:4, Funny)
First, if anyone can get to your "shit-ton of data" you are not doing it right
Then my company is doing it right...Not even the employees can access their own data.
Re:OK, Whatever... (Score:5, Interesting)
First, if anyone can get to your "shit-ton of data" you are not doing it right
Then my company is doing it right...Not even the employees can access their own data.
Heh. That doesn't even mean you're safe. I recall a project back in the late 1980s, when I was part of a team hired by a big company (who shall remain unnamed so you'll suspect it was your company ;-). We'd had a few discussions with "top management" who'd hired us, about their problems with the DP department. Their computer folks effectively owned the data, and all access was mediated by the DP department. There was a lot of information that was there, but management couldn't get at it, because the DP folks feigned an inability to provide it.
One evening, a bunch of us decided to stay around after hours. We went to work on their big (IBM of course) mainframe, and in the morning, we demoed to management that we could read any file on their machine. Our demo included a few reports we'd printed out that got wide-eyed reactions. We'd given them access to all of their own data, and they were very happy with us. We stuck around and provided them with a lot more reports ("over the dead bodies" of some of the DP department ;-).
Some time later, we discussed in private the question of what we should tell the IBM folks about what we'd done. Our decision was essentially "Nah; they'll just block our current clients' access to their own data and give control back to the DP priesthood. And we have other customers who'll pay us to similarly break into their own data."
The fact that your own employees can't access their own data doesn't necessarily mean it's safe from outsiders.
(We never did discuss with them the implication that other outsiders might as easily access their data, if they happened to know the things we did. In the late 1980s, managers at corporate computer installations generally had no concept of a "network" other than as a way to connect remote terminals to the mainframe. There's no way we could have got them to understand the wider implications of the security holes we knew about and exploited for their benefit. It's not obvious that most of today's "management" class has such understanding, either. The current story pretty much demos the extent of that understanding. ;-)
Also, employees WILL bypass unreasonable restrictions (Score:2)
I would say that unreasonable restrictions on employee access make data less safe. Many people WILL get access to whatever they need to do their job effectively. There is always a way, whether it's a technical bypass or having friends in the right department. Where I work, three of us are good friends. There is almost nothing that isn't accessible to one of us. If I needed access to something to get done what needs to be done, I'd get access. The only question is whether or not I'd be allowed to tell th
Re: (Score:2)
Suppose an organization decides that they've had enough of trojans, so they decree that everyone gets the approved desktop image and no one may install the software they need to do their job effectively and efficiently. To enforce this, employees get only a very limited account on the machine, similar to the default Guest account in Windows.
The result? The IT department no longer knows what software is being used since employees have to keep it secret (or be unable to do their job effectively). They don't know how the software got there. Maybe a lot of people are doing their work on personally-owned laptops or tablets, so company data is now handled on the same system their kids use to play online games.
BINGO!
That is exactly what they do. I stopped carrying a laptop a couple of years ago and just set up a VM I use to VPN in to the office from my home PC.
Re: (Score:2)
If you want to research how a deadbolt fails, buy one, test it, and send the results to the manufacturer. If you break into the manufacturer's warehouse to test the deadbolts, or into someone's house, you are going to jail.
Yes: either you are invited as a consultant or you do your research in a controlled environment, but not on someone else's equipment without permission.
Re: (Score:3)
Black hats are always ahead already.
Everyone else is just trying to keep up, or at least not drown.
Re: (Score:3)
I think it is OK if someone drives down the street, identifies houses that have left the front door open, and reports on what they see.
That is, so long as they do not go through the door. That would be a crime.
People who leave the door open are enabling and encouraging criminal activity. Oddly enough, I was in a museum just this morning reading some translated Sumerian cuneiform. It was a set of laws that addressed just this problem. If someone leaves a property unmaintained and it attracts criminals, then that p
Re: (Score:2)
industry's top researchers are being threatened by lawyers and law enforcement over their efforts to track down vulnerabilities in internet infrastructure.
Yes, it's surprising when companies get bent out of shape when random "security researchers" hack into their systems uninvited.
Sure, it's nice to know if you are vulnerable, but still, it is difficult to take it at "face value" when some random "security researcher" claims to have altruistic aims when caught hacking your network...
Why, because it's so difficult to believe these days that any system would have vulnerabilities that need to be addressed?
Perhaps I would question the source a bit, but being alerted via email isn't exactly the standard Way of the Black Hat. They prefer you find out the hard way, and given that fact alone, I'd probably put some value on the face of the notification.
The legal reaction described is quite pathetic. Hiding behind your lawyers instead of trying to look into an identified problem isn't going to
Re: (Score:2)
but still, it is difficult to take it at "face value" when some random "security researcher" claims to have altruistic aims when caught hacking your network...
Why bother? The script kiddies are rattling the doors all day, every day. That noise is always there. One more visitor, or ten, isn't going to make a difference in our threat posture. And if one of those visits results in a discovery that we all benefit from, so much the better.
Re:As it should be (Score:4, Insightful)
Now try to explain why it was A-OK for the border patrol to kill the people trying to flee from East Germany because it was the law.
Re: (Score:2)
Where did he say it was OK? I'm an American, and no, I don't think what we're doing with drones is OK. Just because it's a law doesn't make it right.
Re: (Score:2)
Have they? A declaration of war requires a 2/3 majority vote in the Senate. I don't think they even got that for the "war on terror".
Re: (Score:2)
Sorry, but I don't think those count as "declarations of war". OTOH, after checking the Constitution I see that I was wrong about it requiring a 2/3 vote in the Senate. It seems that no particular procedure was specified. As a result you have a viable argument...just one that I don't accept.
Re: (Score:2)
For trying to get out of your country?
Sounds like a good law?
Re: (Score:2)
picking random locks that don't belong to them.
I'm assuming that any competent researcher would purchase their own "unnamed embedded device marketed towards children" and crack that. Otherwise, 'Duh.' I don't want you hacking my kid's toy.
But if you pick your own lock, it belongs to you. And so long as the purpose is research, DMCA and other laws do allow for some limited R&D exceptions.
Re: (Score:2)
From what I understand, the primary way they can prosecute under the CFAA is if a device is being used other than in the manner in which it was intended.
If that device was a general purpose computer, then any task it is capable of performing can in no sensible way be classed as a use in a manner for which said general purpose computer was not intended.
Re: (Score:2)
Yeah, it is a twisted and convoluted system. That is for sure.
And thanks for giving me a Beavis and Butthead moment with your declaration that you're anal and can twist and screw with the best of them. Huh-huhhuhhuhhuh...