Tim Berners-Lee Warns of Danger of Chaos in Unprotected Public Data (theguardian.com)
Hackers could use open data such as the information that powers transport apps to create chaos, Sir Tim Berners-Lee, the inventor of the world wide web, has said. An anonymous reader shares a report on The Guardian: "If you disrupted traffic data for example, to tell everybody that all the roads south of the river are closed, so everybody would go north of the river, that would gridlock you [and] disable the city," he said. Prof Sir Nigel Shadbolt, a co-founder, with Berners-Lee, of the Open Data Institute (ODI), described this as "the Italian Job scenario" and "the ultimate hack". The pair, who have both advised the British government, are leading campaigners for publicly accessible data. Berners-Lee points out as an example that reliable, detailed transport information "really makes London better". But they warned that the potential for such datasets to be tampered with if not properly protected was largely overlooked. "When people are thinking about the security of their systems, they worry about people discovering what they are doing," Berners-Lee said. "What they don't think about is the possibility of things being changed."
Security on my mind (Score:5, Interesting)
Companies and governments will not give one thought to information security until there is direct legal and monetary pressure, usually after a major hack or outage.
Telling people that security is poor will not make them act. It has been shown time and time again. And from the government side, reporting a security flaw usually gets the reporter investigated and harassed.
Even monetary penalties are not sufficient. Companies will just consider it "the cost of doing business" and pay off fines as necessary; it is cheaper than implementing good security from day one.
We have reached the point of "hacking is inevitable, why bother protecting ourselves?" as corporate policy.
Re: (Score:3)
Nine different police departments and public security agencies were absorbing the information that an obscure subsect of militant Christian fundamentalists had just taken credit for having introduced clinical levels of an outlawed psychoactive agent known as Blue Nine into the ventilation system of the Sense/Net Pyramid. Blue Nine, known in California as Grievous Angel, had been shown to produce acute paranoia and homicidal psychosis in eighty-five percent of experimental subjects.
Gibson, William (2000-07-01). Neuromancer (pp. 60-61). Ace. Kindle Edition.
It really won't take hacking into databases. Just rely on the interconnectedness of everything and the fact that people tend not to think things through too carefully.
Re: (Score:3, Insightful)
Companies and governments will not give one thought to information security until there is direct legal and monetary pressure, usually after a major hack or outage.
Part of the problem is that we're not allowed to sue until after we've been harmed. Our complaints about potential harm are just dismissed.
In a more proactive system of government, any complaint about potential harm would be considered, and only dismissed if the possibility seemed too remote or too inconsequential.
Re: (Score:2)
To play Devil's advocate: the 'upside' of the way things work at the moment is that actual harm having been caused is a far more reliable indicator of the potential for harm than lawyers' arguments. That, combined with limited government resources, means the current system has at least some merit.
Re:Security on my mind (Score:5, Insightful)
Companies and governments will not give one thought to information security until there is direct legal and monetary pressure, usually after a major hack or outage.
That is not true.
I'm an information security professional (CISM and all). I've been part of more than one project where IS expertise was brought in before the project even launched. For my personal taste it should have been brought in even earlier, during the design/architecture phase, but that's a separate discussion.
Some companies don't understand IS until it bites them in the arse. But many other companies and government institutions (never make the mistake of thinking the government is one monolithic entity) do understand the necessity for IS.
Even monetary penalties are not sufficient. Companies will just consider it "the cost of doing business" and pay off fines as necessary, it is cheaper than implementing good security from day one.
That is called accepting a risk, and it is a perfectly valid business strategy. If a law is intended to make a risk unacceptable, it needs to make that clear. For some risks, accepting them should mean going to jail. But both the laws and the enforcement are lacking in such things. How many CEOs have ever gone to jail for anything except financial crimes (stock fraud, tax evasion, etc.)?
We have reached the point of "hacking is inevitable, why bother protecting ourselves?" as corporate policy.
That's not what I tell people hiring me. I tell them that being attacked is inevitable. But because their competitors probably bother less with security than they should, not being the weakest target matters more than having perfect security.
Many companies come from literally nothing and need to build up an ISMS to get an ISO certification, or whatever the business reason for the sudden interest in IS is. They cannot possibly get to state-of-the-art in one year. But they can fix the most serious problems and then go from there. And yes, that takes considerable resources: people like me are expensive, and training your IT people and hiring new ones is expensive. The cost of some firewalls and IDS systems almost doesn't matter, because replacing your fundamentally broken, unfixable legacy systems (you know, that machine still running Windows 95 with your business-critical application...) is what will really kill you.
Sometimes, management has no other choice but to accept the risk, and your success as the IS guy is that at least they're aware of it now and understand they need to do something about it next year.
Crowdsource to verify (Score:3)
Would never share that exploit... (Score:2)
Heck, if I could do that with the traffic data, I'd never share it - wouldn't want it fixed.
I'd just write an app where I could input a route 30 minutes before I leave for somewhere, that makes everyone think the traffic is horrible, so they clear out...
Path motivation (Score:2)
so everybody would go north of the river, (Score:3)
The Red River flows north, all the way to Hudson Bay.
There are no roads north of Hudson Bay AFAIK
Not at this time of night... (Score:2)
"If you disrupted traffic data for example, to tell everybody that all the roads south of the river are closed, so everybody would go north of the river..."
And for London black cabbies, nothing of value was lost.
Incompetence, gross negligence in fact (Score:2)
"When people are thinking about the security of their systems, they worry about people discovering what they are doing," Berners-Lee said. "What they don't think about is the possibility of things being changed."
In such case they need to be fired and replaced.
Considering information disclosure as well as manipulation or simple destruction (without disclosure) is Information Security 101 and anyone working with data who doesn't think about the possibility of manipulation should be escorted to the door immediately.
This is the I part of CIA - Confidentiality, Integrity, Availability - and thus practically one of the first things you learn when you deal with IS.
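The integrity point can be sketched in a few lines of code. Here is a minimal, hypothetical example (the shared-key handling is simplified; a real open-data publisher would more likely use public-key signatures) showing how a data consumer can detect that a published record has been changed in transit:

```python
import hmac
import hashlib

# Toy model: the publisher and consumer share this key out-of-band.
# Key and record contents are invented for illustration.
SECRET_KEY = b"publisher-signing-key"

def sign(record: bytes) -> str:
    """Return a hex HMAC-SHA256 tag the publisher attaches to a record."""
    return hmac.new(SECRET_KEY, record, hashlib.sha256).hexdigest()

def verify(record: bytes, tag: str) -> bool:
    """Constant-time check that a received record matches its tag."""
    return hmac.compare_digest(sign(record), tag)

record = b'{"road": "A23", "status": "open"}'
tag = sign(record)

print(verify(record, tag))                                     # untampered record
print(verify(b'{"road": "A23", "status": "closed"}', tag))     # manipulated record
```

Confidentiality mechanisms alone do nothing here: an attacker who can flip "open" to "closed" in an unsigned feed causes exactly the gridlock scenario Berners-Lee describes, without ever learning a secret.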