Censorship / Your Rights Online

From Slaying Dragons To Dictators 233

tcd004 writes "In a weekend, programmer Austin Heap transformed from an apathetic MMO player into a world-class regime-slayer. When word of Iran's rigged election broke over Twitter, Heap decided to dedicate himself to building a better proxy system for people behind Iran's firewall. Heap's creation, Haystack, conceals a user's real online destinations inside a stream of innocuous traffic. You may be browsing an opposition Web site, but to the censors it will appear you are visiting, say, weather.com. Heap tends to hide users in content that is popular in Tehran, sometimes the regime's own government mouthpieces."
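The summary's core idea, hiding real destinations inside a stream of innocuous traffic, can be illustrated with a toy sketch. This is purely illustrative: Haystack's actual protocol is unpublished, and every site name and function below is an assumption, not a description of the real tool.

```python
import random

# Toy illustration of cover traffic: interleave each real request
# among many decoy requests to sites popular with the censor's own
# population, so a traffic log shows mostly innocuous destinations.
DECOY_SITES = ["weather.com", "irib.ir", "football-scores.example"]

def cover_stream(real_request: str, decoys_per_real: int = 9) -> list[str]:
    """Return a request sequence in which the real destination is
    hidden at a random position among decoy fetches."""
    stream = [random.choice(DECOY_SITES) for _ in range(decoys_per_real)]
    stream.insert(random.randrange(len(stream) + 1), real_request)
    return stream
```

Note that in this sketch the real request is merely interleaved, not disguised; per the summary, Haystack additionally makes the sensitive traffic itself look like traffic to the decoy destinations.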
Comments:
  • by Rod Beauvex ( 832040 ) on Tuesday August 17, 2010 @05:25PM (#33281296)
    Sometimes, good things should not be discussed.
  • by Fnkmaster ( 89084 ) on Tuesday August 17, 2010 @05:42PM (#33281538)

    As opposed to the laughably juvenile attempts by Iranian intelligence agencies [twitspam.org] to spam twitter with pro-Iranian-government messages?

    Also, please include citations when you make accusations like that. I pulled up a bunch of articles on the Iranian twitspam with no problem but found it harder to dig up reports of US Agencies doing the same (though I wouldn't be shocked if they had, this seems to go both ways).

  • by Anonymous Coward on Tuesday August 17, 2010 @05:47PM (#33281630)

    That is baloney. Security through obscurity means that the entire system itself is obscured, as the only protection. Traditional (lock) security means that one (or more) aspect[s] of the system is/are protected by requiring significant effort to circumvent.

    Security through obscurity is leaving your door unlocked, but living in a remote area.
    Security without obscurity is locking your door, and living in the city.

    In case 1, the barrier between criminals and your house is knowing that your house exists at that spot. In case 2, it is not only knowing the ridges on your key, but spending the effort of creating a key that matches them*. There is effort involved in the second case. If, instead of simply living in a remote area, you lived on top of a cliff that is difficult to climb, you would have normal security as well.

    *Also, there is effort involved in examining your lock to obtain those ridges, mugging you and stealing the key, etc.

  • by Wyatt Earp ( 1029 ) on Tuesday August 17, 2010 @05:50PM (#33281660)

    How is the US a theocracy?

  • by commodore64_love ( 1445365 ) on Tuesday August 17, 2010 @06:06PM (#33281870) Journal

    I hate moral dictatorship. It doesn't matter if it's coming from a Muslim government, the Church of Rome, or politicians. Ya know... it's my life. If I want to be an asshole that looks at porn, doesn't go to church, and keeps to himself, I have that right. Stop trying to force me to adopt your moral beliefs.

    So this HAYSTACK program. Would it work in the US and EU? It appears the answer is "no" since it was specifically designed for Iran.

  • by couchslug ( 175151 ) on Tuesday August 17, 2010 @06:17PM (#33282004)

    "Regime change isn't very effective when you have the Keystone Kops trying to carry it out for you."

    Regime change isn't going to happen due to a few protesting students, and the mullocracy can choose to kill them off if they threaten Islamic control of government.

    The people who want to change Iran will have to display a greater will to power than the Islamocracy. That's a very tough act to follow. It would require a Maoist level of ruthlessness, not the trifling discontent of a few young people.

  • by blair1q ( 305137 ) on Tuesday August 17, 2010 @06:18PM (#33282016) Journal

    Living in a remote area would be security. The remoteness is the barrier to entry.

    Security through obscurity is more like leaving your door unlocked, but living in a building where all the other doors are locked. Or having a locked door but leaving the window unlocked and using the fire escape. Or leaving the key under the mat. It's not security, it just keeps people from believing they're looking at something unsecured.

    And the reason it's a major fail is that it is defeated by random actions that are far simpler than the randomness needed to defeat the security you're not implementing. Kids trying every doorknob, for instance, or the guy who vacuums the hall knowing all of the doormats that have keys under them.

    As for keys and obscurity: if you have 10,000 doors to lock, and use a key system that only allows for 1,000 keys, you're counting on obscurity to keep people from trying their keys in other locks. But if you use a system that allows for 1,000,000 keys, that's actual security. Because none of the locks has to take a key used in another, and someone making a random key will have to make at least a hundred to get even a 50-50 chance of finding one that unlocks just one of the 10,000 doors, and potentially he could make 990,000 and still not find one.

    As for codes, any code that gets used for more than one message reduces the security of the code. So anything other than One-Time Pad is slightly relying on security through obscurity, but you're talking about having 2048-bit security instead of infinite-bit security and thinking that's insufficient? It's not really security through obscurity until you start using rot-13 instead, and hope nobody notices.
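The key-space arithmetic in the comment above can be checked in a few lines. This is a back-of-the-envelope sketch: the door and key counts come from the comment, while the geometric-distribution framing is my own.

```python
import math

DOORS = 10_000        # locks installed
KEYSPACE = 1_000_000  # distinct keys the lock system allows

# Probability that one random key opens some door, assuming each
# door uses a distinct key drawn from the keyspace.
p_hit = DOORS / KEYSPACE  # 0.01

# Expected number of random keys until the first hit
# (geometric distribution).
expected_tries = 1 / p_hit  # 100

# Tries needed for a 50-50 chance of at least one hit:
# solve 1 - (1 - p)^n >= 0.5  ->  n >= log(0.5) / log(1 - p)
tries_for_even_odds = math.ceil(math.log(0.5) / math.log(1 - p_hit))  # 69
```

The 50-50 point is actually about 69 random keys; "at least a hundred" matches the *expected* number of tries, which is probably what the comment had in mind.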

  • Re:Get a clue (Score:2, Interesting)

    by Prune ( 557140 ) on Tuesday August 17, 2010 @06:19PM (#33282034)

    Of course, you're both playing semantic games. In a von Neumann machine (every desktop computer, for example), the separation of data and program is superficial; it's just a psychologically-driven convention. It is also a frequently violated convention, both by machines (Windows tends to rewrite memory-loaded images of binaries heavily) and by humans (not only in the rarer case of self-modifying virus code, but in every instance of scripting, interpreters, and just-in-time compilation). Thus, obscuring the keys is not fundamentally different from obscuring the algorithms, because there is no fundamental distinction between program and data. In practical terms it may be more convenient to have many keys per algorithm rather than the other way around, but that is merely adopted for trivial practical reasons. Again, there's nothing wrong with "security through obscurity" by the usual definition, as long as the level of obscurity applied to the algorithms corresponds to the level of obscurity applied to the keys in the more common approach.

  • by Anonymous Coward on Tuesday August 17, 2010 @06:25PM (#33282096)

    Because it takes a while to duplicate the functionality of Tor.

    They could have only gotten this out right away if they had used Tor instead of re-inventing the wheel.

    Sheesh...

  • by sstamps ( 39313 ) on Tuesday August 17, 2010 @07:53PM (#33282906) Homepage

    That's oversimplifying the case a bit, I think.

    In Iran, the state has draconian control over the press as well as any "companies" which act as communication feeds. Not so in the US, where communications companies are (for the most part) autonomous and protected like a sacred cow (thanks to the First Amendment).

    I think a better analogy would be blocking porn (child or otherwise) in Iran. I don't live there, and I don't directly know anyone who does, but the known/published government actions and policies are VERY strict, so I would expect there would be a LOT less ability to access porn of *any* kind in Iran.

    In contrast, in the US there are few to no active efforts to filter anything; the approach instead is to detect actual access to illegal porn and prosecute individual lawbreakers. However, even that is a spotty and half-hearted effort at best.

    In Iran, you have to register your website with the government, and they can and do block access country-wide to popular internet sites as they deem unfit (YouTube, for example).

    As a result, while it is not impossible to get access to internet content deemed verboten by the state there, the bar has been significantly raised to do so. Thus, any claims to circumvent it without some really revolutionary technology to back them up have to be taken with a grain of salt. That said, I am glad the guy made the effort, and happy for what little freedom it may provide to someone in Iran looking for hope outside their dismal state of being there, but I also don't want people to get snookered into a false hope that this is something far more than what it claims. Over there, people are jailed/murdered by the state for violating their insanely draconian laws.

  • by jd ( 1658 ) <imipak@ y a hoo.com> on Tuesday August 17, 2010 @08:03PM (#33282994) Homepage Journal

    It's basically security through obscurity. A dangerous, but popular, pastime that never actually delivers at the end of the day. Not (necessarily) through lack of sincerity, but because the method is inherently flawed. Being easy doesn't mean it's any good. It's ultimately why steganography alone is not secure: if the message is kept as-is, there will always be fingerprints that allow you to separate the two signals and thus recover the original message. In the case of steganography, the solution is simple: increase the entropy. If the message is compressed and/or encrypted, and the pixels NOT used for storing the text have the same bits scrambled so that the level of randomness is roughly uniform across the whole image, the most you can definitely do is determine that the low-order bits do not contain picture information. You would not necessarily be able to tell whether that was due to storing more bits than the capture device was capable of accurately recording, or due to some other cause. The image would look, from any obvious analytical standpoint, the same in both cases.

    So the question here is could you do the same in a proxy? Instead of trying to merely hide the data in traffic, can you reduce the fingerprint on the traffic you actually consider important and increase the background noise such that the signal and the noise are indistinguishable by any method whatsoever?

    Not obviously. Not without rolling out IPSec, and I don't see any dictatorship agreeing to that. Hell, I can't see ANY country being willing to tolerate the bulk of Internet traffic being encrypted. (That, I suspect, is some part of why IPv6 is so late in being rolled out. Originally, it mandated IPSec. Which meant EVERYTHING on the Internet being encrypted well beyond the capacity of anyone to break. From the three-letter agency perspective, that would be bad for business. From the uber-expensive SSL certificate standpoint, that would kill any business other than authentication entirely. From the ISP perspective, it would cut into their profit margins for virtual leased lines.)

    Yes, if multi-path is enabled, ISPs and backbone networks haven't turned themselves into spanning trees, and routers are configured to balance things properly, then you could randomize the paths of packet fragments. If (and if you thought that last lot was a big if, then wait till you see this!) the web server ALWAYS supports compressed and/or encrypted requests AND sends the replies likewise (i.e., gzip and/or SSL/TLS can ALWAYS be used for ANY request, without exception, AND replies are no less compressed or encrypted), then (in principle) it would be difficult even with deep packet inspection to tell if a fragment was from legal content or illegal content. Not impossible, but difficult.

    To make it impossible, you have to make the fragments non-differentiable to an external observer. ATM uses very small packets, so if you tunneled the fragments over ATM to make even smaller fragments, then tunnel over IP so that you could use IP-over-DNS (since the ATM fragments are roughly the same size as DNS packets), and THEN use something like DNSSec to make it extremely hard to distinguish tunneled packets versus conventional packets, and THEN somehow get this DNS tree linked into the official DNS tree in said country...

    This could work. It's a variant of the Byzantine Generals Problem. You can only tell which packets are real if (N/2)+1 nodes are legitimate. If the rogue network of interconnected DNS nodes is large enough that the authorized network of DNS nodes cannot reach that threshold, the authorized tree could, in principle, be subverted to serve the purposes of the rogue network.

    In practice, no country is so stupid that it would blithely provide enough tunnels into anything that country regarded as mission-critical in such a manner as to allow such subversion. They MIGHT provide a few such tunnels and use the bottleneck in an effort to track the source (since it would be guaranteed that only hostile traffic would be coming through that tunnel). In other words, create a bloody great honeypot.

    The last time a country DID allow critical infrastructure to be subverted, Julius Caesar ended up with a knife in his back. It kinda discourages the practice.
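The entropy-normalizing steganography described at the top of this comment, compress the payload and scramble every unused low-order bit so the randomness is uniform, can be sketched as follows. This is a toy illustration over a flat byte buffer; the use of the lowest bit plane and the function names are assumptions, not a description of any real tool.

```python
import os
import zlib

def embed(pixels: bytearray, message: bytes) -> bytearray:
    """Hide `message` in the least-significant bits of `pixels`,
    scrambling every unused LSB so the low bit plane looks
    uniformly random throughout, not just where data is stored."""
    payload = zlib.compress(message)  # raise entropy before embedding
    # Unpack payload into bits, least-significant bit of each byte first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for payload")
    out = bytearray(pixels)
    noise = os.urandom(len(out))  # random filler for unused positions
    for i in range(len(out)):
        bit = bits[i] if i < len(bits) else (noise[i] & 1)
        out[i] = (out[i] & 0xFE) | bit  # overwrite the LSB only
    return out
```

Because every position's low bit is either compressed payload or fresh random noise, an analyst who dumps the low bit plane sees uniform randomness everywhere, which is exactly the "can't tell payload from sensor noise" property the comment describes.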
