
Introducing the NSA-Proof Crypto-Font

Posted by Soulskill
from the for-days-when-reading-words-seems-too-easy dept.
Daniel_Stuckey writes "At a moment when governments and corporations alike are hellbent on snooping through your personal digital messages, it'd sure be nice if there were a font their dragnets couldn't decipher. So Sang Mun built one. Sang, a recent graduate of the Rhode Island School of Design, has unleashed ZXX — a 'disruptive typeface' that he says is much more difficult for the NSA and friends to decrypt. He's made it free to download on his website, too. 'The project started with a genuine question: How can we conceal our fundamental thoughts from artificial intelligences and those who deploy them?' he writes. 'I decided to create a typeface that would be unreadable by text scanning software (whether used by a government agency or a lone hacker) — misdirecting information or sometimes not giving any at all. It can be applied to huge amounts of data, or to personal correspondence.' He named it after the Library of Congress's labeling code ZXX, which archivists employ when they find a book that contains 'no linguistic content.'"

Comments Filter:
  • by cold fjord (826450) on Saturday June 22, 2013 @06:14PM (#44081019)

    The great tragedy of Science — the slaying of a beautiful hypothesis by an ugly fact. -- Thomas Huxley

  • by wonkey_monkey (2592601) on Saturday June 22, 2013 @06:46PM (#44081233) Homepage

    Yes, as anyone with half an ounce of sense will have already realised, no font will ever be NSA proof. The first mistake was publishing it on the internet...

    The creator is trying to make a point about privacy, not implement a workable solution.

  • Re:Easy to crack? (Score:5, Interesting)

    by dgatwood (11270) on Saturday June 22, 2013 @08:12PM (#44081691) Journal

    Depends on the steganography method used, and on how many images are sent using that method. If you're a spook and you see somebody suddenly sending lots of images to someone else, you might grow suspicious, at which point you'll start performing analysis to see if there are patterns emerging across the entire set of images, such as certain pixels that are always higher than the adjacent pixels by a certain amount. Granted, such patterns can just as easily be caused by sensor flaws, but some fairly primitive steganography techniques could be detectable in this way.
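    As an illustration of the kind of statistical test such analysis might use, here is a minimal chi-square sketch against naive LSB embedding, assuming a flat list of 8-bit pixel values. The function name and the interpretation threshold are my own assumptions, not anything from a real forensic toolkit.

    ```python
    # Hypothetical sketch: chi-square test for LSB steganography.
    # Assumes 8-bit sample values supplied as a flat list.
    from collections import Counter

    def chi_square_lsb_score(pixels):
        """Chi-square statistic over value pairs (2k, 2k+1).

        Embedding random bits in the least significant bit tends to
        equalize the counts within each pair of values that differ only
        in that bit, so a markedly *low* score relative to comparable
        unmodified images is a red flag.
        """
        counts = Counter(pixels)  # missing values count as zero
        score = 0.0
        for k in range(128):
            a, b = counts[2 * k], counts[2 * k + 1]
            expected = (a + b) / 2
            if expected > 0:
                score += ((a - expected) ** 2 + (b - expected) ** 2) / expected
        return score
    ```

    A histogram where each pair is perfectly balanced scores zero; a natural image with skewed pair counts scores high, which is exactly the asymmetry LSB embedding erodes.
    
    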

    Second, because subpixel noise in cameras isn't random—it tends to obey a gaussian distribution, and thermal noise can vary considerably from frame to frame depending on the length of the exposure—when spread over a large enough number of sequential or nearly sequential photos taken by the same camera, the steganography might be detectable by using a model of the predicted levels of noise that the image sensor should produce for a shot of a given duration and the elapsed time since the previous shot. This won't tell you what is embedded in the image, but if you're lucky, it might tell you that with a high probability, something is embedded. Depending on the circumstances, that might be enough to get a warrant. Then again, it could just be Digimarc.

    Finally, there's the question of the randomness of the source material (or, more to the point, the lack thereof). If the base image is at the native sensor resolution of the camera, the nature of the image sensors themselves could potentially be exploited to detect some types of steganography. In a real-world image sensor (except for Foveon sensors), there's no such thing as a pixel; there are only subpixels that produce a value for a single color. The camera must combine these values (a process called "demosaicing" [wikipedia.org]) to compute the color for a pixel in the final image. Because the subpixels that make up a pixel are not physically on top of one another, the camera typically computes the estimated value for the color at a given physical point on the sensor by combining adjacent subpixel values in differing percentages. For example, if the green subpixel is chosen as the "center" of the pixel and the red subpixel is to the left and the blue is above, it might mix a bit of the red from the "pixel" to its right and a bit of the blue from the "pixel" below it. (This explanation is overly simplistic, but you get the basic idea.)
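    A toy version of that mixing step, assuming an RGGB-style Bayer layout where the chosen green site has red neighbors to its left and right and blue neighbors above and below. Real camera pipelines use far more elaborate weights; this is only meant to show the interpolation idea.

    ```python
    # Toy demosaicing sketch: estimating a full RGB value at a green
    # sensor site by averaging neighboring red and blue subpixels.
    # The Bayer layout and equal weights here are illustrative assumptions.

    def demosaic_green_site(raw, y, x):
        """raw is a 2D list of single-channel sensor values.

        (y, x) must be a green site with red neighbors left/right and
        blue neighbors above/below; edges are not handled.
        """
        g = raw[y][x]
        r = (raw[y][x - 1] + raw[y][x + 1]) / 2   # horizontal red neighbors
        b = (raw[y - 1][x] + raw[y + 1][x]) / 2   # vertical blue neighbors
        return (r, g, b)
    ```

    Because the interpolation is a fixed linear combination of neighbors, the final pixel values it produces are constrained, which is what makes the consistency checks described below possible.
    
    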

    Unfortunately for steganographers, the way that particular cameras construct a pixel value from adjacent subpixel values is predictable and well understood. If a steganographic technique does not take that into consideration, it is highly likely that, given knowledge of the camera and its particular mixing algorithm, the steganographic data can be detected simply by determining whether there is any plausible set of subpixel values that could result in the final computed pixel values for the entire image. For that matter, given that most of the algorithms for subpixel blending are straightforward, even without knowledge of the particular camera, it is highly likely that steganography can be detected, because portions of the image that contain no hidden data will likely only be producible by a single algorithm, and portions of the image that contain hidden data likely will not be.

    Those are just a couple of types of analysis off the top of my head that might potentially be used against some types of steganography, given some types of source material, etc. It is entirely possible that there are steganographic techniques that are resistant to these sorts of analysis, and there are likely many other interesting types of analysis that I have not mentioned. I have not kept up with steganographic research personally, so I can't say with any certainty.

  • by Anonymous Coward on Saturday June 22, 2013 @08:45PM (#44081877)

    If the NSA and other snoops capture and record data that is sent and just store it for subsequent analysis when the need arises, a better approach to foiling them would be to not actually send data at all, but only to compute data live at each end.

    Computing the data of a communication can be done in countless ways, from timing the intervals between items of data sent (where the data itself is either garbage or readable misdirection), to encoding it in the IP addresses used, to applying mathematical functions to the live stream, or any of a million other weird approaches that a suitably inebriated brain could dream up. This diversity is a strength.
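    As a toy sketch of the first approach, the real bits can ride in the gaps between sends while the payload itself is garbage. The delay values and threshold below are arbitrary assumptions, chosen only to make the encoding visible.

    ```python
    # Hypothetical covert timing channel: each bit of the hidden message
    # is encoded as the delay before the next (meaningless) packet.

    def delays_from_bits(bits, short=0.1, long=0.3):
        """Map each bit to an inter-message delay in seconds."""
        return [long if b else short for b in bits]

    def bits_from_delays(delays, threshold=0.2):
        """Recover bits from observed inter-arrival times."""
        return [1 if d > threshold else 0 for d in delays]
    ```

    A dragnet that stores only packet contents records nothing but garbage; the timing, which carries the message, exists only at transmission time unless it too is captured live.
    
    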

    Note that this is not cryptography, it's denial of cryptographic analysis at a later date because essential reassembly parameters are available only at the time of transmission, not later. All it would do is prevent dumb data gathering and storage by the mass dragnet from providing data that is meaningful at a later time.

    Needless to say, you could use it in conjunction with cryptography too if you wanted to ensure that, should they actually be monitoring you live and capturing a whole pile of possible reassembly parameters, then they'd still need to break the real crypto as well. But if they're doing that to you then you're probably in deep trouble already and you shouldn't be online reading Slashdot.

    Where it can help is by being a thorn in the side of the mass data collectors, and so helping the great mass of public communication remain private despite subsequent analysis by the spooks. To combat it, they would not be able to just blindly collect traffic for posterity, because it would be meaningless.

    It's not an original idea, but perhaps after the PRISM revelations it's time to revive some old ones.

  • by SoCalChris (573049) on Sunday June 23, 2013 @12:26AM (#44082707) Journal

    I've got a client that's a non-profit group home for abused kids. Because of what they do, and their funding sources, they have to send daily activity reports for each of the kids, including medical, psychological, behavior, school notes, etc...

    Every day, the reports are handwritten onto forms, which are then typed into a computer, printed, and faxed to the county (typically 75-100 pages of fax each day), where they are entered into the county's computers, printed out, and filed.

    Between the original handwritten report, the printed copy of the entered report, the received fax, and the county's copy, roughly 100 pages per day amounts to almost 150,000 pages created every year for something that could very easily be done almost entirely electronically.

  • by WaywardGeek (1480513) on Sunday June 23, 2013 @01:58AM (#44082975) Journal

    The tools for private communication are there, and geeks like me contribute what we can (not that much in my case). Instead of saying "it's not rocket science", we should say, "it's not crypto." This stuff is hard, which is why it's fun.

    His statement that there is no practical way to safeguard privacy is true to a point. No one in the world is going to decrypt my one-time-pad encrypted email that I encrypt on a machine not connected to the Internet, transfer by USB stick, and email as an attachment. Instead, if anyone really cares, they'll just get my data the old-fashioned way. It's really a matter of how much money the eavesdropper is willing to spend. Anything over, I'm guessing, maybe $100,000, and they'll just hire an expert to bug my house, car, cell phone, and clothing, have an affair with my wife, and run my dog. If we care to, and have at least a small clue, we can encrypt whatever we want securely. At least if no one really cares to know what we're encrypting.
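    The one-time pad itself is trivially small in code, which is part of its appeal; here is a minimal XOR sketch (the function names are mine). Its security rests entirely on the key being truly random, as long as the message, kept secret, and never reused.

    ```python
    # Minimal one-time pad sketch: XOR the message with a random key of
    # equal length. Encryption and decryption are the same operation.
    import secrets

    def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
        assert len(key) == len(plaintext), "key must match message length"
        return bytes(p ^ k for p, k in zip(plaintext, key))

    otp_decrypt = otp_encrypt  # XOR with the same key undoes itself

    # Example: generate a fresh key per message, e.g. on an air-gapped machine.
    message = b"meet at dawn"
    key = secrets.token_bytes(len(message))
    ciphertext = otp_encrypt(message, key)
    ```

    The hard part, as the comment implies, is key handling, not the math: the pad must be exchanged out of band (the USB stick) and destroyed after use.
    
    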

    I agree with Google, Microsoft, and friends. We should let our service providers be honest with us, and have a public debate about privacy vs. security.
    I don't have any secrets. Not one. Now that doesn't mean I post all my passwords on my blog,
