Topics: Cloud, Privacy, The Internet, Technology, Your Rights Online

Cloud-Powered Facial Recognition Is Terrifying

oker sends this quote from The Atlantic: "With Carnegie Mellon's cloud-centric new mobile app, the process of matching a casual snapshot with a person's online identity takes less than a minute. Tools like PittPatt and other cloud-based facial recognition services rely on finding publicly available pictures of you online, whether it's a profile image for social networks like Facebook and Google Plus or something more official, such as a photo from a company website or a college athletic portrait. In their most recent round of facial recognition studies, researchers at Carnegie Mellon were able to not only match unidentified profile photos from a dating website (where the vast majority of users operate pseudonymously) with positively identified Facebook photos, but also match pedestrians on a North American college campus with their online identities. ... '[C]onceptually, the goal of Experiment 3 was to show that it is possible to start from an anonymous face in the street, and end up with very sensitive information about that person, in a process of data "accretion." In the context of our experiment, it is this blending of online and offline data — made possible by the convergence of face recognition, social networks, data mining, and cloud computing — that we refer to as augmented reality.'"
This discussion has been archived. No new comments can be posted.

  • by wiggles ( 30088 ) on Friday September 30, 2011 @11:00AM (#37567218)
    It was only a matter of time. This has been one of the most sought-after anti-terrorism tools of the last 10 years. Imagine the security implications! I'd be shocked if the NSA didn't already have a version of this operational 5 years ago.
  • But Facebook... (Score:5, Insightful)

    by gurps_npc ( 621217 ) on Friday September 30, 2011 @11:03AM (#37567240) Homepage
    is not dangerous. There is no danger from posting all of the intimate details of your life, with pictures, and pictures of other people (often taken without their permission) using real names.

    Look, I am not a paranoid man. I am perfectly willing to give out private and personal information - for a reasonable fee.

    I give out private information to my bank all the time. In exchange, I get financial services.

    Facebook offers - a) a blog, b) email, c) games, d) convenient log in

    The first 3 are available for free elsewhere, the last is not worth much.

    I'm not paranoid, I'm just not cheap. And Facebook is asking way way too much for the minimal services it provides.

  • public pics? (Score:3, Insightful)

    by killmenow ( 184444 ) on Friday September 30, 2011 @11:04AM (#37567248)
    This is why I always use a picture like this [imgur.com] for any online public pics.

    Note that the pic in question (a) does not show a face clearly and (b) may or may not be me.
  • Face it (Score:5, Insightful)

    by boristdog ( 133725 ) on Friday September 30, 2011 @11:06AM (#37567268)

    The first real-world, publicly available use of this will be an app that lets you:

    1. Take a picture of someone with your smart phone
    2. Find naked pictures of this person online

    BRB, heading to the local college campus...

  • Sigh (Score:5, Insightful)

    by IWantMoreSpamPlease ( 571972 ) on Friday September 30, 2011 @11:08AM (#37567300) Homepage Journal

    Time to start dressing like The Stig again.

  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Friday September 30, 2011 @11:09AM (#37567322) Journal

    This is why Google shelved their version of this tech. The implications were too big.

    Having studied this in college and witnessed many failed implementations of it [slashdot.org] I casually ask: Where are the recall rates [wikipedia.org] (see also sensitivity and specificity [wikipedia.org]) of these experiments?

    Because when I read the articles, I found this instead of hard numbers:

    Q. Are these results scalable?

    The capabilities of automated face recognition *today* are still limited - but keep improving. Although our studies were completed in the "wild" (that is, with real social network profile data, and webcam shots taken in public, and so forth), they are nevertheless the output of a controlled (set of) experiment(s). The results of a controlled experiment do not necessarily translate to reality with the same level of accuracy. However, considering the technological trends in cloud computing, face recognition accuracy, and online self-disclosures, it is hard not to conclude that what today we presented as a proof-of-concept in our study, tomorrow may become as common as everyday text-based search engine queries.

    Whether you conclude from this that Google passed on continuing down this road is up to you. Frankly, I would surmise that the type I and type II errors [wikipedia.org] become woefully problematic when applied to an entire population. Facial recognition is not there yet, not until I see some hard numbers that convince me the error rate is low enough. Right now I bet if you were to snap pictures of 10,000 people, you would incorrectly classify at least 100 of them, leading to wasted time, violated rights and wasted opportunity (depending on the misclassification).
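That back-of-the-envelope guess is easy to sketch. The numbers below are hypothetical (a 99% specificity chosen to match the "at least 100 in 10,000" estimate above, not a figure from the CMU study):

```python
# Back-of-the-envelope sketch (hypothetical numbers, not figures from the
# CMU study): how per-match error rates scale across a whole crowd.

def expected_errors(population, prevalence, sensitivity, specificity):
    """Return (true positives, false negatives, false positives)."""
    targets = population * prevalence          # people actually on the watch list
    innocents = population - targets
    true_pos = targets * sensitivity           # correctly matched targets
    false_neg = targets - true_pos             # type II errors: missed targets
    false_pos = innocents * (1 - specificity)  # type I errors: misidentified innocents
    return true_pos, false_neg, false_pos

# Snap pictures of 10,000 people, none of whom are actually targets,
# with an assumed 99% specificity:
tp, fn, fp = expected_errors(10_000, 0.0, 0.99, 0.99)
print(round(fp))  # → 100 innocents misclassified, as estimated above
```

Even a seemingly small 1% false-positive rate produces a steady stream of misidentifications once the matcher runs over everyone who walks past a camera.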

  • by circletimessquare ( 444983 ) <circletimessquar ... m minus language> on Friday September 30, 2011 @11:13AM (#37567374) Homepage Journal

    you did

    it's funny that the tech industry holds some of the most privacy-concerned individuals, yet all their dedication to their craft has done is provide the most privacy destroying entity ever to exist

    privacy is dead as a doornail. just forget about the concept. really, you needn't bother about privacy anymore, it's a nonstarter in today's world. big brother? try little brother: every joe shmoe with a smart phone with a camera has more power than the NSA, KGB, MI6, MSS: those guys are amateur hour

    i'm not saying it's wrong, i'm not saying it's right. i'm just saying it's the simple truth of the matter, right or wrong: privacy is dead. acceptance is your only option now. you simply can't fight this

    and government didn't kill it, you paranoid schizophrenic goons

    your technolust did

  • by Kazymyr ( 190114 ) on Friday September 30, 2011 @11:28AM (#37567616) Journal

    it's funny that the tech industry holds some of the most privacy-concerned individuals (..)

    That is only if you believe the all-caps paragraphs on all the EULAs and TOS you click through. Often the following paragraphs will contradict the bombastic declarations of commitment to privacy - on the same page.

  • by drnb ( 2434720 ) on Friday September 30, 2011 @11:38AM (#37567772)

    You mean to tell me that 98% accuracy when trying to spot terrorists in airports isn't good enough? That's only 200,000 false positives per year for a typical airport.

    Perhaps the false positives at airports are OK? Rather than randomly choosing people for more attentive searches, and the occasional grandma to give the facade of fairness and not profiling, we could focus on the 2% who are higher probability. Of course 2% are unfairly inconvenienced, but isn't that better than 100% unfairly inconvenienced? Clearly a negative/negative decision.

    Of course this is all academic and falls apart if the false negatives are at a non-trivial level.
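The arithmetic behind that figure works out if you assume (hypothetically) an airport screening ten million passengers a year:

```python
# The arithmetic behind the "200,000 false positives" figure above,
# assuming a hypothetical airport screening ten million passengers a year.
passengers_per_year = 10_000_000
false_positive_rate = 0.02            # the 2% implied by "98% accuracy"
false_positives = passengers_per_year * false_positive_rate
print(int(false_positives))  # → 200000 innocent travelers flagged per year
```

This is the classic base-rate problem: when almost nobody in the screened population is a terrorist, nearly every alarm the system raises is a false one.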

  • by drnb ( 2434720 ) on Friday September 30, 2011 @11:47AM (#37567928)

    Because terrorists all have facebook accounts? I would assume most of them have very little online presence, pictorially anyway.

    Oddly, whenever a new terrorist is discovered and remains at large, law enforcement and the mass media seem to be able to come up with a facial photo. Perhaps there are sources of photos other than Facebook, in particular sources available to government agencies: DMV photos, passport photos, school photos, team photos, etc.

    The experiment is facebook centric because it is an academic project that needs to stick to info made public by the individual to avoid privacy issues.

  • by drnb ( 2434720 ) on Friday September 30, 2011 @12:56PM (#37569028)

    Those 2% start to get a bit pissed off after the first two or three times. I suspect we might prefer to stick to random in the interests of fairness.

    The problem with randomness is that it is less effective, since finite screening time is spent on low-probability individuals. What is fair about increasing the likelihood that a bad guy gets through and innocents die? I think what you describe is better characterized as a facade of political correctness than as fairness.

    Perhaps the inconvenience could be ameliorated with the known/trusted flier biometric IDs that some are proposing.

    Again, I see the unfair burden placed on the 2%; as I said, it's a negative/negative decision.

  • by DrgnDancer ( 137700 ) on Friday September 30, 2011 @01:25PM (#37569410) Homepage

    But if you happen to look like Abul bin Awfulguy, it means you will be inconvenienced every time you go to the airport. Every time. While that might be fine for you (or might not; did you know you look just like Sean McIRAnut?), it's not exactly great for Robert Hussien, who's a fourth-generation American and has a security clearance, but convince the automated systems of that, why don't you?

  • by element-o.p. ( 939033 ) on Friday September 30, 2011 @02:49PM (#37570536) Homepage
    There are two logical fallacies in your argument. First, you are presenting a false dichotomy. Second, you are comparing a worst-case scenario (terrorist takes down an airplane, killing hundreds or thousands of people) to a best-case, or nearly best-case, scenario (innocent passenger gets their luggage swabbed).

    What we are talking about is risk management. Risk management is not just a matter of comparing scenarios; it is a matter of multiplying each risk's probability by its risk weight (i.e., the severity of that risk), then summing all of the results of that operation. For example, a hijacker crashing an airplane into a building is a very severe risk -- it killed over three thousand people ten years ago -- but it has only happened *ONCE* (okay, four flights) in what...fifty? sixty?...years of airline service. That's a really, REALLY low probability times a really, really severe risk weight, which I'd argue results in a moderately low OVERALL risk. There is also the possibility of a hijacker murdering individual passengers until his (her) demands are met. That's happened significantly more often than a 9/11 hijacking (although still rare, in terms of number of hijacked flights vs. number of uneventful flights), but it directly affects (comparatively) fewer people. However, because it is more common, I'd argue that this scenario results in roughly the same OVERALL risk. Then there is the risk of an unruly passenger. That's much more common than the other two risks, but the risk weight is comparatively minor, which again results in an overall low risk.

    As far as scenarios you are comparing...if all that happens is a false positive gets the luggage swabbed, then I really couldn't care less. If a false positive gets removed from an airplane, cuffed, locked into a cell, strip-searched and interrogated before finally being determined to be a false positive and released [washingtonpost.com] then I have a MAJOR problem with it. Consider it this way: if there were 520 people detained in Gitmo [npr.org] and the error rate for false positives (as assumed in the above thread) is 1%, then that means there were likely at least 5 innocent people detained at Gitmo. THAT is what I meant by "wrecked", and I maintain that's an accurate description. Ms. Hebshi's life may not have been wrecked, but I'd say that it has been severely and negatively impacted.

    So, yeah. I do think that the worse error is false positives because the risk probability is significantly higher, and the risk impact is moderate to severe as well, which leads to a much, much greater overall risk than a one-in-twenty-million probability of 9/11, even when multiplied by the impact of the death of 3,000+ people.
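The expected-loss framing in the comment above (sum, over scenarios, of probability times severity) can be sketched in a few lines. Every number here is an illustrative placeholder, not a real aviation statistic:

```python
# Sketch of the expected-loss framing above. All probabilities and
# severities are illustrative placeholders, not real aviation statistics.
scenarios = {
    # name: (probability per flight, severity in arbitrary "harm" units)
    "9/11-style hijacking":    (1e-8, 1_000_000),
    "hostage-style hijacking": (1e-7, 100_000),
    "unruly passenger":        (1e-4, 10),
}

# Overall risk = sum over scenarios of probability * severity.
for name, (p, severity) in scenarios.items():
    print(f"{name}: expected harm {p * severity:.3f} per flight")
overall_risk = sum(p * s for p, s in scenarios.values())
print(f"total expected harm per flight: {overall_risk:.3f}")
```

With these placeholder numbers the two hijacking scenarios come out with comparable expected harm despite wildly different severities, which is exactly the commenter's point about rare/severe versus common/mild risks.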
