Cloud-Powered Facial Recognition Is Terrifying
oker sends this quote from The Atlantic:
"With Carnegie Mellon's cloud-centric new mobile app, the process of matching a casual snapshot with a person's online identity takes less than a minute. Tools like PittPatt and other cloud-based facial recognition services rely on finding publicly available pictures of you online, whether it's a profile image for social networks like Facebook and Google Plus or from something more official from a company website or a college athletic portrait. In their most recent round of facial recognition studies, researchers at Carnegie Mellon were able to not only match unidentified profile photos from a dating website (where the vast majority of users operate pseudonymously) with positively identified Facebook photos, but also match pedestrians on a North American college campus with their online identities. ... '[C]onceptually, the goal of Experiment 3 was to show that it is possible to start from an anonymous face in the street, and end up with very sensitive information about that person, in a process of data "accretion." In the context of our experiment, it is this blending of online and offline data — made possible by the convergence of face recognition, social networks, data mining, and cloud computing — that we refer to as augmented reality.'"
Google decided against this. (Score:5, Interesting)
Re:But Facebook... (Score:3, Interesting)
One of the reasons I have a Facebook account is so I can untag photos others say are me.
98% Accurate! (Score:5, Interesting)
You mean to tell me that 98% accuracy when trying to spot terrorists in airports isn't good enough? That's only 200,000 false positives per year for a typical airport.
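The arithmetic behind that figure is easy to sketch. Assuming, hypothetically, an airport handling about 10 million passengers per year, "98% accurate" implies roughly 2% of innocent travelers get flagged:

```python
# Base-rate sketch: why "98% accurate" still floods an airport with false alarms.
# Both numbers below are assumptions, not figures from the study:
#   - 10 million passengers/year (plausible for a large airport)
#   - 2% false-positive rate (the flip side of "98% accurate")
passengers_per_year = 10_000_000
false_positive_rate = 0.02

false_positives = passengers_per_year * false_positive_rate
print(f"False positives per year: {false_positives:,.0f}")  # -> 200,000
```

Nearly every one of those 200,000 flags is an innocent traveler, since actual terrorists passing through any given airport in a year are vanishingly rare.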
For example, this is dangerous for women (Score:5, Interesting)
Re:Where Are the Recall Rates? (Score:2, Interesting)
Depends what the inconvenience is. If it's a quick background check with no lasting effects (i.e. not being added to a do-not-fly list or terrorist watch list or your record, or subjecting you to public humiliation or arrest), then perhaps... If it's a 5-year vacation in Guantanamo without access to legal counsel, then no way--that would be a horrible perversion of justice!
Consider this question: Do only famous people have look-alikes? Why would that be, especially since famous people often look non-average in some way? If they have many look-alikes, then the rest of us certainly do. I think most people have been asked, "Are you so-and-so? You look just like them," or have had someone tell them they saw someone the other day who looked just like them. In short, we ALL have many look-alikes; they just don't seek us out since we're not famous, and thus we are unlikely to meet most of them, and vice versa.
So you have many large pools of facial synonyms, if you will, that all potentially result in false positives with regard to each other, or to one *really* unlucky member of the pool. If one of them happens to be a terrorist, then you're in for a world of trouble just because you happen to look like them.
So if we start applying this tech to the population at large, we had better be certain that the consequences of a false match WHEN IT HAPPENS are acceptable legally, ethically, and morally, or we shouldn't do it at all, IMHO.
And that's not even addressing the privacy issues associated with correct identifications...
Re:Where Are the Recall Rates? (Score:4, Interesting)
Why Google decided against continuing down this road is for you to judge. Frankly, I would surmise that the type I and type II errors [wikipedia.org] become woefully problematic when applied to an entire population.
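The population-scale problem can be made concrete with a positive-predictive-value sketch. All numbers here are made up for illustration: a watch list of 1,000 genuine targets in a population of 300 million, 98% sensitivity (type II error rate of 2%), and a 2% false-positive rate (type I error):

```python
# Why low error rates still fail at population scale (all numbers hypothetical).
population = 300_000_000
true_targets = 1_000                 # people actually worth flagging
sensitivity = 0.98                   # P(flagged | target); misses = type II errors
false_positive_rate = 0.02          # P(flagged | innocent); type I errors

true_hits = true_targets * sensitivity
false_alarms = (population - true_targets) * false_positive_rate
ppv = true_hits / (true_hits + false_alarms)     # P(target | flagged)

print(f"People flagged: {true_hits + false_alarms:,.0f}")
print(f"Chance a flagged person is a real target: {ppv:.4%}")
```

With those assumptions, roughly six million innocent people get flagged to catch under a thousand targets, so a flag is correct only about 0.02% of the time. This is the classic base-rate fallacy: when the condition is rare, even a small type I error rate swamps the true positives.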
I dunno. I bet if you combine the location of a photo with what Google knows about where you live and hang out, the results would be pretty good.