CEO of Facial Recognition Company Kairos Argues that the Technology's Bias and Capacity For Abuse Make It Too Dangerous For Use By Law Enforcement (techcrunch.com)
Brian Brackeen, chief executive officer of the facial recognition software developer Kairos, writes in an op-ed: Recent news of Amazon's engagement with law enforcement to provide facial recognition surveillance (branded "Rekognition"), along with the almost unbelievable news of China's use of the technology, means that the technology industry needs to address the darker, more offensive side of some of its more spectacular advancements. Facial recognition technologies, used in the identification of suspects, negatively affect people of color. To deny this fact would be a lie. And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens -- and a slippery slope to losing control of our identities altogether.
There's really no "nice" way to acknowledge these things. I've been pretty clear about the potential dangers associated with current racial biases in face recognition, and open in my opposition to the use of the technology in law enforcement. [...] To be truly effective, the algorithms powering facial recognition software require a massive amount of information. The more images of people of color it sees, the more likely it is to properly identify them. The problem is, existing software has not been exposed to enough images of people of color to be confidently relied upon to identify them.
If it can be (Score:3)
"Those in power" (Score:2)
You should try direct democracy if you're fed up with the people in power constantly trying to screw you.
It might help when you, the people, ARE those in power.
Re: (Score:2)
But the majority/mob rule of direct democracy is more dangerous
Re: (Score:1)
Re: (Score:2)
Direct Democracy is two wolves and a sheep voting on what's for dinner.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Still beats the mob forcing our philosophers to drink hemlock.
At this point in time, thanks to modern liberals abandoning core liberal values, this is a very real problem and not just a silly bit of snark.
What? (Score:3, Informative)
Facial recognition technologies, used in the identification of suspects, negatively affect people of color.
Surely only if the suspect is a person of color.
Re: (Score:2)
I think the idea further down is that it has more false positives for minority groups on which it is not as well trained.
Although wouldn't it be ok if it had more false negatives? Not sure I know enough about how that works to understand why less data would mean more false positives.
Re:What? (Score:5, Interesting)
>Not sure I know enough about how that works to understand why less data would mean more false positives.
More training data means it needs to learn to recognize more subtle distinctions to be able to correctly identify an image. Without that subtlety, it will tend to overlook the differences and misidentify images.
It's actually very similar to the "X all look alike to me" effect. Let's take an extreme example: Imagine you live somewhere where pretty much everyone is white. You've only ever seen a handful of black people in your life, and Fred is the only black guy you personally know. Cool guy - you like him, grab beers after work, etc. And since we identify people by recognizing the differences between them and everyone else, "dark skin", "wide nose", "full lips", etc. are some of the big features you use to identify Fred. And why not? Nobody else you encounter has those features, so they really stand out to identify him from everyone else you see.
Then one day you're walking down the hall and see a black guy coming your way - similar build to Fred, with the same dark skin, wide nose, full lips, etc. And so you identify him as Fred, ask him how his project is going, and if he wants to grab a beer after work. And a totally confused Steve tries to figure out why the hell some complete stranger is acting like an old friend. Then Fred walks up, and seeing them stand side by side you start noticing the differences you didn't see initially - Fred has way more wrinkles around his eyes, Steve's cheeks are considerably rounder, etc. And, with a bit of practice you get good at telling them apart. Then you go to a conference where almost everyone is black - and once again you keep losing track of Fred, because there's a sea of faces around you, all bearing features superficially similar to Fred's, and you've really only learned to identify the small subset of obvious differences between Fred and Steve. You'll get better at it eventually, but in the meantime you just haven't yet recognized enough of the normal range of variance to make a clear distinction even between not-all-that-similar-looking people that share the same obvious features.
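To make that intuition concrete, here's a toy sketch (entirely synthetic data and a deliberately dumb nearest-neighbor "recognizer", not anyone's real system): one group gets 20 training photos per person, the other only 2, and the under-trained group gets mixed up more often.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
DIM, N_IDS = 20, 30   # feature size, "people" per group

def make_group(n_train_per_person, n_test_per_person, label_offset):
    Xtr, ytr, Xte, yte = [], [], [], []
    for i in range(N_IDS):
        center = rng.normal(0, 1.5, DIM)  # each person is a blob in feature space
        Xtr.append(rng.normal(center, 1.0, (n_train_per_person, DIM)))
        Xte.append(rng.normal(center, 1.0, (n_test_per_person, DIM)))
        ytr += [label_offset + i] * n_train_per_person
        yte += [label_offset + i] * n_test_per_person
    return np.vstack(Xtr), np.array(ytr), np.vstack(Xte), np.array(yte)

# Group A: 20 "photos" per person in training.  Group B: only 2 per person.
Xa_tr, ya_tr, Xa_te, ya_te = make_group(20, 10, 0)
Xb_tr, yb_tr, Xb_te, yb_te = make_group(2, 10, 1000)

clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(np.vstack([Xa_tr, Xb_tr]), np.concatenate([ya_tr, yb_tr]))

print("group A misidentification rate:", 1 - clf.score(Xa_te, ya_te))
print("group B misidentification rate:", 1 - clf.score(Xb_te, yb_te))  # typically higher

Same model, same math; the only difference is how much of each group it has seen.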
Re: (Score:2)
I was curious about this and read down into the CEO's explanation.
Apparently the only basis he has for this claim is that the software has a high misidentification (false positive) rate among black females. I'm not sure why this makes the software "biased" instead of "broken" or "needing improvement".
Re: (Score:2)
Re: (Score:2)
That still doesn't make it biased, just broken.
To deny this fact... (Score:2, Funny)
Re:Devil's advocate: This technology will save liv (Score:4, Insightful)
Scenario 4: There are a few hundred thousand people who will trigger the detection routines over and over again, because that's just how they look. So they get apprehended and arrested over and over again, and are unable to lead normal lives.
No, thanks, we do not need this.
Re: (Score:2)
Scenario 4: There are a few hundred thousand people who will trigger the detection routines over and over again, because that's just how they look. So they get apprehended and arrested over and over again, and are unable to lead normal lives.
No, thanks, we do not need this.
Stuff like this already happens if you happen to get stuck on the TSA no-fly list through no fault of your own...
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Devil's advocate says that if you arrest every 10th person you meet, you will probably avert a lot of crimes if you just do it often enough. 99.99% of the people you arrest won't be on their way to a crime, though.
But how do you know that if you do not arrest them?
Life of Technology (Score:2)
You guys created it to help sell things, obviously. That tech, once created, can also be used for law enforcement in the "good" countries, and for suppression and oppression in the "bad" countries.
Maybe next time think before you tech.
Re: (Score:2)
Think carefully about what you're asserting. Are there any "good" countries? There are certainly countries that are worse than others, but I can't think of one that I could comfortably label "good". Doctoring of the evidence has been widely reported from countries that are normally considered "good" by posters on this site.
Re: (Score:2)
There is a very easy way to define "good" countries: How do they treat the people in their prisons, and their "terrorists"?
The way they treat the most disliked people in their society reflects the entire societal values of the whole nation. For example, convicted murderers jailed in some Scandinavian countries lead a better life than most non-Europeans can aspire to.
Re: (Score:2)
Those are important, but for this particular case irrelevant. This is about the handling of the evidence and about how trustworthy the police are in that job. That can be impacted not only by malice, but also by having a rating that depends on how many arrests you make.
Translation (Score:2)
it doesn't work.
why? (Score:3, Insightful)
I'm afraid you are going to have to show your work here.
The problem is, existing software has not been exposed to enough images of people of color to be confidently relied upon to identify them.
Are you sure? And if so, why hasn't it?
This isn't the 1960s. Who exactly is biasing facial image databases, in 2018? Noted hotbeds of racism like universities and tech companies? How are they doing so?
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
There is less contrast with darker colors. I don't know how you change that.
What about the possibility of using infrared light, as opposed to visible light, for this? Wouldn't that more-or-less put everyone on the same page?
Re: (Score:2)
Because they're insufficiently rigorous. There are lots of places that MEAN well, but that doesn't always mean they DO well. Particularly in tech, where we're convinced of our own neutrality on such matters (i.e., that tech is a true meritocracy; if you've worked in tech for any length of time, you know politics is just as active here as anywhere else). Bias is subtle, and stuff like this slips through the cracks right up until people start calling it out, and then it changes. Fortunately
More training data? (Score:2)
Well, why not put a few million faces of each race or whatever it's called now into your training dataset? I'm sure there are underground datasets that exist.
Re: (Score:2)
Well, why not put a few million faces of each race or whatever it's called now into your training dataset? I'm sure there are underground datasets that exist.
Or make some new datasets of their own?
Sounds like some people think it's an important problem to solve, so getting funding shouldn't be a problem.
Please look up what Bias means (Score:5, Insightful)
Second, it is the policies around how it is used that negatively affect non-white people. This is a policy problem, not a technology problem. I'm really not keen on being tracked and scanned by facial recognition or any of the other ways organizations track me, but please don't exaggerate and play the racism card just to get clicks. In the end it numbs us to real abuse.
Re: (Score:3)
A large number of technological methods have bias ( https://cals.arizona.edu/class... [arizona.edu] ), and facial recognition algorithms are usually machine learning, I believe, so they can indeed have quite a bit of bias built in. This bias can be created by the developers not training the system with properly balanced data, which is a technological issue. That bias can also be due to actual bias in the world (as you mention); in that case the model is right, it is just reflecting real-world bias. Understanding the cause is very
Re: (Score:2)
Correct; in fact, the Supreme Court [wikipedia.org] has explicitly stated that "the police did not have a constitutional duty to protect a person from harm" [nytimes.com], even in the case of a restraining order.
I wish people would read that again - the police do not have a constitutional duty to protect a person from harm.
You are responsible for your own protection. Unfortunately, too many people wish to eliminate the ability to be responsible, via draconian (and most likely unconstitutional, given the decisions of DC v Heller [wikipedia.org] and McDonald v. Chicago [wikipedia.org]
The Police Have Always Used Facial Recognition (Score:2)
The police have always used facial recognition--both the police recognizing criminals from previous knowledge and mugshots and witnesses recognizing criminals--from the criminals who attacked them to photos they see on TV.
This recognition has always had inaccuracy problems--and a lot of people have wrongly suffered. The (partial) solution has always been to use it in conjunction with other evidence.
So there is no basic difference between facial recognition by software and facial recognition by people--and wit
Sounds like a sales pitch (Score:2)
I'm just waiting..... (Score:4, Insightful)
FBI needs to monitor Trump voters (Score:1)
They were told who to vote for and chose neo-con alt-right Trump. The FBI should investigate these people forever, or until something sticks. That's the only way to ensure the superdelegates' mandates aren't defied again. Heil Hitlary!
Not the technology's fault (Score:2)
The technology itself isn't responsible or culpable. It's the people who set the parameters and decide on what actions to take that are responsible and culpable. We simply have to find people who aren't bigots to set the parameters (or train the machine learning) and decide on what appropriate, proportionate actions should be taken.
Meanwhile, in the real-world, wouldn't it be great if the people in power and those who serve their needs and who are ultimately responsible for creating cultures of bigotry were
We were warned (Score:2)
Re: (Score:2)
It's not zero or 1 (Score:1)
"Don't. Wait. Stop.", Willy Wonka sighed. (Score:2)
To use the sentiments of the current law enforcement of the US, at the highest levels: "It's public info. Why shouldn't we be able to use it?"
Why shouldn't they be able to use public info to build an automated panopticon to track definitively where everyone is at all times?
Answer: Because that is a dictator's wet dream. Tracking phone "metadata" without a warrant would have trivially allowed the tyrant King George to round up all the Founding Fathers.
Facial recognition live tracking, license plate live tracking
Re: (Score:2)
> Why shouldn't they be able to use public info to build an automated panopticon to track definitively where everyone is at all times?
Why shouldn't anyone? This is a basic liberty issue. What isn't explicitly forbidden is allowed in a free society. Anyone can do it. I could probably cobble something together myself. That's just the nature of technology in a sophisticated society.
You are whining about the wrong part of the equation.
It's the panopticon that's the problem.
Data that might make it more useful
eh is there a point to worry? (Score:2)
Re:Racist much? (Score:5, Informative)
Facial recognition technologies, used in the identification of suspects, negatively affect people of color
This statement is outright saying that black people are mostly criminals, since that is the only case in which facial recognition identifying suspects "negatively affects people of color".
I think it's time we put an end to subtle racism like this, just because someone is black does not mean they are a criminal.
He's saying that the ML training dataset for people of color is too small. The machine needs to see more black people to identify them properly. This has nothing to do with black crime rate.
Re: (Score:3)
But does it give more false positives? It seems like a %-accuracy calculation would be most appropriate with any computer identification system, so the humans on the ground can behave differently with an 85% likelihood vs. a 99% likelihood vs. a 20% one. Are these systems not able to generate that kind of data?
Re: (Score:2)
If a system consistently gave out a 20% likelihood figure, the people using the system would want their money back. So, no, the system isn't able to generate that information. Make it the fault of the human and not the system, to sell the system.
Nathan
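For what it's worth, producing a score is trivial in principle; whether it gets surfaced is a product and policy choice. A minimal sketch with made-up 128-dimensional embeddings (not any vendor's actual API): a "match" is just a cosine similarity above some threshold, and the raw score is exactly the likelihood figure the humans on the ground could be shown.

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery, threshold=0.8):
    # Return (best_match_or_None, score) so the score can be shown to the operator.
    scores = {person: cosine(probe, emb) for person, emb in gallery.items()}
    best = max(scores, key=scores.get)
    return (best if scores[best] >= threshold else None), round(scores[best], 3)

rng = np.random.default_rng(1)
gallery = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = gallery["person_a"] + rng.normal(scale=0.3, size=128)  # noisy new photo of person_a

print(identify(probe, gallery))                  # high score: confident match
print(identify(rng.normal(size=128), gallery))   # unknown face: low score, no match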
Nobody complains about false positives with drug dog searches. Why would they care about false positives with facial recognition? Just do not record them and the problem is solved.
Complaint is general, not specific (Score:1)
He's saying that the ML training dataset for people of color is too small.
The complaint is about facial recognition *technologies* affecting people of color. Not about a specific application of facial recognition with a particular dataset. I really doubt the person who wrote the phrase I quoted would be placated by a better training dataset, in fact I am pretty sure they would claim it was racist to include too many black faces in a training set used to recognize criminals.
There is not one dataset for eve
Re: (Score:1)
Well, the fact is that most people in jail are from the lower socioeconomic class, and in the USA that means blacks. Therefore, there should be an enormous database of mug shots to scan.
Re: (Score:2)
But are the mug shots of sufficient quality for this system? IIUC they will all be from the same angle (or pair of angles) and against the same background.
There's also the question of whether it is used in learning mode, and many AI programs do learning and use in two separate modes, and they don't learn from data encountered in use mode.
It still seems to me as if he's understating the problems with the system...but I only read the summary.
That said, even with reliable systems the police have a history of
Re: (Score:2)
> Well, the fact is that most people in jail are from the lower socio economic class and in the USA that means blacks.
No it doesn't. Quit being such a racist.
Re: (Score:2)
I think his comment is more narrow - that the training sets used in facial recognition are smaller among ethnic minorities and therefore may have a higher rate of false positives ("they all look alike"). This could mean that minorities would be falsely targeted at a higher rate than the majority culture.
This, of course, is fixable - add larger, more diverse data sets and eventually the AIs will be just as good (or bad) at their job regardless of vagaries of skin tone, face shape and the like. That leads to
Re: (Score:2)
This, of course, is fixable - add larger, more diverse data sets and eventually the AIs will be just as good (or bad) at their job regardless of vagaries of skin tone, face shape and the like.
No, this does not follow. If you skew the input, you also skew the output. Training the AI on a higher proportion of images of people with specific characteristics than their actual share of the population introduces a bias.
To get an unbiased result, you need to train it with a larger overall and unfiltered data set that is big enough to also get good results for the minorities.
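Whichever way you build the training set, the sanity check is the same: compare error rates per group on held-out data instead of quoting one overall accuracy number. A minimal sketch with hypothetical arrays:

import numpy as np

def per_group_error(y_true, y_pred, groups):
    # Misidentification rate per demographic group, not just overall accuracy.
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# y_true / y_pred would come from the trained model on a held-out test set;
# the labels and group tags below are purely illustrative.
y_true = np.array([1, 2, 3, 4, 5, 6])
y_pred = np.array([1, 2, 3, 4, 6, 6])
groups = np.array(["A", "A", "A", "B", "B", "B"])
print(per_group_error(y_true, y_pred, groups))  # e.g. {'A': 0.0, 'B': 0.333...}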
Re: (Score:2)
This kind of bias can happen if the recognition software has trouble with darker skin tones (lower light information, less contrast? Who knows), or if there is some other external bias that causes poverty and a lack of social mobility to affect certain ethnic groups disproportionately and thus creates more crime among those groups (a societal issue).
It can also happen if your software has been tuned to recognize features of one ethnic group but not the variations in another, and then trained without further
Re: (Score:2)
> Reality has a racist bias.
Correct. Reminds me of the complaints about the expert system used by courts in CA for sentencing that recommended longer sentences for blacks. It was based on actual data on recidivism rates. Is it racist when it is based on actual facts?
Re: (Score:2)
No. The underlying problem has nothing to do with race. The real problem is that you are trying to punish people based on future crimes. This is the problem of letting your brain rot because you can't do anything but play the race card.
Re: (Score:2)
Re: (Score:2)
Did you have a point to make or are you just trying to troll?