Privacy | Government

CEO of Facial Recognition Company Kairos Argues that the Technology's Bias and Capacity For Abuse Make It Too Dangerous For Use By Law Enforcement (techcrunch.com)

Brian Brackeen, chief executive officer of the facial recognition software developer Kairos, writes in an op-ed: Recent news of Amazon's engagement with law enforcement to provide facial recognition surveillance (branded "Rekognition"), along with the almost unbelievable news of China's use of the technology, means that the technology industry needs to address the darker, more offensive side of some of its more spectacular advancements. Facial recognition technologies, used in the identification of suspects, negatively affect people of color. To deny this fact would be a lie. And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens -- and a slippery slope to losing control of our identities altogether.

There's really no "nice" way to acknowledge these things. I've been pretty clear about the potential dangers associated with current racial biases in face recognition, and open in my opposition to the use of the technology in law enforcement. [...] To be truly effective, the algorithms powering facial recognition software require a massive amount of information. The more images of people of color they see, the more likely they are to properly identify them. The problem is, existing software has not been exposed to enough images of people of color to be confidently relied upon to identify them.
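As a rough illustration of the mechanism the op-ed describes: modern face recognition systems typically reduce each face image to an embedding vector and declare two faces a match when the vectors are close enough. Below is a minimal sketch of that decision step in Python, assuming the embeddings come from some already-trained model; the function name and the 0.6 threshold are placeholders, not Kairos's or Amazon's actual code.

    import numpy as np

    def is_same_person(embedding_a, embedding_b, threshold=0.6):
        # Two face images are declared the same person when their embedding
        # vectors are closer than a fixed threshold. How cleanly the model
        # separates different people of a given demographic group depends on
        # how much of that group it saw during training.
        distance = np.linalg.norm(np.asarray(embedding_a) - np.asarray(embedding_b))
        return bool(distance < threshold)

Everything the summary says about training data shows up here indirectly: if the model was trained on too few faces from some group, embeddings for distinct people in that group land close together, and this threshold test starts returning false matches.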



Comments Filter:
  • by oldgraybeard ( 2939809 ) on Monday June 25, 2018 @01:12PM (#56843390)
    Once made, it will be used, and those in power will use it to expand and protect their power
    • You should try direct democracy if you're fed up with the people in power constantly trying to screw you.
      It might help when you, the people, ARE those in power.

        • Maybe, but I think I will stay with a republic. The law and justice are sometimes misapplied or unequally applied, but they and our constitution are our only protections. Currently, we do have a very politicized/tiered justice system, especially in bureaucratic leadership at the federal level. Many ethnic, ideological, and other groups are being mistreated in various ways, which is very problematic, since both political parties are in it up to their necks.
        But the majority/mob rule of direct democracy is more dangerous.
      • by umghhh ( 965931 )
        In large groups of people, direct democracy helps only in legitimizing the choice made. The conflicts of interest between subgroups, the disconnect between them, and the complexity of the decisions to be made (when one has to weigh different virtues and values) mean that this is also not ideal, and some decisions made this way will surely be a problem. It also comes down to the choice of the question the gathering has to answer - properly phrased, it influences the answer greatly. And so on and so forth.
  • What? (Score:3, Informative)

    by Anonymous Coward on Monday June 25, 2018 @01:14PM (#56843404)

    Facial recognition technologies, used in the identification of suspects, negatively affect people of color.

    Surely only if the suspect is a person of color.

    • I think the idea further down is that it has more false positives for minority groups on which it is not as well trained.
      Although wouldn't it be ok if it had more false negatives? Not sure I know enough about how that works to understand why less data would mean more false positives.

      • Re:What? (Score:5, Interesting)

        by Immerman ( 2627577 ) on Monday June 25, 2018 @02:13PM (#56843776)

        >Not sure I know enough about how that works to understand why less data would mean more false positives.
        More training data means it needs to learn to recognize more subtle distinctions to be able to correctly identify an image. Without that subtlety, it will tend to overlook the differences and misidentify images.

        It's actually very similar to the "X all look alike to me" effect. Let's take an extreme example: Imagine you live somewhere where pretty much everyone is white. You've only ever seen a handful of black people in your life, and Fred is the only black guy you personally know. Cool guy - you like him, grab beers after work, etc. And since we identify people by recognizing the differences between them and everyone else, "dark skin", "wide nose", "full lips", etc. are some of the big features you use to identify Fred. And why not? Nobody else you encounter has those features, so they really stand out to identify him from everyone else you see.

        Then one day you're walking down the hall and see a black guy coming your way - similar build to Fred, with the same dark skin, wide nose, full lips, etc. And so you identify him as Fred, ask him how his project is going, and if he wants to grab a beer after work. And a totally confused Steve tries to figure out why the hell some complete stranger is acting like an old friend. Then Fred walks up, and seeing them stand side by side you start noticing the differences you didn't see initially - Fred has way more wrinkles around his eyes, Steve's cheeks are considerably rounder, etc. And, with a bit of practice you get good at telling them apart. Then you go to a conference where almost everyone is black - and once again you keep losing track of Fred, because there's a sea of faces around you, all bearing features superficially similar to Fred's, and you've really only learned to identify the small subset of obvious differences between Fred and Steve. You'll get better at it eventually, but in the meantime you just haven't yet recognized enough of the normal range of variance to make a clear distinction even between not-all-that-similar-looking people that share the same obvious features.

    • I was curious about this and read down into the CEO's explanation.

      Apparently the only basis he has for this claim is that the software has a high misidentification (false positive) rate among black females. I'm not sure why this makes the software "biased" instead of "broken" or "needing improvement".
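Claims like this usually rest on comparing error rates per demographic group rather than a single overall accuracy figure. A minimal sketch of that breakdown, with invented prediction and ground-truth data standing in for real benchmark results:

    import numpy as np

    # 1 = system said "match"; actual is the ground truth; group labels each trial.
    predicted = np.array([1, 0, 1, 1, 0, 1, 0, 1])
    actual    = np.array([1, 0, 0, 1, 0, 0, 0, 1])
    group     = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    for g in np.unique(group):
        in_group  = group == g
        negatives = in_group & (actual == 0)       # trials that should NOT match
        false_pos = negatives & (predicted == 1)   # ...but the system said they did
        fpr = false_pos.sum() / max(negatives.sum(), 1)
        print(f"group {g}: false positive rate = {fpr:.2f}")

Whether a higher false positive rate for one group is best called "biased," "broken," or "needing improvement" is a naming argument; the measurement itself is just this kind of per-group tally.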

  • ...is a lie. Welcome to 2018.
  • You guys created it to help sell things, obviously. That tech, once created, can also be used for law enforcement, in the "good" countries, and suppression and oppression, in the "bad" countries.

    Maybe next time think before you tech.

    • by HiThere ( 15173 )

      Think carefully about what you're asserting. Are there any "good" countries? There are certainly countries that are worse than others, but I can't think of one that I could comfortably label "good". Doctoring of the evidence has been widely reported from countries that are normally considered "good" by posters on this site.

      • There is a very easy way to define "good" countries: How do they treat the people in their prisons, and their "terrorists"?

        The way they treat the most disliked people in their society reflects the entire societal values of the whole nation. For example, convicted murderers jailed in some Scandinavian countries lead a better life than most non-Europeans can aspire to.

        • by HiThere ( 15173 )

          Those are important, but for this particular case irrelevant. This is about the handling of the evidence and about how trustworthy the police are in that job. That can be impacted not only by malice, but also by having a rating that depends on how many arrests you make.

  • it doesn't work.

  • why? (Score:3, Insightful)

    by cascadingstylesheet ( 140919 ) on Monday June 25, 2018 @01:32PM (#56843506) Journal

    I'm afraid you are going to have to show your work here.

    The problem is, existing software has not been exposed to enough images of people of color to be confidently relied upon to identify them.

    Are you sure? And if so, why hasn't it?

    This isn't the 1960s. Who exactly is biasing facial image databases, in 2018? Noted hotbeds of racism like universities and tech companies? How are they doing so?

    • According to a previous article that appeared on /., it's because of white male programmers (never mind that there are also tons of Asian and Indian programmers), who apparently purposefully choose training datasets consisting of people who look like them. Just one more thing on the long list of what we're to blame for.
      • Uh oh, looks like I got modded flamebait for not thinking white men are the root of all evil again. It couldn't be because you all thought I was exaggerating [slashdot.org], right? That's precisely what was claimed.
    • Because they're insufficiently rigorous. There are lots of places that MEAN well, but that doesn't always mean they DO well. Particularly in tech, where we're convinced of our own neutrality on such matters (i.e., that tech is a true meritocracy -- if you've worked in tech for any length of time, you know politics is just as active here as anywhere else). Bias is subtle, and stuff like this slips through the cracks right up until the time when people start calling it out, and then it changes. Fortunately

  • Well, why not put a few million faces of each race or whatever it's called now into your training dataset? I'm sure there are underground datasets that exist.

    • Well, why not put a few million faces of each race or whatever it's called now into your training dataset? I'm sure there are underground datasets that exist.

      Or make some new datasets of their own?

      Sounds like some people think it's an important problem to solve, so getting funding shouldn't be a problem.
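A minimal sketch of what "just balance the training set" would look like in practice, assuming the images are already organized into one directory per group; the paths and group names are placeholders, not a real dataset.

    import random
    from pathlib import Path

    def balanced_sample(root, groups, per_group, seed=0):
        # Draw the same number of images from every group's directory so that
        # no single group dominates the training set.
        rng = random.Random(seed)
        sample = []
        for g in groups:
            files = sorted(Path(root, g).glob("*.jpg"))
            if len(files) < per_group:
                raise ValueError(f"group '{g}' only has {len(files)} images")
            sample.extend(rng.sample(files, per_group))
        return sample

    # e.g. balanced_sample("faces/", ["group_a", "group_b", "group_c"], per_group=100_000)

The hard part, of course, is the precondition: actually having millions of labeled images per group in the first place, which is exactly what the thread is arguing about.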

  • by FeelGood314 ( 2516288 ) on Monday June 25, 2018 @01:39PM (#56843556)
    First, the technology doesn't routinely make judgement calls that are inaccurate in a specific direction. It is, however, much less accurate for some groups, but a lack of accuracy does not mean bias.

    Second, it is the policies around how it is used that negatively affect non-white people. This is a policy problem not a technology problem. I'm really not keen on being tracked and scanned by facial recognition or any of the other ways organizations track me but please don't exaggerate and play the racism card just to get clicks. In the end it numbs us to real abuse.
    • A large number of technological methods have bias ( https://cals.arizona.edu/class... [arizona.edu] ), and facial recognition algorithms are usually machine learning, I believe; they can indeed have quite a bit of bias built in. That bias can be created by the developers not training the system with properly balanced data, which is a technological issue. It can also be due to actual bias in the world (as you mention); in that case the model is right, it is just reflecting real-world bias. Understanding the cause is very important.
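Following the parent's point about unbalanced training data: the first diagnostic is simply counting how the training set breaks down by group, before deciding whether the bias lives in the data, the model, or the world it reflects. A sketch, assuming the labels sit in a CSV file with a "group" column; the file format is an assumption for illustration, not any vendor's real metadata.

    from collections import Counter
    import csv

    def group_counts(labels_csv, group_column="group"):
        # Tally images per demographic group in a training-label file.
        with open(labels_csv, newline="") as f:
            counts = Counter(row[group_column] for row in csv.DictReader(f))
        total = sum(counts.values())
        for g, n in counts.most_common():
            print(f"{g}: {n} images ({100 * n / total:.1f}%)")
        return counts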

  • The police have always used facial recognition--both the police recognizing criminals from previous knowledge and mugshots and witnesses recognizing criminals--from the criminals who attacked them to photos they see on TV.

    This recognition has always had inaccuracy problems--and a lot of people have wrongly suffered. The (partial) solution has always been to use it in conjunction with other evidence.

    So there is no basic difference between facial recognition by software and facial recognition by people -- and wit

  • by bev_tech_rob ( 313485 ) on Monday June 25, 2018 @02:13PM (#56843772)
    ...for the Pre-Cogs to show up in the news and then we are in deep sh*t....
  • by Anonymous Coward

    They were told who to vote for and chose neo-con alt-right Trump. The FBI should investigate these people forever, or until something sticks. That's the only way to ensure the superdelegates' mandates aren't defied again. Heil Hitlary!

  • The technology itself isn't responsible or culpable. It's the people who set the parameters and decide on what actions to take that are responsible and culpable. We simply have to find people who aren't bigots to set the parameters (or train the machine learning) and decide on what appropriate, proportionate actions should be taken.

    Meanwhile, in the real-world, wouldn't it be great if the people in power and those who serve their needs and who are ultimately responsible for creating cultures of bigotry were

  • This is going to go to a really dark place. A choice is coming. We as a society have to choose: do we want a world with crime, or a world without freedom? There is no room for both to co-exist. We can have a colourful world with choice and the crime that comes with that. Or we can be controlled, every negative thought known to the government, dissent suppressed, and control handed over to a few elite who are already in power to do with as they please. China shows that this control will not be in the hands of
    • You will never have a world without crime; even the most totalitarian regimes still have crime. What we can have is a world with crime - and the ability to provide for one's own safety, or a world with crime - and the inability to protect ourselves.
  • To use the sentiments of the current law enforcement of the US, at the highest levels: "It's public info. Why shouldn't we be able to use it?"

    Why shouldn't they be able to use public info to build an automated panopticon to track definitively where everyone is at all times?

    Answer: Because that is a dictator's wet dream. Tracking phone "metadata" without a warrant would have trivially allowed the Tyrant King George to round up all the Founding Fathers.

    Facial recognition live tracking, license plate live tracking

    • by jedidiah ( 1196 )

      > Why shouldn't they be able to use public info to build an automated panopticon to track definitively where everyone is at all times?

      Why shouldn't anyone? This is a basic liberty issue. What isn't explicitly forbidden is allowed in a free society. Anyone can do it. I could probably cobble something together myself. That's just the nature of technology in a sophisticated society.

      You are whining about the wrong part of the equation.

      It's the panopticon that's the problem.

      Data that might make it more useful

  • At least here in the States, where everything is based on screwing everyone else, is there a point to trying to keep it out of law enforcement's hands? It will just get subcontracted out 5 layers deep so they can claim they aren't using it while they use it.

