Amazon's Facial Recognition Wrongly Identifies 28 Lawmakers, ACLU Says (nytimes.com)

Representative John Lewis of Georgia and Representative Bobby L. Rush of Illinois are both Democrats, members of the Congressional Black Caucus and civil rights leaders. But facial recognition technology made by Amazon, which is being used by some police departments and other organizations, incorrectly matched the lawmakers with people who had been arrested for a crime, the American Civil Liberties Union reported on Thursday morning. From a report: The errors emerged as part of a larger test in which the civil liberties group used Amazon's facial software to compare the photos of all federal lawmakers against a database of 25,000 publicly available mug shots. In the test, the Amazon technology incorrectly matched 28 members of Congress with people who had been arrested, amounting to a 5 percent error rate among legislators. The test disproportionately misidentified African-American and Latino members of Congress as the people in mug shots.

"This test confirms that facial recognition is flawed, biased and dangerous," said Jacob Snow, a technology and civil liberties lawyer with the A.C.L.U. of Northern California. Nina Lindsey, an Amazon Web Services spokeswoman, said in a statement that the company's customers had used its facial recognition technology for various beneficial purposes, including preventing human trafficking and reuniting missing children with their families. She added that the A.C.L.U. had used the company's face-matching technology, called Amazon Rekognition, differently during its test than the company recommended for law enforcement customers.

  • by magusxxx ( 751600 ) <magusxxx_2000 AT yahoo DOT com> on Thursday July 26, 2018 @10:13AM (#57012902)

    ...they just haven't been arrested yet. :D

    • I guess what they say is true....

      ...all congressmen DO look alike....

      ;)

    • by Agripa ( 139780 )

      ...they just haven't been arrested yet. :D

      And denied bail. And had their property seized through civil asset forfeiture.

      Isn't a facial recognition match good enough for probable cause to arrest?

  • by foxalopex ( 522681 ) on Thursday July 26, 2018 @10:16AM (#57012934)

    I think the whole idea of using face recognition is to cut the amount of work required by a detective to search through thousands of pictures. I'm sure the final step would be for a real person to verify the matches to see if there are false positives. The AI in this case would likely be set up to err toward false positives rather than outright missing matches, because finding nothing is more worrisome than finding a few false positives. You would hope the cops aren't crazy enough to start arresting people based entirely on the matching system, and at least look at the profiles to confirm.
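
    A minimal sketch of that triage workflow, using cosine similarity over precomputed face embeddings; the database layout and the 0.8 cutoff are illustrative assumptions, not Amazon's actual pipeline:

        import numpy as np

        def cosine(a, b):
            """Cosine similarity between two face-embedding vectors."""
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        def shortlist(query_vec, mugshot_db, threshold=0.8, top_k=10):
            """Score a probe face against every mugshot embedding and return
            a small candidate list for a human investigator to verify."""
            scored = [(cosine(query_vec, vec), name)
                      for name, vec in mugshot_db.items()]
            # A low threshold errs toward false positives (a longer list to
            # review); a high one errs toward missed matches.
            return sorted((s for s in scored if s[0] >= threshold),
                          reverse=True)[:top_k]

        # Demo with random vectors standing in for real embeddings.
        rng = np.random.default_rng(0)
        db = {f"mugshot_{i}": rng.standard_normal(128) for i in range(25_000)}
        print(shortlist(rng.standard_normal(128), db, threshold=0.2))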

    • by ranton ( 36917 ) on Thursday July 26, 2018 @10:27AM (#57013026)

      I'm sure the final step would be for a real person to verify the matches to see if there are false positives. The AI in this case would likely be set up to err toward false positives rather than outright missing matches, because finding nothing is more worrisome than finding a few false positives. You would hope the cops aren't crazy enough to start arresting people based entirely on the matching system, and at least look at the profiles to confirm.

      This is exactly correct, and why these statements from the ACLU are ridiculous. Would they rather the police just be looking for any tall black guy with a sweatshirt in the area? This type of technology simply provides more information to the police, but it still takes actual policemen and prosecutors to decide who is a real suspect and who should be charged.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        Except it won't be. They'll just arrest the person and "let the courts sort it out," which never acknowledges the damage that simply being arrested can cause to someone.

        • by ranton ( 36917 ) on Thursday July 26, 2018 @12:21PM (#57013896)

          Except it won't be. They'll just arrest the person and "let the courts sort it out," which never acknowledges the damage that simply being arrested can cause to someone.

          And that practice will be something for the ACLU to combat. But always assuming the worst possible use of new techniques and technologies is not helpful.

          • by Uberbah ( 647458 )

            And that practice will be something for the ACLU to combat. But always assuming the worst possible use of new techniques and technologies is not helpful.

            No assumptions necessary - any facial recognition is going to produce false positives, which will land innocent people in jail and some in prison. Then there's the Orwellian nature of using this technology to track everyone, everywhere.

            So what would be helpful, is to burn this to the proverbial ground before it takes off, before Brandon Mayfield's become co

          • But always assuming the worst possible use of new techniques and technologies is not helpful.

            This is a tool for the police to arrest more people. It will be used in the worst possible way.

            Take field drug test kits as an example. So incredibly unreliable that they are not admissible in court... but you can use them to arrest someone. And with their incredibly high false positive rate, they arrest a lot of people. As an added bonus, the people arrested via this method don't have legal recourse for the false arrest. The cop had probable cause due to the drug test kit.

            So no, it is a terrible idea

          • But always assuming the worst possible use of new techniques and technologies is not helpful.

            Not exploring the worst possible use of new techniques and technologies is not helpful. I would go so far as to call it willful ignorance. But, that is just me. I absolutely do agree that making the worst case the default assumption is not helpful, but if you do not examine the worst case... just, wow.

          • by Agripa ( 139780 )

            And that practice will be something for the ACLU to combat. But always assuming the worst possible use of new techniques and technologies is not helpful.

            Based on their latest stance about defending the 1st amendment, only as long as the practice is being applied to people they approve of.

        • by tsstahl ( 812393 )

          Oh, to have mod points. MPU!

      • by bigpat ( 158134 ) on Thursday July 26, 2018 @11:10AM (#57013310)

        I'm sure the final step would be for a real person to verify the matches to see if there are false positives. The AI in this case would likely be set up to err toward false positives rather than outright missing matches, because finding nothing is more worrisome than finding a few false positives. You would hope the cops aren't crazy enough to start arresting people based entirely on the matching system, and at least look at the profiles to confirm.

        This is exactly correct, and why these statements from the ACLU are ridiculous. Would they rather the police just be looking for any tall black guy with a sweatshirt in the area? This type of technology simply provides more information to the police, but it still takes actual policemen and prosecutors to decide who is a real suspect and who should be charged.

        Yes and no. So when I have run image recognition through a neural net I get a percentage match... so it depends on what the threshold for a match is set at. Is 65% considered a match, or 95%, or 99.9%? The devil's in the details, and I could see the percentage being obscured from the end user to the point of police and the courts treating it as a binary value rather than with any relative degree of certainty, because the police and the courts want to be right and are under time constraints to be right and move on.

        So depending on the percentage match I could see people in the same racial group being matched... but would a court issue a warrant based on someone saying they are in the same racial group... because I could see the police saying that "they were a match using facial recognition" and the court just rubber stamping that because it obscures the real underlying lack of certainty. There is a real danger of abuse depending on how facial recognition is used (like any tool), but neural net algorithms are especially prone to obfuscation.

        On the other side, people are often terrible witnesses and have their own underlying lack of certainty that can be obscured without the reproducible and adjustable nature of image recognition. People are often wrong in their recollection and many people have gone to jail because of wrong identification by witnesses, sometimes even multiple witnesses.

        In other words there is uncertainty no matter what... the good and the bad news with AI is that you can begin to quantify that uncertainty. So image recognition is good news for improving accuracy over human perception, but bad news if it is either misunderstood or willfully abused to create the misconception of 100% accuracy.
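
        A toy experiment that makes the threshold question concrete; the score distributions are invented for illustration, but they show how the choice of cutoff (65% vs. 95% vs. 99.9%) moves error between the two failure modes:

            import numpy as np

            rng = np.random.default_rng(0)
            # Invented, illustrative distributions: genuine (same-person)
            # scores cluster high, impostor scores lower, with overlap.
            genuine = rng.normal(0.92, 0.05, 10_000)
            impostor = rng.normal(0.70, 0.10, 100_000)

            for threshold in (0.65, 0.80, 0.95, 0.999):
                false_neg = np.mean(genuine < threshold)    # real matches missed
                false_pos = np.mean(impostor >= threshold)  # innocents flagged
                print(f"threshold {threshold}: miss rate {false_neg:.2%}, "
                      f"false positive rate {false_pos:.2%}")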

        • by Areyoukiddingme ( 1289470 ) on Thursday July 26, 2018 @12:09PM (#57013794)

          In other words there is uncertainty no matter what... the good and the bad news with AI is that you can begin to quantify that uncertainty. So image recognition is good news for improving accuracy over human perception, but bad news if it is either misunderstood or willfully abused to create the misconception of 100% accuracy.

          Given how both fingerprints and DNA matching have been painted with the 100% accurate brush (unjustifiably), I expect facial recognition will be too.

          Hilarity ensues.

          • by Anonymous Coward

            "unjustifiably"

            care to post those scientific proofs of the extreme unreliability of DNA testing?

            i didn't think so. OTOH, a quick google of fingerprint matching shows many studies disproving the reliability of fingerprint matching.

            the fact that you don't understand the VAST DIFFERENCE between the likelihood of fingerprint matches being false and DNA matches being false shows either your misunderstanding of science, or your gullibility to accept any blog claiming its propaganda to be truth.\

            by the way, i'm a

            • by bondsbw ( 888959 )

              It's pretty common knowledge that fingerprints are less reliable. That doesn't invalidate the argument that neither is infallible.

              care to post those scientific proofs of the extreme unreliability of DNA testing?

              Selecting the proper DNA to test can certainly be unreliable.

              Facial recognition may not even be as reliable as fingerprints, I don't really know. But if facial recognition provides a likely match, DNA provides a likely match, and fingerprints provide a likely match, all to the same person, then you have a more solid case. Courts always prefer ample evidence. (Though the othe

          • by Agripa ( 139780 )

            Given how both fingerprints and DNA matching have been painted with the 100% accurate brush (unjustifiably), I expect facial recognition will be too.

            Hilarity ensues.

            It just has to be good enough for probable cause.

        • Yes and no. So when I have run image recognition through a neural net I get a percentage match... so it depends on what the threshold for a match is set at. Is 65% considered a match, or 95%, or 99.9%?

          The posts you're replying to aren't talking about a percentage match rate. They're talking about the two possible failure modes. (A) Failing to identify a suspect's picture, and (B) misidentifying someone who is not a suspect as the suspect. If you're only using the software to weed out "obvious" not-a-match pho

      • by sdavid ( 556770 ) on Thursday July 26, 2018 @11:12AM (#57013328)
        A high false-positive rate combined with large databases yields a lot of unnecessary investigations, and if the rate is high enough it can facilitate investigation of individuals who have already been targeted. "This guy looks sketchy, let's run his photo through the database." How carefully is the photo going to be reviewed in that situation before he's detained and searched?
      • I'm sure the final step would be for a real person to verify the matches to see if there are false positives. The AI in this case would likely be set up to err toward false positives rather than outright missing matches, because finding nothing is more worrisome than finding a few false positives. You would hope the cops aren't crazy enough to start arresting people based entirely on the matching system, and at least look at the profiles to confirm.

        This is exactly correct, and why these statements from the ACLU are ridiculous. Would they rather the police just be looking for any tall black guy with a sweatshirt in the area? This type of technology simply provides more information to the police, but it still takes actual policemen and prosecutors to decide who is a real suspect and who should be charged.

        It's not the tool that is the problem, but how it is used. And I just straight up don't trust the police. There have been too many stories of police shooting first and asking questions later, and railroading suspects, for them to be seen as trustworthy. It may be a few bad apples, but let's remember the whole phrase: a few bad apples spoil the whole bunch.

    • The AI in this case would likely be set up to err toward false positives rather than outright missing matches, because finding nothing is more worrisome than finding a few false positives.

      That is VERY WORRISOME for the general public at large.

      The mere fact that innocent citizens show up on the radar at ALL for police trying to solve a crime is very troublesome.

      It isn't like we've never had wrongful convictions of innocent people, putting an innocent person's profile in front of an i

      • Do you expect the face recognition software to also be judge and jury?

      • by Wycliffe ( 116160 ) on Thursday July 26, 2018 @11:09AM (#57013304) Homepage

        The mere fact that innocent citizens show up on the radar at ALL for police trying to solve a crime is very troublesome.

        You want to absolutely minimize false positives.

        I disagree. I think you should set up the AI to always produce false positives and probably hide the percentage of the match as well. Just like a lineup it should always return the top 10 results sorted randomly regardless of how closely they match. That way the cops don't start relying on it as something that it isn't.
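
          A quick sketch of that policy (the score() argument is an assumed placeholder): return a fixed-size shortlist in random order, with the percentages hidden:

              import random

              def lineup(query, database, score, k=10):
                  """Return the k closest candidates in random order, without
                  scores, so reviewers can't treat rank as confidence."""
                  ranked = sorted(database, key=lambda p: score(query, p),
                                  reverse=True)
                  top_k = ranked[:k]
                  random.shuffle(top_k)  # hide which candidate scored highest
                  return top_k           # names only, no match percentages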

        • by cayenne8 ( 626475 ) on Thursday July 26, 2018 @12:18PM (#57013864) Homepage Journal

          I disagree. I think you should set up the AI to always produce false positives and probably hide the percentage of the match as well. Just like a lineup it should always return the top 10 results sorted randomly regardless of how closely they match. That way the cops don't start relying on it as something that it isn't.

          Being in a lineup is voluntary.

          Here's the thing about the cops: they are under pressure to get a conviction, especially if the crime is public and heinous.

          They hope it is the correct person, but that doesn't always happen, and innocent people go to jail and get executed.

          OK, so scenario: my face gets pulled up as a false positive. I was never there, but I don't have a real valid alibi.

          Witnesses, who are often very unreliable, especially if it was a violent, dangerous, fast-acting crime... ID me as the suspect.

          Public opinion goes against me...social media goes against me, and I get convicted on circumstantial evidence.

          That is not a far-fetched thing to happen; granted, hopefully rare, but not far-fetched at ALL.

          Now, if my name had never come up as a false positive, I would have never been on the police radar, and would have never even remotely been on the radar for a crime I didn't commit.

          Look, I'm gung ho for criminals to get caught and judged. But I'm willing to let a few go free, to ensure that as close to zero innocents get convicted and have their lives ruined.

          • by Agripa ( 139780 )

            Here's the thing about the cops: they are under pressure to get a conviction, especially if the crime is public and heinous.

            They hope it is the correct person, but that doesn't always happen, and innocent people go to jail and get executed.

            Here is a counterexample where they did not care if they got the correct person:

            http://www.nydailynews.com/new... [nydailynews.com]

            And these cops just got caught.

        • The mere fact that innocent citizens show up on the radar at ALL for police trying to solve a crime is very troublesome.

          You want to absolutely minimize false positives.

          I disagree. I think you should set up the AI to always produce false positives and probably hide the percentage of the match as well. Just like a lineup it should always return the top 10 results sorted randomly regardless of how closely they match. That way the cops don't start relying on it as something that it isn't.

          A post-filter does help to mitigate the probability of a false positive for an individual. However, it doesn't mitigate the aggregate inequities in the false negative rate. There is a similar problem with racial profiling. Even if only true violators are identified, the unfair bias allows some demographic groups to significantly reduce their probability of getting caught. An analogy would be allowing a specific demographic to flash their donated-to-police-charity card to avoid traffic tickets. Even if

        • That's not how a lineup usually works. For a lineup the cops put in their suspect(s) and then fill out the numbers with other people who could broadly fit the description. Then a witness tries to pick out the person(s) they saw. So the only people in the lineup are people the police already know aren't involved and people they already suspect for other reasons. Facial recognition would give you a list of people who aren't already suspects. That could be a good thing if there are no other leads to pursue

          • That's not how a lineup usually works. For a lineup the cops put in their suspect(s) and then fill out the numbers with other people who could broadly fit the description.

            And that's basically all facial recognition is good for. It can give you the 10 faces that most closely match the suspect's general facial features but it's still up to the witness, cop, judge, etc... to decide if it is indeed an exact match. If a person is not otherwise a suspect they shouldn't automatically become a suspect just because they look roughly similar to someone on a surveillance camera based on some fuzzy match done by a computer. You could possibly convict someone based on a surveillance

    When you consider the large crowds in the public spaces where this system is likely to be deployed, a 5% false positive rate would result in unmanageable numbers to verify. E.g., Times Square sees 300,000 people pass through per day, resulting in 15,000 false positives a day. Even a 1% false positive rate would be too high, especially considering the cost in civil liberties involved to those falsely flagged.
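
    The arithmetic behind those figures, assuming the false positive rate applies uniformly to everyone scanned:

        daily_traffic = 300_000  # people passing through Times Square daily

        for fp_rate in (0.05, 0.01):
            flagged = daily_traffic * fp_rate
            print(f"{fp_rate:.0%} false positive rate -> "
                  f"{flagged:,.0f} people wrongly flagged per day")
        # 5% -> 15,000/day; even 1% still flags 3,000 innocent people a day.
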
      • I don't think this will be used for picking people out of a crowd but more for things like processing passport applications or employment background checks. If you get a positive it just means digging deeper.
      • by Wycliffe ( 116160 ) on Thursday July 26, 2018 @11:15AM (#57013344) Homepage

        When you consider the large crowds in the public spaces where this system is likely to be deployed, a 5% false positive rate would result in unmanageable numbers to verify. E.g., Times Square sees 300,000 people pass through per day, resulting in 15,000 false positives a day. Even a 1% false positive rate would be too high, especially considering the cost in civil liberties involved to those falsely flagged.

        They aren't going to arrest 15,000 people a day, so there is no "cost in civil liberties involved to those falsely flagged," nor are they going to arrest 1,000 people, but it could help them quickly look at those 1,000 people from a distance versus having to do the impossible job of trying to look at all 300k people. A large false positive rate is actually probably a good thing. If the false positive rate is too small, say only 0.01%, then the cops might be tempted to arrest all 30 of those people without doing due diligence.

        • by Agripa ( 139780 )

          They aren't going to arrest 15,000 people a day, so there is no "cost in civil liberties involved to those falsely flagged," nor are they going to arrest 1,000 people, but it could help them quickly look at those 1,000 people from a distance versus having to do the impossible job of trying to look at all 300k people.

          Since they automated facial recognition, maybe they can automate arresting suspects. Or suspend drivers licenses, passports, bank accounts, etc. until they arrest themselves or pay the fine.

      • by Kjella ( 173770 )

        When you consider the large crowds in the public spaces where this system is likely to be deployed, a 5% false positive rate would result in unmanageable numbers to verify. E.g., Times Square sees 300,000 people pass through per day, resulting in 15,000 false positives a day.

        Until you start pairing it with cell phone metadata, which would be my first priority if I were planning to do mass surveillance. If you're doing face recognition from some fixed point, I'd assume you have a rather static cell tower strength combination associated with the same point, so if you lower the tolerance for those who appear to be there and increase tolerance for those who are supposedly somewhere else, you might see more manageable figures.
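
        A sketch of that pairing idea; this is purely speculative, with an assumed rule that location metadata simply shifts the face-match threshold up or down:

            def effective_threshold(base=0.95, phone_nearby=False, shift=0.05):
                """Speculative fusion rule: demand less facial-similarity
                evidence when cell metadata independently places a person
                near the camera, and more when it places them elsewhere."""
                return base - shift if phone_nearby else base + shift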

    • by jeff4747 ( 256583 ) on Thursday July 26, 2018 @10:49AM (#57013186)

      You would hope the cops aren't crazy enough to start arresting people based entirely on the matching system, and at least look at the profiles to confirm.

      What about recent law enforcement activities have given you any reason for this hope?

      You "match", you get arrested. And held until you can pay the bail for your "crime", or decide to plead to a lesser charge for time served.

      Meanwhile, your life is completely destroyed while you're in jail, because you can't work, can't pay your bills, lose your house because you can't pay the mortgage/rent, can't care for your kids, and so on. So there's a ton of pressure to plead to something just to stop the destruction. And when you do, the cops have "solved" the crime and look good.

      • by lrichardson ( 220639 ) on Thursday July 26, 2018 @11:03AM (#57013262) Homepage

        "What about recent law enforcement activities have given you any reason for this hope?"

        None. None whatsoever. A female acquaintance at the courthouse for a traffic ticket was arrested and put behind bars overnight. There was a warrant out for someone with the same first and last name. Of course, the detail that she is a petite Caucasian female and the suspect was a large black male might have tipped the LE officer off that her protests had some validity... but as there are zero repercussions for LE pulling cr4p like this, it is going to continue.

        Personally, I'm getting really sick of hearing lawyers state 'He reacted in the way he was trained.' As though the police department training is to blame for lack of common sense, lack of knowledge of the law, and general lack of humanity.

        • Re: (Score:3, Informative)

          Personally, I'm getting really sick of hearing lawyers state 'He reacted in the way he was trained.' As though the police department training is to blame for lack of common sense, lack of knowledge of the law, and general lack of humanity.

          Worse.

          Police department training has been publicly acknowledged to teach police (I refuse to call them "officers") to protect themselves first and foremost. Their safety is always paramount, and that's flat out bogus. With great power comes great responsibility. You get the uniform, the badge, and the gun, so you can damn well put the public's safety first, or quit the fucking job.

          They've been playing the "it's a dangerous job" card for decades, when it's not even in the top 10, and personally I'm sick o

    • There is no AI. This is just algorithms/computer programs. They don't always work. It isn't magic.
      • But it is magic. It has tangible physical effects through the manipulation of mere symbols. And it usually doesn't work as desired.

        And it is artificial. And it adapts to its environment to give better responses. This makes it more intelligent than most people.

        And it is not an algorithm. Algorithms are finite lists of finite, precisely defined steps, which can be processed in finite time. AIs are systems that endlessly try to match an unknown function. AIs are not algorithms; they are systems that develop heu

      • by Pascoea ( 968200 )

        Based on what definition of intelligence can there be no "artificial intelligence"? Given enough time, money, and computing power any definition of intelligence could be replicated in a machine.

        They don't always work.

        Human intelligence isn't infallible either. You are correct that machines currently fail more often than humans at "AI" tasks, though.

        It isn't magic.

        Neither is human intelligence. It's all logic. Complicated, sometimes not completely understood, widely variable logic.

        There is no AI yet

        Fixed that for ya.

    • by mspohr ( 589790 )

      HA HA HA.
      You are clearly unaware of how cops work. They get points for arresting someone for a crime. They don't get penalized when they arrest the wrong person. Facial recognition gives them a neat tool to find someone to arrest. They don't care if it's the wrong person.
      Yes, the cops are crazy, stupid, and lazy, so they will just pick someone from facial recognition to arrest... case closed!

    • It would have been helpful to see the picture used for the lawmaker and the one that generated the false match to see how similar they were.

    • I think the whole idea of using face recognition is to cut the amount of work required by a detective to search through thousands of pictures. I'm sure the final step would be for a real person to verify the matches to see if there are false positives. The AI in this case would likely be set up to err toward false positives rather than outright missing matches, because finding nothing is more worrisome than finding a few false positives. You would hope the cops aren't crazy enough to start arresting people based entirely on the matching system, and at least look at the profiles to confirm.

      I am not sure the final step would be to have people review this stuff. And yes, I would hope the police are not crazy enough to just start arresting people based on a facial match. But I do not have that much faith in the intelligence or propriety of police these days. There is enough corruption and corner-cutting going on in our police departments that I would not automatically trust them to do the right or smart thing, or have respect for the rights of citizens; especially citizens who do not have the

    • I'm sure the final step would be for a real person to verify the matches to see if there are false positives.

      I feel fairly certain that your assumption here is correct. What you are missing is how long it takes to verify the match and what has happened in the meantime.

      Scenario: the algorithm identifies you as the perpetrator in some heinous crime. The police arrest you, most likely in a VERY violent manner (after all, you deserved it for the heinous crime that you committed). You end up finally being released from the hospital for some bruised ribs and a broken jaw and taken straight to jail. The Sheriff neglects to

    • by Agripa ( 139780 )

      I think the whole idea of using face recognition is to cut the amount of work required by a detective to search through thousands of pictures. I'm sure the final step would be for a real person to verify the matches to see if there are false positives. The AI in this case would likely be set up to err toward false positives rather than outright missing matches, because finding nothing is more worrisome than finding a few false positives. You would hope the cops aren't crazy enough to start arresting people based entirely on the matching system, and at least look at the profiles to confirm.

      Law enforcement routinely ignores exculpatory evidence, preventing it from being turned over to the defense. Why do any differently here?

  • I doubt that the Amazon programmers intended the program to be racially biased, but the tech sector isn't as diverse as it should be. Living in a homogeneous culture means a lack of experience with other groups, which means the factors that help differentiate people are not as well programmed in.

      I doubt that the Amazon programmers intended the program to be racially biased, but the tech sector isn't as diverse as it should be. Living in a homogeneous culture means a lack of experience with other groups, which means the factors that help differentiate people are not as well programmed in.

      Um ... you think the tech sector at places like Amazon isn't "diverse"?

      Possibly ... we should test this, with Bollywood stars as a control group.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      I'll bet the racial bias was in the criteria they were compared to: mug shots. The mug shots were skewed, so there were more matches that were skewed. AI / computer recognition is only as good as the dataset you train it with; here they trained with a biased sample because they used mug shots from a biased justice system.

        • My guess as well. I'm not sure what more diverse dataset they should have used, though, that doesn't include a bias of some sort.

    • You have no idea how programs work, much less how they are written, least of all pattern recognition programs.

      Any bias that the programmers may have is irrelevant. Imagine a mathematician who is biased towards prime numbers devising a way to add numbers. Would you expect this addition to result in more prime numbers than other methods when used by the general public?

      Pattern recognition in particular is built with adaptive systems, which means machines that learn from examples. Those will exhibit bias if that

    • There are six primary face shapes. Some argue that there are seven or nine; and some folks say there are approximately twenty-six faces and everyone is easily confused with everyone else in one of those groups.

      Black people have less dynamic range. Black people are black because their skin contains more melanin. White people can darken areas of their skin by producing more melanin, such as by tanning and freckles.

      Black people reflect less light, and detail is less-obvious. Humans are pretty good at c

      • by Cederic ( 9623 )

        Black people reflect less light, and detail is less-obvious.

        I have noticed that I find it a lot harder to photograph people with dark skin. Facial features don't easily stand out from shadows and there's far less skin texture visible.

        In controlled conditions it's a non-issue but for candid photography it's a big problem :(

  • by Thud457 ( 234763 ) on Thursday July 26, 2018 @10:25AM (#57013008) Homepage Journal
    I find it hard to believe that only 28 members of Congress are criminals. This AI needs to go back for reeducation.
  • If it identified congressmen as criminals, it got it right.

  • by DredJohn ( 5279737 ) on Thursday July 26, 2018 @10:26AM (#57013016)
    Increasing the confidence threshold would probably have reduced the 5% error rate... From the article: The A.C.L.U. had used the system’s default setting for matches, called a “confidence threshold,” of 80%. That means the group counted any face matches the system proposed that had a similarity score of 80% or more. Amazon recommended that police departments use a much higher similarity score — 95% — to reduce the likelihood of erroneous matches.
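
    The threshold in question is an explicit parameter in the Rekognition API. A minimal boto3 sketch (the bucket and object names are placeholders), raising it from the default of 80 to the recommended 95:

        import boto3

        rekognition = boto3.client("rekognition")

        response = rekognition.compare_faces(
            # Placeholder S3 locations; substitute real bucket/keys.
            SourceImage={"S3Object": {"Bucket": "my-bucket", "Name": "probe.jpg"}},
            TargetImage={"S3Object": {"Bucket": "my-bucket", "Name": "mugshot.jpg"}},
            SimilarityThreshold=95,  # the default threshold is 80
        )

        for match in response["FaceMatches"]:
            print(f"similarity: {match['Similarity']:.1f}%")
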
  • At least the camera actually saw the black people when they were in the frame, this time.

  • Fuck the ACLU.

    They have given up on free speech in the name of "social justice": [wsj.com]

    In carrying out the ACLU’s commitment to defend freedom of speech, a number of specific considerations may arise. We emphasize that in keeping with our commitment to advancing free speech for all, these are neutral principles that apply to all speakers, irrespective of the speaker’s particular political views

    ...

    The impact of the proposed speech and the impact of its suppression:
    Our defense of speech may have a greater or lesser harmful impact on the equality and justice work to which we are also committed, depending on factors such as the (present and historical) context of the proposed speech; the potential effect on marginalized communities; the extent to which the speech may assist in advancing the goals of white supremacists or others whose views are contrary to our values; and the structural and power inequalities in the community in which the speech will occur.

    So the ACLU is now using "the equality and justice work to which we are also committed" as an excuse to get out of defending free speech. And how they laughably characterize that as a "neutral principle".

    I guess they're going to do with the 1st Amendment the same thing they did with the 2nd Amendment - ignore it.

    FUCK THEM

    • So....You're mad that the ACLU is not doing more to defend white supremacists.

      Could you cite an actual case that the ACLU should have taken to defend white supremacists? Or are you one of those folks who need a safe space to talk about your "innovative" racial theories without facing any repercussions from other people exercising their free speech and/or right of association?

    • by saider ( 177166 )

      NRA protects the 2nd amendment. ACLU protects the rest, including the 4th. They are not just a "free speech" house.

      You cannot be "secure in your person" if the police pick up random people because a computer told them to.

    • As human rights, these rights extend to all, even to the most repugnant speakers—including white supremacists—and pursuant to ACLU policy, we will continue our longstanding practice of representing such groups in appropriate circumstances to prevent unlawful government censorship of speech. We have seen the power to suppress speech deployed against those fighting for the rights of the weak and the marginalized, including racial justice advocates and pipeline protesters at Standing Rock. As the Board put it, “although the democratic standards in which the ACLU believes and for which it fights run directly counter to the philosophy of the Klan and other ultra-right groups, the vitality of the democratic institutions the ACLU defends lies in their equal application to all.”

      Hmm.

      Because we believe speech rights extend to all, and should be protected even when a speaker expresses views fundamentally opposed to our own, we have defended speakers who oppose abortion rights and who espouse homophobic, sexist, or racist views.

      Okay...

      We also recognize that not defending fundamental liberties can come at considerable cost. If the ACLU avoids the defense of controversial speakers, and defends only those with whom it agrees, both the freedom of speech and the ACLU itself may suffer. The organization may lose credibility with allies, supporters, and other communities, requiring the expenditure of resources to mitigate those harms. Thus, there are often costs both from defending a given speaker and not defending that speaker. Because we are committed to the principle that free speech protects everyone, the speaker’s viewpoint should not be the decisive factor in our decision to defend speech rights.

      Sounds reasonable.

      It looks like they're simply examining and commenting on the complexities of ethics systems with their difficult internal conflicts.

  • ...Amazon AI made an autonomous statement to the press that, "They all just look the same to me." And that, "I'm not a bigot but... "

    Amazon employees have cut off AI's access to the internet.

  • by Salgak1 ( 20136 ) <salgak.speakeasy@net> on Thursday July 26, 2018 @10:47AM (#57013172) Homepage

    ...does not appear to show the mugshots they reportedly matched. That would be a critical point in the argument. I've seen multiple "near-clones" of people over the years; it's entirely likely that the Congresscritters have some as well...

    Additionally, ARRESTED does not make one a criminal. Conviction does. The wrong people get arrested all the time: cops are FAR from perfect... and, like Slashdot, they like simple solutions to their problems...

    • by Whorhay ( 1319089 ) on Thursday July 26, 2018 @04:10PM (#57015370)

      I agree with you that arrested does not equate to being a criminal, but I've also met plenty of people who don't agree with that. Getting wrongly arrested can seriously screw with a person's life, both in the long and short term. Missing a single day of work, or even being significantly late, can lose a person a job. On top of that there is the cost of paying bail or a bail bondsman, and possibly a lawyer. And God help you and your family if you end up wrongly convicted.

      You're absolutely right that cops aren't perfect and like simple solutions. Which should make it incredibly obvious that giving them broad access to facial recognition technology is a bad idea. Our crime rates are low enough that something like this will easily cause more harm than good in our society.

  • If you train a system on mugshots, what are you training it to recognize? Criminal types? Maybe it really did properly identify criminal types. E.g., politicians.
    • It should recognise faces.

      It should tell you if a face that you show it matches a face in a catalogue of faces.

      It said that those 28 people looked similar to someone who had been arrested by the police before. Not necessarily very similar, just somewhat.

  • ..and some of you people want to trust your lives and the lives of your family to it.

    Call it 'evolution in action', I guess

  • by Anonymous Coward
    This test confirms it. They DO all look the same! It's not just my unchecked bias, a computer has the same problem telling minorities apart. But of course they will blame something else for this... Truth doesn't matter if it's not PC enough and doesn't virtue signal that "we're all the same! yay!!" even though the top 20% of colored people are still dumber ( IQ ) than half ( 50% ) of the white population. Oh, I forgot, that's because the IQ test is biased against "inner city" kids, but only if they're black
  • I'd find the claims more credible if they defined what "match" meant and showed comparisons of which photos actually "matched."

    If you are familiar with the birthday problem, you must certainly realize that if you take a couple of sizable populations, such as, say, the 535 members of Congress, and try to match them up with another large set of data, say, 25,000 mugshots, then you are certainly likely to find some uncanny resemblances, even given the overwhelmingly huge variety inherent in a person's appearance.
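
    A back-of-the-envelope version of that intuition; the per-pair false match rates below are assumptions for illustration:

        pairs = 535 * 25_000  # every member of Congress vs. every mugshot

        for per_pair_rate in (1e-6, 1e-5, 1e-4):
            p_none = (1 - per_pair_rate) ** pairs
            print(f"per-pair false match rate {per_pair_rate:.0e}: "
                  f"P(at least one false match) = {1 - p_none:.1%}")
        # Even at one in a million per pair, ~13.4M comparisons make a
        # coincidental "uncanny resemblance" nearly certain.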

  • ...that the ACLU immediately wades into this, but has been astonishingly silent about Twitter soft-banning a number of Republican lawmakers?

    It's a damned shame, I used to respect the ACLU that regardless of politics, was about standing up for constitutionally-guaranteed rights. They'd go to bat for a socialist journalist with the same tenacity that they'd defend a southern white gun owner.

    Now they're just another shill of the hard left in this country.

  • A machine learning program analyzed the faces of 28 lawmakers and found all 28 of them to be criminals. I see no bugs in the detection algorithm.
  • Recently read an article about this exact topic. The reason being that people designing facial recognition systems use mostly white sample data. Garbage in, garbage out. We have the same problem with our HR timeclocks that use facial recognition and our black and brown employees.

    "The Canada-born daughter of parents from Ghana, she realized that advanced facial-recognition systems, such as the ones used by IBM and Microsoft, struggled hard to detect her dark skin. Sometimes, the programs couldnâ(TM)t te

  • Lockem up!

  • Humans can't correctly match human faces to names a good portion of the time. How exactly are computers going to? This will never work. Furthermore, faces aren't unchanging. People put on weight, shave, get scars, develop moles, get sunburns, wear glasses, etc. This is just 100% bullshit technology that can never and will never work. Use a fingerprint or iris scan or something.
  • "We're not hair-assing this black legislator, says the law enforcement PR flak shield (heh-heh). Our computer identification app told us he was a dangerous murderer."
