AI Government United States

FTC Issues Stern Warning: Biased AI May Break the Law (protocol.com) 82

The Federal Trade Commission has signaled that it's taking a hard look at bias in AI, warning businesses that selling or using such systems could constitute a violation of federal law. From a report: "The FTC Act prohibits unfair or deceptive practices," the post reads. "That would include the sale or use of -- for example -- racially biased algorithms." The post also notes that biased AI can violate the Fair Credit Reporting Act and the Equal Credit Opportunity Act. "The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits," it says. "The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance." The post mirrors comments made by acting FTC chair Rebecca Slaughter, who recently told Protocol of her intention to ensure that FTC enforcement efforts "continue and sharpen in our long, arduous and very large national task of being anti-racist."
  • You can't really prove with some obtuse MLP that you selected based on an allowed criterion which just happens to be correlated with all the non-allowed criteria.

    You need humans capable of either lying or being intentionally blind to the truth of Bayesian mathematics (aka common sense) to do that.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      What's an Obtuse My Little Pony?

    • Easy, just don't feed the prohibited criteria into the algorithm that makes those decisions.

      You can still use that data for other types of analyses, just not for decisions that could result in unfair discrimination.

      • by godrik ( 1287354 ) on Tuesday April 20, 2021 @04:10PM (#61295254)

        Well, it turns out that is not enough. When you train your machine learning model, if the training data is biased, then the model will learn that bias. You don't need the "race" field to be there. The information is latent and already encoded in lots of other fields, such as zip code, first name, name of the high school, zip code of the college you attended, and so on.

        It turns out machine learning models are VERY good at rediscovering these latent variables. And if your input data has a strong bias, then the algorithm will discover that latent variable.

        There was a paper a couple of years ago that predicted the results of the sentencing phases of trials based on somewhat anonymized/deracialized data. It turned out that one of the dimensions in the middle of the machine learning model was the race of the defendant.

        They did a similar study on salary based on resumes, and they could recover both race and gender.
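
        A minimal sketch of how that kind of leak can be checked, assuming scikit-learn and pandas; the DataFrame df and the proxy column names (zip_code, first_name, high_school, college_zip) are made up for illustration, not taken from the studies above:

        # Hypothetical probe: can the protected attribute be predicted from proxies alone?
        import pandas as pd
        from sklearn.compose import ColumnTransformer
        from sklearn.preprocessing import OneHotEncoder
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import Pipeline
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        proxy_cols = ["zip_code", "first_name", "high_school", "college_zip"]  # made-up names
        X = df[proxy_cols]
        y = df["race"]  # used only as an audit label here, never as a model input

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

        probe = Pipeline([
            ("encode", ColumnTransformer([("onehot", OneHotEncoder(handle_unknown="ignore"), proxy_cols)])),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        probe.fit(X_train, y_train)

        baseline = y_test.value_counts(normalize=True).max()  # accuracy of always guessing the majority class
        recovered = accuracy_score(y_test, probe.predict(X_test))
        print(f"majority-class baseline: {baseline:.2f}, probe accuracy: {recovered:.2f}")
        # If the probe beats the baseline by a wide margin, the proxies leak the attribute,
        # and any model trained on them can pick it up implicitly.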

        • And this is saying that before you sell AI, you need to do those sorts of tests. You can't just throw some data at it, get it working, and ship it out the door.

          I read this as a notification that companies need to do their due diligence and ensure that their software is fit for purpose before shipping it out the door. Failure to do that will result in the FTC clucking a bit and asking for some scraps.

        • So, it's bad because it reflects actual conditions or it's bad because you can't actually exclude the prohibited factors? Either way it sounds like they're trying to "wag the dog" by going after a product of current conditions and expecting the conditions to change as a result. Sadly, not a new mistake, but one that illustrates the overall approach of many who claim to be reformers.
      • It's cute that you think that's enough to get you off.

      • Even if you don't feed in the prohibited criteria, if the non-prohibited criteria correlate well with the prohibited criteria you could still see liability. Even if you weren't aware of the correlation, it would be nearly impossible to explain to a jury how your ML algorithm produced the same biased results as if you had used the prohibited criteria.

        In most discrimination cases, the defendants are seldom asked, "Do you hate ${protected characteristic}?*" Instead, the prosecution/plaintiff shows how stati

        • by cayenne8 ( 626475 ) on Tuesday April 20, 2021 @04:37PM (#61295332) Homepage Journal
          Is it discrimination if it is true?

          I mean, what if you use AI or whatever...and use the widest latitude of training data, non-biased to the maximum extent, etc....and it finds that there are behavioral and cultural differences that happen to align along racial and/or sexual lines?

          I mean, there ARE differences between people and cultures in this world that do influence behaviors and expected results.

          Is reporting the truth, even painful...now illegal?

          Man, we're in 1984 full blown....it just took a bit longer than expected I guess.

          Truth is no longer truth.

          • "Discrimination" is one spelling that covers several different words: a technical term, a casual term, a legal term, and more. The confusion arises when people ask whether one of those words is equal to a different word that merely shares the same spelling.

          • by truedfx ( 802492 ) on Tuesday April 20, 2021 @06:18PM (#61295640)
            It's pretty obvious that there are differences between people. It has also been illegal for a long time to use those differences to discriminate when doing so amounts to discriminating based on a protected characteristic, and that isn't full-blown 1984; it becomes quite obvious once you look at some concrete scenarios.

            A trivial example is parental leave. That by itself is not a protected characteristic, but if you have any policy based on how much parental leave people take, then that policy is almost certainly going to result in gender-based discrimination. Another example is certain clothing choices. Those by themselves are not protected characteristics, but if you have a policy based on the presence or absence of headwear, then that policy is almost certainly going to result in religious discrimination. There are plenty of other examples you can think of.

            People know that policies shouldn't be based on such things unless they have a very good reason, but AIs don't. If you only filter out explicitly protected characteristics, you would still be feeding indirect indicators of protected characteristics into the model, and you would still end up with an AI that models any historic biases.
          • Re: (Score:1, Troll)

            by gillbates ( 106458 )

            Discrimination means whatever the woke want it to mean.

            There was a certain N word which originally meant a stupid or foolish person, but was used so frequently to disparage one particular race that it became a racial epithet. Today, there's an R word that originally meant someone who discriminates on the basis of race, but has been applied almost exclusively against a certain race that it, too, has become a racial epithet. And it no longer retains its original meaning, but instead, in its most frequent

            • Snowflake, quit your whining, put on your big-boy pants, and go face the world like a man.

              Everything in this post is you whining that you're a victim like a fucking child. Yep, the world is changing and you're not happy with it. Boo hoo. Being a crybaby about it on the internet does not make you look good.

              Adapt or die.

              • Why should anyone adapt to demands for lethal changes? If there is a group trying to eviscerate the Constitution in order to establish a totalitarian dictatorship, shouldn't I resist their demands instead of "adapt" to them? Should I adapt to lies or point out that when they say "anti-racist" what they really mean is "a racist Communist dictatorship, but with my ethnic minority in charge" (this is what Prof. Kendi calls for in his work)?

                No, I'd prefer to tell you to go f-k yourself instead. You want ch

      • by Z00L00K ( 682162 )

        That's usually not how an AI works. It usually works through a success/fail strategy: if it's praised for a certain behavior and punished for another, it will learn what's "right". However, this can be abused [wikipedia.org].

        An AI isn't very smart either, so it can end up in situations where a minor "bad" maneuver early on could be used to avoid a huge problem later. E.g. you are on a path on the wrong side of a ridge where it's "costly" to walk over to the other side to find the parallel path. But if you continue your path

      • I could make a solid guess on someone's race, religion, and political beliefs based on their postcode. You say you're from Detroit, I'll assume you're black. Say you're from South America, I'll assume you're Latino and Catholic. Say you're from a major city, I'll assume you vote left.

        Don't forget that many social issues have disproportionate representation among certain minorities. Absent parentage for example affects over 60% of Black Americans and 50% of American Indians, compared to 24% of White American

    • by laird ( 2705 ) <lairdp@gm a i l.com> on Tuesday April 20, 2021 @04:13PM (#61295260) Journal

      No, it doesn't just prohibit biased inputs, it prohibits biased outcomes. So if a neural net makes indirect inferences that result in (for example) black people not being allowed to finance their houses the same as comparable white people, that's a biased outcome. So it means not just avoiding putting race in as an input, but doing sensitivity analysis to make sure that the model doesn't use some other factor which leads to bias, such as down-scoring people who live in the black part of town more than people who live in the white part of town.

      Similar logic to the Voting Rights Act - the goal of the VRA was not just to prevent obvious racism but also any mechanism that had racist impact, even if it was phrased in non-racist ways (e.g. allowing voting with forms of ID used more by white voters, and prohibiting voting with forms of ID used more by black voters, to use a real-world example). Biases can often be implicit rather than explicit, so there's a good reason that they require a fair outcome.
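
      A minimal sketch of that kind of outcome check, assuming numpy/pandas; the scores array, the group labels, and the 0.5 threshold are all hypothetical, and the group label is used only for the audit, never as a model input:

      # Hypothetical outcome audit: compare approval rates across groups.
      import numpy as np
      import pandas as pd

      def approval_rates(scores: np.ndarray, group: pd.Series, threshold: float = 0.5) -> pd.Series:
          # Fraction of each group whose score clears the approval threshold.
          approved = scores >= threshold
          return pd.Series(approved).groupby(group.values).mean()

      rates = approval_rates(scores, group)
      ratio = rates.min() / rates.max()  # disparate-impact ratio
      print(rates)
      print(f"disparate-impact ratio: {ratio:.2f}")
      # A common rule of thumb (the "four-fifths rule") treats a ratio below 0.8 as a
      # red flag that the outcomes deserve a closer look, not as proof of bias.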

      • Comparable is not an objective standard.

        You really need those lying humans to sell your unbiased bias.

      • Exactly. If you take the case of credit, the algorithms are probably designed to figure out who, for instance, is a worse credit risk. In contrast the law is designed to prevent discriminating against people of certain groups even if they are a worse credit risk. They get a bit of a free pass because of their race. In essence the laws force discrimination under the guise of preventing it. However, the computers have a hard time coping with it. Machine learning isn’t designed to intentionally per
        • by Anonymous Coward

          Exactly. If you take the case of credit, the algorithms are probably designed to figure out who, for instance, is a worse credit risk. In contrast the law is designed to prevent discriminating against people of certain groups even if they are a worse credit risk. They get a bit of a free pass because of their race

          At least for credit, they get around that by normalizing against groupings of bad risk and only adjusting the scale.

          That is, they do have data showing some groups are a larger risk than others, and those groups happen to match one of the prohibited discrimination classes.
          So they intentionally grant credit no matter what the risk level is; credit isn't denied on that basis, so it isn't legally discriminatory.

          Instead they discriminate based on how much information they have on a person to determine a risk.
          Since "we know

          • "So they intentionally grant credit no matter what the risk level is, it isn't denied based on this, so isn't legally discriminatory."

            Yes, it is because it increases the amount of risk the bank currently carries on its books, thus triggering a reduction in credit availability elsewhere. In other words, this ultimately results in credit being denied elsewhere to the group which carries lower risk. Federal guarantees to avoid that bad debt being counted as such are what brought us the last major recession due
            • by laird ( 2705 )

              You're forgetting that a racial bias gives an unfair bias towards (for example) white borrowers, who would be extended credit when they were bad credit risks, before black borrowers who were better credit risks, because the algorithm was distorted by the racial bias. So the bank should be better off, not worse, by giving credit based on actual risk and not implicit racial factors. Remember, the goal isn't to give credit to people who are bad risks, the goal is to give credit to people who are good risks

              • "You're forgetting that a racial bias gives an unfair bias towards (for example) white borrowers"

                No I'm not and no it doesn't. Algorithms don't have 'racial bias' and this thread is explicitly talking about decisions based on financial criteria.

                "For example, if someone lives in "the black part of town" then a neural network could learn that because other people in that part of town might be bad risks, to downgrade the credit of anyone from that part of town, unfairly penalizing someone who has earned good c
                  First of all, if people in that part of town have a higher chance of defaulting, the algorithm doesn't become more accurate by assuming that a bunch of loaded social-science reasoning, such as you are doing, is the cause and ignoring that. It becomes more accurate by accounting for that in proportion to actual outcomes. Modern learning algorithms are able to and do correlate a value like that relative to many other values, including 'good credit.' The neural net will account for people from that neighborhood with go

                  • "If I'm a member of a group that on average has less wealth then if the algorithm can discover that I'm a member of that group through some correlation of features, it might penalize me."

                    It MIGHT, if doing so correlates to better outcomes than actually looking at your individual wealth; otherwise it won't, because it lacks any sort of bias that would lead it to do so. You are talking about black-and-white outcomes derived from loose correlations to averages instead of what algorithms like this actually do, which i
                  • "This type of analysis doesn't have to be used to force equality of outcomes. It can be about not allowing black boxes that are able to use hidden correlations to decide peoples fates."

                    I agree. There are plenty of reasons we should be afraid of these kind of technologies controlling our world and our fates. But that isn't the argument you and others are making here.

                    "Even if the algorithm is more accurate doesn't mean it's fair."

                    What could be more fair? Your example gave an invalid correlation being used, th
                    • What could be more fair? Your example gave an invalid correlation being used, this would result in lower accuracy, a valid COMBINATION of criteria which are correctly weighted would give a more accurate result.

                      No, my example was meant to be a correct correlation. For example, assume the machine learning algorithm has found that it is more likely I am part of a particular group based on my purchasing habits. Since this group has lower-than-average wealth, the algorithm takes this prior information (in a

        • by laird ( 2705 )

          No, the goal isn't to give credit to people who are bad risks, the goal is to give credit to people who are good risks, but who were unfairly given bad credit ratings due to explicit biases or implicit biases. For example, if someone lives in "the black part of town" then a neural network could learn that because other people in that part of town might be bad risks, to downgrade the credit of anyone from that part of town, unfairly penalizing someone who has earned good credit because they are living in the

          • How many people would actually fall through that crack? How many good credit risks are living in neighborhoods where the average credit risk is really bad?

            Say you have a zip code that covers a large trailer park and exactly one mansion. The average person in that zip code may have a piss-poor credit score, but the person who owns the mansion has excellent credit. It isn't based on just zip codes, there are a number of far more important factors involved, like income and a history of paying your debts.

      • 'So if a neural net makes indirect inferences that result in (for example) black people not being allowed to finance their houses the same as comparable white people'

        At that point you'd be biasing AGAINST some white people in favor of lesser qualified black people in order to attain your equitable outcome. THAT is illegal. Race isn't just a protected class, it is protected because it is a logically invalid class. Discriminating for an equitable distribution of protected classes is no more valid than discrim
      • Look, if you remove bias from the input but still get biased outcomes, then the problem is somewhere else.

        And why would you be trying to forcibly equalize outcomes anyhow? Forcing pre-determined outcomes renders all individual efforts and differences moot. Works better for ants than for humans, what with individuality and free will being real things, and equity not.

        Now, if you're sure that variations in outcome are driven by some unfounded bias, (i.e., "people who look like that are bad"), but expl

    • It is only a matter of time before someone trains a neural net to evaluate the output of other neural nets for racism. It will be difficult for a company to dispute the judgement that their neural net is racist when that judgement is based on the output of their own neural net.

      IMHO, the only real solution is to ban companies (and governmental entities) from using neural nets to make life altering decisions about human beings.

      It's OK to use a neural net to decide who gets the spam, not who gets the loan.
      • It is only a matter of time before someone trains a neural net to evaluate the output of other neural nets for racism. It will be difficult for a company to dispute the judgement that their neural net is racist when that judgement is based on the output of their own neural net.

        When that day comes, the problems will all be solved.

        Why?

        Because right now the differences are so gigantic that you can see them from space. Literally, a country in Africa was using satellite imagery to determine which neighborhoods to send targeted COVID relief to, because the visual difference between poverty and wealth was so apparent. I don't doubt you could do that in the US as well. Map race and population density onto a satellite image, and you'll clearly see the lines between dense shared housing in

    • by stikves ( 127823 )

      It will actually be quite difficult to have an "expected non-biased outcome" without giving sensitive information as inputs.

      I will be frank. If you have an AI for giving out mortgage loans, it will look pretty racist even if it only has access to financial information. It will not only reject minorities more often, but they will also be eligible for lower amounts even when approved.

      There is a simple reason for that. When you look at the income distributions for different groups, there are big gaps. So a hig

      • by laird ( 2705 )

        The answer is to not give race as an input into the algorithm, but to use race in analyzing the outputs of the algorithm. If two groups of people with the same fundamentals get different scores depending on their race, then the algorithm has an implicit racial bias. Data analysts do this sort of sensitivity analysis all the time.
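
        A minimal sketch of that kind of sensitivity analysis, assuming pandas; the DataFrame audit and its columns (score, income_band, credit_history_band, group) are hypothetical names, and the group label exists only for this check, not as a model input:

        # Hypothetical check: same fundamentals, different group - same score?
        import pandas as pd

        def score_gap_by_fundamentals(audit: pd.DataFrame) -> pd.DataFrame:
            # Average model score per (fundamentals, group) cell ...
            cell_means = (audit
                          .groupby(["income_band", "credit_history_band", "group"])["score"]
                          .mean()
                          .unstack("group"))
            # ... and the spread across groups within each cell of identical fundamentals.
            cell_means["gap"] = cell_means.max(axis=1) - cell_means.min(axis=1)
            return cell_means.sort_values("gap", ascending=False)

        print(score_gap_by_fundamentals(audit).head())
        # Large gaps in cells where the fundamentals are the same suggest the model is
        # keying on the group (or a proxy for it) rather than on the fundamentals.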

    • easy.
      measure the results

  • "Move fast and break things" is coming to an end, I think

    Throwing stuff against the wall to see what sticks/works has GOT to stop.

    • "Move fast and break things" is coming to an end, I think

      Think that all you want. The budget says otherwise.

      Throwing stuff against the wall to see what sticks/works has GOT to stop.

      Kind of hard to complain about that tactic when it's Chapter Two in the Learn to Code for Shits, Giggles, and Profit handbook.

      Perhaps what has GOT to stop, is assuming people know any better.

  • Finally (Score:3, Insightful)

    by unixcorn ( 120825 ) on Tuesday April 20, 2021 @03:55PM (#61295196)

    Now there is a reason for anyone/everyone to stop calling any software that appears to make a decision "AI". As a software developer, I have yet to see any evidence of a true artificial intelligence, yet the buzzword seems to get slapped on everything. Without consciousness, software can't be biased. The coders, or those who make the business rules, well, that's another story.

    • It is a Hard Problem[tm] to create an AI that groks subtext in the time dimension. The unfortunate tendency today, as more and more people become semi-literate in the Internet Age is to focus on individual words. Some word Good, some other word Bad. Luckily, the attention span of your average political gonk seems to be getting shorter and shorter, so once the Problem is solved we will see an improvement, provided we can avoid shrinking the acceptable diction down to a singularity in the meantime.

    • by jythie ( 914043 )
      Stop? Hate to break it to you, but these kinds of systems have been "AI" since the 60s or 70s according to people who work in the field. That they fail to meet some philosophical or sci-fi definition dreamed up by people in other fields is irrelevant.
      • Ahem... this guy worked in the field. https://www.amazon.com/Silicon-Invention-Microprocessor-Science-Consciousness-ebook/dp/B08W742297

        Also, you have the order of operations backwards: the definitions were changed to accommodate systems which only exhibit complex behavior, a subsequent lowering of the bar. This has been hashed out around here hundreds of times.

         
      • That some "people who work in the field" misuse terminology doesn't make that terminology correct. You don't get to redefine common words just because it gives you a boner over your project.
      • I think what he meant was "don't label the software as 'biased'
        while giving the people that developed it a pass".
        Microsoft's debacle with Tay indicated a problem with Microsoft's testing, not with the software being 'biased'.

    • "AI" will be able to be "true AI" just as soon as an abacus becomes the sack of potatoes it represents. Symbolic representations do not and never will have self-awareness; we are projecting our awareness onto them and anthropomorphizing them. Remember those little 'answer a question' things the girls would make from paper as children? Or magic 8 balls? Or just a palm reading or horoscope? Or even getting immersed in a game, book, or movie experience? This is what we are doing when we utilize symbolic machines runn
  • But it won't tell black people jokes. It was trained that way since birth.

  • by gurps_npc ( 621217 ) on Tuesday April 20, 2021 @04:24PM (#61295294) Homepage

    I always despise it when a company, or worse, the government does an investigation and says:

    "The employee followed policy." Especially when they say it as if this means they are not a criminal.

    Most members of Purdue Pharma followed policy when they pushed opioids. Most members of the KKK followed policy when they terrorized and murdered. Most members of the NYC Police Department followed policy when they raided the Stonewall Inn.

    That is not an excuse, in fact it makes it worse.

    If your policy is to commit a crime, that is an indictment of the policy, not a vindication of the employee.

    You look at the results and see whether they were wrong. Once that is done, then you decide whether to blame the employee or the company.

    When you find the employee followed policy, that means their BOSSES ARE THE PROBLEM.

    • by Actually, I do RTFA ( 1058596 ) on Tuesday April 20, 2021 @04:32PM (#61295314)

      Actually, "Person X followed policy" is an important statement. It means exactly what you said - Person X probably shouldn't be punished, but their boss's boss's boss should be.

      • Depends. Would a reasonable person in an abstracted context understand that what they did was illegal, despite it being policy?

        Then you're still culpable. Also, in the eyes of the law, not knowing the law isn't a defense, so policy or not, the whole chain gets to swing.

          • I said "Person X probably shouldn't be punished." Were you killing people with a handgun? Then yes, you should be. Did a corporate lawyer explain to you it was legal? Then you should probably be fine for dumping waste X in Y.

            Some people, like engineers, we hold to higher standards. But I'm fine giving people the benefit of the doubt - as long as we hold the people who gave the orders responsible. If that means they need to testify against their bosses, that's one thing. But no, a bank teller shouldn't go to jail for applying an

        • Don't forget that a person who follows company policy is by default being extorted and threatened into it, at risk of losing their job or getting blacklisted from the industry.

      • I have a counter-proposal. Everyone from the person who created the thing on up to anyone above them who even knew they were creating the thing shares culpability.

        You can't solve the problem by having only the worker be culpable, but you can't solve it by having only the bosses take responsibility either.

  • by Wolfier ( 94144 ) on Tuesday April 20, 2021 @05:08PM (#61295420)

    Bias in the statistical sense or Bias in the social sense?

    I'm certain that studies that say "people of certain ethnic descent are more vulnerable to ailment X" are usually not called "racist", as they seek to discover additional help needed by some people.

    However, now imagine there's another study that says "drivers affected by ailment X are a cause of car crashes".

    Put the two together logically and you'll end up with "drivers of certain ethnic descent are more likely to cause car crashes". So, two non-racist statements will produce a conclusion that is perceived as "racist".

    I think any AI that arrives at this conclusion cannot be blamed. Instead, AI should allow studies of its reasoning such that the decision processes, inner node weights, etc. can be identified, and it can be judged on a case-by-case basis whether it is an "actual mathematical bias" or merely a "logical conclusion that is taboo".
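
    A minimal sketch of one way to make that kind of case-by-case study possible, assuming scikit-learn; the black-box model, X_audit, and feature_names are hypothetical stand-ins, and the transparent surrogate is only an approximation of the real decision process:

    # Hypothetical surrogate audit: fit a transparent model to mimic the black box,
    # then read its coefficients to see what the decisions are keying on.
    from sklearn.linear_model import LogisticRegression

    surrogate = LogisticRegression(max_iter=1000)
    surrogate.fit(X_audit, model.predict(X_audit))  # imitate the black box's yes/no decisions

    agreement = (surrogate.predict(X_audit) == model.predict(X_audit)).mean()
    print(f"surrogate agrees with the black box on {agreement:.0%} of cases")

    for name, coef in sorted(zip(feature_names, surrogate.coef_[0]), key=lambda p: -abs(p[1])):
        print(f"{name}: {coef:+.3f}")
    # The surrogate is only a rough picture, but judging "mathematical bias" versus
    # "taboo conclusion" case by case needs at least this much visibility.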

    • by Anonymous Coward

      According to the students of "Critical Theory", all science and data are created to support oppression and marginalization. Only Critical Theory can see through to the reality, that there are no differences between people except those created by the ruling elite to oppress the masses. If this sounds Marxist, that's because it is indeed founded in Marxism, but not the oppressive kind that every Marxist regime in the last 110 years has engaged in. It's the "true Marxism", much as Scientology is the true scien

  • by jythie ( 914043 ) on Tuesday April 20, 2021 @05:51PM (#61295546)
    I think the real lesson here is not whether AI systems are trained with bias, but that these opaque, unauditable messes are being used in situations where they are making consequential decisions. To me, all this really says is that if companies cannot figure out how their system is getting its results, they should be in legal trouble for using those outputs.
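
    A minimal sketch of one standard audit step for an otherwise opaque model, assuming scikit-learn; the fitted model, X_test, y_test, and feature_names are hypothetical, and permutation importance is only a partial answer to the auditability problem:

    # Hypothetical audit: which inputs actually drive the opaque model's output?
    from sklearn.inspection import permutation_importance

    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, mean_drop in sorted(zip(feature_names, result.importances_mean),
                                  key=lambda pair: -pair[1]):
        print(f"{name}: {mean_drop:.3f}")
    # Features whose shuffling hurts performance the most are the ones the model leans on;
    # if those turn out to be thin proxies for a protected class, that is exactly the kind
    # of result a company should be able to explain before deployment.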
  • by misnohmer ( 1636461 ) on Tuesday April 20, 2021 @06:44PM (#61295730)

    Car insurance, life insurance, and probably other kinds of insurance have been discriminating based on sex for ages, i.e. male rates are different from female rates. If that is legal, why does it matter whether it's implemented in some software or using paper tables and slide rules?

    • Exactly. There ARE patterns and differences between people.

      Women and men ARE different. Just look inside their underpants.

      For example, there *is* more crime and violence among refugees. But not because "they are an inferior race" or some other bullshit. But only because if you come from a place with only war and terror and horrors to shape you and traumatize you, you turn into somebody like that too!
      That does not mean I hate anyone like that or that I would "discriminate" in that non-literal sense in which

      • So in any case, if we cure the trauma and/or the illness, would you murder again? Of course not!

        Yeah, unfortunately for that logic, you can only determine your failure to cure by the murder of another person.

        I personally do not have a cavalier attitude toward that testing scenario.

  • by Anonymous Coward

    It correlates well with productivity, education, health, income, and correlates very well *against* crime, drug addiction, gambling, teen pregnancy, and majoring in gender studies.

    Ooops! It also correlates with being Asian, and against being black. Too bad reality is biased.

  • "continue and sharpen in our long, arduous and very large national task of being anti-racist" - acting FTC chair Rebecca Slaughter

    This is obviously someone with an agenda. Similar to when Anita Sarkeesian "investigated" sexism and misogyny in gaming and, lo and behold, she discovered exactly that. The subsequent controversy caused the end of a number of careers and at least one suicide.

    On August 5, Commissioner Rebecca Kelly Slaughter of the Federal Trade Commission received wid
  • I wonder WHO gets to determine if there is bias present or not. What I perceive as bias may not match your perception of bias. If reality is "biased" is this legislation tantamount to trying to declare Pi = e = 3?

    {o.o}

  • The most clueless term uttered in all of history forever.

    PROTIP: Bias is the entire point and only function of a neural net. Even the "spherical horse on a sinusoidal trajectory" one the fraudsters keep calling "AI".

    You mean "not YOUR bias".
    Aka not goosestepping down the exact same path as your personal social conditioning and deluded beliefs of fake social norms that effectively are just discrimination and bullying again. Which you selfishly believe *must* apply to all the lifeforms in the universe. Even th

  • So an AI system looks at data from arrests and sees that black men are far more likely to commit crime.

    Is this "bias" or is this just objective fact from the data?

    Certainly angry activists will insist RACISM! in the same sense they're crying that facial recognition is "racist", setting aside the physical reality that dark things are harder to discern. (Facial recognition has issues with very white faces as well, seemingly working best for moderately-colored skin like most Hispanics.)

  • Truth will set you free? Nope, Truth will send you to jail.

  • Seriously, is there a cabal of white-supremacist AI engineers that's infiltrated tech companies industry-wide with the goal of building racist AIs?

    In my time in tech, I've witnessed a decent amount of misogyny, occasional snobbery about universities and degrees, bits of homophobia and transphobia here and there, some of what could be called ableism, and a vast amount of insensitivity and general dickishness; especially towards anybody perceived as less than clueful. But outside of the swastika and GNAA shi
