FTC Issues Stern Warning: Biased AI May Break the Law (protocol.com)
The Federal Trade Commission has signaled that it's taking a hard look at bias in AI, warning businesses that selling or using such systems could constitute a violation of federal law. From a report: "The FTC Act prohibits unfair or deceptive practices," the post reads. "That would include the sale or use of -- for example -- racially biased algorithms." The post also notes that biased AI can violate the Fair Credit Reporting Act and the Equal Credit Opportunity Act. "The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits," it says. "The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance." The post mirrors comments made by acting FTC chair Rebecca Slaughter, who recently told Protocol of her intention to ensure that FTC enforcement efforts "continue and sharpen in our long, arduous and very large national task of being anti-racist."
Well that's the end of that (Score:2)
You can't really prove with some obtuse MLP that you selected based on an allowed criterion which just happens to be correlated with all the non-allowed criteria.
You need humans capable of either lying or being intentionally blind to the truth of Bayesian mathematics (aka common sense) to do that.
Re: (Score:2, Interesting)
What's an Obtuse My Little Pony?
Re: (Score:2)
Re: (Score:2)
Easy, just don't feed the prohibited criteria into the algorithm that makes those decisions.
You can still use that data for other types of analyses, just not for decisions that could result in unfair discrimination.
Re:Well that's the end of that (Score:5, Interesting)
Well, it turns out that is not enough. When you train your machine learning model, if the training data is biased, then the model will learn that bias. You don't need the "race" field to be there. The information is latent and already encoded in lots of other fields such as zip code, first name, name of the high school, zip code of the college you attended, ...
It turns out machine learning models are VERY good at rediscovering these latent variables. And if your input data has a strong bias, then the algorithm will discover that latent variable.
There was a paper a couple of years ago that predicted the results of the sentencing phases of trials based on somewhat anonymized/deracialized data. It turned out that one of the dimensions in the middle of the model encoded the race of the defendant.
They did a similar study on salary based on resumes, and they could recover both race and gender.
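To make the latent-variable point concrete, here is a minimal sketch of the kind of leakage test one could run; the file name, the columns (zip_code, first_name, high_school, race), and the use of scikit-learn are illustrative assumptions, not anything from the posts above.

    # Sketch: check how well the dropped "race" field can be recovered from
    # the supposedly neutral fields (hypothetical data and column names).
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("applicants.csv")  # assumed file for illustration
    proxies = pd.get_dummies(df[["zip_code", "first_name", "high_school"]].astype(str))
    leakage = cross_val_score(LogisticRegression(max_iter=1000),
                              proxies, df["race"], cv=5).mean()
    print(f"'race' recoverable from neutral fields with accuracy {leakage:.2f}")
    # Accuracy well above the majority-class baseline means the latent
    # variable is still encoded in the remaining fields.

If that accuracy is high, a model trained on those fields can effectively "see" race even though it was never given the column.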
Re: (Score:2)
And this is saying that before you sell AI, you need to do those sorts of tests. You can't just throw some data at it, get it working, and ship it out the door.
I read this as a notification that companies need to do their due diligence and ensure that their software is fit for purpose before shipping it out the door. Failure to do that will result in the FTC clucking a bit and asking for some scraps.
Re: (Score:2)
Re: (Score:3)
It's cute that you think that's enough to get you off.
Re: (Score:3)
Even if you don't feed the prohibited criteria, if the non-prohibited criteria correlates well with the prohibited criteria you could still see liability. Even if you weren't aware of the correlation, it would be nearly impossible to explain to a jury how your ML algorithm produced the same biased results as if you had used the prohibited criteria.
In most discrimination cases, the defendants are seldom asked, "Do you hate ${protected characteristic}?*" Instead, the prosecution/plaintiff shows how stati
Re:Well that's the end of that (Score:4, Insightful)
I mean, what if you use AI or whatever...and use the widest range of training data, non-biased to the maximum extent possible, etc....and it finds that there are behavioral and cultural differences that happen to align along racial and/or sexual lines?
I mean, there ARE differences between people and cultures in this world that do influence behaviors and expected results.
Is reporting the truth, even painful...now illegal?
Man, we're in 1984 full blown....it just took a bit longer than expected I guess.
Truth is no longer truth.
Re: (Score:2)
"Discrimination" covers several different words that happen to share a spelling: a technical term, a casual term, a legal term, and more. The confusion arises from asking whether one of those words is equal to a different word with the same spelling.
Re:Well that's the end of that (Score:4, Informative)
Re: (Score:1, Troll)
Discrimination means whatever the woke want it to mean.
There was a certain N word which originally meant a stupid or foolish person, but was used so frequently to disparage one particular race that it became a racial epithet. Today, there's an R word that originally meant someone who discriminates on the basis of race, but it has been applied so frequently, and almost exclusively, against a certain race that it, too, has become a racial epithet. And it no longer retains its original meaning, but instead, in its most frequent
Re: (Score:1)
Snowflake, quit your whining, put on your big-boy pants, and go face the world like a man.
Everything in this post is you whining that you're a victim like a fucking child. Yep, the world is changing and you're not happy with it. Boo hoo. Being a crybaby about it on the internet does not make you look good.
Adapt or die.
Re: (Score:1)
No, I'd prefer to tell you to go f-k yourself instead. You want ch
Re: (Score:1)
Making up a fantasy world in your head is not an adult way to respond to challenges.
Grow the fuck up.
Re: (Score:2)
That's usually not how an AI works. It is usually working through a success/fail strategy, and if it's praised for a certain behavior and punished for another, it will learn what's "right". However this can be abused [wikipedia.org].
An AI isn't very smart either. So it can lead to situations where a minor "bad" maneuver early could be used to avoid a huge problem later. E.g. you are on a path the wrong side of a ridge where it's "costly" to walk over to the other side to find the parallel path. But if you continue your path
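A toy illustration of that greedy-versus-lookahead pitfall, with made-up numbers (the actions and costs are assumptions invented for the sake of the example):

    # Toy example: a greedy learner avoids the small immediate cost of crossing
    # the ridge and ends up paying a much larger cost later; a learner that sums
    # the total cost takes the cheap detour. Numbers are invented.
    costs = {"cross_ridge_now": (5, 0),    # (immediate cost, later cost)
             "stay_on_path":    (0, 50)}

    greedy = min(costs, key=lambda a: costs[a][0])        # immediate cost only
    lookahead = min(costs, key=lambda a: sum(costs[a]))   # total cost
    print("greedy:", greedy, "total cost", sum(costs[greedy]))
    print("lookahead:", lookahead, "total cost", sum(costs[lookahead]))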
Re: (Score:2)
Re: (Score:2)
I could make a solid guess on someone's race, religion, and political beliefs based on their postcode. You say you're from Detroit, I'll assume you're black. Say you're from South America, I'll assume you're Latino and Catholic. Say you're from a major city, I'll assume you vote left.
Don't forget that many social issues have disproportionate representation among certain minorities. Absent parentage, for example, affects over 60% of Black Americans and 50% of American Indians, compared to 24% of White American
Re:Well that's the end of that (Score:5, Insightful)
No, it doesn't just prohibit biased inputs, it prohibits biased outcomes. So if a neural net makes indirect inferences that result in (for example) black people not being allowed to finance their houses the same as comparable white people, that's a biased outcome. So it means not just avoiding putting race as an input, but doing sensitivity analysis to make sure that it doesn't use some other factor which leads to bias, such as down-scoring people who live in the black part of town more than people who live in the white part of town. Similar logic to the Voting Rights Act - the goal of the VRA was not just to prevent obvious racism but also any mechanism that had racist impact, even if it was phrased in non-racist ways. (e.g. allowing voting with forms of ID used more by white voters, and prohibiting voting with forms of ID used more by black voters, to use a real-world example). Biases can often be implicit rather than explicit, so there's a good reason that they require a fair outcome.
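As a rough sketch of what checking for a biased outcome might look like in practice (the file, the "group"/"approved" columns, and the 0.8 threshold borrowed from disparate-impact analysis are assumptions, not anything the FTC post specifies):

    # Sketch: compare approval rates across groups and flag a large disparity.
    import pandas as pd

    results = pd.read_csv("loan_decisions.csv")  # hypothetical columns: group, approved
    rates = results.groupby("group")["approved"].mean()
    ratio = rates.min() / rates.max()
    print(rates)
    print(f"adverse-impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # the conventional "80% rule" threshold
        print("approval rates differ enough to warrant a closer look at the model")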
Re: (Score:2)
Comparable is not an objective standard.
You really need those lying humans to sell your unbiased bias.
Re: (Score:1)
The idea is that, if such an enforced distribution were used long enough, the prohibited characteristics would no longer correlate precisely (rather, they'd show an evenly random distribution) with the outcomes, and therefore we'd be able to say we'd eliminated discrimination (within those outcomes).
Re: (Score:2)
Re: (Score:1)
Exactly. If you take the case of credit, the algorithms are probably designed to figure out who, for instance, is a worse credit risk. In contrast the law is designed to prevent discriminating against people of certain groups even if they are a worse credit risk. They get a bit of a free pass because of their race
At least for credit, they get around that by normalizing against groupings of bad risk and only adjust scale.
That is, they do have data showing some groups are a larger risk than others, and those groups flesh out to match one of the prohibited discrimination classes.
So they intentionally grant credit no matter what the risk level is; it isn't denied based on this, so it isn't legally discriminatory.
Instead they discriminate based on how much information they have on a person to determine a risk.
Since "we know
Re: (Score:2)
Yes, it is because it increases the amount of risk the bank currently carries on its books, thus triggering a reduction in credit availability elsewhere. In other words, this ultimately results in credit being denied elsewhere to the group which carries lower risk. Federal guarantees to avoid that bad debt being counted as such are what brought us the last major recession due
Re: (Score:2)
You're forgetting that a racial bias gives an unfair bias towards (for example) white borrowers, who would be extended credit when they were bad credit risks, before black borrowers who were better credit risks, because the algorithm was distorted by the racial bias. So the bank should be better off, not worse, by giving credit based on actual risk and not implicit racial factors. Remember, the goal isn't to give credit to people who are bad risks, the goal is to give credit to people who are good risks
Re: (Score:3)
No I'm not and no it doesn't. Algorithms don't have 'racial bias' and this thread is explicitly talking about decisions based on financial criteria.
"For example, if someone lives in "the black part of town" then a neural network could learn that because other people in that part of town might be bad risks, to downgrade the credit of anyone from that part of town, unfairly penalizing someone who has earned good c
Re: (Score:2)
Re: (Score:2)
It MIGHT, if doing so correlates to better outcomes than actually looking at your individual wealth; otherwise it won't, because it lacks any sort of bias that would lead it to do so. You are talking about black-and-white outcomes derived from loose correlations to averages instead of what algorithms like this actually do, which i
Re: (Score:2)
I agree. There are plenty of reasons we should be afraid of these kind of technologies controlling our world and our fates. But that isn't the argument you and others are making here.
"Even if the algorithm is more accurate doesn't mean it's fair."
What could be more fair? Your example gave an invalid correlation being used, th
Re: (Score:2)
No, my example was meant to be a correct correlation. For example, assume the machine learning algorithm has found that it is more likely I am part of a particular group based on my purchasing habits. Since this group has lower than average wealth, the algorithm takes this prior information (in a
Re: (Score:2)
No, the goal isn't to give credit to people who are bad risks, the goal is to give credit to people who are good risks, but who were unfairly given bad credit ratings due to explicit biases or implicit biases. For example, if someone lives in "the black part of town" then a neural network could learn that because other people in that part of town might be bad risks, to downgrade the credit of anyone from that part of town, unfairly penalizing someone who has earned good credit because they are living in the
Re: (Score:2)
Say you have a zip code that covers a large trailer park and exactly one mansion. The average person in that zip code may have a piss-poor credit score, but the person who owns the mansion has excellent credit. It isn't based on just zip codes, there are a number of far more important factors involved, like income and a history of paying your debts.
Re: (Score:1)
At that point you'd be biasing AGAINST some white people in favor of lesser qualified black people in order to attain your equitable outcome. THAT is illegal. Race isn't just a protected class, it is protected because it is a logically invalid class. Discriminating for an equitable distribution of protected classes is no more valid than discrim
Re: (Score:2)
And why would you be trying to forcibly equalize outcomes anyhow? Forcing pre-determined outcomes renders all individual efforts and differences moot. Works better for ants than for humans, what with individuality and free will being real things, and equity not.
Now, if you're sure that variations in outcome are driven by some unfounded bias, (i.e., "people who look like that are bad"), but expl
Re: (Score:2)
IMHO, the only real solution is to ban companies (and governmental entities) from using neural nets to make life altering decisions about human beings.
It's OK to use a neural net to decide who gets the spam, not who gets the l
Re: (Score:2)
It is only a matter of time before someone trains a neural net to evaluate the output of other neural nets for racism. It will be difficult for a company to dispute the judgement that their neural net is racist when that judgement is based on the output of their own neural net.
When that day comes, the problems will all be solved.
Why?
Because right now the differences are so gigantic that you can see them from space. Literally, a country in Africa was using satellite imagery to determine which neighborhoods to send targeted COVID relief to, because the visual difference between poverty and wealth was so apparent. I don't doubt you could do that in the US as well. Map race and population density onto a satellite image, and you'll clearly see the lines between dense shared housing in
Re: (Score:3)
It will actually be quite difficult to have an "expected non-biased outcome" without giving sensitive information as inputs.
I will be frank. If you have an AI for giving out mortgage loans, it will look pretty racist if it only has access to financial information. It will not only reject minorities more often, but they will be eligible for lower amounts even when they are approved.
There is a simple reason for that. When you look at the income distributions for different groups, there are big gaps. So a hig
Re: (Score:3)
The answer is to not give race as an input into the algorithm, but to use race in analyzing the outputs of the algorithm. If two groups of people with the same fundamentals get different scores depending on their race, then the algorithm has an implicit racial bias. Data analysts do this sort of sensitivity analysis all the time.
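One plausible form of that analysis, sketched with assumed column names (income, debt_ratio, score, race) and invented data; this illustrates the idea, not anyone's actual audit procedure:

    # Sketch: hold the fundamentals roughly constant (quintile buckets) and see
    # whether model scores still differ by race within each bucket.
    import pandas as pd

    scored = pd.read_csv("scored_applicants.csv")  # hypothetical data
    scored["bucket"] = (pd.qcut(scored["income"], 5).astype(str) + "/" +
                        pd.qcut(scored["debt_ratio"], 5).astype(str))
    gaps = scored.groupby(["bucket", "race"])["score"].mean().unstack()
    print(gaps)  # large within-bucket gaps between groups suggest implicit bias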
Re: (Score:2)
easy.
measure the results
The days of (Score:2)
Move fast, break things are coming to an end I think
Throwing stuff against the wall to see what sticks/works has GOT to stop.
Re: (Score:2)
Move fast, break things are coming to an end I think
Think that all you want. The budget says otherwise.
Throwing stuff against the wall to see what sticks/works has GOT to stop.
Kind of hard to complain about that tactic when it's Chapter Two in the Learn to Code for Shits, Giggles, and Profit handbook.
Perhaps what has GOT to stop, is assuming people know any better.
Re: (Score:2)
LOL
I assume they don't
I'm betting the fallout puts a stop to it.
Re: (Score:2)
Hey look who was FUCKING WRONG!
Finally (Score:3, Insightful)
Now there is a reason for anyone/everyone to stop calling any software that appears to make a decision AI. As a software developer, I have yet to see any evidence of a true artificial intelligence, yet the buzzword gets slapped on almost anything. Without consciousness, software can't be biased. The coders or those who make the business rules, well, that's another story.
Re: (Score:2)
Re: (Score:2)
It is a Hard Problem[tm] to create an AI that groks subtext in the time dimension. The unfortunate tendency today, as more and more people become semi-literate in the Internet Age, is to focus on individual words. Some word Good, some other word Bad. Luckily, the attention span of your average political gonk seems to be getting shorter and shorter, so once the Problem is solved we will see an improvement, provided we can avoid shrinking the acceptable diction down to a singularity in the meantime.
Re: (Score:2)
Re: (Score:2)
Also, you have the order of operations backwards: the definitions were changed to accommodate systems that only exhibit complex behavior, which amounts to a subsequent lowering of the bar. This has been hashed out around here hundreds of times.
Re: (Score:2)
Re: (Score:2)
I think what he meant was "don't label the software as 'biased' while giving the people that developed it a pass".
Microsoft's debacle with Tay indicated a problem with Microsoft's testing, not with the software being 'biased'.
Re: (Score:2)
You may call my AI racist (Score:2)
But it won't tell black people jokes. It was trained that way since birth.
Re: (Score:2)
But it won't tell black people jokes. It was trained that way since birth.
Black people are allowed to hear jokes too!
Re: (Score:2)
Re: (Score:1)
Offensive jokes are some of the best!
"Company Policy does not make it legal" (Score:5, Interesting)
I always despise it when a company, or worse, the government does an investigation and says:
"The employee followed policy." Especially when they say it as if this means they are not a criminal.
Most members of Purdue Pharma followed policy when they pushed opioids. Most members of the KKK followed policy when they terrorized and murdered. Most members of the NYC Police Department followed policy when they raided the Stonewall Inn.
That is not an excuse, in fact it makes it worse.
If your policy is to commit a crime, that is an indictment of the policy, not a vindication of the employee.
You look at the results and see if they were wrong. Once that is done, then you decide whether to blame the employee or the company.
When you find the employee followed policy, that means their BOSSES ARE THE PROBLEM.
Re:"Company Policy does not make it legal" (Score:5, Insightful)
Actually, Person X followed policy is an important statement. It means exactly what you said - Person X probably shouldn't be punished, but their boss's boss's boss should be.
Re: (Score:2)
Depends. Would a reasonable person in an abstracted context understand that what they did was illegal, despite it being policy?
Then you're still culpable. Also, in the eyes of the law, not knowing the law isn't a defense, so policy or not, the whole chain gets to swing.
Re: (Score:2)
I said "Person X probably shouldn't be punished" Were you killing people with a handgun? Yes. Did a corporate lawyer explain to you it was legal? You should probably be fine for dumping waste X in Y.
Some people, like engineers, we hold to higher standards. But I'm fine giving people the benefit of a doubt - as long as we hold the people who gave the orders responsible. If that means they need to testify against their bosses, that's one thing. But no a bank teller shouldn't go to jail for applying an
Re: (Score:2)
Don't forget that a person who follows company policy is by default being extorted and threatened into it, at risk of losing their job or getting blacklisted from the industry.
Re: (Score:2)
I have a counter-proposal. Everyone from the person who created the thing on up to anyone above them who even knew they were creating the thing shares culpability.
You can't solve the problem by having only the worker be culpable, but you can't solve it by having only the bosses take responsibility either.
What is "Bias"? Where would we draw the line? (Score:4, Insightful)
Bias in the statistical sense or Bias in the social sense?
I'm certain that studies that say "people of certain ethnic descent are more vulnerable to ailment X" are usually not called "racist", as they seek to discover additional help needed by some people.
However, now imagine there's another study that says "drivers affected by ailment X are a cause of car crashes".
Put the two together logically and you end up with "drivers of certain ethnic descent are more likely to cause car crashes". So two non-racist statements produce a conclusion that is perceived as "racist".
I think any AI that arrives at this conclusion cannot be blamed. Instead, AI should allow studies of its reasoning such that the decision processes, inner node weights, etc. can be identified and judged on a case-by-case basis as either an "actual mathematical bias" or merely a "logical conclusion that is taboo".
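One way such a study of the model's reasoning could look, assuming a tabular model with numeric features and an "outcome" label (all names and data are illustrative):

    # Sketch: fit an interpretable model and rank which inputs actually drive
    # the decision, so a taboo-but-logical conclusion can be distinguished from
    # a genuine mathematical bias. Data and columns are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.inspection import permutation_importance

    df = pd.read_csv("decisions.csv")          # assumed numeric features + "outcome"
    X, y = df.drop(columns="outcome"), df["outcome"]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
        print(f"{name:20s} {score:+.3f}")      # most influential features first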
Re: (Score:1)
According to the students of "Critical Theory", all science and data are created to support oppression and marginalization. Only Critical Theory can see through to the reality, that there are no differences between people except those created by the ruling elite to oppress the masses. If this sounds Marxist, that's because it is indeed founded in Marxism, but not the oppressive kind that every Marxist regime in the last 110 years has engaged in. It's the "true Marxism", much as Scientology is the true scien
Sideways. (Score:3)
Insurance rates based on sex (Score:4, Interesting)
Car insurance, life insurance, probably other insurance have been discriminating based on sex for ages, i.e. male rates are different than female rates. If that is legal, why does it matter if it's implemented in some software or using paper tables and slide rules?
Re: Insurance rates based on sex (Score:2)
Exactly. There ARE patterns and differences between people.
Women and men ARE different. Just look inside their underpants.
For example, there *is* more crime and violence among refugees. But not because "they are an inferior race" or some other bullshit. But only because if you come from a place with only war and terror and horrors to shape you and traumatize you, you turn into somebody like that too!
That does not mean I hate anyone like that or that I would "discriminate" in that non-literal sense in which
Re: (Score:2)
Yeah, unfortunately for that logic, you can only determine your failure to cure by the murder of another person.
I personally do not have a cavalier attitude for the testing scenario.
Select for fathers in the home (Score:1)
It correlates well with productivity, education, health, income, and correlates very well *against* crime, drug addiction, gambling, teen pregnancy, and majoring in gender studies.
Oops! It also correlates with being Asian, and against being black. Too bad reality is biased.
The Woke Taliban are coming for your code :o (Score:1)
This is obviously someone with an agenda. It's similar to when Anita Sarkeesian "investigated" sexism and misogyny in gaming and, lo and behold, discovered exactly that. The subsequent controversy caused the end of a number of careers and at least one suicide.
“On August 5, Commissioner Rebecca Kelly Slaughter of the Federal Trade Commission received wid
Hm I wonder... (Score:2)
I wonder WHO gets to determine if there is bias present or not. What I perceive as bias may not match your perception of bias. If reality is "biased", is this legislation tantamount to trying to declare Pi = e = 3?
{o.o}
"Biased AI" (Score:2)
The most clueless term uttered in all of history forever.
PROTIP: Bias is the entire point and only function of a neural net. Even the "spherical horse on a sinusoidal trajectory" one the fraudsters keep calling "AI".
You mean "not YOUR bias".
Aka not goose-stepping down the exact same path as your personal social conditioning and deluded beliefs of fake social norms that effectively are just discrimination and bullying again. Which you selfishly believe *must* apply to all the lifeforms in the universe. Even th
But what's "bias" then? (Score:2)
So an AI system looks at data from arrests and sees that black men are far more likely to commit crime.
Is this "bias" or is this just objective fact from the data?
Certainly angry activists will insist RACISM! in the same sense they're crying that facial recognition is 'racist', setting aside the physical reality that dark things are harder to discern. (Facial recognition has issues with very white faces as well, seemingly working best for moderately-colored skin like most Hispanics.)
Truth will... (Score:2)
Truth will set you free? Nope, Truth will send you to jail.
Is this an issue in the wild? (Score:2)
Seriously, is there a cabal of white-supremacist AI engineers that's infiltrated tech companies industry-wide with the goal of building racist AIs?
In my time in tech, I've witnessed a decent amount of misogyny, occasional snobbery about universities and degrees, bits of homophobia and transphobia here and there, some of what could be called ableism, and a vast amount of insensitivity and general dickishness; especially towards anybody perceived as less than clueful. But outside of the swastika and GNAA shi