US FTC Leaders Will Target AI That Violates Civil Rights Or Is Deceptive
Leaders of the U.S. Federal Trade Commission said on Tuesday the agency would pursue companies that misuse artificial intelligence to violate anti-discrimination laws or deceive consumers. Reuters reports: In a congressional hearing, FTC Chair Lina Khan and Commissioners Rebecca Slaughter and Alvaro Bedoya were asked about concerns that recent innovation in artificial intelligence, which can be used to produce high-quality deepfakes, could be used to make scams more effective or otherwise violate laws. Bedoya said companies using algorithms or artificial intelligence were not allowed to violate civil rights laws or break rules against unfair and deceptive acts. "It's not okay to say that your algorithm is a black box" and you can't explain it, he said.
Khan agreed that the newest versions of AI could be used to turbocharge fraud and scams, and said any wrongdoing "should put them on the hook for FTC action." Slaughter noted that the agency had, throughout its 100-year history, had to adapt to changing technologies, and indicated that adapting to ChatGPT and other artificial intelligence tools was no different. The commission is organized to have five members but currently has three, all of whom are Democrats.
So KKK AI? (Score:2)
Humans wouldn't do that. /s
Re:So KKK AI? (Score:5, Insightful)
It isn't illegal to lie or be deceptive.
The standard for libel and defamation is far more than just "false".
Since it is generally legal for a human to lie, it should be legal for an AI to do the same.
Where lying is illegal, we already have laws to cover that.
We don't need a two-tier justice system.
Re: So KKK AI? (Score:1)
Fraud is most certainly illegal.
Re: (Score:3)
This is exactly correct. If I remember how it works in the US, in a slander/libel/defamation case you have to be able to prove actual pecuniary damages, except for a short list of per se situations where the alleged conduct is already illegal for other reasons.
IANAL though, and not even American heh.
It's a tool (Score:3, Informative)
Re: (Score:2)
Say you have a hammer and you sell it for hammering in people's skulls. You are at fault.
Counterpoint. Guns exist. And so far, I don't see any restrictions on mass-murder variations on them either.
You, the company, are responsible, because you are *selling* a product that is stating that as confident fact, and causing damages.
Unless you specifically market it as fact based, that argument makes no sense. Saying that the product markets itself during use is a bit of a stretch. It's like getting sued for what a Ouija board tells people.
Re: (Score:3)
I think the FTC is saying just the opposite. Don't blame the tool if you're caught engaging in deceptive practices or violating civil rights.
If you're caught discriminating against certain protected groups in rental agreements, you can't blame the AI by saying "we didn't know it was excluding all those people from renting our units." Saying the AI is a black box and you don't understand how it works but it "magically" gives you discriminatory results isn't a valid defense.
Re: (Score:3)
In reality, what will happen is they will claim that there are disparities of outcome, for example marketing apartments to rich people who happen to be more likely to be white, and use that disparity as the basis of a "disparate impact" civil rights violation, despite there being no discriminatory intent whatsoever. This is because most AI systems learn statistics, however those statistics are allegedly racist.
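For anyone curious how such a claim is usually quantified: regulators often start from the EEOC's "four-fifths rule," comparing selection rates across groups. Below is a minimal sketch in Python using hypothetical rental-approval numbers; this is a common screening heuristic, not necessarily the test the FTC would apply.

```python
# A minimal sketch (not the FTC's actual test) of the EEOC "four-fifths rule"
# often used as a first screen for disparate impact: compare selection rates
# across groups and flag ratios below 0.8. All numbers are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants from a group who were approved."""
    return selected / applicants

def four_fifths_flag(rate_a: float, rate_b: float, threshold: float = 0.8) -> bool:
    """Flag potential disparate impact when the lower selection rate is
    under `threshold` times the higher one."""
    low, high = sorted((rate_a, rate_b))
    return low / high < threshold

# Hypothetical rental-approval numbers for two applicant groups.
group_a = selection_rate(selected=90, applicants=100)  # 0.90
group_b = selection_rate(selected=60, applicants=100)  # 0.60

print(f"ratio = {min(group_a, group_b) / max(group_a, group_b):.2f}")  # 0.67
print("flag disparate impact:", four_fifths_flag(group_a, group_b))    # True
```

Note that nothing in this arithmetic asks about intent, which is exactly the tension the parent comment is pointing at.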
Re: (Score:2)
Statistics are not racist - they are just numbers. But they can reveal trends and tendencies that are racist (e.g., rental rejections that differ by ethnicity within the same disposable-income group).
Relying on AI algorithms without understanding the selection criteria can lead to civil rights violations, despite there being no discriminatory intent whatsoever. However, the opposite can also be true: careful wording of the selection criteria can make the filtering seem innocent while ultimately discriminating against those t
Re: (Score:2)
That is not how disparate impact liability is supposed to work, though; disparate impact relies on intent - using race-neutral policies to achieve racist ends.
Re: (Score:3)
I'm going to hire based on merit only. I will not filter my candidates by anything more than whether they live in the same area (so that they can come into the office as needed) and their ability to do the job as described (competence), and I will hire the most proficient person I encounter. Based on a lot of different policies in effect right now, I'm (at a minimum) racist, sexist, and ableist, because where I live has disparities.
This is the world we live in now.
Re: (Score:2)
Statistics are not racist - they are just numbers.
Not exactly true. The method of gathering or presenting the numbers is part of what creates the skew.
But you're right. Blaming a black box isn't a defense. The headline reads the opposite - that the AI tool will be the target rather than the companies blindly using it.
Re: (Score:2)
Don't blame the tool if you're caught engaging in deceptive practices or violating civil rights.
The same already applies if your employee is a tool. Not hard to agree here.
Re: (Score:2)
Yeah, this strikes me as a "HEY! WE WANT IN ON THE AI FUN TOO!" move. Just enforce the laws that exist to combat discrimination. The tool used to cause discrimination means nothing to the law.
In short: It's not the tool. It's the tool that's using the tool.
Ah well, power likes to create more reasons to enforce their power. I suppose that's the way it's always going to be.
Re: (Score:2)
However, most AIs are being fed real-world data which tends to be racist and sexist.
Using an AI for employment screening is almost certainly illegal: the AI will find all of the indirect indicators that an applicant is a white male (captain of the lacrosse team, played football, went to a prep school, etc.) and recommend white males at a rate close to what it would achieve if it had access to the applicant's gender and race and screened on gender and race before anything else (see the sketch after this comment).
Also using AI for risk assessment of cred
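To make the parent's point concrete, here is a minimal sketch, on entirely fabricated data, of the "proxy leakage" being described: a model that is never shown the protected attribute recovers it anyway from a correlated feature. The feature names and correlation strength are invented for illustration.

```python
# A minimal sketch of proxy leakage: the protected attribute is NOT among
# the model's inputs, yet a correlated proxy feature (think "prep school",
# "lacrosse captain") lets the model recover it. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, size=n)  # hidden attribute, never given to the model
# A proxy feature that agrees with the protected attribute ~90% of the time.
proxy = (protected ^ (rng.random(n) < 0.1)).astype(float)
noise = rng.random(n)                   # an unrelated feature

X = np.column_stack([proxy, noise])     # note: protected attribute is not a column
clf = LogisticRegression().fit(X, protected)
print("recovery accuracy:", clf.score(X, protected))  # ~0.90, far above chance
```

Dropping the sensitive column from the training data, in other words, does not mean the model is blind to it.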
How about targeting (Score:5, Interesting)
humans who are deceptive, in particular: politicians, news, business leaders - many of whom lie to promote their own aims.
Re: (Score:2)
We have been doing that for decades. It's not getting us where we need to be, so the focus has shifted to systemic problems.
So, all of it? (Score:2)
Or at least all of the "AI" chatbots?
Three musketeers (Score:2)
given modern examples (Score:4, Interesting)
What determines "racism"?
If, say, an automatic algorithm picks a news story to highlight about hundreds of teens rampaging in downtown Chicago smashing cars, lighting things on fire, shooting each other, and swarming and viciously beating a white woman - and mentions that the group was 99% black - is that "racism" or honesty?
Is (very obviously) not mentioning that - as per major media outlets today - likewise racism, or merely omitting a non-salient detail?
The algorithms are a black box (Score:1)
What is the context of this statement?
Re: (Score:1)
You had legions of journalists poking and prodding at the AI, trying to be offended; then, when they led it on to say something controversial - aha! AI is racist, bigoted, homophobic... and it needs regulation by the FTC.
Authoritarian politicians want to regulate.
Feminists and Al Sharptons want to be offended.
Same old s--t.
The FTC views AI as a threat. AI could analyze their framework of regulation and spot problems, or dramatically increase oversight - bribery, corruption, everything we assume politicians and unelected
Re: (Score:2)
I don't know why he said it, but it is correct that the reason a neural-net-type algorithm gives some output is indeed a black box. The algorithm itself is not a black box, but there is no human-understandable explanation of why it gives some answer.
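For anyone who hasn't poked at one of these, here is a minimal sketch of what "inspectable but not explainable" means: a toy two-layer network whose weights are random stand-ins for learned values. Every parameter can be printed, but none of them constitutes a reason for the decision.

```python
# A minimal sketch of why "the reason" is opaque: a toy two-layer network's
# decision is just arithmetic over weights. Every number is fully
# inspectable, yet none maps to a human-readable justification.
import numpy as np

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=3)  # stand-ins for learned weights
W2, b2 = rng.normal(size=(3, 1)), rng.normal(size=1)

def decide(x: np.ndarray) -> float:
    """Forward pass: ReLU hidden layer, then a scalar score."""
    h = np.maximum(0.0, x @ W1 + b1)
    return float(h @ W2 + b2)

applicant = np.array([0.7, 0.2, 0.9, 0.4])  # hypothetical normalized features
print("score:", decide(applicant))
print("the 'explanation':", W1, b1, W2, b2, sep="\n")  # just numbers, no rationale
```

Scale that up to billions of weights and the situation only gets worse, which is why post-hoc explanation tools exist at all.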
commerce (Score:2)
FTC normally regulates deception in commerce, such as false advertising, bait and switch, etc. What Khan would like to do is to turn the FTC into a Ministry of Truth, where any utterance is considered fair game for her to regulate. Meanwhile, actual deception in commerce, especially at the consumer level, is so rampant that it is considered normal, and totally ignored by the FTC.