Government AI Technology

US FTC Leaders Will Target AI That Violates Civil Rights Or Is Deceptive 30

Leaders of the U.S. Federal Trade Commission said on Tuesday the agency would pursue companies that misuse artificial intelligence to violate laws against discrimination or to be deceptive. Reuters reports: In a congressional hearing, FTC Chair Lina Khan and Commissioners Rebecca Slaughter and Alvaro Bedoya were asked about concerns that recent innovation in artificial intelligence, which can be used to produce high-quality deep fakes, could be used to make scams more effective or otherwise violate laws. Bedoya said companies using algorithms or artificial intelligence were not allowed to violate civil rights laws or break rules against unfair and deceptive acts. "It's not okay to say that your algorithm is a black box" and you can't explain it, he said.

Khan agreed the newest versions of AI could be used to turbocharge fraud and scams, and said any wrongdoing "should put them on the hook for FTC action." Slaughter noted that throughout its 100-year history the agency has had to adapt to changing technologies, and indicated that adapting to ChatGPT and other artificial intelligence tools is no different. The commission is organized to have five members but currently has three, all of whom are Democrats.
Comments Filter:
  • So KKK AI? - Humans wouldn't do that. /s

    • Re:So KKK AI? (Score:5, Insightful)

      by ShanghaiBill ( 739463 ) on Wednesday April 19, 2023 @06:56AM (#63461142)

      It isn't illegal to lie or be deceptive.

      The standard for libel and defamation is far more than just "false".

      Since it is generally legal for a human to lie, it should be legal for an AI to do the same.

      Where lying is illegal, we already have laws to cover that.

      We don't need a two-tier justice system.

      • Fraud is most certainly illegal.

      • This is exactly correct. If I remember how it works in the US, in a slander/libel/defamation case you have to be able to prove actual pecuniary damages, except for a short list of per se situations where the alleged conduct is already illegal for other reasons.

        IANAL though, and not even American heh.

  • It's a tool (Score:3, Informative)

    by alecdacyczyn ( 9294549 ) on Tuesday April 18, 2023 @10:17PM (#63460650)
    Trying to regulate what a tool can and cannot be used for is totally impractical. The code for training smaller language models is already out of the bottle. And let's hope they understand the difference between lying and being wrong. ChatGPT is far from factually reliable.
    • Comment removed based on user account deletion
      • Alright. Then they should clarify that this is not about the FTC regulating the external alignment of AI systems, but rather about you being responsible for what an AI does on your behalf in exactly the same way that you are responsible for what a human employee does on your behalf.
      • Say you have a hammer and you sell it to hammer people's skulls in. You are at fault.

        Counterpoint. Guns exist. And so far, I don't see any restrictions on mass-murder variations on them either.

        You the company are responsible, because you are *selling* a product that is confidently stating that as fact, and causing damages.

        Unless you specifically market it as fact based, that argument makes no sense. Saying that the product markets itself during use is a bit of a stretch. It's like getting sued for what a Ouija board tells people.

    • I think the FTC is saying just the opposite. Don't blame the tool if you're caught engaging in deceptive practices or violating civil rights.

      If you're caught discriminating against certain protected groups in rental agreements, you can't blame the AI, saying "we didn't know it was excluding all those people from renting our units." Saying the AI is a black box and you don't understand how it works but it "magically" gives you discriminatory results isn't a valid defense.

      • In reality, what will happen is they will claim that there are disparities of outcome - for example, marketing apartments to rich people who happen to be more likely to be white - and use that disparity as the basis of a "disparate impact" civil rights violation, despite there being no discriminatory intent whatsoever. This is because most AI systems learn statistics, and those statistics are allegedly racist. (A sketch of how such disparities are usually quantified follows this subthread.)

        • Statistics are not racist - they are just numbers. But they can reveal trends and tendencies that are racist (e.g., rental rejections within the same disposable-income group, split by ethnicity).

          Relying on AI algorithms without understanding the selection criteria can lead to civil rights violations, despite there being no discriminatory intent whatsoever. However, the opposite can also be true: careful wording of the selection criteria to make the filtering seem innocent while ultimately discriminating against those t

          • That is not how disparate-impact liability is supposed to work, though: disparate impact relies on intent - using race-neutral policies to achieve racist ends.

            • by Erioll ( 229536 )

              I'm going to hire based on merit only. I will not filter candidates by anything more than whether they live in the same area (so they can come into the office as needed) and their ability to do the job as described; I will simply hire the most proficient person I encounter. Based on a lot of different policies in effect right now, I'm (at a minimum) racist, sexist, and ableist, because where I live has demographic disparities.

              This is the world we live in now.

          • Statistics are not racist - they are just numbers.

            Not exactly true. The method of gathering or presenting the numbers is part of what creates the skew.

            But you're right. Blaming a black box isn't a defense. The headline reads the opposite - that the AI tool will be the target rather than the companies blindly using it.
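
          As an aside on how such disparity claims are usually quantified: regulators commonly screen for disparate impact with the EEOC's "four-fifths rule," comparing selection rates across groups. A minimal sketch with made-up numbers:

```python
# Illustrative only: the EEOC "four-fifths rule" screen for disparate impact,
# using entirely made-up application and hiring counts.
applicants = {"group_a": 100, "group_b": 100}
hired = {"group_a": 60, "group_b": 30}

# Selection rate per group, then the ratio of the lowest to the highest rate.
rates = {g: hired[g] / applicants[g] for g in applicants}
impact_ratio = min(rates.values()) / max(rates.values())

print(f"selection rates: {rates}")
print(f"impact ratio: {impact_ratio:.2f} (below 0.80 flags potential disparate impact)")
```

          Note that nothing in this arithmetic asks about intent, which is precisely the tension the subthread above is arguing over.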

      • Don't blame the tool if you're caught engaging in deceptive practices or violating civil rights.

        The same already applies if your employee is a tool. Not hard to agree here.

    • Yeah, this strikes me as a "HEY! WE WANT IN ON THE AI FUN TOO!" move. Just enforce the laws that exist to combat discrimination. The tool used to cause discrimination means nothing to the law.

      In short: It's not the tool. It's the tool that's using the tool.

      Ah well, power likes to create more reasons to enforce its power. I suppose that's the way it's always going to be.

    • by micheas ( 231635 )

      However, most AIs are being fed real-world data, which tends to be racist and sexist.

      Using an AI for employment screening is almost certainly illegal, as the AI will find all of the indirect indicators that the applicant is a white male (captain of the lacrosse team, played football, went to a prep school, etc.) and recommend white males at a rate close to what it would if it had direct access to the applicant's gender and race and screened on those before anything else. (A toy sketch of this proxy effect follows below.)

      Also using AI for risk assessment of cred
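
      For illustration, here is a toy sketch of the proxy effect described above. The feature names and probabilities are entirely made up, and the "model" is just frequency counting, but it shows how a screener that never sees a protected attribute can still reconstruct it from correlated features.

```python
# Toy sketch (synthetic data, made-up feature names): a screening model is
# never shown the protected attribute, yet correlated "proxy" features such
# as prep school or lacrosse let it be reconstructed far above chance.
import random
from collections import Counter

random.seed(0)

def make_applicant():
    group = random.random() < 0.5  # protected attribute, hidden from the model
    prep_school = random.random() < (0.7 if group else 0.2)  # proxy feature
    lacrosse = random.random() < (0.6 if group else 0.1)     # proxy feature
    return (prep_school, lacrosse), group

data = [make_applicant() for _ in range(10_000)]

# "Training" by counting: for each proxy pattern, remember which group is the majority.
counts = Counter((features, group) for features, group in data)

def predict_group(features):
    return counts[(features, True)] > counts[(features, False)]

accuracy = sum(predict_group(f) == g for f, g in data) / len(data)
print(f"protected attribute recovered from proxies alone: {accuracy:.0%}")
```

      On synthetic data like this the proxies recover the hidden attribute roughly 80% of the time versus 50% chance, which is the sense in which omitting the protected column does not remove it from the model.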

  • How about targeting (Score:5, Interesting)

    by Alain Williams ( 2972 ) <addw@phcomp.co.uk> on Wednesday April 19, 2023 @03:23AM (#63460926) Homepage

    humans who are deceptive, in particular politicians, news media, and business leaders - many of whom lie to promote their own aims.

    • AliExpress sends bait-and-switch ads with misleading prices for accessories rather than the product itself - nothing seems to get acted on. AI cannot be regulated at the level of its text output, and parodies are allowed. Ask ChatGPT: what did my local politician do that he hopes I don't find out about, given my income is xxxxx? What did he do about unaffordable rents? I think the Indian solution - vote the incumbent out - is the only thing that works.
    • by AmiMoJo ( 196126 )

      We have been doing that for decades. It's not getting us where we need to be, so the focus has shifted to systemic problems.

  • Or at least all of the "AI" chatbots?

  • So three Democrats are now stifling AI development to enforce law and order. There is something wrong with that statement.
  • by argStyopa ( 232550 ) on Wednesday April 19, 2023 @07:01AM (#63461152) Journal

    What determines "racism"?

    If, say, an automatic algorithm picks a news story to highlight about hundreds of teens rampaging in downtown Chicago - smashing cars, lighting things on fire, shooting each other, swarming and viciously beating a white woman - and mentions that the group was 99% black, is that "racism" or honesty?

    Is (very obviously) not mentioning that - as per major media outlets today - likewise racism, or merely omitting a non-salient detail?

  • "It's not okay to say that your algorithm is a black box" and you can't explain it, he said.

    What is the context of this statement?

    • You had legions of journalists poking and prodding at the AI trying to be offended; then, when they led it on to say something controversial - aha! AI is racist, bigot, homophobe... and it needs regulation by the FTC.
      Authoritarian politicians want to regulate
      Feminists and Al Sharptons want to be offended.
      Same old s--t.
      FTC views AI as a threat. AI could analyze their framework of regulation and spot problems, or dramatically increase oversight - bribery, corruption, everything we assume politicians and unelected

    • I don't know why he said it, but it is correct that the reason some neural-net type algorithm gives you some output is indeed a black box. The algorithm itself is not a black box, but there is no human-understandable explanation of why it gives a particular answer.
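
      A minimal sketch of that point, assuming scikit-learn is available: even for a function a human can state in one sentence (XOR), a trained network's only "explanation" for an output is its pile of learned weights.

```python
# Minimal sketch (assumes scikit-learn is installed): train a tiny network on
# XOR, then observe that its "reasoning" is nothing but numeric weights.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR: a rule a human can state in one sentence

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
clf.fit(X, y)

print(clf.predict([[1, 0]]))  # a prediction comes out, but "why?" has no short answer

# The model's entire rationale is numeric: weight matrices and biases.
for i, w in enumerate(clf.coefs_):
    print(f"layer {i} weight matrix shape: {w.shape}")
```

      Inspecting clf.coefs_ yields raw weight matrices, not a human-readable rule - which is exactly the "black box" sense meant above.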

  • FTC normally regulates deception in commerce, such as false advertising, bait and switch, etc. What Khan would like to do is to turn the FTC into a Ministry of Truth, where any utterance is considered fair game for her to regulate. Meanwhile, actual deception in commerce, especially at the consumer level, is so rampant that it is considered normal, and totally ignored by the FTC.

"Our vision is to speed up time, eventually eliminating it." -- Alex Schure

Working...