America's FTC Warns Businesses Not to Use AI to Harm Consumers (ftc.gov)
America's consumer-protecting federal agency has a division overseeing advertising practices. Its website includes a "business guidance" section with "advice on complying with FTC law," and this week one of the agency's attorneys warned that the FTC "is focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers."
The warning came in a blog post titled "The Luring Test: AI and the engineering of consumer trust." In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person's emotions, and, oops, that's what it did. While the scenario is pure speculative fiction, companies are always looking for new ways — such as the use of generative AI tools — to better persuade people and change their behavior. When that conduct is commercial in nature, we're in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers...
As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people's beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional. A tendency to trust the output of these tools also comes in part from "automation bias," whereby people may be unduly trusting of answers from machines which may seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed, say, to use personal pronouns and emojis. People could easily be led to think that they're conversing with something that understands them and is on their side.
Many commercial actors are interested in these generative AI tools and their built-in advantage of tapping into unearned human trust. Concern about their malicious use goes well beyond FTC jurisdiction. But a key FTC concern is firms using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment. Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services. Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Under the FTC Act, practices can be unlawful even if not all customers are harmed and even if those harmed don't comprise a class of people protected by anti-discrimination laws.
The FTC attorney also warns against paid placement within the output of a generative AI chatbot. ("Any generative AI output should distinguish clearly between what is organic and what is paid.") In addition: "People should know if an AI product's response is steering them to a particular website, service provider, or product because of a commercial relationship. And, certainly, people should know if they're communicating with a real person or a machine..."
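Purely as an illustrative sketch (this is not from the FTC post, and every name in it is hypothetical), a chatbot pipeline could make that organic-versus-paid distinction explicit by tagging each result at the source and labeling it on the way out:

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        text: str
        sponsored: bool  # True if a commercial relationship influenced this result

    def render(recommendations: list[Recommendation]) -> str:
        # Label paid placements so users can tell organic output from ads.
        lines = []
        for rec in recommendations:
            prefix = "[Sponsored] " if rec.sponsored else ""
            lines.append(prefix + rec.text)
        return "\n".join(lines)

    print(render([
        Recommendation("Acme Savings offers a high-yield account.", sponsored=True),
        Recommendation("Several credit unions offer comparable rates.", sponsored=False),
    ]))

The design point in this sketch is that the sponsored flag travels with the data itself, so the disclosure can't be silently dropped in the presentation layer.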
"Given these many concerns about the use of new AI tools, it's perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering. If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look. "
Thanks to Slashdot reader gluskabe for sharing the post.
AI must not be Artificial Intelligence... (Score:3)
but instead it clearly means Advertising Industry.
AI (Advertising Industry) firms are starting to use them in ways that can influence people's beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional.
Since the FTC does such a great job in this country of protecting us from the predatory advertising industry, I guess we have NOTHING to be worried about.
Re: (Score:2)
Humans, machines (Score:3)
Does it matter which does the "influencing of emotions and beliefs"? If anything, I am less worried about the AI because it only does what it's told to do, and does not have its own ulterior motives or biases. It just analyzes a table of statistics and concludes that "X is more popular than Y, and Z really gets people's attention".
The problem, as always, starts with HUMANS.
Re: (Score:1)
Re: Humans, machines (Score:2)
I just hope that this kind of 'fluid' AI won't be controlling industrial machinery.
Hahah yea sure (Score:1, Troll)
That will be ignored (Score:5, Insightful)
The advertising business does not give a crap about harm. Profit will always take precedence over all else. When it knows harm can be a consequence, the advertising industry will weigh that against the potential profit. If the payout for fines and lawsuits is less than the profit, profit wins.
Re: (Score:2)
Hey, FTC: how about not harming consumers at all, huh?
Re: (Score:2)
And reduce potential profits? Are you insane?
A warning? Seriously?! (Score:2, Troll)
Re: (Score:2)
Just you wait, if they don't comply, they might meet the waggling finger and the LOOK.
What about streamers? (Score:2)
Re: (Score:2)
You expect truth in streaming? Why? Isn't it basically pure entertainment, i.e. a "show"?
Hmm. Come to think of it, some people may "replace" social contacts with streamers. That may be an issue if it becomes too widespread.
hang in there! AGI is coming! (Score:2)
Re: (Score:2)
Star Trek does not have AGI. The Culture does, with some interesting caveats. In particular, the Culture "minds" need humans to keep them sane.
That said, AGI is not feasible at this time, and nobody knows whether it ever will be. No, the Physicalist "argument" that humans are pure machine (and hence AGI must be implementable) does not hold water; it is quasi-religious belief, not based on Science.
Re: (Score:3)
Star Trek does not have AGI
Er... What has two thumbs and is an example of AGI in Star Trek? This guy! [fandom.com]
Does the FTC demand the same (Score:2)
Or in other words: Hide it well (Score:2)
Of course this will be used to harm consumers. All the FTC is saying is that it should not be too obvious, or they may have to do something.
Using natural intelligence to hurt consumers... (Score:3)
...is still AOK, completely unregulated and tacitly encouraged!
anti-discrimination laws? (Score:2)
Why would anti-discrimination laws even be part of this conversation/consideration about misleading practices? The world has gone into a woke tailspin.
Oh sure.. (Score:2)
The "Do No Harm" thing has long since been deemed a joke.
How precious. (Score:2)