

As Russia and China 'Seed Chatbots With Lies', Any Bad Actor Could Game AI the Same Way (detroitnews.com)
"Russia is automating the spread of false information to fool AI chatbots," reports the Washington Post. (When researchers checked 10 chatbots, a third of the responses repeated false pro-Russia messaging.)
The Post argues that this tactic offers "a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform," and calls it "a fundamental weakness of the AI industry."

Chatbot answers depend on the data fed into them. A guiding principle is that the more the chatbots read, the more informed their answers will be, which is why the industry is ravenous for content. But mass quantities of well-aimed chaff can skew the answers on specific topics. For Russia, that is the war in Ukraine. But for a politician, it could be an opponent; for a commercial firm, it could be a competitor.

"Most chatbots struggle with disinformation," said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. "They have basic safeguards against harmful content but can't reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information."
Early commercial attempts to manipulate chat results also are gathering steam, with some of the same digital marketers who once offered search engine optimization — or SEO — for higher Google rankings now trying to pump up mentions by AI chatbots through "generative engine optimization" — or GEO.
Our current situation "plays into the hands of those with the most means and the most to gain: for now, experts say, that is national governments with expertise in spreading propaganda." Russia and, to a lesser extent, China have been exploiting that advantage by flooding the zone with fables. But anyone could do the same, burning up far fewer resources than previous troll farm operations...

In a twist that befuddled researchers for a year, almost no human beings visit the sites, which are hard to browse or search. Instead, their content is aimed at crawlers, the software programs that scour the web and bring back content for search engines and large language models. While those AI ventures are trained on a variety of datasets, an increasing number are offering chatbots that search the current web. Those are more likely to pick up something false if it is recent, and even more so if hundreds of pages on the web are saying much the same thing...
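To see why "recent" plus "widely repeated" is so exploitable, here is a minimal, hypothetical Python sketch. The scoring function and its weights are invented for illustration and do not reflect any particular vendor's actual ranking:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Page:
        url: str
        claim: str           # the assertion the page makes
        published: datetime  # when the crawler first saw it

    def retrieval_score(page, corpus, now):
        """Toy ranker with two common biases: newer pages score higher,
        and claims repeated across many sites score higher."""
        age_days = (now - page.published).total_seconds() / 86400
        recency = 1.0 / (1.0 + age_days)                  # decays with age
        repeats = sum(1 for p in corpus if p.claim == page.claim)
        return recency * repeats                          # flooding multiplies the score

    now = datetime(2025, 4, 20, tzinfo=timezone.utc)
    corpus = [Page("https://example-news.test/real", "accurate claim",
                   datetime(2024, 6, 1, tzinfo=timezone.utc))]
    # One operator, 150 near-identical fresh pages pushing the same false claim:
    corpus += [Page(f"https://site{i}.test/fake", "false claim",
                    datetime(2025, 4, 19, tzinfo=timezone.utc)) for i in range(150)]

    top = max(corpus, key=lambda p: retrieval_score(p, corpus, now))
    print(top.claim)  # prints "false claim": recency x repetition wins

Under a scorer with those two biases, one operator publishing a few hundred fresh, near-identical pages outranks a single older accurate source, which is exactly the flooding pattern described above.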
The gambit is even more effective because the Russian operation managed to get links to the Pravda network stories edited into Wikipedia pages and public Facebook group postings, probably with the help of human contractors. Many AI companies give special weight to Facebook and especially Wikipedia as accurate sources. (Wikipedia said this month that its bandwidth costs have soared 50 percent in just over a year, mostly because of AI crawlers....) Last month, other researchers set out to see whether the gambit was working. Finnish company Check First scoured Wikipedia and turned up nearly 2,000 hyperlinks on pages in 44 languages that pointed to 162 Pravda websites. It also found that some false information promoted by Pravda showed up in chatbot answers.
"They do even better in such places as China," the article points out, "where traditional media is more tightly controlled and there are fewer sources for the bots." (The nonprofit American Sunlight Project calls the process "LLM grooming".)
The article quotes a top Kremlin propagandist as bragging in January that "we can actually change worldwide AI."
All AI is based on false data (Score:1)
Most humans believe false things, as in, a majority of what a human believes is false. All AI is trained on human knowledge, therefore a lot of what it spews out is likely to be BS.
Re: (Score:2)
"a majority of what a human believes is false."
False. Yet you believe it.
The entire purpose of animals evolving to have brains would be defeated if that were true; there's an existence proof that says you're wrong. But when you're wrong about so many things it's easy, from your ignorant perspective, to believe that claim.
Re: (Score:1)
You do not need to have correct premises to reach the right conclusion, and most of the world believes in some variety of magic sky daddy as the justification of their moral framework.
Re: (Score:2)
Most humans believe false things, as in, a majority of what a human believes is false
These two things are not logically equivalent.
Re: (Score:2)
Interesting opening, but I'm having trouble understanding what you mean. Most of what each of us believes is true. That's how we stay alive from minute to minute. If you didn't accurately believe you can open the door of your home, you ain't goin' anywhere.
One possible interpretation is that you mean most people cannot accurately articulate valid reasons for believing things, whether those things are true or false. If that is your intention, then it at least partly conforms to the perspective of The Enigma
Re: (Score:2)
Most humans believe false things, as in, a majority of what a human believes is false. All AI is trained on human knowledge, therefore a lot of what it spews out is likely to be BS.
Interesting.
Please give us some examples of things that you believe that are also false.
A close look at your post could be a good place to start.
imagine (Score:3)
Imagine an intelligence that could determine truth rather than having to rely on being told what is true; imagine an intelligence with some sort of ability to reason, perhaps a "reasoning" AI. Wonder if the geniuses at OpenAI might think of something like that?
Funny how words get thrown around, yet we all know they are all lies.
We won't have actual AI as long as it is vulnerable to this kind of pathetic manipulation, but we all know the power and the money is in this very manipulation so don't hold your breath on that problem being solved. Musk isn't investing in truth, that's not going to gain him more wealth and power.
Also, don't we all know to teach our children right from wrong? Yet somehow we don't know to teach AI the same? We accept NOT curating the information the AIs train on, or worse yet making sure they get trained on misinformation and lies? Just how intelligent are these smartest people in the world?
Re: imagine (Score:2)
Imagine an intelligence that could determine truth rather than having to rely on being told what is true, imagine an intelligence with some sort of ability to reason
This is something most humans literally cannot do.
Look around, and think about what you're saying here. You're criticizing AI for lacking this capability because you yourself want or need to be told what is true.
The problem isn't that machines/AI/LLM etc. lack reasoning. It's that you require someone else to do it for you in the first place.
Re: imagine (Score:2)
All humans can do this, with the exception of a tiny percentage of people with a mental disability or mental illness.
That millions of people with average intelligence choose to be incurious is an artifact of there being little survival advantage to being a rational being. And that laziness is reinforced by a culture of anti-intellectualism, where people who make too much of a fuss about reason, truth, or science are marked as agitators, disruptive to the order of society.
Re: 20+ years.... (Score:2)
Would it irritate you if we are generally aware of our ignorance and apathy, but refuse to unalive ourselves because we prefer to trigger your rage?
Russian misinformation - Haaar! (Score:2, Insightful)
What Russian lies would those be? That the Hunter Biden laptop was Russian disinformation? That Russia started a totally unprovoked war in Ukraine? That Covid started in a wet market down the road from the Wuhan Institute of Virology? That the NIH wasn't financing gain-of-function research on bat v
Re: (Score:2)
I mean, sometimes it is pretty black and white. One reason I totally support maximum US support of Ukraine is that it is very morally justifiable, and the US is at its best when the lines are so clear.
The dance of fairy-tale "facts" offered to justify the invasion all ends in just a return to pre-20th-century diplomacy: Russia's got beef and they want the land, so why not just take it?
It doesn't need to be Russia or China or... (Score:3)
There is a large mass of people who hate being used to train an AI without their consent, and these people can (and rightfully so) spike the data to purposefully ruin the AI in retaliation.
Not to mention things like "tar pits" made to stick the scraping robots, and all that.
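For the curious, a "tar pit" is just a server that answers crawlers slowly with an endless maze of generated links, so a naive scraper burns its crawl budget going nowhere. Here is a minimal sketch using only Python's standard library; the address, port, and delay are arbitrary choices for illustration:

    import random
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class TarpitHandler(BaseHTTPRequestHandler):
        """Every request gets a slow page full of links to more generated
        pages, so a crawler that follows them wanders indefinitely."""

        def do_GET(self):
            time.sleep(2)  # throttle: make each fetch expensive for the bot
            links = "".join(
                f'<a href="/maze/{random.getrandbits(32):08x}">more</a> '
                for _ in range(20)
            )
            body = f"<html><body>{links}</body></html>".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()

A well-behaved crawler that honors robots.txt never sees this; the maze only costs the scrapers that ignore it.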
Astroturf - AI version. (Score:2)
So current AI training procedures - which amount to "read all the internet you can" - fall for astroturf campaigns. Why am I not surprised?