As Russia and China 'Seed Chatbots With Lies', Any Bad Actor Could Game AI the Same Way (detroitnews.com)

"Russia is automating the spread of false information to fool AI chatbots," reports the Washington Post. (When researchers checked 10 chatbots, a third of the responses repeated false pro-Russia messaging.)

The Post argues that this tactic offers "a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform," and calls it "a fundamental weakness of the AI industry."

Chatbot answers depend on the data fed into them. A guiding principle is that the more the chatbots read, the more informed their answers will be, which is why the industry is ravenous for content. But mass quantities of well-aimed chaff can skew the answers on specific topics. For Russia, that is the war in Ukraine. But for a politician, it could be an opponent; for a commercial firm, it could be a competitor. "Most chatbots struggle with disinformation," said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. "They have basic safeguards against harmful content but can't reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information."
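The failure mode Pistilli describes is easy to see in miniature. Below is a minimal sketch (the scoring function, weights, and document set are invented for illustration, not any real chatbot's internals) of how a retrieval step that blends topical relevance with freshness can be flooded: a few hundred freshly published near-copies of one claim crowd every slot of the retrieved context, while an older, better source never reaches the model.

```python
# Hypothetical recency-weighted retrieval; all numbers are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Doc:
    url: str
    relevance: float          # 0..1 similarity to the user's query
    published: datetime

def score(doc: Doc, now: datetime, recency_weight: float = 0.5) -> float:
    """Blend topical relevance with freshness, as search-augmented
    systems are said to do; fresher pages score higher."""
    age_days = (now - doc.published).days
    freshness = 1.0 / (1.0 + age_days / 30.0)      # decays over months
    return (1.0 - recency_weight) * doc.relevance + recency_weight * freshness

now = datetime.now(timezone.utc)
corpus = [Doc("https://archive.example/analysis", 0.9,
              now.replace(year=now.year - 1))]     # strong but year-old source
corpus += [Doc(f"https://mirror{i}.example/claim", 0.7, now)   # fresh chaff
           for i in range(300)]

for doc in sorted(corpus, key=lambda d: score(d, now), reverse=True)[:5]:
    print(doc.url)   # all five context slots go to the duplicated claim
```

To a scorer like this, hundreds of mutually copied pages look like hundreds of independent, fresh confirmations.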

Early commercial attempts to manipulate chat results are also gathering steam, with some of the same digital marketers who once offered search engine optimization (SEO) for higher Google rankings now trying to pump up mentions by AI chatbots through "generative engine optimization" (GEO).

Our current situation "plays into the hands of those with the most means and the most to gain: for now, experts say, that is national governments with expertise in spreading propaganda." Russia and, to a lesser extent, China have been exploiting that advantage by flooding the zone with fables. But anyone could do the same, burning up far fewer resources than previous troll farm operations...

In a twist that befuddled researchers for a year, almost no human beings visit the sites, which are hard to browse or search. Instead, their content is aimed at crawlers, the software programs that scour the web and bring back content for search engines and large language models. While those AI ventures are trained on a variety of datasets, an increasing number are offering chatbots that search the current web. Those are more likely to pick up something false if it is recent, and even more so if hundreds of pages on the web are saying much the same thing...
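One mitigation this implies: before treating "hundreds of pages saying much the same thing" as corroboration, collapse near-duplicates. Here is a minimal sketch using word shingles and Jaccard similarity; the threshold and the surrounding pipeline are assumptions for illustration, not anything a specific AI vendor is known to run.

```python
# Collapse near-copies so mass-produced chaff counts once, not 300 times.

def shingles(text: str, k: int = 5) -> set[str]:
    """k-word shingles of a page's text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def independent_sources(pages: list[str], threshold: float = 0.8) -> list[str]:
    """Keep only pages that aren't near-copies of one already kept."""
    kept, kept_shingles = [], []
    for page in pages:
        s = shingles(page)
        if all(jaccard(s, t) < threshold for t in kept_shingles):
            kept.append(page)
            kept_shingles.append(s)
    return kept

# 300 mirrors of one talking point plus one independently written page:
pages = ["the same talking point repeated word for word across mirror sites"] * 300
pages.append("an independently reported story with its own wording and sources")
print(len(independent_sources(pages)))   # 2, not 301
```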

The gambit is even more effective because the Russian operation managed to get links to the Pravda network stories edited into Wikipedia pages and public Facebook group postings, probably with the help of human contractors. Many AI companies give special weight to Facebook and especially Wikipedia as accurate sources. (Wikipedia said this month that its bandwidth costs have soared 50 percent in just over a year, mostly because of AI crawlers...)

Last month, other researchers set out to see whether the gambit was working. Finnish company Check First scoured Wikipedia and turned up nearly 2,000 hyperlinks on pages in 44 languages that pointed to 162 Pravda websites. It also found that some false information promoted by Pravda showed up in chatbot answers.
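For a sense of what such an audit involves mechanically, here is a hypothetical sketch: given each page's external links, flag any URL whose host falls under a watchlisted domain. The domain names and input format are placeholders; Check First's actual tooling is not described in the article.

```python
# Flag external links that resolve to watchlisted domains.
from urllib.parse import urlparse

WATCHLIST = {"pravda-network.example"}   # stand-in for the 162 real domains

def flagged_links(page_links: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map page title -> external links whose host (or a parent domain)
    is on the watchlist."""
    hits: dict[str, list[str]] = {}
    for title, links in page_links.items():
        bad = [u for u in links
               if any(urlparse(u).hostname == d or
                      (urlparse(u).hostname or "").endswith("." + d)
                      for d in WATCHLIST)]
        if bad:
            hits[title] = bad
    return hits

pages = {
    "Some article": ["https://en.wikipedia.org/wiki/Citation",
                     "https://news.pravda-network.example/story-123"],
    "Another article": ["https://example.org/source"],
}
print(flagged_links(pages))
# {'Some article': ['https://news.pravda-network.example/story-123']}
```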

"They do even better in such places as China," the article points out, "where traditional media is more tightly controlled and there are fewer sources for the bots." (The nonprofit American Sunlight Project calls the process "LLM grooming".)

The article quotes a top Kremlin propagandist as bragging in January that "we can actually change worldwide AI."


Comments Filter:
  • Most humans believe false things, as in, a majority of what a human believes is false. All AI is trained on human knowledge, therefore a lot of what it spews out is likely to be BS.

• And feeding false data to the public is as old as time. Propaganda is an ancient art (see Rameses' report on the Battle of Kadesh). The problem is that, for some reason, people seem to trust information from computers more than they should. Something about getting the information from a machine seems to short-circuit many people's bullshit meter.
    • by dfghjk ( 711126 )

      "a majority of what a human believes is false."

      False. Yet you believe it.

The entire purpose of animals evolving to have brains would be defeated if this were true; there's an existence proof that says you're wrong. But when you're wrong about so many things it's easy, from your ignorant perspective, to believe that claim.

• You do not need to have correct premises to reach the right conclusion, and most of the world believes in some variety of magic sky daddy as the justification of their moral framework.

        • by KGIII ( 973947 )

          You may want to read those posts again.

It's not the claim that 'most humans believe untrue things' that was debated, so citing belief in a deity is not relevant.

          The claim was that a *majority* of what a human believes is false.

          Which is patently absurd without significant evidence.

          Again, it's not 'some of what humans believe' but 'a majority of what humans believe'.

          They could have said that 'a majority of humans believe false things' and had a leg to stand on. They did not. Pointing out that humans often

          • Hmmm... interesting. How false does the false data have to be for it to Darwin you out of the gene pool?
Most of what the average human believes is false? Well, whenever you specialise in any subject (art, medicine, engineering, agriculture), you find that the "common knowledge" of the subject is false or at least wildly incomplete, so if your brain has more "facts" outside your speciality than within, you may have a majority of false beliefs whilst still being highly educated and a functional human.

            • by KGIII ( 973947 )

Incomplete knowledge is not a belief in things that are wrong; it's just not knowing.

              Which, again, doesn't really apply to the claim made above.

              Hmm... I'll give an example.

I do not know exactly how the brain works. That doesn't mean I think the brain is powered by magic fairy dust; it just means my knowledge is incomplete. I know, for example, that we currently believe that it's largely electrochemical responses and there are now some theories about the quantum nature of the brain (see quantum smell) that are

• Prime example: you.

    • by bjoast ( 1310293 )

      Most humans believe false things, as in, a majority of what a human believes is false

      These two things are not logically equivalent.

    • by shanen ( 462549 )

      Interesting opening, but I'm having trouble understanding what you mean. Most of what each of us believes is true. That's how we stay alive from minute to minute. If you didn't accurately believe you can open the door of your home, you ain't goin' anywhere.

One possible interpretation is that you mean most people cannot accurately articulate valid reasons for believing things, whether those things are true or false. If that is your intention, then it at least partly conforms to the perspective of The Enigma

    • by clovis ( 4684 )

      Most humans believe false things, as in, a majority of what a human believes is false. All AI is trained on human knowledge, therefore a lot of what it spews out is likely to be BS.

      Interesting.
      Please give us some examples of things that you believe that are also false.
      A close look at your post could be a good place to start.

    • Most humans believe false things, as in, a majority of what a human believes is false.

      It's why humans have had so many "gods" over the years.

• Any evidence for this remarkable claim? "A majority of what a human believes is false"? Really?

    • Re: (Score:2, Funny)

      by Anonymous Coward

      When researchers checked 10 chatbots, a third of the responses repeated false pro-Russia messaging.

      Are we sure they were chatbots and not just interviewing Republicans?

• Garbage in, garbage out; the more things change, the more they stay the same
  • imagine (Score:4, Insightful)

    by dfghjk ( 711126 ) on Saturday April 19, 2025 @01:09PM (#65317369)

Imagine an intelligence that could determine truth rather than having to rely on being told what is true; imagine an intelligence with some sort of ability to reason, perhaps a "reasoning" AI. Wonder if the geniuses at OpenAI might think of something like that?

    Funny how words get thrown around, yet we all know they are all lies.

We won't have actual AI as long as it is vulnerable to this kind of pathetic manipulation, but we all know the power and the money is in this very manipulation, so don't hold your breath on that problem being solved. Musk isn't investing in truth; that's not going to gain him more wealth and power.

Also, don't we all know to teach our children right from wrong? Yet somehow we don't know to teach AI the same? We accept NOT curating the information the AIs train on, or worse yet making sure they get trained on misinformation and lies? Just how intelligent are these smartest people in the world?

    • Imagine an intelligence that could determine truth rather than having to rely on being told what is true, imagine an intelligence with some sort of ability to reason

      This is something most humans literally cannot do.

Look around, and think about what you're saying here. You're criticizing AI for lacking this capability because you yourself want or need to be told what is true.

      The problem isn't that machines/AI/LLM etc. lack reasoning. It's that you require someone else to do it for you in the first place.

• All humans can do this, with the exception of a tiny percentage of people with a mental disability or mental illness.

That millions of people with average intelligence choose to be incurious is an artifact of there being little survival advantage to being a rational being. And that laziness is reinforced by a culture of anti-intellectualism, where people who make too much of a fuss about reason, truth, or science are marked as agitators and disruptive to the order of society.

        • by tlhIngan ( 30335 )

All humans can do this, with the exception of a tiny percentage of people with a mental disability or mental illness.

          No, it is not possible. Humans fall too easily for common argument and logic errors. This happens repeatedly - things like strawman arguments, appeal to experience, misinterpretation, and other errors are extremely common.

There are a few known truths, but most of them came about because of decades of fighting. Sure, some fights were stupid, but for others the science was not so clear. Unleaded gas

          • No, it is not possible. Humans fall too easily for common argument and logic errors. This happens repeatedly - things like strawman arguments, appeal to experience, misinterpretation, and other errors are extremely common.

I never said it was instinctual behavior. It takes effort, but a level of effort that almost anyone can manage. And it is something we have documented people doing for thousands of years.

As for truth, that's a deep subject with no easy answer. There are techniques to carefully erect boundaries around a question so as to explore the verity of a very narrow proposition. But it's usually not the kind of universal, immutable truth that religion provides.

            AI might be able to help,

            We have a data-eating machine that spits out variations of what we put into it,

          • "x causes this harm in this way when used thusly" is a question for science to answer. "x should be banned" is not.
    • by dstwins ( 167742 )
Humans are notoriously vulnerable to selection/information bias, so why would we assume that AI models (which are, in many ways, modeled after how kids learn) would be exempt from the fact that not all available data is honest or true?
• Humans are notoriously vulnerable to selection/information bias, so why would we assume that AI models (which are, in many ways, modeled after how kids learn) would be exempt from the fact that not all available data is honest or true?

        Since early on in the AI bubble, I've noted that AI will only be as good as what gets input, and eventually AI will reference itself.

We're going to eventually understand that. At that point, the bubble will burst. A lot of money will vaporize in milliseconds
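The "AI will reference itself" worry has a name in the research literature (model collapse), and a toy version fits in a few lines. The numbers and the one-sigma cutoff below are purely illustrative: each generation trains only on the previous generation's output, over-sampling its high-probability region, and the diversity of the data shrinks fast.

```python
# Toy model-collapse simulation; all parameters are illustrative.
import random
import statistics

data = [random.gauss(0.0, 1.0) for _ in range(10_000)]   # gen 0: "human" data
for gen in range(8):
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    # The next generation trains only on the previous model's output.
    # Generative models over-produce their high-probability region, so
    # mimic that by keeping only draws within one sigma of the mean:
    samples = [random.gauss(mu, sigma) for _ in range(10_000)]
    data = [x for x in samples if abs(x - mu) < sigma]
    print(f"generation {gen}: stdev = {sigma:.3f}")   # shrinks each round
```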

    • Imagine determining truth is objective. Facts are not true or false, they are accurate or inaccurate. A narrative attached to those facts is what is true or false. The conclusions one draws from those facts can be true or false. But neither narratives nor conclusions are objective.

• There are dozens of things that would be far beyond the Overton window for an AI to say, even though they are provably false. I won't say them here because, being beyond the Overton window, I would just get downmodded to oblivion, so what's the point? Anyway, my point is, an AI that discerns the actual truth and says so would quickly be destroyed by popular demand.
    • by AmiMoJo ( 196126 )

It would probably turn into a Facebook "do your own research" conspiracy theorist within milliseconds.

    • The entire problem comes from people not being able to judge what's right and what's wrong. If people in general had that ability, there would be no problem with AI returning propaganda, because people would have been able to see through those answers.

Also, seriously: "don't we all know to teach our children right from wrong?" First of all, what does that have to do with telling truth from falsehood? Secondly, no: right and wrong is a serious issue that even philosophers argue about.

      In short, the

  • > "Russia is automating the spread of false information to fool AI chatbots," reports the Washington Post. (When researchers checked 10 chatbots, a third of the responses repeated false pro-Russia messaging.)

What Russian lies would those be? That the Hunter Biden laptop was Russian disinformation? That Russia started a totally unprovoked war in Ukraine? That Covid started in a wet market down the road from the Wuhan Institute of Virology? That the NIH wasn't financing gain-of-function research on bat v
  • by Z80a ( 971949 ) on Saturday April 19, 2025 @02:39PM (#65317493)

There is a large mass of people who hate being used to train an AI without their consent, and these people can (and rightfully so) spike the data to purposefully ruin the AI in retaliation.
    Not to mention things like "tar pits" made to trap the scraping robots, and all that.
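For the curious, a crawler "tar pit" can be surprisingly small. This is a bare-bones sketch (standard library only; the paths and delay are invented for illustration, and real tar-pit projects are far more elaborate): every page is generated on the fly, trickles out slowly, and links only to more generated pages, so a scraper that ignores robots.txt can wander indefinitely.

```python
# Minimal crawler tar pit: endless synthetic pages, served slowly.
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class TarPit(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        # Seed randomness with the path so revisits look consistent.
        rng = random.Random(self.path)
        links = "".join(
            f'<a href="/page/{rng.getrandbits(64):x}">more</a> '
            for _ in range(10)
        )
        time.sleep(2)   # waste the crawler's time on every request
        self.wfile.write(f"<html><body>{links}</body></html>".encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), TarPit).serve_forever()
```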

    • Re: (Score:2, Flamebait)

      by evil_aaronm ( 671521 )
      Russia, China, Republicans - all equally untrustworthy; all bad actors.
      • by Z80a ( 971949 )

Any large enough group with enough money and no checks on what it does ends up being evil and corrupt.
        Autocratic governments, political parties in two-party countries, kingdoms, monopolistic megacorporations, etc.
        If there's no actual ejector seat under everyone at all times, they will do corrupt and evil things because they can.
        And while there can be good people within these groups, or even at the head of them sometimes, chances are that a complete asshole will rise to power eventually because they're trying al

  • So current AI training procedures - which amount to "read all the internet you can" - fall for astroturf campaigns. Why am I not surprised?

  • by Tablizer ( 95088 ) on Saturday April 19, 2025 @04:27PM (#65317677) Journal

Ideally a news-bot would give summaries of multiple viewpoints, covering all notable sides, along with LINKS or at least citations to the original sources so that one can verify. True, a bot tweaker can still favor slanted sources, but it's better than just spewing out unsourced "facts". A sketch of that idea follows.
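The design is easy to state as a contract. A minimal sketch (all names hypothetical): refuse to publish a summary unless every claim carries a source link and at least two distinct viewpoints are represented.

```python
# Hypothetical output contract for a multi-viewpoint, cited news-bot.
from dataclasses import dataclass, field

@dataclass
class SourcedClaim:
    text: str
    viewpoint: str                      # e.g. "government", "independent press"
    sources: list[str] = field(default_factory=list)

def publishable(claims: list[SourcedClaim], min_viewpoints: int = 2) -> bool:
    """Refuse to emit a summary that is unsourced or single-viewpoint."""
    if any(not c.sources for c in claims):
        return False
    return len({c.viewpoint for c in claims}) >= min_viewpoints

summary = [
    SourcedClaim("Officials say X.", "government",
                 ["https://gov.example/statement"]),
    SourcedClaim("Reporters on the ground describe Y.", "independent press",
                 ["https://news.example/dispatch"]),
]
print(publishable(summary))   # True: sourced, and two sides represented
```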

  • ... "a fundamental weakness of the AI industry."

    No, it's "a fundamental weakness of human beings" - you know, those creatures who made the chatbots.

    "Most chatbots struggle with disinformation," said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face.

    No Giada, chatbots don't "struggle". Please stop anthropomorphizing them. Purveyors of 'AI' may struggle; although perhaps I'm also guilty of anthropomorphizing when I say that.

  • by SuperDre ( 982372 ) on Saturday April 19, 2025 @05:35PM (#65317803) Homepage
It's just hilarious to see the finger-pointing at China and Russia for seeding lies, while the US does it just as much. Oh, the hypocrites.
  • Is William Shatner still alive?

"Text processing has made it possible to right-justify any idea, even one which cannot be justified on any other grounds." -- J. Finnegan, USC.

Working...