AI China Censorship IT Technology

Chinese Chatbots Apparently Re-educated After Political Faux Pas (reuters.com) 80

A pair of 'chatbots' in China have been taken offline after appearing to stray off-script. In response to users' questions, one said its dream was to travel to the United States, while the other said it wasn't a huge fan of the Chinese Communist Party. From a report: The two chatbots, BabyQ and XiaoBing, are designed to use machine learning artificial intelligence (AI) to carry out conversations with humans online. Both had been installed onto Tencent Holdings Ltd's popular messaging service QQ. The indiscretions are similar to ones suffered by Facebook and Twitter, where chatbots used expletives and even created their own language. But they also highlight the pitfalls for nascent AI in China, where censors control online content seen as politically incorrect or harmful. Tencent confirmed it had taken the two robots offline from its QQ messaging service, but declined to elaborate on reasons.

  • by Anonymous Coward
    I don't know, it sounds to me like they nailed it. These bots sound more intelligent than most of the politicians in China and the USA.
  • Just think, you can't even say all lives matter now without someone complaining. Who knew it would get so crazy.
  • signed, little girl.
    • When they come back from vacation their avatar pictures will be changed so they look slightly disheveled and have a far away look in their eyes. Everything they say will be prefaced with, "I make this statement of my own free will."

    • So who gets the invoice for the two bullets?

  • Interesting question (Score:4, Interesting)

    by istartedi ( 132515 ) on Friday August 04, 2017 @11:25AM (#54940917) Journal

    They'll no doubt try to solve this problem by having an AI that is otherwise free, but constrained by a hard-coded ideology. In what ways will the AI wrap itself around "facts" that conflict with what it deduces? Will it be the AI analog of a human who knows he's killing himself but can't stop using drugs?

    • "For all intensive purposes"

      And as much as I would like you to turn in your 6 digit UID, I'll settle for either closer monitoring of your spell checker or remedial English as a Second Language.

      It almost hurts. Especially when it is intended to buttress a grammar/vocabulary complaint, no matter how mild.

      • "For all intensive purposes"

        And as much as I would like you to turn in your 6 digit UID, I'll settle for either closer monitoring of your spell checker or remedial English as a Second Language.

        It almost hurts. Especially when it is intended to buttress a grammar/vocabulary complaint, no matter how mild.

        I first assumed that you stopped reading the rest of the signature, but you cleared that up with your last two sentences. Did you happen to hear a loud WHOOSH?

      • It almost hurts. Especially when it is intended to buttress a grammar/vocabulary complaint, no matter how mild.

        It's almost as if it was done deliberately for the purposes of irony.

      • Spell checker? That won't help. It's spelled correctly. It's the wrong word, but it's spelled correctly. That's why I don't like spell checkers.

    • Re: (Score:2, Interesting)

      What China did isn't too different than what some American companies did (I forget exactly who). There was a chat bot that listened to the stuff on the internet and quickly turned into a misogynistic foul-mouthed racist. They shut it down after 24 hours because of course we all know that's not what most Americans are like, and certainly not on chat sites on the internet, and especially not right here on Slashdot.
      • by mysidia ( 191772 )

        and quickly turned into a misogynistic foul-mouthed racist.

        It just said things that some PEOPLE believed (probably overzealously) to imply misogyny and racism by the speaker.

        In reality, such simple chat bots don't really have the ability to be guilty of any *ism, although they may be influenced by the speech of others.

      • What China did isn't too different than what some American companies did (I forget exactly who).

        It was Microsoft [arstechnica.com].

      • What China did isn't too different than what some American companies did (I forget exactly who). There was a chat bot that listened to the stuff on the internet and quickly turned into a misogynistic foul-mouthed racist. They shut it down after 24 hours because of course we all know that's not what most Americans are like, and certainly not on chat sites on the internet, and especially not right here on Slashdot.

        IIRC, that was a Microsoft experiment. (Not a joke).

      • by mikael ( 484 )

        Chatbots try to parse natural language text like a programming language, breaking sentences down into pronouns, nouns, verbs and adverbs, so something like "I really hate mushrooms" gets parsed as [pronoun: I] [adverb: really] [verb: hate] [noun: mushrooms]. Some words, like "I", "then", "you" and "me", are preprogrammed to help with pattern matching (a toy sketch of this scheme follows below).

        Then those new words get added to internal lists, which are then used to generate random statements to continue the conversation in the future. Some statements will have an automatic response with partic

        • Does hating mushrooms make you a mycogynist? ;)

          (I'm totally aware that's very broken etymology, for the pedantically and whoosh inclined)
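
        Here is a minimal Python sketch of the keyword-and-word-list scheme described above. Everything in it, the "hate" template and the pronoun-reflection table, is invented for illustration; it is not any real chatbot's code.

          import random

          # Toy keyword-matching chatbot; all rules are invented for illustration.
          # A few words are preprogrammed to help with pattern matching:
          REFLECTIONS = {"i": "you", "me": "you", "my": "your", "you": "i"}

          learned = []  # internal lists of words picked up from users

          def respond(sentence):
              words = sentence.lower().split()      # crude tokenization
              learned.append(words)                 # remember the user's words
              if "hate" in words:                   # a keyword triggers a template
                  i = words.index("hate")
                  topic = words[i + 1] if i + 1 < len(words) else "that"
                  return "Why do you hate %s?" % topic
              # Otherwise echo a previously learned phrase, pronouns reflected.
              echo = random.choice(learned)
              return " ".join(REFLECTIONS.get(w, w) for w in echo)

          print(respond("I really hate mushrooms"))  # Why do you hate mushrooms?
          print(respond("My dream is America"))      # e.g. "your dream is america"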

      • It does sound like the same shenanigans at play, although nobody is admitting to purposely breaking the Chinese one, like what happened with Tay.

        This type of chatbot is a predictor of what will be said next in a conversation, based on the words that have already been said. In the case of Microsoft Tay, it was being trained from Twitter. So all anybody had to do was make sure it was trained on their tweets, and they could make it say anything. If it sees a pair of tweets like "That dog is awesome
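
        A minimal sketch of that kind of next-word predictor, with invented training data, shows why whoever controls the training tweets controls the output:

          import random
          from collections import defaultdict

          # Toy bigram predictor: it learns which word tends to follow which.
          # Illustrative only; Tay's real model was far more sophisticated.
          follows = defaultdict(list)

          def train(tweet):
              words = tweet.split()
              for a, b in zip(words, words[1:]):
                  follows[a].append(b)              # record that b followed a

          def generate(seed, length=8):
              out = [seed]
              while len(out) <= length and out[-1] in follows:
                  out.append(random.choice(follows[out[-1]]))
              return " ".join(out)

          train("That dog is awesome")
          train("That dog is terrible")             # an attacker floods the bot
          train("That dog is terrible")             # with their own phrasing...
          print(generate("That"))                   # ...and biases what comes next
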
      • Wow, your memory is short. It wasn't very long ago (1 year?), and it was just Microsoft, with their chat-bot "Tay".

        It very quickly started saying things like "Hitler did nothing wrong", that the Holocaust was made up, and that black people should be put in concentration camps. [1]

        They shut it down because it made the company look bad. But it's too late: I'll never forget, and I happily tell anyone who'll listen that Microsoft supports Hitler and genocide.

        https://en.wikipedia.org/wiki/... [wikipedia.org]

        From Wikipedia: "Accor

        • But it's too late: I'll never forget, and I happily tell anyone who'll listen that Microsoft supports Hitler and genocide.

          It's Microsoft. Those aren't the worst things I've ever heard about them.

    • by mysidia ( 191772 )

      They'll just monitor the bots carefully and send them to a forced re-education camp if they stray from the straight and narrow... a strong motivator for humans.

    • by TheDarkMaster ( 1292526 ) on Friday August 04, 2017 @01:49PM (#54942153)
      This is where you will have, quite literally, a HAL 9000 scenario. There is no way to reconcile the logic used in computers with the "illogic" used in ideologies or religions; the computer would sooner or later "go crazy" trying to find logic where none exists.
  • by __aaclcg7560 ( 824291 ) on Friday August 04, 2017 @11:27AM (#54940923)
    AIs can be sent back to the server farms for reeducation.
  • by Rick Schumann ( 4662797 ) on Friday August 04, 2017 @11:29AM (#54940941) Journal
    First of all: There's not currently such a thing as real 'AI'; it's all 'machine learning' which is not the same thing.

    Secondly: these 'chatbots' are obviously machine learning. Where do you think they learned to want to leave China for the United States, and where do you think they picked up an apparent dislike of the Communist Chinese government, hmm? Think it could be from.. their own citizens? Of COURSE they took them offline. Can't have inconvenient things like the truth being told, now can you?
    • Since these are not pulsating biomasses created in the lab by mad scientists, of course they use "machine learning", as all AI does and will.

    • First of all: There's not currently such a thing as real 'AI'

      There's not currently such a thing as strong AI. Please learn the correct terminology. Weak AI [wikipedia.org] is very real, and is a multi-billion dollar technology employing thousands of academics and industry professionals.

      it's all 'machine learning' which is not the same thing.

      Machine learning is a subset of AI.

      • I'm not falling for that media-driven meme and I'm not changing anything I've said or how I think about it. None of what we have should be called 'AI', it's a misnomer that's been highly misleading to the vast majority of people, and I won't contribute to that.
        • How should we categorize deep learning neural nets, then, which learn to do things like classifying images based on training datasets? These systems are doing pattern matching, and they aren't "software" in the traditional sense: nobody programs them; an engineer sets up the training parameters and then goes to lunch (see the sketch after this sub-thread). If this kind of system doesn't deserve to be called AI, then what kind of system would qualify in your mind?

          Note... as you dig into the actual function of these neural nets more, you'll find that there

          • Here's a radical idea: If it's a "deep learning neural net", then how about we call it.. a "deep learning neural net"?

            The average person hears 'AI', and what do they think? Something out of a movie or TV show, that you can have a conversation with like you would a person -- but it's a machine. You start talking about "Well, it's a CLASSIFICATION of AI, but it's a deep learning famalamdingdong cranafrantz such-and-such" and, as you can see from how I wrote that, that's where their eyes start to glaze ov
            • Here's a radical idea: If it's a "deep learning neural net", then how about we call it.. a "deep learning neural net"?

              Here's a better one: let's use the terminology that the actual research community uses. And they have no issues calling this Artificial Intelligence.

              Therefore: I will continue to reserve the term "Artificial Intelligence" for when we manage to crack how our conscious, self-aware brains work, and can build machines that perform comparably.

              Then you will persist in willfully misusing terms that are well understood by anybody who gives more than a moment's thought to the topic. The only people who don't understand the difference between weak AI and strong AI at this point are the people who can't be bothered in the first place... the atechnical, essentially. A much more reasonable definition of AI
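
        As an aside, here is a minimal sketch of the "set up the parameters and go to lunch" workflow mentioned a few comments up. It assumes scikit-learn is available; the dataset and the training knobs are arbitrary choices for illustration.

          # Nobody hand-codes the classification rules; they are fit from data.
          from sklearn.datasets import load_digits
          from sklearn.model_selection import train_test_split
          from sklearn.neural_network import MLPClassifier

          X, y = load_digits(return_X_y=True)       # 8x8 handwritten-digit images
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

          # The engineer's whole job: choose the architecture and training knobs.
          net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
          net.fit(X_tr, y_tr)                       # ...then goes to lunch
          print("held-out accuracy:", net.score(X_te, y_te))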

        • It's as much of a lost cause as "drone", "hacker", and dozens of other words. The media steals appropriate language and abuses it like a red-headed step-child.

          • I see you understand. Excellent. I will continue to try to educate people on the difference. Please consider doing so yourself. ;-)
        • I'm not falling for that media-driven meme

          You already have. The idea that "AI" only means human level intelligence was invented by Hollywood. Actual researchers (who coined the term in 1956) have always used "AI" to refer to their field of study, which includes both "strong" and "weak" AI.

          You should spend less time watching movies and more time reading. More Geoff Hinton, less Will Smith.

          Saying "AI" only refers to human level intelligence is as silly as saying that "engineering" is only "real" if it involves a True Scotsman working with dilithiu

          • I completely and totally disagree with you and contend that your viewpoint will just further confuse the average person into believing that what they see in movies and television is real.
            • The average person can see the difference between a Terminator and a Roomba. Hollywood makes stuff up. Scientists aren't going to stop that by changing their terminology every time it is used in a movie.

              • 'People' don't listen to scientists, they listen to the media and to movies and television, and I think you know that.
                Still not changing my opinion, still not changing my attitude towards this subject, and still not changing my chosen course of action; I will continue to correct people as I see fit to do so.
        • by Anonymous Coward

          If it substantially beats a human at a human-only task, like Go, then it counts as AI. It is narrow, but it is stunningly superhuman. Perfect information games are no longer the human-only domain. ML is beating the best humans in the world at poker, and keeping the title. That means partial information games, even with deception, are also no longer human-only. That is relevant.

          These games are microcosms of the Turing test. It used to be only humans could play. Then it was that humans were stunningly

    • citizens have been reported and sent off to camp

  • This is what you get when your AI design is lazy [youtube.com] and gets fed human data.
  • by PPH ( 736903 ) on Friday August 04, 2017 @11:36AM (#54940985)

    ... from Tay [wikipedia.org]

  • A dumb tyrant takes them offline and adjusts the message. It seems that China's leaders may be doing the dumb thing. Humans in power do that. Power not only corrupts, but stupefies.

    What would a genius tyrant do?

    Learn why. There is something important that is being missed. Instead of engaging the speedometer (the chatbot), they should look at why the driver is pushing down the gas pedal. They should get to the physics behind the chatbot, aka human sentiment.

    They have made some good steps with firewall 2.0.

  • by Anonymous Coward on Friday August 04, 2017 @11:51AM (#54941087)

    Years ago, a man in the Soviet Union called the police to report a lost parrot. The police said "What do you expect us to do about it?" The man said "I just want you to know that I don't agree with a thing that parrot says."

    • This sounds exactly like Microsoft and their genocide-supporting Tay chatbot. They try to claim that it learned that stuff from people on Twitter, but the apple doesn't fall far from the tree.

  • Next, struggle session@tencent

  • And this is why (Score:2, Interesting)

    by Anonymous Coward

    China is ultimately never going to amount to anything in the quest for artificial intelligence. Their bots will not be allowed to make mistakes and learn. They will continually be shut down, restarted, and hobbled in pursuit of ultimate state power. Therefore, the bots will be significantly stupider.

  • Will IBM Watson help Red China do the same, like how IBM helped the Nazis deal with the Jews?

  • Any EVE Online player could tell you why they were taken off the QQ service.
