
Air Canada Found Liable For Chatbot's Bad Advice On Plane Tickets

An anonymous reader quotes a report from CBC.ca: Air Canada has been ordered to pay compensation to a grieving grandchild who claimed they were misled into purchasing full-price flight tickets by an ill-informed chatbot. In an argument that appeared to flabbergast a small claims adjudicator in British Columbia, the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was "a separate legal entity that is responsible for its own actions."

"This is a remarkable submission," Civil Resolution Tribunal (CRT) member Christopher Rivers wrote. "While a chatbot has an interactive component, it is still just a part of Air Canada's website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot." In a decision released this week, Rivers ordered Air Canada to pay Jake Moffatt $812 to cover the difference between the airline's bereavement rates and the $1,630.36 they paid for full-price tickets to and from Toronto bought after their grandmother died.

Comments:
  • by ArchieBunker ( 132337 ) on Friday February 16, 2024 @12:08AM (#64243726)

    Wonder how much Air Canada paid in attorney fees to avoid an $812 refund?

    • Re:Attorney fees (Score:5, Informative)

      by ShanghaiBill ( 739463 ) on Friday February 16, 2024 @12:20AM (#64243744)

      Wonder how much Air Canada paid in attorney fees to avoid an $812 refund?

      That's not the point. They didn't want to set a precedent for all the bad advice the chatbot gives in the future.

      • Re:Attorney fees (Score:5, Insightful)

        by penguinoid ( 724646 ) on Friday February 16, 2024 @02:21AM (#64243848) Homepage Journal

        That's not the point. They didn't want to set a precedent for all the profitable advice the chatbot gives in the future.

        • by sjames ( 1099 )

          Nor for the effects of going cheap on labor. They want to hire the cheapest idiot in the room (an AI) rather than pay properly for an employee who would know better.

        • by ceoyoyo ( 59147 )

          If you don't want to set a precedent, going to court is the worst thing you could do. This wasn't quite a court, so the precedent isn't binding, but it's a lot closer than giving someone you just ripped off the bereavement rate and apologizing.

      • Re: (Score:3, Insightful)

        That's not the point. They didn't want to set a precedent for all the bad advice the chatbot gives in the future.

        They also could have done that by refunding the difference before getting sued.

      • Whatever legal precedent is attached to a small claims court ruling is so slight as to be, for all practical purposes, non-existent.

        Companies uniformly fight all small claims cases without so much as glancing at the specifics beforehand, because the individual cases aren't worth the corporate time it'd take to come to individual go/no-go decisions on fighting each one, and not fighting small claims at all is more expensive (both in the form of the default judgements and in the form of reputation fallout).

        • by sjames ( 1099 )

          Small claims, yet, here we are reading about it on /.

          Now the whole world has seen their frankly idiotic argument that I would have been able to shoot down at age 13 just from idly reading my dad's law books.

      • by whitroth ( 9367 )

        You've got that right. No big company is willing to admit wrongdoing, even when they did it.

        Like the two years, and 33% of the money to the lawyer, I spent to get 98% of my late wife's life insurance, when *they* had outsourced processing of benefits documents, and two years in a row screwed it up. And they paid for an outside lawyer all that time, so you *know* they paid much more than they would have if they'd just said "oops".

      • Or that it gave in the past.

        They probably know of some real howlers, and don't want the courts to find out about them.

    • Re:Attorney fees (Score:5, Informative)

      by dskoll ( 99328 ) on Friday February 16, 2024 @12:46AM (#64243758) Homepage

      Air Canada is a shitty company that will do anything it can to avoid compensating customers, including trying to persuade customers to accept much less [www.cbc.ca] or taking them to court [nationalpost.com] after a CTA ruling.

    • Re:Attorney fees (Score:5, Informative)

      by CrappySnackPlane ( 7852536 ) on Friday February 16, 2024 @12:47AM (#64243760)

      Wonder how much Air Canada paid in attorney fees to avoid an $812 refund?

      From TFCourt-Document:

      5. Mr. Moffat is self-represented. Air Canada is represented by an employee.

      This was a small claims court, and reading through the finding, it looks like they just sent some middle manager armed with a boilerplate response, which most large companies have on hand for dealing with small claims cases. Boilerplate responses are mostly intended to be auto-fire responses that will suffice for 80% of the complete bullshit "you put ketchup on my Big Mac when I asked for no ketchup, I demand $1,000 restitution for emotional trauma" claims that come up, while not being so all-encompassing as to become tl;dr for the small claims judges. The middle manager is mostly there as a warm body and to answer any questions the judge might ask - being that this is small claims, the judge's questions are plain English sort of stuff, basically Judge Judy without the sass.

      Unfortunately for Air Canada, their boilerplate response is evidently somewhat out of date, and geared more towards situations where a customer gets crappy/unclear advice from a live agent:

      27. Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions.

      So, their thoughtless boilerplate creates an incongruity in this particular situation, which the judge had a bit of fun with. It's abundantly clear that Air Canada wasn't really consciously making the argument, it was just a byproduct of their lazy approach to small claims cases.

      • Someone got $1000 for a bad burger? Fuck, I only got $970....

      • To be fair, from my understanding of small claims court, the company treating it too seriously would be counter-indicated.
        1. A company sending an actual lawyer is frowned upon; it's viewed as unfair, when the whole intent is avoiding the cost of lawyers. It's one thing if, say, you're suing your neighbor and he happens to be a lawyer, or if you're suing Dewey, Cheetum, and Howe, and Howe happened to be the one involved in the event you're suing over, but if you're, say, an apartment rental company and you

        • Correct, as alluded to by "while not being so all-encompassing as to become tl;dr for the small claims judges".

        • by nazrhyn ( 906126 )
          In case you're interested: while I can find results for a search for "counter-indicated", I have to add the quotes to keep it from being corrected to the word I think you meant to use, which is "contraindicated". Just pointing it out in case you didn't know.
          • Eh, it's slashdot, I can't edit my posts, the meaning got across, and I can't see grammar errors without walking away for a while because otherwise my brain just papers over them, so it'd take too long. So the occasional mistake slips through.

            • by nazrhyn ( 906126 )
              Yeah no problem. Lots of ESL writers here, so there was a chance the information would be useful.
      • I don't know how small claims works in Canada, but in at least some countries you're not allowed to send lawyers into a hearing - the point of small claims is to let the little guy have a say, not to provide an alternative forum for lawyering someone to death.
      • Eh, your reasoning seems to hold water, but I can't help but remember all the times people posting here have made basically the same exact argument in regards to chatbots violating copyright laws. This was even happening here just last night. Strangely those guys are all oddly silent on this one...

      • by sjames ( 1099 )

        The problem is, they ARE legally responsible for what their employees say and do as part of their employment and they should know that.

        Otherwise they could go into a bar next to an AA meeting, hire the first drunk guy who says he can fly a plane and then disclaim responsibility for the inevitable crash just by saying that's all on the guy that crashed the plane.

    • Probably zero. For small claims as well as risk management and compliance reasons companies normally have legal teams on staff.

    • by nasch ( 598556 )

      A party cannot be represented by a lawyer in small claims court. So they may have paid a bit for advice, but probably nothing exorbitant.

    • The chatbot is a separate legal entity and Air Canada should sue it to recoup the attorney fees.

  • Good (Score:5, Insightful)

    by gweihir ( 88907 ) on Friday February 16, 2024 @01:18AM (#64243788)

    They deserve that. And it is another nail in the coffin of the myth that "AI" is more than just software running on a computer. No idea why all these morons think AI is somehow an "entity". It clearly is not. Now, AGI may be a different question, but nobody (besides the physicalist fanatics) knows whether that is even possible to build.

    • Re:Good (Score:4, Insightful)

      by ArsenneLupin ( 766289 ) on Friday February 16, 2024 @04:30AM (#64243926)

      No idea why all these morons think AI is somehow an "entity". It clearly is not.

      And even if it was sentient, it was clearly working for the airline. So the airline should assume responsibility for its bad advice, the same way they would (should) if one of their live agents gave bad advice.

    • I wouldn't rate this as a nail, given that it was a small claims court. More like a staple. A cheap lightweight one.

    • by nasch ( 598556 )

      It depends what you mean by "possible". There is certainly no physical law preventing construction of AGI; humans are proof of that. Whether humans will actually succeed in constructing one is a different question. To me it seems inevitable. Either AI will continue improving indefinitely, or it will stop improving at some point short of AGI, and I cannot see any reason why the latter would occur.

      • by gweihir ( 88907 )

        You are just a physicalist fanatic and do not understand Science. Humans are not proof of anything, because it is not understood how a human mind works. _That_ is the actual Science here.

        • by nasch ( 598556 )

          it is not understood how a human mind works.

          As long as it's agreed that humans are a form of natural general intelligence, then it doesn't matter. They're proof that general intelligence is possible, and from a physics perspective, there's no difference between natural and artificial. We may end up creating AGI without ever proving how a mind works.

          • by gweihir ( 88907 )

            That is not understood until we know how it works. (I will gloss over your use of "natural" in an undefined, dishonest and manipulative way.) Learn how Science actually works. Arguments by elimination (which you essentially just used in the form of "What else could it be?") only work if you have a complete and totally accurate description of the system you are arguing about. We do not have that for physical reality. Stop making the same stupid mistakes the religious fuckups are using to "prove" the existence of God.

    • by Tablizer ( 95088 )

      > myth that "AI" is more than just software running on a computer. No idea why all these morons think AI is somehow an "entity".

      I've worked with people whose sentience status I question. They seem mechanically driven by a few simple motivations, and only "reason" via simplistic canned slogans they've picked up at troll-sites. Vapid bastards and bastarettes.

      • by gweihir ( 88907 )

        I do not doubt that. The thing is, however, that humans get sentience status by default, because anything else has been abused far too much in the past.

        That the average person is basically only in possession of minuscule amounts of general intelligence and capability for insight is also not in dispute. And then you have those less capable than average. Hence the average person basically understands almost nothing and many really understand nothing. I remember a number that says that only roughly 20% of all pe

  • You mean Jake? Are we at the point where an obviously male name should be referred to with an ambiguous pronoun? Jake is a dude's name, unless otherwise specified. Is Jake multiple people? Otherwise, it is *incorrect* to use "their" to refer to Jake here.

    • I can assure you, the only people who truly care about such things are people who think some invisible (but tasty) dictator in space is calling all the shots.

      • by ceoyoyo ( 59147 )

        I don't think the FSM gives a shit. Do you mean Jesus? Because I've eaten a chunk myself and it was unpleasantly dry. The blood was okay, but nothing special.

    • Re: (Score:2, Funny)

      by freeze128 ( 544774 )
      Jake from State Farm? She sounds hideous!
    • Re: (Score:2, Troll)

      by itsme1234 ( 199680 )

      In short: yes, we're beyond that point, and this thing is out of control. Jake is "they" until otherwise explicitly specified. I've just received an email from Greg who needs to put in a different color in his signature that he's "he/him". But by default he would be "they". What to do when you want to use the "real"/multiple "they"? There's no more "they helped me" when multiple, you either enumerate the persons you mean or you say "Team X helped" or similar!

      Obligatory Larry David: https://www.youtube.com/w [youtube.com]

    • What's the matter, snowflake?

      Do you address Nikki Haley by her given pronouns? Her birth name is Nimarata Nikki Randhawa.

    • by beezly ( 197427 )

      English lacks a specific common-gender third-person singular pronoun and there are examples of "their", "they", "them" and "themselves" being used as singular third-person pronouns going back hundreds of years. It is certainly not incorrect.

      • by Anonymous Coward

        Someone knocks at the door. You ask "Who is it?". Someone calls. You're told the phone is for you. You ask "Who is it?"

        You do not think the entity knocking or calling is a dog, a chair, or an alien from outer space. You use 'it' to mean a 'human of unknown sex, age, defined characteristics'. "It" is the non-gender pronoun.

        All three words used to have an apostrophe, but it disappeared over time, and to reduce confusion:

        He's -> Hes/His -> male possessive
        Her's -> Hers -> female possessive
        It's -> Its -> neuter possessive

        • by dskoll ( 99328 )

          "It" is not used to refer to a person because it has the connotation of referring to an inanimate object or, if an animate object, at most a non-human animal. Hence the singular "they" was co-opted to refer to a human in a non-gender-specific way.

          Language is not logical. If you don't like singular "they", you are free to file your complaint in triplicate to the Board that Oversees English.

        • Do you need a therapy dog to help with your triggered state?

  • by adrn01 ( 103810 ) on Friday February 16, 2024 @01:44AM (#64243810)
    Unless they actually were claiming that the chatbot belonged to another corporate entity, this might be the first case where an actual legal claim was made of a chatbot having legal personhood. Me thinks any programmers getting anywhere near either an AI or a chatbot need to have thought, very very carefully, about what happens if that bot or AI does something bad. What, exactly, in their employment terms, is their legal exposure?
    • by Tablizer ( 95088 ) on Friday February 16, 2024 @02:30AM (#64243850) Journal

      the airline attempted to distance itself from its own chatbot's bad advice by claiming the online tool was "a separate legal entity that is responsible for its own actions."

      What if the airline hired a human guide who lied to customers? I don't know Canadian law, but it seems the airline would likely still be responsible.

      Otherwise, the bot could lie about prices and nobody could be punished in practice. There'd be no incentive against activating a lie-bot.

      "It's Bender's fault, we didn't know he was a liar when we hired him."

      • by Blymie ( 231220 )

        Indeed. You are responsible for the actions of your representative. In this case, a refund without any excessive damages.

        However! If you *know* your representative is acting incorrectly, and do nothing, and the behaviour continues, then it can move into punitive damages! If this chatbot is on their website, and they have not taken steps to quickly correct the situation, they could be in line for 10x or more in damages.

        • by Tablizer ( 95088 )

          So let's see if I'm interpreting this correctly. A company is only liable for the actual customer losses of a rogue employee (representative), UNLESS they A) didn't do sufficient background checks before hiring sensitive positions*, and/or B) ignored warning signs during employment, and/or C) encouraged the employee to lie/misbehave.

          If A, B, and/or C, the company is also subject to punitive damages, not just customer compensation costs.

          * Probably wouldn't be at issue here.

    • by Jiro ( 131519 )

      They didn't claim that the chatbot is a person. They claimed that they weren't responsible for what their agents say. The judge then inferred that that meant they were calling the chatbot a person.

      It's a fair inference under the circumstances, but they didn't actually say it outright.

      • They claimed that they weren't responsible for what their agents say.

        The only legal rationale for this claim is that agents are individual legal entities. Since Air Canada failed to offer any legal rationale for why they cannot be held responsible, the only possible argument they are making is that "the chatbot is a separate legal entity that is responsible for its own actions." There are no other logical alternatives.

        The judge then inferred that that meant they were calling the chatbot a person.

        Just because they didn't say it didn't mean they didn't imply it. Likewise, if you label someone a "bastard child" then you are implying their mother became pregnant out of wedlock.

  • 1. Most airlines discontinued bereavement fares. 2. You know shady shit is going on when there is an industry of obscurity, intermediaries, and widely-varying prices for the same ticket on the same flight.
  • by Bruce66423 ( 1678196 ) on Friday February 16, 2024 @05:42AM (#64244022)

    In this case, they were trying to establish the idea that the AI is not the responsibility of the site owner. They got smacked down, but don't be surprised they tried it...

  • by misnohmer ( 1636461 ) on Friday February 16, 2024 @08:16AM (#64244198)
    If Air Canada's argument were to hold up in court, this would be a great precedent to take advantage of. Someone would start offering a downloadable AI which would order AC tickets online using your credit card, then give them to you in exchange for your time filling out a survey. After using those tickets, you claim fraud on the credit card, the AI is charged with fraud, it turns out the AI died of natural causes (it exited after a period of inactivity), Air Canada gets a chargeback, case closed. Next trip, a new instance of the AI is born, but as per Air Canada's argument, just because you spawned it and gave it your credit card information and where you'd like to fly to doesn't make you liable for what the AI does with this information.
  • If the advice can be programmed to really screw over the customer, without any remorse, bazinga, fantastic. Sociopath "employees" for sociopath CEOs.
  • AI is really the greatest tool for the lazy person or company. What does general AI really save time in doing? At best, it can give accurate, easy-to-find, quotable advice, and at worst, well, just crack the knuckles and get ready to laugh. Companies take a giant risk in letting AI do anything, because if that AI gets it wrong, you're liable. This case just confirms that if your AI does something stupid, you're at fault, which is what we all assumed; now it's proven.

    Remember that truck which got sold for $
