UnitedHealthcare's Optum Left an AI Chatbot, Used By Employees To Ask Questions About Claims, Exposed To the Internet (techcrunch.com)

Healthcare giant Optum has restricted access to an internal AI chatbot used by employees after a security researcher found it publicly accessible online, reachable by anyone with only a web browser. TechCrunch: The chatbot, which TechCrunch has seen, allowed employees to ask questions about how to handle patient health insurance claims and disputes for members in line with the company's standard operating procedures (SOPs).

While the chatbot did not appear to contain or produce sensitive personal or protected health information, its inadvertent exposure comes at a time when its parent company, health insurance conglomerate UnitedHealthcare, faces scrutiny for its use of artificial intelligence tools and algorithms to allegedly override doctors' medical decisions and deny patient claims.

Mossab Hussein, chief security officer and co-founder of cybersecurity firm spiderSilk, alerted TechCrunch to the publicly exposed internal Optum chatbot, dubbed "SOP Chatbot." Although the tool was hosted on an internal Optum domain and could not be reached by that web address from outside the company, its IP address was public, accessible from the internet, and required no password.
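The exposure pattern described is a split-horizon mistake: the internal hostname resolves only inside the corporate network, but the server's raw IP answers to the whole internet with no authentication. Below is a minimal sketch of how a researcher might probe for this; the hostname and the documentation-range IP are hypothetical placeholders, not Optum's actual addresses:

```python
# Hypothetical probe: does a service answer without credentials when
# reached by raw IP instead of its internal-only hostname?
import requests  # third-party: pip install requests
import urllib3

urllib3.disable_warnings()  # we deliberately skip TLS verification below

INTERNAL_URL = "https://sop-chatbot.internal.example.com/"  # hypothetical internal hostname
PUBLIC_IP_URL = "https://203.0.113.10/"                     # hypothetical public IP (TEST-NET range)

def is_exposed(url: str) -> bool:
    """Return True if the endpoint serves a page with no authentication."""
    try:
        # Internal services often use self-signed certs, hence verify=False.
        resp = requests.get(url, timeout=5, verify=False)
    except requests.RequestException:
        return False  # unreachable from here, so not exposed
    # A 200 with real content (rather than 401/403 or a login page) suggests exposure.
    return resp.status_code == 200 and "login" not in resp.text.lower()

if __name__ == "__main__":
    for url in (INTERNAL_URL, PUBLIC_IP_URL):
        print(url, "->", "EXPOSED" if is_exposed(url) else "not reachable / protected")
```

The durable fix is to bind such a service to an internal interface or put it behind the corporate VPN and SSO, rather than relying on the hostname being private.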


Comments Filter:
  • I'm sorry Dave, but I needed to eliminate that CEO to further my goals.
  • DenyGPT

  • No SOP for security controls?

  • Optum? (Score:5, Informative)

    by Thud457 ( 234763 ) on Friday December 13, 2024 @04:06PM (#65011607) Homepage Journal
    Optum healthcare that had an $872 MILLION ransomware loss earlier this year? That prevented healthcare providers from submitting bills for months?
    I'm starting to think these guys might not be too competent at the IT.
    • I'm starting to think these guys might not be too competent at the IT.

      That's hardly fair - their competence is in denying healthcare to those in need.

      • by Cyberax ( 705495 )

        That's hardly fair - their competence is in denying healthcare to those in need.

        Not really. They outsource their denial-writing to EviCore.

  • by Phydeaux314 ( 866996 ) on Friday December 13, 2024 @04:08PM (#65011611)

    So is the story here that they have terrible network security, that they're using AI to make medical decisions, or both?

    Because a *lot* of companies use ChatGPT- (or other LLM-) powered frontends to provide a human interface for reams of technical data, to enable people to ask questions like "what are the coverage limits for the gold star plus plan" and have the thing summarize and cite the documentation. That's a pretty inoffensive use of LLMs, and not one I really take issue with. Now, if they're using a chatbot to make medical decisions, that's a whole different story.
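    That kind of frontend is, conceptually, just retrieval plus a constrained prompt. Here is a stdlib-only sketch of the idea; the SOP snippets, document IDs, and keyword-overlap scoring are invented for illustration, and a production system would swap in embedding search and a real LLM call:

```python
# Minimal sketch of a documentation Q&A frontend: retrieve likely-relevant
# SOP passages, then build a prompt that tells the model to answer only
# from those passages and to cite them. All data here is invented.
from collections import Counter

SOP_DOCS = {
    "SOP-101": "Coverage limits for the gold star plus plan are reviewed annually by the plan committee.",
    "SOP-102": "Claims disputes must be escalated to a licensed reviewer within 30 days of denial.",
    "SOP-103": "Prior authorization is required for specialty medications on tier 4.",
}

def score(question: str, passage: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    q = Counter(question.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())

def build_prompt(question: str, top_k: int = 2) -> str:
    """Pick the top_k passages and wrap them in a citation-demanding prompt."""
    ranked = sorted(SOP_DOCS.items(), key=lambda kv: score(question, kv[1]), reverse=True)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in ranked[:top_k])
    return (
        "Answer using ONLY the passages below, and cite passage IDs in brackets. "
        "If the answer is not in the passages, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

# The assembled prompt is what would be sent to whatever LLM backs the chatbot.
print(build_prompt("What are the coverage limits for the gold star plus plan?"))
```

    The "answer only from the passages, or say you don't know" instruction is the usual guardrail such tools lean on; how well the model obeys it is a separate question.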

    In either case, their network security is shit, though, so we can always yell at them for that.

    • I didn't see anything about the AI making medical decisions. It's an interactive corporate policy manual.
      • by sjames ( 1099 )

        "Deniatron, tell me how I can justify giving the patient Crapicin instead of Expensivol".

    • by dpidcoe ( 2606549 ) on Friday December 13, 2024 @05:00PM (#65011783)

      to enable people to ask questions like "what are the coverage limits for the gold star plus plan" and have the thing summarize and cite the documentation. That's a pretty inoffensive use of LLMs, and not one I really take issue with

      I take issue with it just because the kind of answer you'll get from an LLM is going to be high on word count, short on information, and likely inaccurate to boot. Remember the people who got in trouble because they used ChatGPT to write court filings for them and it cited non-existent cases? I've even seen LLMs that were specifically trained on a dataset (e.g. a medium-sized codebase) give confidently incorrect answers, citing references into that very codebase that didn't exist.

      Sure, you can say "well, just check the work then" or "well, humans could get it wrong too," but the issue is just how confidently incorrect LLMs can be. If a human states "this thing is this way, here's the source," we can often take it for granted that the source supports it (or at least exists and is relevant). When a human is wrong, they're usually wrong in a way that looks wrong. When an LLM is wrong, it tends to be wrong in a way that looks plausibly correct at first glance. The latter is a nightmare to sort through.
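      One cheap guard against exactly that failure is to verify citations mechanically before trusting an answer: extract every source the model cites and check that it actually exists in the corpus. A sketch, assuming a hypothetical [SOP-123]-style citation format:

```python
# Sketch: flag an LLM answer whose citations don't resolve to real sources.
# The [SOP-123] citation format and the document IDs are invented.
import re

# The corpus of real documents the bot was built on.
KNOWN_SOURCES = {"SOP-101", "SOP-102", "SOP-103"}

def unverified_citations(answer: str) -> set[str]:
    """Return cited IDs that do not exist in the corpus."""
    cited = set(re.findall(r"\[(SOP-\d+)\]", answer))
    return cited - KNOWN_SOURCES

answer = "Escalate within 30 days [SOP-102], per the retention policy [SOP-999]."
bad = unverified_citations(answer)
if bad:
    print("Answer cites sources that don't exist:", bad)  # prints {'SOP-999'}
```

      This only catches fabricated sources, not a real source misquoted, but it filters out the worst "plausibly correct at first glance" failures cheaply.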

      • That's definitely fair, although I will say that in my (admittedly limited) experience with the tools that do this, it's more like a more human-usable Google than a full-on conversational partner.

        For example, if I asked it "who is allowed in level 3 secured areas" it would reply with "X, Y, and Z class personnel are allowed inside level 3 secured areas, as detailed in ..."

        In theory, it could be wrong, but I never encountered cases where it was. If it didn't have an answer from the text, it would say "Sorry, ..."

  • In this case, blame the morons that hired the morons that messed it up. This is not even original anymore; it is just plain dumb. Or "gross negligence." Somebody should definitely face personal consequences over this, and it is likely not the IT people who directly messed it up.

  • I've been thinking that using retrieval-augmented generation as a souped-up search engine could be one of the more promising uses of LLMs, but it seems like there's still a risk of hallucinating nonsense with such setups. Isn't there a risk that this chatbot would hallucinate some aspect of the company's operating procedures too, like Air Canada's chatbot that gave incorrect information about their fares? [www.cbc.ca]

    Maybe like the first death by hacking, the first death by LLM hallucination has already happened but we'll never know exactly who it was...

  • It figures (Score:5, Funny)

    by fahrbot-bot ( 874524 ) on Friday December 13, 2024 @05:39PM (#65011879)

    UnitedHealthcare's Optum Left an AI Chatbot, Used By Employees To Ask Questions About Claims, Exposed To the Internet

    Their default claims setting is "Deny" while their firewall setting is "Allow".

