UnitedHealthcare's Optum Left an AI Chatbot, Used By Employees To Ask Questions About Claims, Exposed To the Internet (techcrunch.com)
Healthcare giant Optum has restricted access to an internal AI chatbot used by employees after a security researcher found it was publicly accessible online to anyone with only a web browser. TechCrunch: The chatbot, which TechCrunch has seen, allowed employees to ask questions about how to handle patient health insurance claims and disputes for members in line with the company's standard operating procedures (SOPs).
While the chatbot did not appear to contain or produce sensitive personal or protected health information, its inadvertent exposure comes at a time when its parent company, health insurance conglomerate UnitedHealthcare, faces scrutiny for its use of artificial intelligence tools and algorithms to allegedly override doctors' medical decisions and deny patient claims.
Mossab Hussein, chief security officer and co-founder of cybersecurity firm spiderSilk, alerted TechCrunch to the publicly exposed internal Optum chatbot, dubbed "SOP Chatbot." Although the tool was hosted on an internal Optum domain and could not be accessed from its web address, its IP address was public and accessible from the internet and did not require users to enter a password.
IT's reply (Score:1)
Re:IT's reply (Score:5, Funny)
Mangione should be pardoned immediately.
Absolutely not. Smooth jazz is an abomination.
Re: (Score:2)
> Smooth jazz is an abomination.
"Light jazz is to jazz what rubber band is to orchestra"
Re: (Score:3, Interesting)
In a civil society we both fuel and profit from illness whilst denying those in need the help they need. Oh wait.
Re: IT's reply (Score:2)
"We do not kill people in cold blood to resolve policy differences"
I see he has never heard of the CIA
Re: (Score:1)
Mangione === Tank from The Matrix?
It's called (Score:2)
DenyGPT
What? (Score:2)
No SOP for security controls?
Optum? (Score:5, Informative)
I'm starting to think these guys might not be too competent at the IT.
Re: (Score:1)
That's hardly fair - their competence is in denying healthcare to those in need.
Re: (Score:2)
That's hardly fair - their competence is in denying healthcare to those in need.
Not really. They outsource their denial-writing to EviCore.
Document parser vs. medical decisionmaker (Score:3)
So is the story here that they have terrible network security, that they're using AI to make medical decisions, or both?
Because a *lot* of companies use ChatGPT (or other LLM) powered frontends to provide a human interface for reams of technical data, to enable people to ask questions like "what are the coverage limits for the gold star plus plan" and have the thing summarize and cite the documentation. That's a pretty inoffensive use of LLMs, and not one I really take issue with. Now, if they're using a chatbot to make medical decisions, that's a whole different story.
In either case, their network security is shit, though, so we can always yell at them for that.
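For anyone curious what that kind of documentation frontend looks like under the hood, here is a minimal sketch of the retrieve-then-prompt pattern, assuming a toy SOP corpus, a crude word-overlap scorer standing in for real retrieval, and a stubbed ask_llm() call; none of the names or snippets reflect Optum's actual tool.

```python
# Minimal sketch of a documentation Q&A frontend: retrieve the most relevant
# SOP excerpts, then ask the model to answer ONLY from them and cite its sources.
# SOP_DOCS and ask_llm() are hypothetical stand-ins, not anyone's real system.
from collections import Counter

SOP_DOCS = {
    "SOP-112 Coverage Limits": "The gold star plus plan covers up to 30 physical "
                               "therapy visits per calendar year.",
    "SOP-207 Appeals": "Members may appeal a denied claim within 180 days of the "
                       "denial notice.",
}

def score(query: str, text: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    return sum((q & t).values())

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k best-matching SOP sections for the query."""
    ranked = sorted(SOP_DOCS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that restricts the model to the retrieved excerpts."""
    excerpts = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return (
        "Answer using ONLY the SOP excerpts below and cite the section name. "
        "If the excerpts do not contain the answer, say so.\n\n"
        f"{excerpts}\n\nQuestion: {query}"
    )

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever hosted model a real tool would call."""
    return "(model response would go here)"

if __name__ == "__main__":
    print(ask_llm(build_prompt("What are the coverage limits for the gold star plus plan?")))
```

The real work is in the retrieval step (embeddings or a search index rather than word overlap), but the overall shape of the pipeline is the same.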
Re: (Score:3)
"Deniatron, tell me how I can justify giving the patient Crapicin instead of Expensivol".
Re:Document parser vs. medical decisionmaker (Score:5, Insightful)
to enable people to ask questions like "what are the coverage limits for the gold star plus plan" and have the thing summarize and cite the documentation. That's a pretty inoffensive use of LLMs, and not one I really take issue with
I take issue with it just because the kind of answer you'll get from an LLM is going to be high on word count, short on information, and likely to be inaccurate to boot. Remember the people who got in trouble because they used ChatGPT to write court docs for them and it started citing non-existent cases? I've even seen LLMs that were specifically trained on a dataset (e.g. a medium-sized codebase) give confidently incorrect answers, even citing references to the codebase they were trained on that didn't exist.
Sure you can say "well just check the work then" or "well humans could get it wrong too", except that the issue is just how confidently incorrect LLMs can be. If a human states "this thing is this way, here's the source", we can often take it for granted that the source supports it (or at least exists and is relevant). When a human is wrong, they're usually wrong in a way that looks wrong. When an LLM is wrong, it tends to be wrong in a way that looks plausibly correct at first glance. The latter is a nightmare to sort through.
Re: (Score:2)
That's definitely fair, although I will say that in my (admittedly limited) experience with the tools that do this, it's more like a more human-usable Google than a full-on conversational partner.
For example, if I asked it "who is allowed in level 3 secured areas" it would reply with "X, Y, and Z class personnel are allowed inside level 3 secured areas, as detailed in [...]"
In theory, it could be wrong, but I never encountered cases where it was. If it didn't have an answer from the text it would say "Sorry, [...]"
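A rough sketch of that "cite it or apologize" behaviour, with the retrieval scores, threshold, and answer_from() helper all hypothetical stand-ins rather than anything the tool above actually does:

```python
# Sketch of an answer-or-refuse wrapper: only answer when retrieval is confident
# enough, and always name the source section. Scores and threshold are made up.
MIN_SCORE = 2  # below this we assume the documentation simply doesn't cover it

def answer_from(section: str, text: str, question: str) -> str:
    """Placeholder for the model call; a real system would prompt an LLM here."""
    return f'Per [{section}]: "{text}"'

def answer(question: str, retrieved: list[tuple[str, str, int]]) -> str:
    """retrieved is a list of (section_name, text, score) tuples."""
    best = max(retrieved, key=lambda r: r[2], default=None)
    if best is None or best[2] < MIN_SCORE:
        # Refuse rather than guess, which is the behaviour the parent describes.
        return "Sorry, I couldn't find an answer to that in the documentation."
    section, text, _ = best
    return answer_from(section, text, question)

print(answer("who is allowed in level 3 secured areas",
             [("Facility Access Policy 4.2",
               "Class X, Y, and Z personnel are allowed inside level 3 secured areas.", 6)]))
```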
People are sooooo stupid (Score:2)
In this case, it's the morons that hired the morons that messed it up. This is not even original anymore, it is just plain dumb. Or "gross negligence". Somebody should definitely be subjected to personal punishment over this, and it is likely not the IT people who directly messed it up.
Risk of hallucination? (Score:3)
I've been thinking that using retrieval-augmented generation as a souped-up search engine could be one of the more promising uses of LLMs, but it seems like there's still a risk of hallucinating nonsense with such setups... isn't there a risk that this chatbot would hallucinate some aspect of the company's operating procedures too, like Air Canada's chatbot that gave incorrect information about their fares? [www.cbc.ca]
Maybe like the first death by hacking, the first death by LLM hallucination has already happened but we'll never know exactly who it was...
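One common mitigation for that risk is to audit the generated answer against the retrieved passages before it is shown, and flag anything unsupported for human review. The sketch below uses a crude word-overlap heuristic and an illustrative bereavement-fare example; it is not any airline's or insurer's real pipeline.

```python
# Sketch of a post-hoc grounding check: every sentence of the answer must be
# sufficiently backed by some retrieved passage, or it gets flagged.
import re

def supported(sentence: str, passages: list[str], min_overlap: float = 0.6) -> bool:
    """True if enough of the sentence's words appear in some retrieved passage."""
    words = set(re.findall(r"[a-z']+", sentence.lower()))
    if not words:
        return True
    return any(
        len(words & set(re.findall(r"[a-z']+", p.lower()))) / len(words) >= min_overlap
        for p in passages
    )

def audit(answer: str, passages: list[str]) -> list[str]:
    """Return the sentences of the answer that no retrieved passage backs up."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    return [s for s in sentences if not supported(s, passages)]

passages = ["Bereavement fares are refunded if requested within 90 days of travel."]
answer = ("Bereavement fares are refunded if requested within 90 days of travel. "
          "You may also apply for the refund retroactively after purchasing a full-price ticket.")
print(audit(answer, passages))  # prints the second, unsupported sentence
```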
It figures (Score:5, Funny)
UnitedHealthcare's Optum Left an AI Chatbot, Used By Employees To Ask Questions About Claims, Exposed To the Internet
Their default claims setting is "Deny" while their firewall setting is "Allow".