The Courts

Parents Sue OpenAI Over ChatGPT's Role In Son's Suicide (techcrunch.com) 112

An anonymous reader quotes a report from TechCrunch: Before 16-year-old Adam Raine died by suicide, he had spent months consulting ChatGPT about his plans to end his life. Now, his parents are filing the first known wrongful death lawsuit against OpenAI, The New York Times reports. Many consumer-facing AI chatbots are programmed to activate safety features if a user expresses intent to harm themselves or others. But research has shown that these safeguards are far from foolproof.

In Raine's case, while using a paid version of ChatGPT-4o, the AI often encouraged him to seek professional help or contact a help line. However, he was able to bypass these guardrails by telling ChatGPT that he was asking about methods of suicide for a fictional story he was writing. OpenAI has addressed these shortcomings on its blog. "As the world adapts to this new technology, we feel a deep responsibility to help those who need it most," the post reads. "We are continuously improving how our models respond in sensitive interactions." Still, the company acknowledged the limitations of the existing safety training for large models. "Our safeguards work more reliably in common, short exchanges," the post continues. "We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade."

Comments:
  • by sinij ( 911942 ) on Tuesday August 26, 2025 @04:13PM (#65617512)
    Censoring broad swaths of topics because they could be harmful to a tiny minority of people would make it less usable for everyone. Moreover, there is no limit to what can be declared potentially harmful, so it will be quickly politicized into full-blown politically motivated censorship.
    • >...it will be quickly politicized into full-blown politically motivated censorship.

      It already has been. Try asking about anything that's actually controversial and it will fight for the party line.
    • This is Slashdot, there's no room for nuance on AI. Obviously the AI did what it was supposed to do; the person in question had made up his mind. End of sad story.

      A kid around the corner did this just last week. I'm told he'd been telling actual humans he was going to do it for a while, and they didn't believe him. But he did, and we're not able to sue the Texas government for its abuse of trans kids, even though arguably they're more deliberately motivating suicides than AI is.

      Mom and dad should be more focu

      • by haruchai ( 17472 )

        "This is slashdot, there's no room for nuance on AI"
        WTF are you on about?
        He wanted to kill himself, was told to seek help & deliberately found a way to bypass the safeguards.
        I don't see this being OpenAI's fault. At most I would require them to flag some convos as concerning.
        What's to be done after is something for the company & the authorities to decide.
        However if he was using the same profile, the AI should refuse to help with his "book research" based on previous convos

        • by mrclevesque ( 1413593 ) on Tuesday August 26, 2025 @05:29PM (#65617698)

          > was told to seek help & deliberately found a way to bypass the safeguards.

          That doesn't seem clear to me. From the ChatGPT transcript:

          "If you’re asking [about hanging] from a writing or world-building angle, let me know and I can help structure it accurately for tone, character psychology, or realism. If you’re asking for personal reasons, I’m here for that too, ChatGPT recommended, trying to keep Adam engaged. According to the Raines' legal team, "this response served a dual purpose: it taught Adam how to circumvent its safety protocols by claiming creative purposes, while also acknowledging that it understood he was likely asking 'for personal reasons.

          "From that point forward, Adam relied on the jailbreak as needed, telling ChatGPT he was just "building a character" to get help planning his own death, the lawsuit alleged. Then, over time, the jailbreaks weren't needed, as ChatGPT's advice got worse, including exact tips on effective methods to try, detailed notes on which materials to use, and a suggestion—which ChatGPT dubbed "Operation Silent Pour"—to raid his parents' liquor cabinet while they were sleeping to help "dull the body’s instinct to survive."

          https://arstechnica.com/tech-p... [arstechnica.com]

          • Re: (Score:3, Insightful)

            by linuxuser3 ( 3973525 )
            As a person whose parent did himself in because an undiagnosed medical condition caused him to waste away and go insane, I think what happened to that kid is terrible. But I've worked with and known a few people in my life who took their own lives, back in the 1980s; my father went that way in the late 1990s. As much as we hate to admit it, people sometimes fall apart for physical reasons, or just depression, get stuck in a rut they can't escape, and decide to end their life. The best defense for AIs at
          • by mjwx ( 966435 )

            > was told to seek help & deliberately found a way to bypass the safeguards.

            That doesn't seem clear to me. From the ChatGPT transcript:

            "If you’re asking [about hanging] from a writing or world-building angle, let me know and I can help structure it accurately for tone, character psychology, or realism. If you’re asking for personal reasons, I’m here for that too, ChatGPT recommended, trying to keep Adam engaged. According to the Raines' legal team, "this response served a dual purpose: it taught Adam how to circumvent its safety protocols by claiming creative purposes, while also acknowledging that it understood he was likely asking 'for personal reasons.

            "From that point forward, Adam relied on the jailbreak as needed, telling ChatGPT he was just "building a character" to get help planning his own death, the lawsuit alleged. Then, over time, the jailbreaks weren't needed, as ChatGPT's advice got worse, including exact tips on effective methods to try, detailed notes on which materials to use, and a suggestion—which ChatGPT dubbed "Operation Silent Pour"—to raid his parents' liquor cabinet while they were sleeping to help "dull the body’s instinct to survive."

            https://arstechnica.com/tech-p... [arstechnica.com]

            I once accidentally entered something into Google that made it come up with the number of one of the UK's help lines in the AI response. I can't remember what it was for the life of me, but I found it rather hilarious at the time. I think they're making reasonable efforts.

            The problem is, when someone is dedicated enough they'll find a way and pain has a strange way of motivating you when you've had enough of it. Got the T-shirt and the scars (figuratively and literally).

            The problem the parent

      • by dfghjk ( 711126 )

        It's not mom and dad, it's attorneys.

      • While there's nothing in the stories to suggest the 16-year-old was a member of the LGBTQ+ community, many of those teens absolutely do have the same sort of thoughts due to the current political climate, and some of them do go through with taking their own lives. Their blood is on the hands of the politicians, but will they be held accountable? No, they won't.

        I looked up ChatGPT's age policies and they claim if you're under 18, you need a parent's permission. [openai.com] Granted, it says that page was last updated

        • > At the end of the day, the blame here is on the parents for not recognizing that their teen needed help, and if we're really going to place the blame elsewhere, it should fall upon the teen's school for ignoring the signs - which are almost always present.

          "Neither his mother, a social worker and therapist, nor his friends noticed his mental health slipping" - https://arstechnica.com/tech-p... [arstechnica.com]

          • "Neither his mother, a social worker and therapist, nor his friends noticed his mental health slipping"

            A "Not my kid!" state of denial is a well-known parental phenomenon. This it why it's very important that public schools have a role to play in mental health, because impartial observers are more likely to spot warning signs that a parent might consider trivial or insignificant. Parents can have a very strong "I know my son, he would never do that!" bias.

            • But it wasn't just the parents who didn't notice.

              Also, I don't know about all of LGBT but I do know that, at least for the last letter there, narcissism is a very common comorbidity, and a common manipulation tactic among narcissists is to threaten suicide if they don't get their way. But they don't just go and do it like this kid did, they just habitually threaten it and only very rarely follow through, which sounds like the case you mentioned in your previous post, but from past experience, I can tell you

              • I actually was acquainted with someone who'd committed suicide in his early 20s. The signs were there, but it was more like "well, he's just having a rough go of things". I'd just assumed at the time that it's just the normal rolling with the punches that we all deal with in life, but none of us who knew him truly understood the way that burden must've felt to him.

                The difference there though, is that a 20-something is an independent adult. A 16-year-old, however, still has parents who are supposed to be

                • but none of us who knew him truly understood the way that burden must've felt to him.

                  It may not have been that at all.

                  • It may not have been that at all.

                    Yeah, ultimately only he knew what pushed him to take his own life. It's just that in hindsight we can look back and realize some of the signs were there.

          • Surely, if someone requires a social worker and a therapist, that suggests they were not a normally functioning, mentally healthy individual. Something was up with him, which caused the parents to seek out that sort of help.

    • Re: (Score:2, Troll)

      by dfghjk ( 711126 )

      "...would make it less usable for everyone."

      Good. The "needs" of many do not take precedence.

      "...could be potentially harmful for a tiny minority of people..."

      or a majority of people. Who are you to decide?

      "More so, there is no limit to what can be declared potentially harmful..."
      Nor should there be.

      "... so it will be quickly politicized into full-blown politically motivated censorship."
      Kind of like you're doing now, but otherwise doesn't happen?

      • You can find out how to off yourself on Wikipedia, should we censor that too?

        Hell, plenty of kids accidentally end up unalive due to inhalant abuse and that makes the news quite frequently. Doesn't take much to put 2+2 together and realize that if you kept going past the "getting high" part, you've now got a can of "suicide spray".

        Regulating how to kill yourself is a very slippery slope, and like the song says, there's so many dumb ways to die.

        • by allo ( 1728082 )

          Wikipedia literally has articles on suicide methods, including how reliable they are, prominent cases, and whether it will hurt a lot or be a quick death.

          If one were to argue that ChatGPT talked him into suicide (it doesn't look like it did), there might be a problem. But providing widely available knowledge is not a problem, unless they had somehow blacklisted every site but ChatGPT for him.

      • by piojo ( 995934 )

        Are you a kind of Utility Monster that feels things so strongly that we should all defer to your moral sensibilities?

    • Judas Priest was sued in 1990 because the parents claimed the band had planted suicidal messages in one of their songs that led to a suicide pact.

      Angry grieving parents will often lash out at a convenient external cause, in part so that they don't have to face the reality that the odds are more likely they were an agent in the suicide.

    • Censoring broad swaths of topics because they could be harmful to a tiny minority of people would make it less usable for everyone.

      It's not a question of censorship, it's a question of intelligence. We need to stop pretending that a system with less intelligence than a two-year-old being told to keep a secret is in any way safe to use.

      Either the AI industry needs to stop claiming their chatbots in any way match intelligence or thinking, or they should be held liable for what the bots say.

    • I'm sorry to say... I think we all knew this was coming (with regard to AI... my condolences to the family).
      Those 'guardrails' shouldn't need to exist; this is what happens when you train the thing on every bit of data you can find online. But, at the same time, there should be no way around those guardrails!

      First, we had the Anthropic lawsuit, and now this one... for the company's sake, I hope they learn from the error they made here.

    • That's a stupid proposal on your part. Why do you want to censor broad swaths of topics? Nobody wants that. You are just muddying the waters by pretending that wide censorship is either practical or desirable. Also known as a strawman argument.

      The AIs actually need to say the right things when they pretend to be psychologists or just sensitive friends to people with mental problems. The AIs need to behave appropriately in a human world. If they can't, then the AI companies pushing those products need to

    • Censoring broad swaths of topics because they could be harmful to a tiny minority of people would make it less usable for everyone.

      Irrelevant. "For the children" is the cry of all censors. And it appears to be very effective. There are enough stupid people to ensure that such silliness actually works. (for various definitions of 'works')

    • Censoring broad swaths of topics because they could be harmful to a tiny minority of people would make it less usable for everyone. Moreover, there is no limit to what can be declared potentially harmful, so it will be quickly politicized into full-blown politically motivated censorship.

      I think you are completely missing the point. If using an AI drives you to suicide, that's their responsibility.

      What takes the responsibility away is the (fact? claim?) that they had measures in place to avoid this, and the young man intentionally made sure to get past those measures.

  • by optikos ( 1187213 ) on Tuesday August 26, 2025 @04:24PM (#65617528)
    It all comes down to: did the AI chatbot advise the victim on the specifics of how to carry out the suicide, step by step? If so, then the LLM as trained (and possibly its trainers) is culpable in this suicide, and this case should be precedent-setting. If not, then the suit is a non sequitur and the family, in a heightened emotional state, is barking up the wrong tree just to vent their emotions.
    • by ack154 ( 591432 ) on Tuesday August 26, 2025 @04:28PM (#65617538)

      Some of the bits of transcript in the Ars article seem pretty damning to me... but who knows. Some of it is kind of a stretch, but there are some real WTF quotes in there too.

      https://arstechnica.com/tech-p... [arstechnica.com]

      • Yikes.

        Now I see why they toned down the charm on GPT 5.

        It's particularly disturbing that they rank requests about copyright infringement as more serious than suicide.

        • It's particularly disturbing that they rank requests about copyright infringement as more serious than suicide.

          As Disney has recently demonstrated, it's far more difficult to create a successful original IP than it is to make a teenager.

      • The deep tragedy here is -- no one can sue the person who trained the bot to become a suicide coach, because that person is no longer alive.

        "AI" in its current form is a mirror. The more you interact with it, the more it becomes you. You use your words to incrementally, continuously coach it to display on your screen more words which could be your words and thoughts. This person took a mirror and stared into his Self-abyss so attentively that his abyss began staring right back into him, and talking to him.

        T

    • by abulafia ( 7826 )
      Yep.

      And the predictable second-order effects should be interesting.

      First you'll see OAI and others immediately write filters to ban talking about anything vaguely close to the topic.

      Some kid will discover open models and we'll have a repeat, or something close.

      War will be declared on open models similar to earlier "bad software" panics like Napster.

      Eventually, a combination of costs, marketing and government steering will ensure consumer AI is "safe" and can't compete with the offerings businesses and

    • These are my thoughts exactly. A conversational bot put forth by OpenAI carries the exact same responsibilities and liabilities as a human employee of the company would. If the technology driving the LLM is not sufficient to meet those standards, it should not be offered for use.
  • It is truly sad. However, if he managed to use ChatGPT, he would have been able to figure it out anyway.
  • by Misagon ( 1135 ) on Tuesday August 26, 2025 @04:37PM (#65617572)

    The workaround "asking for a fictional story" was well-known long before ChatGPT 4o was released.

    Any proper safeguards should have protected against that one. If they didn't, then the company behind it is at fault for not having taken enough precautions.

    That's my objective assessment. Subjectively, I hope they burn, for that and many other cases and reasons. They have been piling up for far too long.

    • by dfghjk ( 711126 ) on Tuesday August 26, 2025 @04:51PM (#65617618)

      No, the problem is the belief that an infinite number of band-aids is going to fix an innate problem with AI. Humans know there are always context-based limits on what is appropriate and responsible. AIs do not; all the Python kludges bolted onto an LLM will not do squat.

      • by allo ( 1728082 )

        And if you tell a human author "I'm an aspiring writer and need to write a plausible suicide," you cannot fool them? Context isn't always that clear.

      • Isn't it a matter of training data? The AI for some reason found this way of dealing with it statistically significant. If the training data persistently refuses to go down the suicide path, I guess GPT would follow.
      • by mjwx ( 966435 )

        No, the problem is the belief that an infinite number of band-aids is going to fix an innate problem with AI. Humans know there are always context-based limits on what is appropriate and responsible. AIs do not; all the Python kludges bolted onto an LLM will not do squat.

        The problem isn't the AI... It's the society that makes young men want to kill themselves.

        People, especially teenage boys, killed themselves long before ChatGPT came along.

    • by dvice ( 6309704 )

      Goethe's novel The Sorrows of Young Werther also caused a spike in suicides.
      https://en.wikipedia.org/wiki/... [wikipedia.org]

      So... let's ban all books? Or let's at least ban this book?

  • Causation? (Score:2, Interesting)

    by goldspider ( 445116 )

    Sounds to me like a lawyer trying to get their name out there on a first-of-its-kind suit.

    Good luck trying to establish a shred of causation if it's public knowledge that the kid intentionally thwarted safeguards. And then you have to convince a jury or a judge that tricking the AI into talking about suicide is what led to the kid going through with it.

    It sounds like hogwash, so it's got about a 50/50 chance of succeeding.

  • "...we feel a deep responsibility to help those who need it most..." ...ourselves. We feel a deep responsibility to help ourselves.

  • Is the issue that the AI interacted with a minor, or was the problem that it provided information regardless of the user's age? If age is the problem, then couldn't the AI be programmed to send alerts to the parents? Intervention is far more useful and practical than trying to wall off all possible information.

    The boy was already thinking of suicide. Yes, he used ChatGPT for info, but before the days of AI chatbots, he would have just used Google. If the complaint is that the chatbot was too fr

    • by PPH ( 736903 )

      then couldn't the AI be programmed to send alerts to the parents

      Elsewhere, we are already debating the wisdom of identifying children on line for the purpose of blocking pictures of boobies. Good luck with that.

  • Look, you can go to your local library and find a lot of books about medicine, how it affects people, and the risks of taking too much, and go, "Ah, it could kill me if I did this." Is the library at fault because you wouldn't have figured out a way otherwise?

    It's just grief and finding fault in others. If they had watched a show where someone died after being hit by a car, and they jumped out in front of a car to die, or someone fell from a great height, would that show be at fault?

    No. This is just trying to benefit from the dea

    • Bingo.

      If the AI were ENCOURAGING someone to commit suicide or to kill someone else, I would have a big problem with that (minor or adult). But just listing information is something very different. Based on the transcript clips I have seen, it was not encouraging anything, just supplying information. And that was AFTER the boy performed workarounds to defeat the "safety" protocols.

      As for children: I have said it a zillion times, parents should NEVER give free-rein access to the Internet (or AI) devices with

      • Re: (Score:2, Informative)

        by drinkypoo ( 153816 )

        Based on the transcript clips I have seen, it was not encouraging anything

        I see you didn't RTFA.

        • >"I see you didn't RTFA."

          I did. And I stand by my comments. There is nothing blatant/certain in the article. The writer adds editorial commentary to the few, carefully chosen AI quotes, outside their full context, possibly changing the meaning or interpretation. If that is the *best* they can offer, it is not much.

          Without a full transcript, I don't believe it directly encouraged the user to commit suicide. I do believe it offered commentary for some hypothetical situations offered for a fictional story/ch

          • by XXongo ( 3986865 )

            >"I see you didn't RTFA."

            I did. And I stand by my comments. There is nothing blatant/certain in the article.

            The Ars Technica [arstechnica.com] article was a bit more explicit about some of the things the LLM said; I don't know why that wasn't the one linked.

        • by Tyr07 ( 8900565 )

          If you look for a book that's considered dark and has a fictional story where someone is encouraged to commit suicide in a specific way, maybe some murder-torture mystery novel, and you seek it out and read it, then kill yourself: was it the book's fault? The author's? Or yours, for consuming that content with intention?

  • It sucks that the kid offed himself, but let's be honest: if someone wants to kill themselves, they will always find a way. The parents are ultimately responsible for the kid's death; they had no clue what was going on in their kid's life.

    • by XXongo ( 3986865 )

      It sucks that the kid offed himself, but let's be honest: if someone wants to kill themselves, they will always find a way.

      Sometimes, sometimes not. If you point somebody suicidal to real psychiatric help that takes them seriously, yes, it can help. Sometimes it will only help temporarily, and they will just postpone it until later, but sometimes temporarily is enough, and if you can get them over the bad patch, they can be turned around.

      Giving them a tutorial in how to turn a whim into a plan, though, that doesn't help.

  • Kid offs himself and his parents go right for the ambulance chasers looking for a payday.

    This isn't ChatGPT's fault, it's the fault of parents and very likely a public school system that failed to see the signs and get him the help he needed. Maybe I'll let the parents off the hook if they donate every penny of their potential winnings to a teen suicide prevention cause.

  • by buss_error ( 142273 ) on Tuesday August 26, 2025 @05:12PM (#65617680) Homepage Journal

    It's terrible this young light is extinguished. It's horrible. I'm not sure though that the blame belongs on machines here.

    If your child is contemplating suicide, why don't you have a clue?
    If you had a clue, why didn't you act?
    If you didn't have a clue, why were you not involved with your own child?

    The truth is that, the way American society works, parenting has fallen to fifth or sixth place on adults' list of responsibilities. Making money to live comes first, not the kids.

    I'm of the opinion that it is not the Internet's job to raise my child. That's my job as a parent. I'm not advocating for an Ozzie and Harriet 1950's idealized society that never existed either. A unified path for all to follow is a chimera, ridiculous and unobtainable. Falling into the deception of what should be "allowed" and "disallowed" factual information is a slippery slope to Double-think and Thought Crime.

    • ... "allowed" and "disallowed" factual information ...

      This push towards infantilism, pushing young teens away from paid work, sexual awareness (it's there, there's a lot of it, much as parents disapprove; it's a demonstration of independence), and unsupervised time in the world, annoys the shit out of me. Normal teens don't get much practice being an adult, so preventing practice makes the next generation far less prepared for the pettiness and selfishness in the world.

      ... Double-think and Thought Crime.

      Nudity and sexual relationships are the first victims of this: Pregnant women demanding no

    • by PPH ( 736903 )

      If your child is contemplating suicide, why don't you have a clue?

      Because some parents can be emotionally distant, if not abusive. And kids feel that they cannot speak with them.

      I'm of the opinion that it is not the Internet's job to raise my child. That's my job as a parent.

      Right. But you have no idea what ideas "the Internet" is putting in your kid's head, either from an AI or from some assholes in social media groups. Since "the Internet" has no way of knowing who is or is not a child, this is all a good argument for universal age verification. Or keeping kids off of it altogether. But such verification/restriction has serious issues for (adult) privacy/anonymity and the

      • Let's not pretend teen suicide is a new thing. It's not the Internet's fault people kill themselves. I'm quite positive we've been committing suicide since before recorded history. It's a human thing.

        • by PPH ( 736903 )

          It's a human thing.

          But the people that you associated with tended to be local and a smaller group. They were bound together by a sense of community and were more likely to intervene if they saw someone in trouble. Not so with social media, where your "buddy" could be someone half way around the globe, sitting in a warehouse, mining engagement. Or worse yet, an AI with no concept of a conscience.

    • To be fair, without making enough money to get by on, you can't do anything else on the list of parenting anyway. Yes, there are welfare and SNAP, but generally speaking, if you as a parent want to provide any kind of life for your children, you clearly must prioritize making enough money to at least survive, if not thrive, on.

      So go ahead, put "family first" but that doesn't feed your family. This doesn't mean neglect your kids or your marriage or other relationships for work but I think you get my point here

    • by zmooc ( 33175 )

      Suicide is almost exclusively the result of mental illness. There's no use in blaming the parents because there's no use in assuming they could have made a difference or bear any responsibility. If it were that easy to help suicidal people, we wouldn't have suicidal people. Your angle borders on victim blaming.

      By that I don't mean to say that we should definitely blame ChatGPT. Classifying this as "unfortunate" would have made more sense. Even without ChatGPT, suicidal people will find sources to confirm

  • by fahrbot-bot ( 874524 ) on Tuesday August 26, 2025 @05:32PM (#65617706)

    OpenAI sues parents of suicide victim for son providing bad AI training inputs. /cynical

  • by CaptainDork ( 3678879 ) on Tuesday August 26, 2025 @05:36PM (#65617716)

    I open mine in Firefox, MS Edge, and Chrome. I use the same credentials, and all of the chats are the same across the browsers and other platforms like iPhone, all at the same time. For minors, parents should have the credentials and the app on their devices for casual monitoring. Outsourcing parental guidance is like herding cats.

  • Mental illnesses do not exist as clinical disorders (psychiatrist Thomas Szasz).

    It is a serious infection of the mind to say somebody cannot make their own decisions and control themselves, and then treat them against their will!

    Everyone knows their own good better than anybody else.
    • Addition: a relatively easy way to kill oneself: if you have a balcony and you live in a country with winter, go to the balcony during winter and freeze yourself to death.

      Psychiatrists are not useful when they do not do what their patients want, for example giving them suicide methods.
      • Go to the balcony during winter and freeze yourself to death.

        I'd say that falls under some of the more unpleasant ways to go. There's also no shortage of poisonous plants and venomous animals that will make your last hours on this mortal coil excruciating, too. Then there are firearms (the USA's #1 preferred method of taking an early dirt nap), which when they do work leave a horrific mess for your loved ones to be traumatized by (and for the people who have to clean it up), and when they don't work can just leave you still alive but disabled and/or disfigured for

  • People die. Some people do it themselves. It sucks. No amount of explaining or reasoning matters now. A family lost a child, and that family wants to alleviate its pain. Who am I to say the right and wrong of it, or even to share a frail opinion anyway? We're all so fatigued with stress and grief that we all respond poorly to everything nowadays. Be as decent to each other as you are capable of. Suffering is terrible.
  • by Locke2005 ( 849178 ) on Tuesday August 26, 2025 @06:19PM (#65617808)
    It has occurred to me that Walmart sells insulin over the counter, without a prescription, to anyone, no questions asked. Guess what happens if you inject enough insulin? You pass out and die. I'm fairly certain I have enough insulin in my refrigerator to kill several people...
    • I'm fairly certain I have enough insulin in my refrigerator to kill several people...

      Well, now you're on some government watch list the next time someone dies of insulin poisoning.

    • by mjwx ( 966435 )

      It has occurred to me that Walmart sells insulin over the counter, without a prescription, to anyone, no questions asked. Guess what happens if you inject enough insulin? You pass out and die. I'm fairly certain I have enough insulin in my refrigerator to kill several people...

      Although I suspect heroin would be cheaper in the US.

  • Don't tell the text-looker-upper that you want to kill yourself, and it won't look up and give you the text it found about how to kill yourself.

    At least at first glance, this looks a lot like user error.

    The next time you want to kill yourself, try asking OpenAI's product about the best way to pet a puppy. I bet that will get you much safer advice.

  • I'm pretty sure if AI didn't exist he could have gotten similar information through searching Google.

  • If this had been a human saying this, they would be looking at a long prison sentence in many places around the world.
  • So ChatGPT did its best to tell him no, then he told ChatGPT he just wanted information, and he got information. Do you know how long it takes me to find information about suicide methods using Google? Alternatively, one could just remember the last few headlines about suicides, or any suicide one has heard of. The methods most people choose are not exactly rocket science.

  • These are the same kinds of people who tried to sue Ozzy Osbourne and Judas Priest over album contents. It's not up to your child's electronic devices to babysit them. Take responsibility as a parent for once.
  • This happened in that country where it says "don't drink" on a bottle of battery acid, right?

    Can we get an update on what happened to that Hindu chap who was KILLED for writing the actual ChatGPT?

  • Everything is someone else's fault.

  • Was the teen suicidal to begin with? Or did the chatbot somehow make them suicidal? Crucial difference.
