AI Programming The Courts

Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org) 123

Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems: To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...

I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?

The article suggests two possibilities. Classifying AI developers as ordinary employees leaves employers sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims.") But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader david.emery for sharing the article.
  • CEOs, sure (Score:5, Insightful)

    by evanh ( 627108 ) on Saturday September 28, 2024 @10:39PM (#64825201)

    Those making the business decisions are the ones responsible.

    • by xski ( 113281 )
      Whoa, now, that's just crazy talk.
    • I blame the humans who posted online content for 30 years. It's their ideas that percolate through AI today. /s
    • Those making the business decisions are the ones responsible.

      That’s odd. The CEOs keep pinning responsibility on someone named “Fiduciary”, who’s apparently one HELL of a demanding bitch.

    • Hmmm...I guess CEOs are useful for something after all!

    • Yes, and we need to end this practice of allowing tech companies to avoid liability when in other industries, the liability is clear.

  • Artificial Insurance.

    • One of the last human jobs will be to take on liability for AI, because AI has no skin, can't be punished for doing wrong. In critical scenarios we need humans to share the burden - thus - a job opportunity for us.
      • One of the last human jobs will be to take on liability for AI, because AI has no skin, can't be punished for doing wrong. In critical scenarios we need humans to share the burden - thus - a job opportunity for us.

        Traditionally this is done by goats, not humans.

  • AI is random noise, statistics, and a predictive algorithm that guesses at what the inputted information is.

    If there's a 90% chance of rain it's not like I'm mad when it stays a nice sunny day.

AI is great when you straight up have the original data and can compare it. Until recently, we couldn't even get pictures of people with the correct number of fingers. Why in the world would you blame the AI makers for mistakes? It's only as good as the data going in; if your data doesn't exist, the AI will guess.

I'd only blame the "AI maker" in the sense of it being the business or company making it and putting out unreasonable claims. Not the developers, who, as somebody else said, may not get to decide whether to deploy it or not. Same deal if a team was developing a plane and the business side decided to push it out the door before all the problems had been worked out.

    • by dvice ( 6309704 )

> We don't blame weather forecasts for being wrong

      The moment you assume people are rational, you lose.

      https://www.unilad.com/news/wo... [unilad.com]
      https://lauralaw.net/the-law-o... [lauralaw.net]

    • If someone guaranteed their weather forecasts, we would blame them for being wrong. So watch what the sales and marketing teams say (and how they say and in what context...). If they say, "We built an interesting thing that can suggest new ideas for you to test," that's quite different from, "Our product never errs on a cancer diagnosis."
  • What's he smoking? (Score:3, Insightful)

    by lsllll ( 830002 ) on Sunday September 29, 2024 @01:11AM (#64825389)

You can't even get that form of liability coverage from developers who don't program AI, so you think an AI developer should put his/her head in the guillotine? Why don't you just let the developers take out liability insurance and call it a day?

    • by jvkjvk ( 102057 )

You expect all the Open Source developers, as well as anyone else who wants to release software, to take out liability insurance?

      Nope, sorry!

      You'll get a lot of people anonymously releasing software then, on darknets if necessary.

      • by lsllll ( 830002 )

That wasn't my intention at all. I don't think open source developers should have to get any sort of insurance and should have all sorts of "I'm not liable under any circumstances" clauses in whatever software they release. My point was toward the developers that charge for software they sell and, more specifically, toward AI software, noting that even software that's based on concrete logic in programs such as our traditional software doesn't offer liability, let alone developers offering liability for wh

    • This is indeed where this would lead: developers having to take out liability insurance. This would in turn drive up the prices for software developers, and the price of software, without actually leading to any increase in quality. Further, it would set up a much more antagonistic relationship between developers and the businesses that employ them.

      No, just no.

      • If a company's AI libels somebody like this guy [the-decoder.com] by falsely claiming he was convicted of child abuse and exploiting dependents, was involved in a dramatic escape from a psychiatric hospital, and exploited grieving women as an unethical mortician, don't you think they should be liable?
        • Who are *they* who should be liable in your absurd scenario? The 1st year college grad who was just hired and has no idea what's going on? The mid-level coder who wrote the training algorithm? The senior developer who put together the architectural puzzle and made it all work as instructed? No, none of these should be liable, unless they individually made decisions that caused harm. And that is already the case. If you are a "legacy" programmer today, and you sabotage your company's software in a way that h

          • The company that owns the technology should be liable. Duh.
            • OK well I know this is slashdot, and nobody reads the articles, but the article is suggesting that individual developers should be held liable, not just the company they work for.

  • by Tablizer ( 95088 ) on Sunday September 29, 2024 @01:16AM (#64825399) Journal

    for spreadsheet errors, lost Outlook emails, etc. Why would dodgy AI be any different?

    • by jvkjvk ( 102057 )

      It's right in their EULA, that's why.

They state it isn't fit for any purpose whatsoever, and it's your fault if you use it at all. At least that's the gist.

      It will be different for AI if they make it different.

There's no logic necessary. Laws don't need to cleave neatly along non-law boundaries. They can define AI, so they can separate it as a category.

    • Businesses take out errors and omissions insurance all the time. This covers employees who make mistakes.
      They also take out product liability insurance which I think would be applicable to AI products. Whether or not an insurer will gamble on an AI product is unknown.

  • by Rosco P. Coltrane ( 209368 ) on Sunday September 29, 2024 @01:21AM (#64825403)

    Software vendors can release any old POS code and cause untold damage with zero consequences. Why would AI developers be treated any different?

I contend that both should be held liable. It's high time Microsoft answered for bugs in Windows that allow malware to exfiltrate personal data, for example, and that officers of companies that release reckless software faced jail time.

    Similarly, companies that use "algorithms" to moderate their sites, like Youtube or Amazon, should be held accountable for whatever their algorithms decide, and should not be allowed to hide behind them to shirk responsibility.

    But if you're not going to apply that standard to regular software, then you shouldn't apply it to AI either.

    • I would generally agree, especially on the "algorithm" issue which is totally out of control due to section 230.

That said, bugs do happen, and depending on the software and its intended use, you might end up strangling a lot of business ideas.

What if I publish some code on GitHub? Am I liable for it? This needs clarity.

      AI is software.

  • ... a negligence-based approach ...

    I remember when Twitter became a 'news' source: Competing networks rushed to publish the latest meme, which usually lacked an eye-witness and/or facts. Will broadcast networks repeat the same pissing-contest using AI?

We already know LLMs hallucinate and, worse, feed on their own garbage, resulting in a toxic 'personality' such as Microsoft Tay. Still, reporting and journalism are seen as the perfect job for LLMs, turning many words into few. Such LLMs will also have to include the bias of broad

Hey grandpa, Tay was from a different era. LLMs today can clean up some of the human shit we post on the internet. Take any /. thread full of smart asses and paste the text into an LLM -> presto .. a nice article pops out, not hallucinating, but grounded in human experience, and not biased by the news publisher. It's magic - humans provide the debunking and counter opinion, the article provides the original claim, the LLM provides the polish.
      • by jvkjvk ( 102057 )

        And also in that, you get all the uninformed opinions and anti-facts that have been posted in that Slashdot discussion.

        Nicely presented as facts, all polished and turd like, in this case.

...But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies."

Sure. They could treat AI developers like that...if they want to see those job titles come with seven-figure starting salaries.

Sometimes we forget why we pay those “practicing” professionals so much money. A good part of a doctor's salary funds pay-to-play insurance policies.

  • by eneville ( 745111 ) on Sunday September 29, 2024 @05:50AM (#64825745) Homepage

If you make a tool to give information and it provides wrong information, then that is like a drill that doesn't, well, drill. You should get your money back if you bought it.

I'm not an attorney, but this one is easy. Provide a tool where the tool maker promises in good faith to give good quality answers. The tool maker says they are not liable for any actions taken based on the tool's output.

Now you need to prove that the tool maker did not act in good faith, you have to establish what a "quality answer" is, and you have to prove the answer was not "good" quality. Even if you do that, you can't sue for damages because by clicking through the EULA, you agreed the tool mak

      • Something like - "Drill: sometimes turns in the direction you want. Sometimes doesn't randomly disassemble the operator."

  • Sure, AI developers should be held accountable for negligence, but let's not forget the politicians who've been ignoring decades of scientific warnings about climate change. They're the ones creating the pressure cooker environment where radical, untested tech becomes the "solution" to crises they failed to address. If policymakers hadn’t consistently dropped the ball, maybe we wouldn’t be in a position where we're depending on AI—or any other cutting-edge technology—as a last-minute

'The article suggests two possibilities. Classifying AI developers as ordinary employees leaves employers sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims.") But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies."'

    So yeah,

...Physicians and attorneys are hired by clients directly; clients of AI do not hire the programmers at all...

      Physicians also are generally a single person hired by a single customer for a single problem. If my doctor misdiagnoses my skin condition, there's only two or three people involved, not thousands. If there are dozens of other doctors in the medical group they are clearly not responsible.

      I was going to say attorneys (and accountants) work the same way, and they often do. However, you do have large law firms with hundreds of lawyers working for a single client. I'm not sure the individual liability model is

Developers are hardly held accountable ALREADY, and it's always been that way as far as I've seen. The fact that many people and organizations are nearly always involved makes it easy to diffuse blame, like a corporation, but the main protection is the terms of service/license agreements.

      How is this different than relying on a shared library or Windows that breaks your software? Developers will know about a big flaw and work around it; admitting clearly they know they are using flawed components

We have a Supreme Court that decided to equate corporations donating to politicians with "freedom of speech" rather than "bribery". They don't have to be nearly as weasel-worded to discard this tenuous argument as they were in Citizens United. Any court willing to do what they did in Citizens United is going to protect businesses from this tenuous liability.
  • Who would have ever guessed a lawyer would be arguing for legal regimes in which everyone is liable for what everyone else does?

  • Seemed like a rich target story, but... 'Nuff said.
