AI | Programming | The Courts

Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org) 32

Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems: To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...

I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?

The article suggests two possibilities. Classifying AI developers as ordinary employees would leave their employers sharing liability for negligent acts (giving employers "strong incentives to obtain liability insurance policies and to defend their employees against legal claims"). But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader david.emery for sharing the article.


Comments Filter:
  • Those making the business decisions are the ones responsible.

  • Artificial Insurance.

    • One of the last human jobs will be to take on liability for AI, because AI has no skin in the game and can't be punished for doing wrong. In critical scenarios we need humans to share the burden - thus - a job opportunity for us.
  • AI is random noise, statistics, and a predictive algorithm that guesses at what the inputted information is.

    If there's a 90% chance of rain it's not like I'm mad when it stays a nice sunny day.

    AI is great when you straight up have the original data and can compare it. Until recently, we couldn't even get pictures of people with the correct number of fingers. Why in the world would you blame the AI makers for mistakes? It's only as good as the data going in; if your data doesn't exist, the AI will guess.

    • I'd only blame the "AI maker" in the sense of the business or company making it and putting out unreasonable claims. Not the developers, who, as somebody else said, may not get a say in whether to deploy it. Same deal if a team was developing a plane and the business side decided to push it out the door before all the problems had been worked out.

  • Establishing professional responsibility for bugs would start to create a world in which bugs were few and damages limited. Yes, it would lead to litigation. Yes, it would require programmers to have insurance. Yes, it would raise costs. It would also make software a lot more reliable and hold people accountable when it failed.
    • by gweihir ( 88907 )

      Well, no. But other engineering fields provide guidelines. Essentially, if you pay for a professionally made thing, you have a reasonable expectation of it being fit for purpose and having no severe flaws. But there are degrees, even in established engineering disciplines, and it depends on what you can reasonably expect from a product. The problem is "free" stuff, whether really free (FOSS) or just free to download and use. Obviously, even something you get for free should either be labelled "experimental, m

    • That assumes a static system where the variables are known at the time of construction. (A bridge for example.)

      There's a reason this requirement isn't in place for computers / IT: It's not a static system, and the variables introduced into it may not even exist for literal decades after it's brought into service. Placing liability on the designers of the system for a problem that may not exist until well after the original designers have retired isn't going to get you a lot of people volunteering to go in
  • You can't even get that form of liability coverage from developers who don't program AI, so you think an AI developer should put his/her head in the guillotine? Why don't you just let the developers take out liability insurance and call it a day?

  • for spreadsheet errors, lost Outlook emails, etc. Why would dodgy AI be any different?

  • Software vendors can release any old POS code and cause untold damage with zero consequences. Why would AI developers be treated any different?

    I contend that both should be held liable. It's high time Microsoft answered for bugs in Windows that allow malware to exfiltrate personal data, for example, and that officers of companies that release reckless software faced jail time.

    Similarly, companies that use "algorithms" to moderate their sites, like Youtube or Amazon, should be held accountable for whatever th

  • ... a negligence-based approach ...

    I remember when Twitter became a 'news' source: Competing networks rushed to publish the latest meme, which usually lacked an eye-witness and/or facts. Will broadcast networks repeat the same pissing-contest using AI?

    We already know LLMs hallucinate and, worse, they feed on their own garbage, resulting in a toxic 'personality' such as Microsoft Tay. Still, reporting and journalism are seen as the perfect job for LLMs, turning many words into few. Such LLMs will also have to include the bias of broad

    • Hey grandpa, Tay was from a different era. LLMs today can clean up some of the human shit we post on the internet. Take any /. thread full of smart asses and paste the text into an LLM -> presto .. a nice article pops out, not hallucinating, but grounded in human experience, and not biased by the news publisher. It's magic - humans provide the debunking and counter opinion, the article provides the original claim, the LLM provides the polish. (A rough sketch of that workflow appears after the comments.)
  • Associate Computer Science professors should be criminally charged when their students go on to write shitty code that ends up exposing PII (like FB keeping billions of passwords in the clear, or companies CONSTANTLY getting hacked).

    Instead of those companies bearing the burden of having good practices, and not having to give their customers "one free year of LifeLock" (a negative value given the information leaked or hacked), the people who were negligent in teaching the programmers should be charged with that
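A rough sketch of the thread-summarization workflow described in the "Hey grandpa" comment above (paste a thread into an LLM and ask for a write-up grounded only in the comments). This is an illustration, not anything from the article; the OpenAI Python client, the "gpt-4o-mini" model name, and the prompt wording are assumptions you would swap for whatever you actually use.

# Minimal sketch: summarize a discussion thread with an LLM.
# Assumes the OpenAI Python SDK (openai>=1.0) is installed and OPENAI_API_KEY is set;
# the model name and prompt are placeholders, not a recommendation.
from openai import OpenAI

def summarize_thread(comments: list[str]) -> str:
    """Ask the model for a short, neutral summary grounded only in the supplied comments."""
    client = OpenAI()
    thread_text = "\n\n".join(comments)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize the discussion below as a short, neutral article. "
                        "Use only claims that appear in the comments; do not add new facts."},
            {"role": "user", "content": thread_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_thread([
        "Those making the business decisions are the ones responsible.",
        "Software vendors release buggy code with zero consequences; why would AI differ?",
    ]))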
