Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org)
Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems:
To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...
I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?
The article suggests two possibilities. Classifying AI developers as ordinary employees would leave employers sharing liability for their negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims"). But AI developers could also be treated as practicing professionals (like physicians and attorneys): "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader david.emery for sharing the article.
Re: (Score:2)
This will simply result in endless litigation trying to score jackpots from deep pockets. Why don't we just put a warning label on every AI engine saying "use at your own risk, output may not be what you want"? AI is in its infancy; it is going to have a HUGE number of errors in it. AFAIK no significant AI engine in existence can be blindly used without humans supervising its decisions. Consider AI as making suggestions, and then YOU decide whether or not to accept each suggestion.
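A minimal sketch of that human-in-the-loop pattern, in Python. Everything here is hypothetical for illustration: get_ai_suggestion() stands in for whatever model call you actually use, and none of the names are a real API.

    # The model only proposes; a person must explicitly accept before
    # anything is acted on. All names here are illustrative stand-ins.

    def get_ai_suggestion(prompt: str) -> str:
        # Stand-in for a real model call.
        return "delete the old backups"

    def apply_change(change: str) -> None:
        print(f"Applying: {change}")

    def main() -> None:
        suggestion = get_ai_suggestion("free up disk space")
        print(f"AI suggests: {suggestion}")
        answer = input("Accept this suggestion? [y/N] ").strip().lower()
        if answer == "y":
            apply_change(suggestion)  # the human, not the model, made the call
        else:
            print("Rejected; nothing was changed.")

    if __name__ == "__main__":
        main()

The point of the structure is that the acceptance step, not the generation step, is where responsibility attaches.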
Re: (Score:2)
Coders will insist on liability insurance as part of their employment package.
There will be less money for innovation and more money for lawyers and bureaucrats.
More development will move overseas.
Re: (Score:2)
A counter argument is that if the tech is known to be buggy and not ready for prime time (as you clearly believe to be the case), but the company ships it anyway, that is exactly the kind of conduct negligence law is meant to reach.
Re: You can already go to jail (Score:2)
The key is that your software has to cause damages. If it does, then you can be sued.
How about a regulatory authority? (Score:2)
Ask 10 people and you'll get 10 different answers.
Ask the FAA and you'll get one answer, and it'll be a precise answer.
The FAA has codified rules for software development and testing (the DO-178 family of standards, for instance). As I understand it, the intent was that if the code was developed and tested according to those rules, the manufacturer would not be held liable for problems; specifically, the airplane manufacturer could not be sued for building an unsafe plane, because its design processes followed the best standards of the industry.
(And note that the NTSB and FAA analyze crashes and feed what they learn back into those rules.)
CEOs, sure (Score:2)
Those making the business decisions are the ones responsible.
The next big thing in AI (Score:2)
Artificial Insurance.
We don't blame weather forecasts for being wrong (Score:1)
AI is random noise, statistics, and a predictive algorithm that guesses at what follows from the information you feed it.
If there's a 90% chance of rain it's not like I'm mad when it stays a nice sunny day.
AI is great when you straight up have the original data and can compare against it. Until recently, we couldn't even get pictures of people with the correct number of fingers. Why in the world would you blame the AI makers for mistakes? It's only as good as the data going in; if your data doesn't exist, the AI will guess.
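One way to see why a single sunny day doesn't falsify a 90% forecast: probabilistic predictions are judged by calibration over many outcomes, not by any one of them. A toy sketch in Python, with made-up numbers:

    # Toy calibration check: a "90% chance of rain" forecast is evaluated
    # over many days, not against a single day. All data here is invented.

    forecasts = [0.9] * 10                        # forecaster said 90% each day
    rained    = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]    # it stayed sunny exactly once

    observed = sum(rained) / len(rained)
    print(f"Predicted 90%; observed {observed:.0%} over {len(rained)} days.")
    # 9/10 = 90%: the forecast was well calibrated despite one sunny day.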
All programmers should be liable (Score:2)
Re: (Score:2)
Well, no. But other engineering fields provide guidelines. Essentially, if you pay for a professionally made thing, you have a reasonable expectation of it being fit for purpose and having no severe flaws. But there are degrees, even in established engineering disciplines, and it depends on what you can reasonably expect from a product. The problem is "free" stuff, whether really free (FOSS) or just free to download and use. Obviously, even something you get for free should either be labelled "experimental, may not work as intended" or meet at least basic expectations of fitness.