Can AI Developers Be Held Liable for Negligence? (lawfaremedia.org)
Bryan Choi, an associate professor of law and computer science focusing on software safety, proposes shifting AI liability onto the builders of the systems:
To date, most popular approaches to AI safety and accountability have focused on the technological characteristics and risks of AI systems, while averting attention from the workers behind the curtain responsible for designing, implementing, testing, and maintaining such systems...
I have previously argued that a negligence-based approach is needed because it directs legal scrutiny on the actual persons responsible for creating and managing AI systems. A step in that direction is found in California's AI safety bill, which specifies that AI developers shall articulate and implement protocols that embody the "developer's duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm" (emphasis added). Although tech leaders have opposed California's bill, courts don't need to wait for legislation to allow negligence claims against AI developers. But how would negligence work in the AI context, and what downstream effects should AI developers anticipate?
The article suggests two possibilities. Classifying AI developers as ordinary employees would leave employers sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims"). But AI developers could also be treated as practicing professionals (like physicians and attorneys): "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies." AI is a field that perhaps uniquely seeks to obscure its human elements in order to magnify its technical wizardry. The virtue of the negligence-based approach is that it centers legal scrutiny back on the conduct of the people who build and hype the technology. To be sure, negligence is limited in key ways and should not be viewed as a complete answer to AI governance. But fault should be the default and the starting point from which all conversations about AI accountability and AI safety begin.
Thanks to long-time Slashdot reader david.emery for sharing the article.
CEOs, sure (Score:5, Insightful)
Those making the business decisions are the ones responsible.
Re: (Score:2)
Those making the business decisions are the ones responsible.
That’s odd. The CEOs keep pinning responsibility on someone named “Fiduciary”, who’s apparently one HELL of a demanding bitch.
Re: (Score:2)
Hmmm...I guess CEOs are useful for something after all!
Re: (Score:2)
Yes, and we need to end this practice of allowing tech companies to avoid liability when in other industries, the liability is clear.
The next big thing in AI (Score:2)
Artificial Insurance.
We don't blame weather forecasts for being wrong (Score:2)
AI is random noise, statistics, and a predictive algorithm that guesses at what the inputted information is.
If there's a 90% chance of rain it's not like I'm mad when it stays a nice sunny day.
AI is great when you straight up have the original data and can compare it. Until recently, we couldn't even get pictures of people with the correct number of fingers. Why in the world would you blame the AI makers for mistakes? It's only as good as the data going in; if your data doesn't exist, the AI will guess.
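To make the forecast analogy concrete, here's a minimal, hypothetical sketch (the probabilities and day counts are invented for illustration, not taken from any real forecaster): a single sunny day doesn't make a "90% chance of rain" prediction defective; the forecaster can only be judged over many predictions.

import random

random.seed(0)

# Hypothetical forecaster that always predicts a 90% chance of rain.
PREDICTED_PROB = 0.9

# Simulate 1,000 days on which it really does rain about 90% of the time.
rained = [random.random() < 0.9 for _ in range(1000)]

observed_freq = sum(rained) / len(rained)
sunny_days = len(rained) - sum(rained)

# Any individual sunny day "contradicts" the forecast, yet over many days
# the predicted probability and the observed frequency agree closely.
print(f"predicted {PREDICTED_PROB:.0%} vs observed {observed_freq:.0%} rain frequency")
print(f"days where the forecast 'looked wrong': {sunny_days}")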
Re: (Score:2)
I'd only blame the "AI maker" in the sense of it being the business or company making it and putting out unreasonable claims. Not the developers, who, as somebody else said, may not get to decide whether to deploy it or not. Same deal if a team was developing a plane and the business side decided to push it out the door before all the problems had been worked out.
Re: (Score:2)
> We don't blame weather forecasts for being wrong
The moment you assume people are rational, you lose.
https://www.unilad.com/news/wo... [unilad.com]
https://lauralaw.net/the-law-o... [lauralaw.net]
What's he smoking? (Score:3, Insightful)
You can't even get that form of liability coverage from developers who don't program AI, so you think an AI developer should put his/her head in the guillotine? Why don't you just let the developers take out liability insurance and call it a day?
Re: (Score:2)
You expect all the Open Source developers, as well as anyone else who wants to release software, to take out liability insurance?
Nope, sorry!
You'll get a lot of people anonymously releasing software then, on darknets if necessary.
Re: (Score:2)
That wasn't my intention at all. I don't think open source developers should have to get any sort of insurance and should have all sorts of "I'm not liable under any circumstances" clauses in whatever software they release. My point was toward the developers that charge for software they sell and, more specifically, toward AI software, noting that even software that's based on concrete logic in programs such as our traditional software doesn't offer liability, let alone developers offering liability for wh
Re: (Score:2)
This is indeed where this would lead: developers having to take out liability insurance. This would in turn drive up the prices for software developers, and the price of software, without actually leading to any increase in quality. Further, it would set up a much more antagonistic relationship between developers and the businesses that employ them.
No, just no.
Re: (Score:2)
Who are *they* who should be liable in your absurd scenario? The 1st year college grad who was just hired and has no idea what's going on? The mid-level coder who wrote the training algorithm? The senior developer who put together the architectural puzzle and made it all work as instructed? No, none of these should be liable, unless they individually made decisions that caused harm. And that is already the case. If you are a "legacy" programmer today, and you sabotage your company's software in a way that h
Re: (Score:2)
OK well I know this is slashdot, and nobody reads the articles, but the article is suggesting that individual developers should be held liable, not just the company they work for.
Nobody ever sued Microsoft (Score:3, Insightful)
for spreadsheet errors, lost Outlook emails, etc. Why would dodgy AI be any different?
Re: (Score:3)
It's right in their EULA, that's why.
They state it isn't fit for any purpose whatsoever, and it's your fault if you use it at all. At least that's the gist.
It will be different for AI if they make it different.
There's no logic necessary. Laws don't need to cleave neatly along non-law boundaries. They can define AI, so they can separate it as a category.
Re: (Score:2)
Businesses take out errors and omissions insurance all the time. This covers employees who make mistakes.
They also take out product liability insurance which I think would be applicable to AI products. Whether or not an insurer will gamble on an AI product is unknown.
Why should they? (Score:3)
Software vendors can release any old POS code and cause untold damage with zero consequences. Why would AI developers be treated any different?
I contend that both should be held liable. It's high time Microsoft should answer for bugs in Windows that allow malware to exfiltrate personal data, for example, and officers of companies that release reckless software should face jail time.
Similarly, companies that use "algorithms" to moderate their sites, like Youtube or Amazon, should be held accountable for whatever their algorithms decide, and should not be allowed to hide behind them to shirk responsibility.
But if you're not going to apply that standard to regular software, then you shouldn't apply it to AI either.
Re: (Score:2)
I would generally agree, especially on the "algorithm" issue which is totally out of control due to section 230.
That said, bugs do happen and depending on the software and its intended use, you might end up strangling a lot of business ideas.
What if I publish some code on GitHub, am I liable for it? This needs clarity.
AI is software.
When Twitter became 'news' (Score:2)
I remember when Twitter became a 'news' source: Competing networks rushed to publish the latest meme, which usually lacked an eye-witness and/or facts. Will broadcast networks repeat the same pissing-contest using AI?
We already know LLMs hallucinate and worse, they feed on their own garbage, resulting in a toxic 'personality' such as Microsoft Tay. Still, reporting and journalism is seen as the perfect job for LLMs, turning many words into few. Such LLMs will also have to include the bias of broad
Re: (Score:2)
And also in that, you get all the uninformed opinions and anti-facts that have been posted in that Slashdot discussion.
Nicely presented as facts, all polished and turd like, in this case.
AI Developer payroll (Score:2)
...But AI developers could also be treated as practicing professionals (like physicians and attorneys). "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies."
Sure. They could treat AI developers like that...if they want to see those job titles come with seven-figure starting salaries.
Sometimes we forget why we pay those “practicing” professionals so much money. A good part of a doctor's salary funds pay-to-play insurance policies.
Yes (Score:3)
If you make a tool to give information and it provides wrong information, that's like a drill that doesn't, well, drill: you should get your money back if you bought it.
Re: Yes (Score:2)
I'm not an attorney, but this one is easy. Provide a tool where the tool maker promises in good faith to give good-quality answers. The tool maker says they are not liable for any actions taken based on the tool's output.
Now you need to prove that the tool maker did not act in good faith, you have to establish what a "quality answer" is, and you have to prove the answer was not "good" quality. Even if you do that, you can't sue for damages because by clicking through the EULA, you agreed the tool mak
Re: (Score:2)
Something like - "Drill: sometimes turns in the direction you want. Sometimes doesn't. Randomly disassembles the operator."
Don’t Forget the Real Culprits: Politicians (Score:2)
Sure, AI developers should be held accountable for negligence, but let's not forget the politicians who've been ignoring decades of scientific warnings about climate change. They're the ones creating the pressure cooker environment where radical, untested tech becomes the "solution" to crises they failed to address. If policymakers hadn’t consistently dropped the ball, maybe we wouldn’t be in a position where we're depending on AI—or any other cutting-edge technology—as a last-minute
a reasonable approach and an unreasonable one (Score:2)
'The article suggests two possibilities. Classifying AI developers as ordinary employees would leave employers sharing liability for negligent acts (giving them "strong incentives to obtain liability insurance policies and to defend their employees against legal claims"). But AI developers could also be treated as practicing professionals (like physicians and attorneys): "In this regime, each AI professional would likely need to obtain their own individual or group malpractice insurance policies."'
So yeah,
Re: (Score:2)
...Physicians and attorneys are hired by clients directly, clients of AI do not hire the programmers at all...
Physicians also are generally a single person hired by a single customer for a single problem. If my doctor misdiagnoses my skin condition, there's only two or three people involved, not thousands. If there are dozens of other doctors in the medical group they are clearly not responsible.
I was going to say attorneys (and accountants) work the same way, and they often do. However, you do have large law firms with hundreds of lawyers working for a single client. I'm not sure the individual liability model is
Re: (Score:2)
Developers are hardly held accountable ALREADY, and it's always been that way as far as I've seen. The fact that many people and organizations are nearly always involved also makes it easy to diffuse blame, like a corporation, but the main protection is the terms of service/license agreements.
How is this different than relying on a shared library or Windows that breaks your software? Developers will know about a big flaw and work around it; admitting clearly they know they are using flawed components
Not with the current SCOTUS (Score:2)
What else is new? (Score:2)
Who would have ever guessed a lawyer would be arguing for legal regimes in which everyone is liable for what everyone else does?
No funny here (Score:2)
Seemed like a rich target story, but... 'Nuff said.
Can politicians be held liable for ... (Score:2)
Re: (Score:2)
This will simply result in endless litigation trying to score jackpots from deep pockets. Why don't we just put a warning label on every AI engine saying "use at your own risk, output may not be what you want". AI is in its infancy, it is going to have a HUGE number of errors in it. AFAIK no significant AI engine in existence can be blindly used without humans supervising its decisions. Consider AI as making suggestions, and then YOU decide whether or not to accept that suggestion.
Re: (Score:2)
Coders will insist on liability insurance as part of their employment package.
There will be less money for innovation and more money for lawyers and bureaucrats.
More development will move overseas.
Re: Who decides what an 'unreasonable risk ' is? (Score:2)
Legislation may have to change in order to define what's permitted in the EULA as well as requiring sign-off by a licensed software engineer.
Re: (Score:2)
Coders will insist on liability insurance as part of their employment package.
There will be less money for innovation and more money for lawyers and bureaucrats.
More development will move overseas.
Must be why we have a shortage of home builders and doctors in this country. Because of liability insurance. Everyone is moving overseas.
Re: Who decides what an 'unreasonable risk ' is? (Score:2)
Everyone wants to be an influencer.
Re: (Score:2)
Must be why we have a shortage of home builders and doctors in this country. Because of liability insurance.
Medical care in America is much more expensive than in any other country, and insurance is a big reason.
Home building is also dysfunctional, but mostly because of bureaucracy and regulation. Getting a building permit takes twice as long as it did 30 years ago and is much more likely to be denied. The result is very expensive homes sitting next to empty lots.
Medical, transportation, high-risk jobs (Score:2)
Can there be any liability for an AI company for
1) Using a biased set of input training data - like medical research, where the court case could hinge on the plaintiff selecting 5 expert doctors who disagree with the AI's output
2) Producing incorrect results
3) Producing different results for the same question when the question is asked in different ways. Something like prescribing 5 MG of a medicine sometimes and 10 MG another time where the 10 MG is life threatening to some patients
4) AI is unable to give
Re: (Score:2)
If I were to slightly challenge the comparison: if you go to a doctor, their liability is pretty obvious to define as things they themselves advise or do. If they follow medical guidelines that turn out to be poor, refer you to a specialist who screws up, prescribe a drug and dose that a pharmacist messes up on, etc., it isn't their responsibility. It's much harder to draw a line on when an AI de
Re: (Score:2)
A counter argument is that if the tech is known to be buggy and not ready for prime time (as you clearly believe to be the case), but the com
Re: (Score:2)
Bridge builders don't have the same kind of limited liability licenses (for good reason), but if the software causes too much damage, the limited liability license isn't going to help either.
Re: (Score:2)
What you are suggesting be done *has* been done in software since its infancy.
The products are not warranted for any use.
Why wouldn't the same model work here?
You'll have to answer that before we get into your diatribe about bridges and AI being the same thing.
Re: (Score:2)
What you are suggesting would mean, yes, that we wouldn't be subject to those things.
However, the reason would be that none of it would ever be developed, and the US would be decades behind other countries in software development.
We'd still be trying to develop a single OS.
Re: (Score:2)
The former can be constructed incrementally, with design principles and guarantees, using a precise language. This is why software improves and bugs get fixed over time.
The latter, and indeed all neural networks at present are not constructed with any degree of precision or formal guarantees, and bugs cannot be addressed individually w
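A rough, hypothetical illustration of that contrast (a toy stand-in, not a real neural network): conventional code can be pinned down with an exact test, and a failing test points at a specific line to fix, while a learned function can usually only be checked against a tolerance over many inputs.

import random

# Conventional software: behavior can be asserted exactly, and a failing
# assertion points at a specific, fixable line of code.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

assert celsius_to_fahrenheit(100) == 212  # exact, repeatable guarantee

# Toy stand-in for a learned model: a slightly mis-fit linear function.
# There is no single line to patch; its error can only be bounded statistically.
random.seed(1)
slope = 1.8 + random.uniform(-0.05, 0.05)
intercept = 32.0 + random.uniform(-1.0, 1.0)

def learned_conversion(c: float) -> float:
    return slope * c + intercept

errors = [abs(learned_conversion(c) - celsius_to_fahrenheit(c)) for c in range(0, 101)]
print(f"max error over 0-100 C: {max(errors):.2f} F")  # a tolerance, not an exact match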
Re: real engineering standards Nah! (Score:2)
We're in the "Cambrian Explosion" phase of AI technology development and freedom to innovate is needed to ensure rapid improvement of the technology, including eventually its safety features.
Another reason why it's so difficult to effectively regulate software practitioners is that there are simply too many (unique and novel) choice points faced by developers of innovative software. It is a highly unconstrained landscape. A playground of infinite poss
Re: (Score:2)
if they had to get pilot licenses and an FAA type certificate and airworthiness certificate and comply with local noise bylaws first?
There's a time and a place for very careful and considered regulation, but right at the start of the innovation explosion in an area is not usually it. And that's where we are with materially functional/successful fairly general AI.
Re: (Score:2)
That's a great idea! While we're at it, let's also put disclaimers on bridges: use at your own risk, bridge may not support anyone's weight.
You've never gone over bridges with weight restrictions?
Actually we could also do this with drinking water. Drink at your own risk, water may contain untreated sewerage.
Or seen a gray water tap?
Or maybe even with every commercial product, eventually. Buy at your own risk
Or read a California cancer warning?
A counter argument is that if the tech is known to be buggy and not ready for prime time (as you clearly believe to be the case), but the company still puts out the product anyway knowing this, then this is willful and premeditated criminal deception and fraud.
How specifically does explicitly telling people who use something that its outputs are unreliable constitute "criminal deception"?
Some would say that the principals of the company should pay a steep price for any damage its deception has caused.
"Some would say" just about anything.
The most amazing thing to me is people created systems modeled from concepts of how brains work and then people bitch these systems are inherently unreliable. Well no fucking shit.
Re: You can already go to jail (Score:2)
The key is your software has to cause damages. If it does, then you can be sued.
Re: (Score:2)
Quick reminder: you can already be held liable for damage your software causes. A prime example is the software engineers at Volkswagen, who wrote the code to cheat emissions. This guy is another example [darkreading.com]. Occasionally drug website developers are arrested. The key is your software has to cause damages. If it does, then you can be sued.
And this guy is talking about negligence, not "I used your AI model to try to beat the market and lost money." Negligence is a lot different than just it didn't work like I thought it would.
Re: (Score:2)
That was intentional commission of fraud by means that happened to involve creating software to do it.
That's different than making an AI data structure and algorithm which when mixed with massively complex input data from the world of human knowledge and creativity creates both good and bad ideas and outputs.
Re: (Score:2)
This is akin to parents being held liable for the crimes of their children. Yes the parents can influence them, but children grow into people with a mind of their own, and experiences of and behaviors of their own. Children (and their independent lives) are too complex to blame it on their parents.
Historically, many societies assigned parental (and family) joint responsibility for the malfeasance of one member. But law is j
Re: (Score:2)
Very specific matter. Not general culpability for the independent actions of your offspring.
Re: (Score:2)
"This is not enough. The risk is that the output may not be what we want you to hear."
-- From a totalitarian state or some equivalent thought-police department.
Why not regulate software as much? (Score:2)
And much harder to predict what it will always do, right or wrong.
Putting an onus that you have to get it perfect before releasing it for use would kill software innovation and damage the economy.
Re: (Score:2)
This roughly means you can't prove (to yourself or others) (in general) (no matter how much testing or verification effort) that your program is free of bugs or unintended behaviors.
It is in this context that liability for software development must be assessed.
How about a regulatory authority? (Score:2)
Ask 10 people and you'll get 10 different answers.
Ask the FAA and you'll get one answer, and it'll be a precise answer.
The FAA has a bunch of codified rules for software development and testing. As I understand it, the intent was that if the code was developed and tested according to FAA rules, then the manufacturer would not be held liable for problems; specifically, the airplane manufacturer could not be sued for building an unsafe plane, because the design processes were the best standards of the industry.
(And note that the NTSB and FAA analyze crashes
Re: (Score:2)
I think the same thing will happen with autodrive in cars: if the autodrive software is statistically safer than a human driver, then the manufacturer can't be sued for fatal crashes.
They'll have to nail down "human driver" though. Are we talking about a 50th percentile, "average" driver? 5th percentile "drunk, distracted, and speeding"? 95th percentile "never had an accident"?
Then, of course, ideally "best practices" will eventually take over, where you stop competing against the "human driver" and start needing to meet "industry best practices" - IE not zero accidents, but a certain constant effort to reduce them.
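A bare-bones sketch of what that comparison boils down to (all figures invented purely for illustration): crashes per distance driven, measured against whatever human baseline regulators or insurers settle on.

# Hypothetical figures, purely to illustrate the shape of the comparison.
HUMAN_BASELINE_PER_MILLION_MILES = 4.2   # the chosen "human driver" benchmark

auto_crashes = 30            # observed crashes for the autodrive fleet
auto_miles = 10_000_000      # miles driven by that fleet

auto_rate = auto_crashes / (auto_miles / 1_000_000)  # crashes per million miles

print(f"autodrive: {auto_rate:.1f} vs human baseline: "
      f"{HUMAN_BASELINE_PER_MILLION_MILES} crashes per million miles")
# The contested part is which percentile of human drivers sets the baseline,
# and how much statistical margin is required before claiming "safer".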
Re: (Score:2)
This will incent them to release only self-driving systems that are considerably statistically safer than the average human driver.
Re: (Score:2)
Waymo on the other hand has actually driverless taxis. So liability rests with the Waymo company.
Tesla will go this way too. Tesla is already trying to self-insure its drivers/vehicles in states whose laws allow it. This is partly a
Re: Who decides what an 'unreasonable risk ' is? (Score:2)
No human involved. I do wonder what copilot would say though.
Re: (Score:2)
Ahem. Ask 12 people and you'll get 1 answer, and it may not be an answer you like.
Re: (Score:2, Insightful)
There's a reason this requirement isn't in place for computers / IT: It's not a static system, and the variables introduced into it may not even exist for literal decades after it's brought into service. Placing liability on the designers of the system for a problem that may not exist until well after the original designers have retired isn't going to get you a lot of people volunteering to go in
Re: (Score:2)
It would lead to a world where little to no software innovation occurred.
Software is already held accountable in situations where it matters, think trains, FAA, etc.
Your proposal just serves to get more lawyers rich.
You wouldn't happen to be a lawyer, would you?
Re: (Score:2)
Software is already held accountable in situations where it matters
Matters to who? That statement essentially says software is held accountable only to those with power. Everyone else doesn't matter.
Most doctors do have malpractice insurance and most never use it because they are never charged with malpractice. That is because they practice medicine to established standards.
I am not surprised that a lot of computer programmers don't think they should be held accountable. No one likes to be held accountable. But the result is that mostly crappy software is being produced.
Re: (Score:2)
Establishing professional responsibility for bugs would start to create a world in which bugs were few and damages limited. Yes, it would lead to litigation. Yes it would require programmers to have insurance. Yes, it would raise costs. It would also make software a lot more reliable and hold people accountable when it failed.
Tell me more about how the bestest doctors in the world all operate under crippling amounts of insurance, and hone themselves under litigious ass-puckering pressure to be as perfect as your theory and their safety record, making malpractice insurance practically pointless and dirt cheap.
I’ll get my popcorn, no wait. Better not. God knows I can’t afford to choke on it laughing.
Re: (Score:2)
Kind of like doctors, who routinely must take out malpractice insurance, driving up the cost of healthcare. Holding doctors legally liable for anything that goes wrong, whether or not malpractice is involved, doesn't improve the quality of care, it just makes it more expensive.
If programmers were held liable for defects, the quality wouldn't go up, just the cost.
Mistakes happen. It's nearly always the process, not the person. The same person, given a rushed process, will make far more mistakes than when the
Re: (Score:2)
Establishing professional responsibility for bugs would start to create a world in which bugs were few and damages limited. Yes, it would lead to litigation. Yes it would require programmers to have insurance. Yes, it would raise costs. It would also make software a lot more reliable and hold people accountable when it failed.
Tell me you don't write large software systems without telling me you don't write large software systems.
I find that position incredibly naïve. In any substantial software system, identifying the source of the bug is hard enough. Trying to identify which actual human was responsible is impossible. For most bugs, there are likely dozens of ways it could have been avoided and more dozens of people who could have prevented it. You're going to sue them all?
The actual effect of that practice would be to vir
Re: (Score:2)
You're going to sue them all?
Yes. Anyone who could have prevented it. There are human flaws in any product, but if a house burns down because of bad wiring the electrician responsible can't just shrug and say "guess we goofed".
The argument that no one is responsible for flaws in programs is all the more reason that the system needs to be changed so people can and will be held responsible.
Re: All programmers should be liable (Score:2)
Establishing professional responsibility for bugs would create a situation where all coding in the US is outsourced to other countries. Domestic software development would become overworked, slow, and extremely expensive.
Re: (Score:2)
Establishing professional responsibility for bugs would create a situation where all coding in the US is outsourced to other countries. Domestic software development would become overworked, slow, and extremely expensive.
That is the classic response of any industry when "threatened" with regulation. The reality is different. What it would mean is that people who produced high quality software efficiently wouldn't have to compete with low quality junk cranked out by people with no standards.
Re: (Score:2)
Or shift it to countries with a more reasonable legal viewpoint on software liability, that considers the virtually unlimited complexity and inbuilt binary brittleness (one bit wrong amongst billions and you're screwed) of software.
Bad bargain.
Re: (Score:2)
I'm entirely unsure what your argument is.
We *do* have guidelines for safety critical software. I know. I used to write it. So that part of your argument is rubbish.
As far as free software goes, it's like finding a random piece of fried chicken by the roadside and thinking it's okay to eat, as opposed to getting it from a supermarket.
What is your point?
Re: (Score:2)
You really did not get the point, did you? First, you need to get rid of that "random piece of chicken" idea, because it has no connection to reality.
Start from deeply flawed assumptions and you obviously will not understand things.
Re: (Score:2)
I would think it would be hard to sue someone for flaws in open source software. When anyone can examine the source code, who do you sue for failing to catch a flaw?
Re: (Score:2)
Let's just sue the parents of the people who did the work, too.
If they hadn't spawned the kid, they would never have made the mistake!