EU Proposes Rules Making It Easier To Sue Drone Makers, AI Systems
The European Commission on Wednesday proposed rules making it easier for individuals and companies to sue makers of drones, robots and other products equipped with artificial intelligence software for compensation for harm caused by them. Reuters reports: The AI Liability Directive aims to address the increasing use of AI-enabled products and services and the patchwork of national rules across the 27-country European Union. Under the draft rules, victims can seek compensation for harm to their life, property, health and privacy due to the fault or omission of a provider, developer or user of AI technology, or for discrimination in a recruitment process using AI.
The rules lighten the burden of proof on victims with a "presumption of causality", which means victims only need to show that a manufacturer or user's failure to comply with certain requirements caused the harm and then link this to the AI technology in their lawsuit. Under a "right of access to evidence," victims can ask a court to order companies and suppliers to provide information about high-risk AI systems so that they can identify the liable person and the fault that caused the damage.
The Commission also announced an update to the Product Liability Directive that means manufacturers will be liable for all unsafe products, tangible and intangible, including software and digital services, and also after the products are sold. Users can sue for compensation when software updates render their smart-home products unsafe or when manufacturers fail to fix cybersecurity gaps. Those with unsafe non-EU products will be able to sue the manufacturer's EU representative for compensation. The AI Liability Directive will need to be agreed with EU countries and EU lawmakers before it can become law.
How about we fix bigger issues first? (Score:2, Interesting)
Re: (Score:2)
Nah, I'd rather take this at face value. After all, when 100,000 people are harmed and the class-action kicks off, I'll be looking forward to really sticking it to them with my $7 check in five years.
That'll fuckin' teach them to do it again and again and again.
Gee, it's almost as if lawyers are helping write this law, 'cause Daddy Esquire needs a new yacht when his new one becomes fashionably old in 3 years. Like a smartphone.
Re: (Score:2)
Re: (Score:2)
Re:How about we fix bigger issues first? (Score:4, Insightful)
No, they don't have fewer accidents than humans. They only drive under conditions where humans have 100x fewer accidents, and even there they manage only 10x fewer accidents — but then that figure gets compared to humans under all conditions, to lie and pretend to be safer.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I get the angst on AI driven cars and deaths, but they have far fewer accidents than humans.
I don't think there is any evidence that autonomous cars are safer at the moment. Maybe in the future.
Re: (Score:2)
Why wait? I don't know how large Tesla's "full self driving" beta is or how often the testers use it versus just Autopilot. But it's already pretty rare for me to hit the road and *not* see at least one of those Waymo Jaguar SUVs driving around. And Google is nothing if not good at collecting and analyzing data. The Cruise self-driving cars are very common these days too. It shouldn't be too much of an exercise to run the man vs machine numbers for the two, at least.
Re: (Score:2)
Re: (Score:2)
Right. AI cars only kill people sometimes, so we should focus on the really serious life-threatening stuff like copyright and DRM. Gotcha.
Re: (Score:2)
Mission In Life (Score:3)
I swear to god, it seems like EU's mission in life is to file lawsuits, create opportunities for filing lawsuits, invent new reasons to file lawsuits, etc.
I think the entire world could be at peace and harmony with itself...world hunger could be solved, everyone could have a home and plenty of food to eat and everything else they could possibly want, and the EU would be like, "We need to sue somebody!!"
Re: (Score:3)
Re: (Score:2)
Exactly this. The US (and, I think, the UK) have legal systems that make use of previous court decisions. Ultimately, a series of court decisions can wander farther and farther from the law, and create results completely at odds with what the law says.
In all other EU countries, indeed afaik in most of the world, the law is the law. Previous court rulings are only relevant insofar as they clarify ambiguities in the law. Even then, the current judge is free to clarify those ambiguities as seems appropriate.
Re: (Score:1)
The US (and, I think, the UK) have legal systems that make use of previous court decisions. Ultimately, a series of court decisions can wander farther and farther from the law, and create results completely at odds with what the law says.
Because sometimes, statutory law does not cover all the exceptions that society would really want to a law. In a common law system, judges generally listen to their gut to determine who is at fault and who is not, and then tend to write an opinion around it.
And of course, you rightfully ask "OH YEAH, GIVE ME AN EXAMPLE".
Riggs v. Palmer [wikipedia.org] (TL;DR: murderer did not get an inheritance from the person he murdered, despite statutory law saying he was entitled to it).
Re: (Score:2)
Re: (Score:1)
Well, if I were the judge I'd just give him a free pass, and then put in the decision my opinion that there should be a law for cases like that.
Well, that's how the common law system works. The judge's opinion becomes part of established law, assuming it is not overruled by a higher court.
And that's also why I know about it: it was part of my list of cases to brief for law school.
Re: (Score:2)
I swear to god, it seems like EU's mission in life is to file lawsuits, create opportunities for filing lawsuits, invent new reasons to file lawsuits, etc.
I think the entire world could be at peace and harmony with itself...world hunger could be solved, everyone could have a home and plenty of food to eat and everything else they could possibly want, and the EU would be like, "We need to sue somebody!!"
Which is why the US is not the capital of frivolous lawsuits?
The first rule of Slashdot headlines about the EU is that Slashdot headlines about the EU are always going to be sensationalised, if not completely wrong.
Under this proposal, the plaintiff only needs to reasonably demonstrate that the company behind the device/software did not comply with EU regulations. Upon this presumption, the company is then required to turn over evidence that it did, indeed, comply with relevant EU regulations or
Re: (Score:2)
It seems to me that any copyrighted image used to train a neural net makes that neural net a derivative work, and by the laws of youtube 100% of the monetization revenue should be claimable by the copyright holder of any image/sound/video used in the training set of the AI. DMCA saves the world from Skynet apocalypse!
Re:Click on EULA, go to arbitration, no suing... (Score:4, Informative)
It also means no company in Europe tries to invest in these industries due to the overbearing regulation and potential costs. And in 15 years, the EU will start targeting successful companies from other parts of the world with protectionist lawsuits and fines, because they believe it was monopolistic behavior and not their own regulatory atmosphere that once again kept Europe from being on the forefront of innovation.
Re: Click on EULA, go to arbitration, no suing... (Score:2)
+1, INSIGHTFUL
Re: (Score:1)
Re: (Score:2)
All the evidence suggests otherwise. For example, VW just released a minivan that has advanced self parking. You demonstrate how to park, which can involve a series of manoeuvres along your winding driveway and around various obstacles, and then it can repeat that autonomously. Obviously it compensates for things like slightly different starting positions.
The driver is still required to pay attention in case it makes a mistake, but it's also by far the most advanced auto-parking system on the market.
Legal
Re: (Score:2)
Except that for contracts to be enforceable in many countries, you have to understand the details, and in many countries a contract cannot stop you from suing or waive other rights you have in law.
That is, for example, why here if you want to get a bank loan, the signing process takes a long time: the bank will go through every part of the contract with you so that they can prove that you understood the conditions in the contract.
That type of framework makes EULAs basically useless except as a tool to scare peopl
Another EU industry kneecapped (Score:5, Insightful)
About the only tech industry where the EU is competitive these days is automotive, but this legislation would kneecap any hope of a European-made self-driving car. I mean, would you work on a self-driving car project knowing that you, personally, could be sued for a bug?
It looks like the future of automotive will be American, Japanese, Korean, and Chinese.
Re: (Score:2)
Re: Another EU industry kneecapped (Score:4, Interesting)
Let's imagine for a second that a self-driving car has a bug that causes an accident. Who, if not the car maker, should be held responsible?
Re: (Score:3)
It depends how autonomous it is. Up to level 2, which is driving aids and a qualified driver carefully monitoring them, the manufacturer will always try to blame the person behind the wheel. Tesla does this all the time.
For level 3 and up, where the driver can stop paying attention for at least some of the time, the manufacturer is entirely responsible during those periods. Sounding a warning for the driver to take over 0.3 seconds before impact doesn't absolve them either. I say that because when Tesla kil
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
How competitive would it be, though, without those juicy government subsidies [ustr.gov]?
Re: (Score:2, Troll)
The alternative is what you have in the US, where Tesla's beta test regularly kills people, and their surviving family members struggle to sue because Tesla just blames the driver and hides behind the EULA.
We shouldn't sacrifice our right to redress, or our privacy, just because big businesses claim it will "kneecap" them. They said the same about GDPR, but it turns out that they can in fact comply with it just fine, and it didn't destroy the web in Europe.
Re: (Score:2)
We shouldn't sacrifice our right to redress, or our privacy, just because big businesses claim it will "kneecap" them. They said the same about GDPR, but it turns out that they can in fact comply with it just fine, and it didn't destroy the web in Europe.
No, but constant cookie notifications on many sites is annoying.
Re: (Score:2)
Cookie banners are annoying, but also non-compliant. None Of Your Business (NOYB) has filed complaints about hundreds of sites over that, and I've done a few myself. The regulators are a bit slow though.
Hopefully in time cookie banners will go away. They violate Recital 32 of the GDPR, which states that to be compliant, permission must be freely given. Any coercion like full screen banners that make the site impossible to use until dismissed, or making it harder to decline than to accept by putting the decli
Re: (Score:2)
Re: (Score:2)
And GDPR accomplished and added nothing useful whatsoever. My job was already about 50% InfoSec when GDPR rolled down on us. So I was pretty close to the "front lines," as the cliche goes. You know how much my job changed? You know how much my teammates' jobs changed? You know how much the directors of InfoSec or ProdOps jobs changed? Not at all. We were already following industry best practices, and then some, to protect our company's data. To any of our knowledge, our efforts were already, and rem
Re: (Score:2)
GDPR isn't really about security, it's about who owns your data. You own your own data, and companies can't just steal it.
Haha (Score:2)
Lawyer: Your honour, the plaintiff identified that the bug was the regularisation term in the loss function. They should have used L2 instead of L1, and used a different random seed; this was discovered using a non-causal inference method called "hunching" because the law doesn't r
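The L1/L2 distinction the joke leans on is real, for what it's worth. A minimal sketch of the two penalty terms (the function names, lambda value, and seed are illustrative, not from any actual system):

```python
import numpy as np

def l1_penalty(w, lam=0.01):
    # L1 (lasso) regularisation: sum of absolute weights; encourages sparsity
    return lam * np.sum(np.abs(w))

def l2_penalty(w, lam=0.01):
    # L2 (ridge) regularisation: sum of squared weights; shrinks weights smoothly
    return lam * np.sum(w ** 2)

rng = np.random.default_rng(seed=42)  # the "random seed" the lawyer complains about
w = rng.normal(size=5)                # pretend these are model weights

data_loss = 1.0                       # placeholder for the data-fit term
total_with_l1 = data_loss + l1_penalty(w)
total_with_l2 = data_loss + l2_penalty(w)
```

Which penalty (and which seed) is "correct" is exactly the kind of modelling judgment call that would be hard to litigate as a "bug" — which is the point of the joke.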
Govs make laws to suite themselves (Score:1)
Notice how governments always make laws to suit themselves far more than any citizens?
This law can be used to wipe out individuals who use AI to protect themselves from the government. "Perky little shit, let's slap him with a lawsuit, see if he can stay afloat with that."
In the Netherlands, the government made a law that says that "Anyone with a secrecy obligation cannot be found guilty of perjury for lying in court". They quoted "For example a lawyer that has a secrecy obligation to their client".
Why AI? Just ordinary product liability. (Score:4, Interesting)
Why single out AI systems? If a product is defective, it is defective. Product liability laws already exist.
The only thing special about AI is the fact that literally no one knows why an AI does something. But that also doesn't matter. If it has a provable error ("car thinks full moon is a stop light, stops in middle of road"), then it is defective. It doesn't matter whether that error is in a logical rule or buried in a neural net. Prove there is an error, get compensated for your damages.
Re: (Score:2)
AI-enabled products? (Score:2)
What exactly are AI-enabled products? Do these idiots not realise that these are just software systems? There is nothing special about these systems, it's just software. Neural nets are basically just gigantic if-then-else lines of software.
Re: (Score:2)
What exactly are AI-enabled products? Do these idiots not realise that these are just software systems? There is nothing special about these systems, it's just software. Neural nets are basically just gigantic if-then-else lines of software.
Now that is what I call shortsighted. Not sure why you think laws written by and for Greed today won't still be sitting in stone a half century from now, when those if-then-else systems are far more than that.
These laws, are to protect those so they can develop full AI without restriction, regulation, or remorse. Of course, the irony of that is even the father of AI will be viewed as a pointless meatsack, and will be considered a mere annoyance by AI not unlike corporate fines are today.
Impact on open source (Score:2)
Re: (Score:2)
Re: (Score:2)
that all possible scenarios in the real world have been anticipated and tested.
Too bad the law didn't include the politician's solution to the Halting problem [wikipedia.org].....*facepalm*
Anything and everything is the manufacturer's fault (Score:1)
Re: (Score:2)
Bottom line: this is a major threat to the EU's already waning ability to innovate. They are shooting themselves in the foot.
Manufacturers will selectively rationalize, disable, or find substitutes for AI in the EU versions of their products. By "rationalize" I mean product teams will have to spin gears going back and forth with legal convincing each other that a particular application of AI isn't a risk (such as using AI during the design phase where output will be extensively human-reviewed) or the risk i
Better Call Saul V 2.0 (Score:1)
The danger with laws like this... (Score:2)
Re: (Score:2)
AI should follow laws (Score:2)
The core issue here is that people want to pretend that AI is magic and can't discriminate or break laws; it honestly does, all the time.
We need greater transparency and public integrity of these systems, and to do that, you have to be able to validate conclusively that they will not discriminate illicitly.
Panic (Score:3)