A Rogue Robot Is Blamed For a Human Colleague's Gruesome Death (qz.com)
A new lawsuit has emerged claiming a robot is responsible for killing a human colleague, reports Quartz. It all started in July 2015, when Wanda Holbrook, "a maintenance technician performing routine duties on an assembly line" at an auto-parts maker in Ionia, Michigan, called Ventra Ionia Main, "was 'trapped by robotic machinery' and crushed to death." From the report: On March 7, her husband, William Holbrook, filed a wrongful death complaint (pdf) in Michigan federal court, naming five North American robotics companies involved in engineering and integrating the machines and parts used at the plant: Prodomax, Flex-N-Gate, FANUC, Nachi, and Lincoln Electric. Holbrook's job involved keeping robots in working order. She routinely inspected and adjusted processes on the assembly line at Ventra, which makes bumpers and trailer hitches. One day, Holbrook was performing her regular duties when a machine acted very irregularly, according to the lawsuit reported in Courthouse News. Holbrook was in the plant's six-cell "100 section" when a robot unexpectedly activated, taking her by surprise. The cells are separated by safety doors and the robot should not have been able to move. But it somehow reached Holbrook, and was intent on loading a trailer-hitch assembly part right where she stood over a similar part in another cell. The machine loaded the hardware onto Holbrook's head. She was unable to escape, and her skull was crushed. Co-workers who eventually noticed that something seemed amiss found Holbrook dead. William Holbrook seeks an unspecified amount of damages, arguing that before her gruesome death, his wife "suffered tremendous fright, shock and conscious pain and suffering." He also names three of the defendants -- FANUC, Nachi, and Lincoln Electric -- in two additional claims of product liability and breach of implied warranty. He argues that the robots, tools, controllers, and associated parts were not properly designed, manufactured or tested, and not fit for use. "The robot from section 130 should have never entered section 140, and should have never attempted to load a hitch assembly within a fixture that was already loaded with a hitch assembly. A failure of one or more of defendants' safety systems or devices had taken place, causing Wanda's death," the lawsuit alleges.
And so it begins... (Score:3, Interesting)
And so it begins...
Re: (Score:2)
Yeah, what could go wrong with dumb robots everywhere, and pesky people getting in the way?
Re:And so it begins... (Score:5, Insightful)
I think about the only possible dumb reaction to this news would be to immediately assume you know what happened.
Re: (Score:3, Informative)
When an airplane flies into a mountain, it's a pretty good bet that it's pilot error.
When a robot crushes someone, it's a pretty good bet that someone didn't deactivate the robot before working on it.
Re:And so it begins... (Score:4, Funny)
When an airplane flies into a mountain, it's a pretty good bet that it's pilot error.
Thanks for providing us with a detailed example of your root cause analysis abilities. I will be sure to disregard everything you say as baseless rubbish from this point forward.
Regards
The people who actually know the many ways things can fail.
Re:And so it begins... (Score:4, Insightful)
I did read it, and this bit puzzles me:
a robot unexpectedly activated, taking her by surprise
because I can't find the bit about her relaying that information to anyone; her co-workers only "eventually noticed that something seemed amiss [and] found Holbrook dead". So how do we know that the robot unexpectedly activated and took her by surprise? It may well be a valid assumption, but without any witnesses it cannot be stated with certainty.
Re:And so it begins... (Score:4, Insightful)
One might infer that it took her by surprise because she failed to get out of the way before it crushed her.
Re: (Score:3)
Re: (Score:3)
Re: (Score:3)
We are still in an age for civil discourse, just not using a forum where it is commonly practised. Seriously, read at -1 at some point. If it weren't for the moderation system doing its job, you'd see that Slashdot is a cesspool of shit that makes 4chan look good.
Re: (Score:2)
Talk about an inappropriate response; you win the loon award for the day.
Re: (Score:3)
It sounds to me like she didn't lock it out because she wasn't working on that robot. She was working in an adjoining cell where she should have been protected by safety doors.
Re: (Score:3)
Re: And so it begins... (Score:3)
That's exactly what Lockout/Tagout is for: to remove energy from the device and ensure it cannot operate. If a device cannot be safely locked out, it would never be allowed in an American facility.
Re:And so it begins... (Score:4, Interesting)
Re: And so it begins... (Score:3, Informative)
Re: (Score:3)
So THAT is the fault of the electrician/company, not the robot manufacturer.
Re: And so it begins... (Score:4, Insightful)
Re: (Score:3)
Re:And so it begins... (Score:5, Informative)
Someone didn't follow Lock Out Procedures or those procedures were inadequate.
The only possible liability lies with her, or the company, not the robot manufacturers.
For those who don't know what "Lock Out Procedures" means... It is a safety protocol that has been used in industry for decades, in which a person who is going to work near dangerous machinery turns off the power to the system and physically puts a padlock on the switch so that it cannot be turned back on. Protocol is that there is only one key to the padlock, and the person who placed the padlock carries the key with them. This way the person is responsible for their own safety. If 15 people are working on the equipment, there are 15 padlocks hanging off the switch (there are special devices that allow a whole gob of padlocks to be placed on a single switch). Lockout can be mechanical in addition to electrical, but the concept is that when something is locked out, it is not physically possible for it to operate. It is important to note that control systems are not locked out; actual power sources are. This way, even a computer or control-system failure cannot cause a dangerous condition when something is "locked out".
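To make the "many padlocks, one switch" idea concrete, here is a minimal Python sketch of the group-lockout rule (the class and names are hypothetical and purely illustrative, not any real plant software):

```python
# Minimal model of a group lockout hasp: power can only be restored
# once every worker has removed their own padlock.

class LockoutHasp:
    def __init__(self):
        self.locks = set()  # one personal padlock per worker

    def apply_lock(self, worker):
        self.locks.add(worker)      # worker de-energizes and locks the switch

    def remove_lock(self, worker):
        self.locks.discard(worker)  # only that worker holds the key

    def can_energize(self):
        # The disconnect stays locked open while ANY padlock remains.
        return not self.locks

hasp = LockoutHasp()
hasp.apply_lock("worker_a")
hasp.apply_lock("worker_b")
assert not hasp.can_energize()  # two locks on: power cannot be restored
hasp.remove_lock("worker_a")
assert not hasp.can_energize()  # still locked out by the remaining worker
hasp.remove_lock("worker_b")
assert hasp.can_energize()      # all locks removed: safe to re-energize
```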
Re: (Score:3)
I don't think you understand the concept of "lock out - tag out".
If the robot has no power, it can't hurt anyone. No further "test cases" required.
This is maintenance 101, you never work on any dangerous system without first personally turning off every power supply to it, and putting a padlock on the switch that only you have the key to, along with a tag with your name and contact information in case anyone has questions.
The question here isn't about what test cases the robot passed, the question is about
Re: (Score:3)
I don't think you understand the concept of "lock out - tag out".
It seems that she was working on the robot in location 130, and that robot was absolutely safely shut down - and it didn't occur to anyone that the robot in location 140 could reach far enough to kill someone in location 130.
They were most likely properly trained, and handled the danger that they were aware of correctly. Unfortunately, they didn't see the other danger.
Re: (Score:3)
Then they weren't properly trained, or they didn't follow that training.
Safety isn't about what "should" happen under ideal situations, it's about what "can" happen under worst case situations.
If the robot in location 140 never goes in to location 130, then it shouldn't be capable of doing so. If it does sometimes go in to location 130, but wasn't expected to at this time, it should have been physically blocked from doing so, or shut down as well.
This is safety 101 here, it's not complicated.
Re: (Score:3)
Experienced people are actually the most likely to miss important safety steps. They get complacent and think they know better. After all, they've never had a problem yet...
Re: (Score:2)
"should not have been able to move" and yet did, tells me it had power, which means it wasn't locked out.
You don't lock out only the things that *should* happen, you lock out all the things that *can* happen.
Re: (Score:2)
Should she have locked out every robot in the plant? She wasn't working on that robot. It moved into the cell she was working in from another cell.
Re: (Score:3)
You apparently have no clue what Lock Out means.
It is the process whereby you render a device completely inoperative and additionally, prevent it from being unintentionally reactivated. In almost all industries, it is an inherent part of the operation of the device. It is as ubiquitous as are seat belts. If you buy a dangerous machine, it most likely comes with instructions for locking it out, along with (literally) locks, flags, tags, etc. Hell, we just got a large air compressor at our plant and yes, it h
Re: (Score:2)
A lot of workplaces won't let you just power down the machinery before working on it, since powering one machine down would also necessitate shutting down an entire assembly line along with it, which would cost them production time (and money). That's why they have safeties on the machines.
Re:And so it begins... (Score:4, Informative)
Re: (Score:2)
Re:And so it begins... (Score:5, Insightful)
Re: (Score:2)
Begins? This sounds exactly like the sort of issue from the start of the industrial revolution, when people were routinely mauled by machinery with inadequate safety standards. About 200 years too late for 'and so it begins'.
No shit. "Man killed by industrial machinery when a safety feature failed." That may be news, but not outside the local community.
Re: (Score:2)
Well, woman. Her name was Wanda.
[John]
Re: (Score:2)
Re: (Score:3)
It already began back in 1979 [wikipedia.org], humans getting killed by robots is nothing new.
Yep, there's your problem. (Score:4, Funny)
You had the switch on "kill" rather than "assemble".
Re:Yep, there's your problem. (Score:5, Funny)
This is what you get when you demand kill switches for robots.
Before you crack stupid jokes please read this: (Score:2, Funny)
Industrial accident (Score:5, Informative)
A failure of one or more of defendants’ safety systems or devices had taken place, causing Wanda’s death.
That's it. That's all this lawsuit is about, faulty failsafes on industrial equipment that lead to an accident. Probably with merit.
But sure, call it "rogue robots" and "killing"...
Re:Industrial accident (Score:4, Insightful)
Re: Industrial accident (Score:3)
If only it had been connected to the Internet...
Re: (Score:3)
If only it had been connected to the Internet...
Then it would be an AI.
Re:Industrial accident (Score:5, Interesting)
My experience with industrial accidents is that it's almost certainly human error. I've seen someone deliberately disable the safety systems because they were inconvenient, then get mutilated doing something stupid the safeties would have prevented them from doing.
Personally, I've operated machinery on manual override when it should have been on automatic, the machine blaring warnings at me the whole time which just didn't register because I heard them so often at work. Luckily, the passive safety systems (the big steel protective cage I was in) kept me from harm.
With robots, failures are more likely to stop the system than to start it up. To accidentally start something when it shouldn't be started usually takes human interference.
Re:Industrial accident (Score:4, Interesting)
My experience with industrial accidents is that it's almost certainly human error.
My experience with human error is that the only way to avoid future human error is to find the underlying cause which permitted that error.
I've seen someone deliberately disable the safety systems because they were inconvenient, then get mutilated doing something stupid the safeties would have prevented them from doing.
Sheer stupidity exists. The only way to eliminate it is by removing the stupid people. I remember a palletising machine at a biscuit factory where one employee asked the other to lock him into the cage and start the machine, because a faulty sensor was misplacing every 8th box. When a manager walked in, he hit the e-stop. Shortly after, security walked in and escorted both the person in the cage and the person who locked him in off site.
When they said "no one told me", they were made a huge example of, and management made sure that everyone at the plant knew the story of the two idiots who got themselves instantly fired.
It is one area that is really difficult to manage.
Re: (Score:2)
Re:Industrial accident (Score:5, Informative)
Re:Industrial accident (Score:4, Interesting)
Looks like the factory has both a history of accidents (2 previous deaths) and owner/name changes. That could indicate a culture of disregard for safety.
Perhaps even welcome. From The President Changed. So Has Small Businesses’ Confidence [nytimes.com]
The owner of an automotive parts assembler [electroprime.com] gave thanks that he would not be receiving visits from pesky environmental and workplace overseers.
The president [kellerlogistics.com] of a trucking company spoke of a “tremendous dark cloud” lifting when he realized he would no longer be feeling the burden of rules and regulations imposed by the Obama administration.
“My gut just feels better,” said Bob Fleisher, president of a local car dealership. [franklinparklincoln.com] “With Obama, you felt it was personal — like he just didn’t want you to make money. Now we have a guy who is cutting regulations and taxes.”
Thankfully, those pesky environmental and safety rules and regulations protecting people and getting in the way of profits over disposable employees will be going away...
Re:Industrial accident (Score:5, Interesting)
I think calling it a "war" is disingenuous. Name a single regulation that was not based on a real public need. The only one I can think of was the whole anti-pipeline thing (KXL and Dakota Access), which was just straight-up ignorant. I wouldn't call that a war on businesses, though. In fact, my company made money off of TransCanada due to that debacle.
Compare that to what we have now, though. The president can cause stock prices to drop 5% or more with a single vindictive, vapid tweet. Obama's administration could lay out a damning white paper with detailed explanations of why mining tailings were bad for drinking water supplies, and nobody gave a shit because they knew the Republicans in congress would never actually do anything about it. God forbid we have an EPA or OSHA that actually, you know, defends we the people.
I'm pretty pro-Tenth, but honestly how are you going to handle river pollution at the state level? Is Louisiana really going to tell Texas to keep their cadmium on their half of the Sabine? Air pollution: can Arizona tell California to get its cars off the road when the wind is blowing? What about global warming? Anything requiring an international treaty is by definition a federal issue.
I'm tired of people claiming that all regulations are anti-business, and that all businesses just want to screw people for a buck. There's a real benefit to the American people when a smoothly functioning federal government does its job. Also, there are real financial incentives when businesses act ethically. Unfortunately we haven't seen either in so long, most people have forgotten what it looks like.
Re: (Score:3)
And your logic is?
Here's how my equation falls out:
* States without coal: safety paramount.
* States with coal: extraction paramount.
Hence, disaggregation of oversight guarantees extraction.
Or—wait for it!—we can draw a BIG circle around the ENTIRE externality all at once (and one for all).
But that would actually lead to broad discussion, and horse trading, and the sound exercise of restraint, and the wrong kind of green.
Re: (Score:3, Insightful)
The other thing to note is that a good solution to the robot safety problem is to simply add more automation. Mixing humans and automation requires huge effort and cost at the interface. Even if it costs a lot more than the marginal cost of labour to eliminate the last pieces of manual work on a production line, the potential savings across the wider system could make it worth doing. This is likely to mean that even those prepared to work well below minimum wage will not be able to get jobs that can be automated.
Re: (Score:2)
She was in there to repair the robots. We haven't yet developed robots that can diagnose and repair problems in other robots. That work will likely remain with humans until AI is sufficient to accomplish these tasks, and I don't see that happening anytime soon.
what about repair and the push to keep the line (Score:2)
What about repair, and the push to keep the line moving as much as it can while working to repair the automation?
Abolish health and safety laws?? When the rich boss says "do some unsafe thing or you're gone", some may just say fuck it and beat the shit out of that boss, just so they can get free room, board, and a doctor in prison!
Re: (Score:3)
I wrote where I am [google.is]. Hint. [wisc.edu].
That said, lox has the same root word [etymonline.com]. It was even the English word through Old English (læx) until "salmon" (a word from Latin (salmonem) of unknown origin) took over. That actually happened with a lot of "food-related" terms, with Latin-origin terms (via French) replacing Germanic/Norse-origin terms - but usually the animal itself kept the Germanic/Norse. For example, you have cow (proto-germanic *kwon, Norse kýr/kú) but the food is beef (Latin bovem); swine (pro
Orders (Score:5, Funny)
The robot was just following orders.
Terrible (Score:5, Informative)
It sounds absolutely terrible, but one of the primary things you learn when doing heavy-machinery maintenance is lock out/tag out, which renders all related machinery completely inoperable while servicing. It doesn't seem that this was done?
To be clear, if the company's maintenance policies prevented her from properly locking out what she was working on, then they certainly do have a suit.
Re: (Score:2)
Honestly, the way the article is worded, it sounds like the 'safety doors' were supposed to lock out the other robots, rather than say a breaker being flipped. I'd love to know how those doors are supposed to work, I'd also love to know whether what she was doing was supposed to be done with the robots powered or not (not everything can be done with them powered down).
Re: (Score:2, Interesting)
The issue has very little to do with the lockout in the robot cell (well... see below).
The problem is that you have two adjacent cells that can interact. If Cell A is in lockout mode, then the robot from Cell B should not be allowed to reach into Cell A, but it may well still be allowed to work in its own cell.
So the possible faults here are:
1. The user failed to use the correct lockout (climbing over the door, or having another human help reset the door; it's supposed to be impossible from inside the cell)
2. The lockout failed
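For what it's worth, the adjacent-cell rule above boils down to a few lines of logic. A hedged Python sketch (the cell and robot names are made up for illustration; real controllers implement this in certified safety hardware):

```python
# A robot may only move into a zone it can physically reach AND whose
# cell is not locked out; it may still run in its own unlocked cell.

REACH_MAP = {
    "robot_140": {"cell_140", "cell_130"},  # robot 140 can reach both cells
    "robot_130": {"cell_130"},
}

locked_out = {"cell_130"}  # the technician is working here

def motion_permitted(robot, target_zone):
    return target_zone in REACH_MAP[robot] and target_zone not in locked_out

assert motion_permitted("robot_140", "cell_140")      # own cell: allowed
assert not motion_permitted("robot_140", "cell_130")  # locked-out cell: denied
```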
Re:Terrible (Score:5, Insightful)
Re:Terrible (Score:5, Insightful)
Indeed, and that is a completely standard approach. If the safety equipment was faulty, not present, or there was pressure to not use it, then there is a very good case. If the equipment was there, working, but not used, then there is no case at all. Machinery is always dangerous and you must never bypass safety procedures, or suffer the consequences.
Many people are stupid though. Refer, for example, to all the photos on the Internet where you see somebody operate a circular bench saw without the protection bar. Just stumble once while the thing is running....
Re: (Score:2)
If the safety equipment was faulty, not present, or there was pressure to not use it,
Well, the workplace can say "someone did not lock out the full area, so we are not at fault", even if that lockout procedure is not well known (say, that to kill power you need to do A + B + C to fully turn off work zones 130 and 140).
Re: (Score:2)
Indeed. Safety equipment that is difficult to use (except when there is no other possibility, e.g. for a "moon suit") is faulty by definition. When there is no alternative to complicated safety equipment, it has to be assured that everybody is trained and capable of using it correctly, and this has to be tested and verified regularly by training exercises.
Re: (Score:2)
The laws on worker safety in most jurisdictions are pretty well thought out in that regard. The only way the employer would avoid responsibility is if they can prove that the worker was properly trained in the proper procedures, and that the corporate culture encouraged them to use them.
If the employer can't prove the worker was properly trained, or is known not to enforce their safety procedures, then the employer will be held responsible.
All that said, it seems like the lawsuit is trying to blame the manufacturers.
Re: (Score:2)
Refer, for example, to all the photos on the Internet where you see somebody operate a circular bench saw without the protection bar. Just stumble once while the thing is running...
Frankly, every circular bench saw I've ever seen has had a protection shield that probably wouldn't protect you if you used it on the ground and then fell on it. I converted my bench saw into a table saw, so it would be a lot harder to fall on. Which is good, because it doesn't have a guard either.
Re:Terrible (Score:4, Insightful)
And yet an action that it was known to be fully capable of doing under other circumstances.
It doesn't matter if it *should* do the thing at the time, what matters is whether it *can* do the thing at any time. And if the answer is yes, it should have been subject to lock out-tag out.
"Human Colleague"... Nope, You Just Don't Get It (Score:5, Interesting)
The term "human colleague" immediately reveals that the writer has no idea of what a "robot" is. The most important thing always to keep in mind is that a "robot" is a machine - or, more likely nowadays, a collection of machines. It is a tool, even if that tool is capable of a limited set of autonomous actions. The accidental death described in TFA is a perfect illustration of this vital principle. Maybe there should be signs ten feet tall prominently displayed on all walls in workplaces that use robots: "A ROBOT IS *NOT* A 'COLLEAGUE'!"
Mind you, this confusion has been inherent since the word was first coined. "The word 'robot' was first used to denote a fictional humanoid in a 1920 play R.U.R. by the Czech writer, Karel Capek but it was Karel's brother Josef Capek who was the word's true inventor". [Wikipedia] The word is derived from the Slavic language root meaning "work" or "worker", and strongly suggests that a robot is to some extent interchangeable with human workers. Of course, that is absolutely not the case.
Isaac Asimov confronted these issues head-on when he began writing science fiction stories about robots. His "Three Laws of Robotics", which essentially forbid any robot to harm a human being, are treated as indispensable in his stories. But Asimov blandly ignored the obvious fact that there is no known way to implement such laws, which incorporate high-level abstract notions and moral principles. Until robots become at least as intelligent and complex as human nervous systems, such commands cannot be implemented. And if they ever do, we will immediately face even more tremendous problems.
Re: (Score:3)
Re: (Score:2, Interesting)
You haven't actually read Asimov's Robot stories, have you? If you had, you'd know that the implementation of the three laws of robotics was entirely irrelevant to the points he was trying to make.
Re: (Score:2)
You haven't actually read Asimov's Robot stories, have you? If you had, you'd know that the implementation of the three laws of robotics was entirely irrelevant to the points he was trying to make.
I had read all of them - some several times - by 1965. When were you born?
Re:"Human Colleague"... Nope, You Just Don't Get I (Score:5, Insightful)
Agreed that a Robot is no more a colleague than a screwdriver.
I think you're wrong about Asimov, though. It's obvious that to write about theoretical concerns of future technology, the author must proceed without knowing how to actually implement the technology, but may be able to say that it's theoretically possible. There is no shortage of good, predictive science fiction written when we had no idea how to achieve the technology portrayed. For example, Clarke's orbital satellites were steam-powered. Steam is indeed an efficient way to harness solar power if you have a good way to radiate the waste heat, but we ended up using photovoltaic. But Clarke was on solid ground regarding the theoretical possibility of such things.
Re: (Score:2)
But Asimov blandly ignored the obvious fact that there is no known way to implement such laws, which incorporate high-level abstract notions and moral principles.
He didn't ignore that, he simply assumed that it would become possible in the future -- and extensively explored the ways in which it could still go wrong, showing that even with that nonexistent technology, the seemingly foolproof Laws of Robotics were anything but foolproof.
Re: (Score:2)
Oh dear. Like the other AC (or are you the same?) you don't seem to grasp that even if Asimov's laws failed to achieve their intended consequences, that has nothing to do with the fact that they could not be implemented anyway.
In other words, Asimov may have been interested in showing that the laws didn't accomplish what was intended. But they were also impossible.
One of the rules of hard science fiction is that the author must not introduce more departures from known scientific facts than absolutely necessary.
"Should not". But did. (Score:2)
"The cells are separated by safety doors and the robot should not have been able to move. But it somehow reached Holbrook, and was intent on loading a trailer-hitch assembly part right where she stood over a similar part in another cell".
From a design/programming point of view, the key words are: "...the robot should not have been able to move. But it somehow reached Holbrook..."
"Should not". Hmmmmmmmmmmmmmmmmm.
Sounds as though someone made a mistake designing the system. Which is easily done. Restoring a d
Auto drive cars need a avionics software level cod (Score:2)
Auto-drive cars need avionics-software-level code reviews and QA!
If not, just wait for one to take poor map data and crash.
Re: (Score:2)
The problem is that some idiot relied on "should" when they should instead have been looking at "can"
When designing safety procedures you NEVER look at what a system should do in a given set of circumstances, you always look at what the system is capable of doing under any circumstances, and act accordingly.
If that robot sometimes goes in to that cell, but shouldn't decide to right now, you don't take that as good enough, you think about what you need to do to stop it from doing so no matter what. So that c
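As a rough illustration of that "can, not should" principle in software terms (a sketch with made-up coordinates and names, assuming a simple 2D safe envelope; real systems do this with hard stops and certified limit hardware):

```python
# Enforce a hard limit on what the robot CAN do, independently of
# whatever the task program says it SHOULD do right now.

SAFE_X = (0.0, 2.5)  # metres; chosen so the envelope excludes the next cell
SAFE_Y = (0.0, 2.0)

def within_envelope(x, y):
    return SAFE_X[0] <= x <= SAFE_X[1] and SAFE_Y[0] <= y <= SAFE_Y[1]

def checked_move(x, y):
    # We never ask whether the program intended this move,
    # only whether the move is EVER allowed.
    if not within_envelope(x, y):
        raise RuntimeError("motion outside safe envelope: e-stop")
    print(f"moving to ({x}, {y})")

checked_move(1.0, 1.0)      # inside the envelope: fine
try:
    checked_move(3.2, 1.0)  # reaches toward the adjacent cell: refused
except RuntimeError as err:
    print(err)
```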
Come on, not that "Terminator" BS again... (Score:5, Interesting)
"Robot" engineer here. And when I say "Robot", I really talk about "Industrial Robot". Not the one that look like human.
It's 2015 all over again when another "Robot" killed a Volkswagen worker. People were all "Matrix have begun" rogue.
First, let me tell you to scary part : "The robot have done exactly what it have been programmed to".
Second, let me tell you the encouraging part : "The robot have done exactly what it have been programmed to".
It's always the same thing, "industrial robot" kill/hurt someone, and we see an headline about Robot revolution coming to kill us all in Terminator style. Those robot are just basic program controlling a bunch of servomotor, nothing "AI rogue humanoid robot with a shutgun" like. But there's on thing that are common to each of those story : "Safety violation".
In my mind, industrial robot are still the most dangerous piece of hardware you'll ever work with, period. And that's why there's a shit ton of safety measure for them. Yeah gears are dangerous and could tear off your finger, but you indistinctly know that as long as you don't put your finger close to them, they won't bite you. It's not the case with robot.
Back to the Volkswagen case, the worker didn't respect the safety procedure. The robot are connected to a safety gate that "must" be open when there's a worker inside the cell. You enter the cell, you put your lock in the gate to deactivate everything dangerous inside of it. But, from what I've understand, those worker wanted to work fast and took a "shortcut" while testing their equipment and decided to close the gate while a worker was inside. Of of the system then activated the robot that started it's wielding procedure with the worker right between both : https://www.youtube.com/watch?... [youtube.com] (Look between 0:05 and 0:30, everything else in this video is shit).
I work constantly in this sort of system and you'll be amazed how many "close call" I've seem so far. The thing is, people are completely clueless about robot (Hell, one time I was presenting a robotic cell with two KUKA robotic arm to some potential customer and one of the cute asian girl asked me if she should "see" the body of the robot. She was thinking there's was a huge robot under the floor controlling the two arm).
Long story short : Respect the freaking safety procedure.
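The gate interlock described above is essentially a tiny state machine. A hedged Python sketch (a hypothetical class, not anyone's actual safety controller, which would live in certified hardware):

```python
# Opening the gate always drops servo power; closing it alone does NOT
# restore power. Power returns only via a deliberate reset performed
# from OUTSIDE the cell.

class GateInterlock:
    def __init__(self):
        self.gate_open = True
        self.servo_power = False

    def open_gate(self):
        self.gate_open = True
        self.servo_power = False  # open gate = robot disabled, no exceptions

    def close_gate(self):
        self.gate_open = False    # still no power until an explicit reset

    def reset_from_outside(self):
        # The reset button sits outside the cell, so nobody can re-arm
        # the robot while standing next to it.
        if not self.gate_open:
            self.servo_power = True

cell = GateInterlock()
cell.close_gate()
assert not cell.servo_power  # closing the gate alone keeps the robot safe
cell.reset_from_outside()
assert cell.servo_power      # armed only after the external reset
```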
Re: (Score:3)
Re: (Score:2)
The story makes me wonder a little if there was a mistake in the safety lockouts or the general software, given that the machine that killed her was never supposed to work in the area she was in. However, I agree most problems are with not following safety procedures. Stating that someone has worked safely on certain machines for years says nothing about how safely they are working with them. Someone can cross busy roads away from a crosswalk many times before getting run over.
Re: (Score:2)
If the machine is NEVER supposed to work in that area, then it shouldn't physically be able to get there.
If the machine wasn't supposed to work in that area AT THAT TIME, then it should have either been powered down, or physically blocked from getting there.
This is safety 101.
Re: (Score:2)
Re: (Score:2)
In my mind, industrial robots are still the most dangerous pieces of hardware you'll ever work with, period. And that's why there's a shit ton of safety measures for them. Yeah, gears are dangerous and could tear off your finger, but you instinctively know that as long as you don't put your finger close to them, they won't bite you. That's not the case with robots. Back to the Volkswagen case: the worker didn't respect the safety procedure. The robots are connected to a safety gate that "must" be open when there's a worker inside the cell. You enter the cell, you put your lock on the gate to deactivate everything dangerous inside of it. But, from what I understand, those workers wanted to work fast, took a "shortcut" while testing their equipment, and decided to close the gate while a worker was inside.
Somehow I don't find robots in a safety cage that aren't supposed to be turned on with humans inside particularly scary. Getting killed by that is like being killed by a car sliding off the jack and crushing you, if you'd just bother to secure it properly before you crawled under it'd be completely harmless. I bet more people have died from the GPS giving faulty directions than industrial robots, much less faulty brakes, defective medical equipment and such that could quite easily kill people by simply not
Re: (Score:2)
Well... the terminator was only doing exactly what it was programmed to do. ;) The terminator programmers were, fortunately, very bad at their jobs.
The way I see it, the Terminator is "real" AI, meaning that you program the AI and then it evolves by itself, the same way the brain of a baby does. It can "think outside the box".
An industrial robot's program can't. And it also can't deactivate its safety measures. If you open the safety gate, the robot "will" deactivate, no matter how fancy your program is.
"Rogue robot" - "Misprogrammed machinery" (Score:2)
At least, there is nothing special here once you remove the pervasive stupidity of today's press reporting. A piece of machinery was programmed wrongly, and there was either no independent safety equipment to stop it or it was not used. This is essentially no different from other machinery-related deaths.
Yup, here's your problem (Score:3)
Sounds familiar (Score:2)
It seems we have our first Runaway [wikipedia.org]. Better check to make sure its circuits haven't been modified.
What about the three laws of robotics? (Score:2)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
There's some gruesome bias here (Score:3)
I'm not sure what literary device is being used here, but the word choice in the entire blurb leading up to this point makes it sound like the robot became sentient and was on a mission. The last line (the bit I quoted) was equal parts gruesome, horrific, and fantastic. I would read a book by whoever wrote that bit.
Re:Unplug it first (Score:5, Insightful)
Re: (Score:3)
"No! Shut them ALL down!"
Re: (Score:2)
Or isn't there a master power switch?
When I change the blade on my circular saw table, I always unplug it first. Even though I know I won't be touching the "on" switch, I don't trust it.
I somehow doubt you're allowed to turn off the whole factory at the mains just because you're moving into a different room.
Re: (Score:3, Interesting)
I'd trust it more if it didn't have a fallible human behind the wheel.
I trust computers not to drink, drive sleepy, fiddle with the radio, talk to the hounddog on the CB, or text cousin Willy, and to be aware of what's happening around them for a full 360 degrees every microsecond.
Re: (Score:2)
Re: Still want self driving cars? (Score:2)
Re: (Score:2)
I'd trust it more if it didn't have a fallible human behind the wheel.
I trust computers not to drink, drive sleepy, fiddle with the radio, talk to the hounddog on the CB, or text cousin Willy, and to be aware of what's happening around them for a full 360 degrees every microsecond.
Yes, but can you really trust them not to be evil?
Re: (Score:2)
I'd trust it more if it didn't have a fallible human behind the wheel.
I trust computers not to drink, drive sleepy, fiddle with the radio, talk to the hounddog on the CB, or text cousin Willy, and to be aware of what's happening around them for a full 360 degrees every microsecond.
As opposed to the fallible humans that programmed those computers... who have no personal responsibility over the vehicle, and who may have been drunk, sleepy, fiddling with the radio, etc. Ya, I know: but testing! blah, blah, blah... That still makes my ass twitch.
Re: (Score:2)
why no hard kill switch? (Score:2)
why no hard kill switch?
Re: (Score:3)
...kill the kid in a way that no human would have ever done, unless high on drugs.
Or falling asleep, or distracted by texting, or messing with their music, or... The computer doesn't have to be perfect, just better than us, which seems quite easy at this point.
it is no more than the sum of its programming or the reliability of its sensors.
And neither are you or I.
Re: (Score:2)
If someone is killed by an AI, it is a freak accident that makes the news and horror stories. If someone is killed by a drunk, it is considered an everyday thing. However, in both cases, dead is dead.
At least an AI can be improved, the guy who has no license, four priors, and is driving a car titled under a family member's name isn't going to become a safer driver.
Re: (Score:3, Insightful)
It isn't always possible, however.
Note this portion of the description from the summary:
routinely inspected and adjusted processes
- there are many times when the design of the machine is such that adjusting and calibrating requires the machine to be energized; and sometimes safety interlocks must be disabled (generally with vendor provided tools) in order to make those adjustments.
An injury or death (sadly more specifically the high dollar value lawsuit following it) may provide sufficient
Re: (Score:2)
Did this robot learn from its environment? No.
No AI was involved. It was merely an industrial machine. You wouldn't expect a roll-former or a feed-to-stop to automatically shut down because a human stuck their hand in the line EXCEPT if the cage around the machine malfunctioned and failed to detect the hand, or if someone had shut the cage controls off (probably it would have been interlocked with the machine so it wouldn't work if the cage was shut off), or if the cage had been removed and the interlock overridden.