Uber Ordered To Take Its Self-Driving Cars Off Arizona Roads (nytimes.com) 295
After failing to meet an expectation that it would prioritize public safety as it tested its self-driving technology, Uber has been ordered to take its self-driving cars off Arizona roads (Warning: source may be paywalled; alternative source). "The incident that took place on March 18 is an unquestionable failure to comply with this expectation," Gov. Doug Ducey of Arizona wrote in a letter sent to Dara Khosrowshahi, Uber's chief executive. "Arizona must take action now." The New York Times reports: Uber had already suspended all testing of its cars in Arizona, San Francisco, Pittsburgh and Toronto. "We proactively suspended self-driving operations in all cities immediately following the tragic incident last week. We continue to help investigators in any way we can, and we'll keep a dialogue open with the governor's office to address any concerns they have," said Matt Kallman, an Uber spokesman. The rebuke from the governor is a reversal from what has been an open-arms policy by the state, heralding its lack of regulation as an asset to lure autonomous vehicle testing -- and tech jobs. Waymo, the self-driving car company spun out from Google, and General Motors-owned Cruise are also testing cars in the state. Mr. Ducey said he was troubled by a video released from the Tempe Police Department that seemed to show that neither the Uber safety driver nor the autonomous vehicle detected the presence of a pedestrian in the road in the moments before the crash.
Next, banning humans? (Score:2)
Surely Governor Ducey is not going to be a hypocrite, particularly when lives are at stake: "Arizona must take action now!"
Re:Next, banning humans? (Score:5, Interesting)
According to the Arizona Department of Transportation, at least as of 2016 [azdot.gov], there were 952 fatalities in car accidents in Arizona, or approximately 2.61 deaths per day.
So far so good - now look up the number of cars in Arizona [statista.com] (about 2.4 million) and the number of Uber self-driving cars [techradar.com] (200 across 4 cities). Now apply "appropriate precision" in Uber's favour - 200 out of 2 million = 1/10,000 of AZ cars are Uber self-drivers. So, with 1000 fatalities/year, Uber get to kill someone every 10 years - they've used that up in one. (Of course, that's an unspeakably crude and dubious calculation, but it's better than yours).
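The same back-of-envelope, sketched in Python using the figures linked above (this keeps the full 2.4 million registered vehicles, so the share comes out closer to 1/12,000 than the 1/10,000 the text rounds to):

    az_fatalities_per_year = 952       # AZDOT figure for 2016
    az_registered_vehicles = 2_400_000
    uber_test_cars = 200               # across 4 cities, so this flatters Uber

    share = uber_test_cars / az_registered_vehicles        # ~8.3e-5
    expected_per_year = az_fatalities_per_year * share     # ~0.08 deaths/year
    print(f"one expected fatality every {1 / expected_per_year:.0f} years")
    # prints roughly "one expected fatality every 13 years" - Uber used that
    # allowance up in a single year of testing.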
Then, of course, of those 1000 regular fatalities, many will be attributed to drunk-driving, speeding, texting (or other forms of reckless driving), non-roadworthy vehicles etc., all of which carry potential criminal penalties - including possible driving bans - so it's not the case that nothing is being done about them.
Uber were allowed to test experimental vehicles on the condition that they'd have a safety driver ready to take over - and one thing that the video clearly shows is that the safety driver was not paying attention (to the surprise of absolutely nobody except, apparently, Uber). The video also shows that the pedestrian was crossing the road in clear line-of-sight, in a street-lit area, from left to right, yet the car made no attempt to brake or swerve. If you believe that the video truly represents what the Mk 1 eyeball and/or the car's sensors could "see", then all that proves is that the car was going too fast for the conditions - outdriving its headlights - and the driver should have taken action to slow it down.
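To put numbers on "outdriving its headlights" - a rough sketch, assuming a speed of about 40 mph (the post doesn't give one) and textbook reaction and braking figures:

    v = 40 * 0.44704        # assumed ~40 mph, in m/s
    reaction_time = 1.5     # s, an alert driver
    decel = 7.0             # m/s^2, hard braking on dry asphalt
    stopping_distance = v * reaction_time + v**2 / (2 * decel)
    print(f"stopping distance ~{stopping_distance:.0f} m")   # ~50 m
    # Low beams typically light up something on the order of 50-60 m of road,
    # so at this speed there is essentially no margin for an obstacle that only
    # becomes visible at the edge of the beam - which is the point made above.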
Re:Next, banning humans? (Score:5, Interesting)
Re:Next, banning humans? (Score:4, Interesting)
Might be pointless, but one recommendation I would make to the automated car companies is that any warning sign, like yield to bikes, should trigger a visual and audible alert from the car to "wake up" the human monitoring it. Though given how often nothing would happen after that alert, it would probably get ignored after a while.
Re: Next, banning humans? (Score:2, Insightful)
Yes, they would have seen her, and so would you. The video released is overly dark. If your headlights only lit up two stripes ahead, you would be hitting a lot more things. But also check out other examples of the exact stretch of road.
On top of that, why didn't the car even attempt to brake? I don't care about human reaction time; I would just expect my self-driving software to be able to pick up an obstacle that big in under 1.5 seconds. And that is just off crappy video - what the hell was the lidar doing?
Uber obv
Re: (Score:3)
Re: (Score:2)
Have you calculated how many people would die if all automobile traffic were banned, due to being unable to get access to food, medical care, their jobs, etc.? Surely Theaetetus is not going to be a hypocrite, particularly when lives are at stake!
Comment removed (Score:5, Funny)
Re: (Score:2)
Drivers staring at their cell phone are *already* banned.
Re:Better remove all drivers too (Score:5, Insightful)
Since the driver was unable to detect this incident too, they better remove all drivers as well!
If a driver was caught behaving like the one in the Uber video - not holding the wheel, concentrating on something in their lap and only glancing occasionally at the road, or otherwise not fully in control of their vehicle - they probably would be removed (and/or have their house removed by the civil courts if death/injury was involved).
Why are people implying that there is some double standard being applied against Uber here? They were already granted an exception that allowed them to test cars in "hands off" mode provided they had a safety driver ready to intervene - they've blown that by not taking steps to ensure that their safety drivers stayed on task (which anybody with a grain of nous knew was likely to be an issue).
Option A: the dashcam shows that there was nothing physically blocking the pedestrian from view, and in a street-lit area either the driver's Mk1 eyeball or the car's sensors should have spotted them long before the low-sensitivity dashcam did; or, Option B: Uber's dashcam video does give an accurate impression of visibility at the time (flap, oink) - in which case the car was dangerously outdriving its headlamps and should have slowed down (or been slowed down by the driver) without needing to see the pedestrian. Pick one. If a human-driven car had had that accident, the driver would stand a good chance of facing - at least - careless-driving charges and/or a civil lawsuit.
We have a process to revoke driver's licenses (Score:2)
So why the fuck didn't the car see the pedestrian? (Score:4, Informative)
This is not a person who suddenly jumped out onto the road here.... while she was jaywalking, she was also *WALKING*... I've seen an overhead view of the section of road where the incident occurred, and there are no significant occlusions there; ordinarily, visibility seems like it would be pretty good there in daylight conditions. It's my understanding that self-driving cars use lidar sensors, and should even be able to detect a person in an absence of any visible light at all, so the fact that it was night should be immaterial. Reasonably, the car should have seen that she was on the sidewalk long before she stepped out into the road, and the very *instant* that she started to go off of the sidewalk should have been detected by the car, which should have immediately begun to slow down.
Yet, by all reports I've heard, the car did not even see this pedestrian at all, and had not even tried to slow down until after the collision. Why? What the fuck happened?
Re: (Score:2)
Because Uber doesn't know what they're doing.
The car is supposed to have LIDAR sensors. So either Uber wasn't using them or the software failed catastrophically. (Or both.)
I'm sure they've been busy doctoring logs and sensor data.
Re: (Score:2)
Your response has nothing to do with the question.... I asked why the car didn't see her. By all rights, it should have, so why didn't it? If they don't know yet, what are they doing to find out?
Was it a sensor malfunction? Did the software in the car fail to recognize her as something on the road? If not, what did it see her as? Can this situation be recreated in simulation to figure out what the car did wrong, and corrected in the future so that there can be some assurance it won't happen aga
Great (Score:2)
Now you're not even safe on the sidewalk!
Measured approach to incident (Score:3)
I have already read several articles suggesting that testing should not be halted, because only more and more refinement of such a complex product will make it viable. Also, even with a few bugs, driverless cars are possibly already less accident-prone than humans.
As a software developer, I naturally side with continuing development.
Looking at the FAA gives a good model on how to proceed.
When an airplane crashes, the FAA sometimes grounds all models of that plane until the cause of the crash is determined and, if it was a technology error, will not allow the planes to fly again until the problem is satisfactorily resolved.
That would appear to be a measured response to this type of problem.
Don't halt all development. Don't proceed, ignoring the death(s).
Prohibit the specific driverless system from using the public roads until the problem is determined and an acceptable fix is made.
Just as cars have model years that receive approval, so should specific versions of driverless systems.
Then we can have official patches deployed on an as-needed basis, not just when a software engineer declares a bug has been fixed.
Very strict controls need to be in place to allow/deny a software/hardware update to a driverless system.
I don't want my car to be hacked and used as a killer weapon.
Re: (Score:2)
I would definitely be in favor of requiring national and, in the US, state authorization for autonomous vehicles to be tested on public roads. I believe the objection is that a "bureaucratic process" would "stifle innovation" and preventable deaths are a preferred alternative. Heaven forbid we let engineers use their engineering expertise to minimize loss of human life when there's a mad rush to be first to market.
The benefit of requiring licensing and government oversight is that every developer will be wo
In response (Score:2)
1. A request has been made to have them all removed.
2. Except for Christine. We've lost contact with Christine. And we really don't know where this car is.
Proactively (Score:2)
We proactively suspended self-driving operations in all cities immediately following the tragic incident last week.
proactive - adjective
(of a person, policy, or action) creating or controlling a situation by causing something to happen rather than responding to it after it has happened.
Re:Big mistake! (Score:5, Insightful)
Testing is done for finding problems. They found one. Don't stop testing now!
Testing is also done (or should be done) in controlled environments until you get way past the alfa and beta stages. Putting the autonomous car on the road can be justified when the car doesn't need human supervision and it can deal with normal traffic conditions in day and night with the same performance as that of a human driver.
I seriously have no idea why autonomous cars in pre-alfa stage are on the roads.
Unfortunately most people are treating autonomous cars as software. And we know how software engineers think. Throw the alfa software to the public and fix mistakes afterwards. Oh, and we're not responsible for anything the software might do that brings down your house, empties your bank account etc....
Re: (Score:3)
What's your proposed model for testing an autonomous car driving amidst normal traffic conditions that does not include actually having it drive among normal road traffic?
Secondly
Re: Big mistake! (Score:2, Insightful)
proposed model for testing an autonomous car driving amidst normal traffic conditions that does not include actually having it drive among normal road traffic?
Hire people to operate vehicles on closed tracks to simulate traffic and specific scenarios.
Or, better yet, orchestrate dozens of other self-driving cars to move in fixed, choreographed patterns that the autonomous vehicle must recognize and react appropriately to.
Re: (Score:2)
proposed model for testing an autonomous car driving amidst normal traffic conditions that does not include actually having it drive among normal road traffic?
Hire people to operate vehicles on closed tracks to simulate traffic and specific scenarios.
Or, better yet, orchestrate dozens of other self-driving cars to move in fixed, choreographed patterns that the autonomous vehicle must recognize and react appropriately to.
I'd be interested to see just what has been done by various AV developers with regard to controlled scenario testing. Obviously you can't re-create every possible scenario, but you could create hundreds of very likely ones. A person crossing the road in various manners would be part of that testing regimen. Seeing what happens with certain sensors failed might be another. It is certainly something worthy of a question. I assume much has been done, but I'd love to see the checklist(s).
At some point thoug
Re: (Score:3)
What's your proposed model for testing an autonomous car driving amidst normal traffic conditions that does not include actually having it drive among normal road traffic?
I thought Uber had an entire town set up for this sort of thing. On a related note, someone should develop mechanized crash-test dummies that can be programmed to walk into the path of an autonomous car; when the cars are capable of avoiding people who are actively trying to get hit, then (and only then) have they achieved enough superiority over humans that we can justify (maybe and only maybe) thinking about sharing our roads with them.
Re: (Score:2)
Re:Big mistake! (Score:5, Insightful)
What's your proposed model for testing an autonomous car driving amidst normal traffic conditions that does not include actually having it drive among normal road traffic?
Perhaps they should focus first on not killing pedestrians on an otherwise empty road, and worry about "normal traffic conditions" later.
Re: (Score:2, Insightful)
Perhaps they should focus first on not killing pedestrians on an otherwise empty road, and worry about "normal traffic conditions" later.
And will you apply the same standard to human drivers, who kill 270,000 [who.int] pedestrians per year on average?
Re: (Score:2, Insightful)
Re: (Score:3)
responsibility is on the driver. Read the traffic laws.
There was no driver: my arguments
1. No eyes on road.
2. No hands on wheel
3. No feet on pedals.
LITERALLY no driver in the car.
Re: (Score:2, Insightful)
autonomous cars have already caused less accidents per miles driven than your average human driver
Statistics are like bikinis. What they show is not anything special. What they conceal is vital.
Self-driving cars only operate in a very limited subset of conditions. When you compare like for like - limit the stats on human drivers to the same limited conditions that self-driving cars operate in - you'll see that self-driving cars have killed (3 people so far) and been the cause of far more accidents per mile than human drivers.
Re:Big mistake! (Score:5, Interesting)
Shadow operations. Don't control anything, just observe. If the software's action deviates from the human action, review and analyze. It appears that this is the stage in product development that Uber needs to be in.
Once the shadow driving has advanced to a point where there are no negative deviations in a controlled environment, define that envelope and test rigorously within the envelope. Continue to shadow outside the envelope and slowly expand the envelope. Within the test envelope, thoroughly validate performance with external telemetry: were there any cases where nothing bad happened, but the action taken should have been different?
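A minimal sketch of what that shadow-mode deviation logging could look like (field names and thresholds are made up for illustration, not anyone's actual stack):

    from dataclasses import dataclass

    @dataclass
    class Frame:
        t: float             # timestamp (s)
        human_steer: float   # what the safety driver actually did (rad)
        human_brake: float   # measured brake command, 0..1
        planned_steer: float # what the software would have done (never actuated)
        planned_brake: float

    STEER_TOL = 0.05   # rad; illustrative thresholds
    BRAKE_TOL = 0.20

    def deviations(log):
        """Frames where the planner disagreed with the human - flagged for review."""
        return [f for f in log
                if abs(f.planned_steer - f.human_steer) > STEER_TOL
                or abs(f.planned_brake - f.human_brake) > BRAKE_TOL]

Only when the deviation list for a defined envelope stays empty over a lot of mileage would you let the software actually drive inside that envelope.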
Re: (Score:2)
Re: (Score:2)
Nope, horses are dangerous. People get thrown all the time. And die as a result.
If we followed that principle, we'd still be walking everywhere. Well, probably not in the Americas, since people had to cross open water to get here, and ships are really dangerous....
Re: (Score:2)
Correction, you can't walk everywhere either: http://www.newsweek.com/apple-... [newsweek.com]
Until people learn how to walk, we should revoke their walking license.
Re: (Score:2)
What's your proposed model for testing an autonomous car driving amidst normal traffic conditions that does not include actually having it drive among normal road traffic?
Secondly measured in terms of accidents and fatalities [vt.edu], autonomous cars have already caused less accidents per miles driven than your average human driver, so if that's your metric, the argument can be made that said bar has already been passed
There are entire test sites for self-driving cars. Once your car meets some standard in the test site, you could then take it on the road. https://www.freep.com/story/mo... [freep.com] Autonomous vehicles, in general, may be safer than ones operated by the average human driver. But that doesn't mean that the *Uber* vehicles are safer than ones operated by human drivers.
It appears that Waymo and such have done a great job getting their vehicles as ready as possible before putting them on the road and, as a result,
Re: (Score:2)
Re: (Score:3)
So people dying during testing is considered normal??? What kind of a fucking douche bag are you....
For the brain-damaged out there: no, people dying during testing is not considered normal. As others have pointed out, testing is done in controlled conditions before they are allowed in live environments. THIS WAS NOT DONE.
This kind of scenario is a rather
Re: (Score:2)
What's your proposed model for testing an autonomous car driving amidst normal traffic conditions that does not include actually having it drive among normal road traffic?
Simulations: there's no reason someone can't simply drive around recording sensor data, which you then feed to your driving algorithms later to see if they behave properly.
We should also all know that, alpha or not, there's no bug-free software. You can do all the simulations and all the testing you want; bugs and accidents will still happen. However, once they do and are fixed, the vehicles will not make the same mistakes again, which is not true for most human drivers. Human drivers also do not learn from the mistakes of other human drivers that they've never met. Autonomous cars do.
We've seen numbers that Uber's software needed human intervention every 13 miles; Google's, on the other hand, goes about 5,600 miles between interventions. If we consider how far people drive, Google's still needs help a few times a year, while Uber's needs it a few times an hour.
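For what it's worth, a rough sketch of the log-replay idea (the interfaces here are hypothetical), plus the disengagement figures above converted to a per-hour rate at an assumed 40 mph:

    def replay(frames, planner, scorer):
        """Feed recorded sensor frames to the driving stack; nothing is actuated."""
        failures = []
        for frame in frames:
            decision = planner.decide(frame.sensors)
            if not scorer.acceptable(decision, frame.ground_truth):
                failures.append((frame.timestamp, decision))
        return failures

    for name, miles_per_intervention in [("Uber", 13), ("Google", 5600)]:
        print(f"{name}: ~{40 / miles_per_intervention:.2f} interventions/hour at 40 mph")
    # Uber: ~3.08/hour; Google: ~0.01/hour - i.e. a few times an hour versus
    # a few times a year at a typical driver's annual mileage.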
Re: (Score:2)
What's your proposed model for testing an autonomous car driving amidst normal traffic conditions that does not include actually having it drive among normal road traffic?
Drive around in real traffic with a human driver in a car kitted out with all the sensors that the autonomous car would have. Record the sensor data. Use that sensor data to build a simulation to train the AI. I know an actual solution would be more complicated, but it could be done. It's just cheaper to put real cars on real roads and endanger real people.
Secondly measured in terms of accidents and fatalities [vt.edu], autonomous cars have already caused less accidents per miles driven than your average human driver, so if that's your metric, the argument can be made that said bar has already been passed, although that obviously does not mean that the safety cannot and should not be further improved until the fatalities drop to zero.
You can't just lump all autonomous vehicles together under one rubric when the issue is whether Uber should be allowed to continue road testing. Ube
Re: (Score:3)
What's your proposed model for testing an autonomous car driving amidst normal traffic conditions that does not include actually having it drive among normal road traffic?
As far as I've read, Google (Waymo) created an entire fake town to test different traffic situations out before letting their autonomous cars onto the actual roads in actual towns...I have not heard of Uber doing such a thing.
Secondly measured in terms of accidents and fatalities [vt.edu], autonomous cars have already caused less accidents per miles driven than your average human driver, so if that's your metric, the argument can be made that said bar has already been passed, although that obviously does not mean that the safety cannot and should not be further improved until the fatalities drop to zero.
This is a completely irrelevant statistic at this point. You're comparing accidents per miles driven of regular vs. experimental self-driving cars (all of which, btw, have human drivers which are supposed to - and often do - intervene to prevent accidents)...the two "sample sizes" so t
Re: (Score:2)
That link (the one that supposedly shows autonomous cars are safer) has a date of January 2016. Any data studied would obviously have to be older than that. Now, exactly how many fully autonomous cars were there, driving in real-world conditions, more than 2.5 years ago? Maybe 0?
Re: (Score:3)
I'm also not confident that software engineers will have the same level of respect for life that other engineers have shown.
Re: (Score:2)
"What's your proposed model for testing an autonomous car driving amidst normal traffic conditions that does not include actually having it drive among normal road traffic?"
The obvious answer is to install the hardware and software on vehicles that are driven by humans performing normal driving and not to enable the autonomous control of the vehicle until millions of miles are accumulated with no false negatives (failure to detect a problem) and minimal false positives. In practice, that's not going to wor
Re: (Score:2)
Secondly measured in terms of accidents and fatalities [vt.edu], autonomous cars have already caused less accidents per miles driven than your average human driver, so if that's your metric, the argument can be made that said bar has already been passed, although that obviously does not mean that the safety cannot and should not be further improved until the fatalities drop to zero.
Two points:
1) This comparison is of automated vehicles driving only in the safest conditions to humans driving in all conditions. That introduces a huge bias in favor of automated vehicles.
2) From your link:
Low exposure for self-driving vehicles (about 1.3 million miles in this study) increases the uncertainty in Self-Driving Car crash rates compared to the SHRP 2 NDS (over 34 million miles) and nearly 3 trillion vehicle miles driven nationally in 2013 (2,965,600,000,000). ...
Current data suggest that self-driving cars may have low rates of more-severe crashes (Level 1 and Level 2 crashes) when compared to national rates or to rates from naturalistic data sets, but there is currently too much uncertainty in self-driving rates to draw this conclusion with strong confidence.[Emphasis mine]
This isn't an apples-to-apples comparison. It's an apple-seed-to-orange-tree comparison, and should be taken with a whole lot of salt.
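To see how wide that uncertainty actually is, here's an exact Poisson interval on crashes per million miles (the crash counts below are illustrative, not from the report; only the exposure figures come from the quote above):

    from scipy.stats import chi2

    def poisson_ci(k, exposure_miles, conf=0.95):
        """Exact (Garwood) confidence interval for a rate of k events per million miles."""
        a = 1 - conf
        lo = chi2.ppf(a / 2, 2 * k) / 2 if k > 0 else 0.0
        hi = chi2.ppf(1 - a / 2, 2 * (k + 1)) / 2
        per_million = 1e6 / exposure_miles
        return lo * per_million, hi * per_million

    print(poisson_ci(2, 1.3e6))    # e.g. 2 crashes in 1.3M miles: ~(0.2, 5.6)
    print(poisson_ci(50, 34e6))    # e.g. 50 crashes in 34M miles: ~(1.1, 1.9)

With 1.3 million miles the interval spans more than a factor of twenty; with 34 million miles it is tight. That's the "too much uncertainty" the report is talking about.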
Re: (Score:2)
I disagree, it IS a point in their favor. Life goes on even in bad weather, etc. The point of a car is to get you from point A to point B when you want to go. If the car can't/won't do that it is a failure. Now, certainly there are times when you just should not drive, but ordinary rain and snow are not those times.
Re: (Score:2)
Putting the autonomous car on the road can be justified when the car doesn't need human supervision
Yeah why don't we hold humans to that standard, no supervised driving on public roads until you have your driver's license. You can have a professional driving instructor giving you reasonable challenges building up your skill step by step making sure to catch your mistakes or you can have a teen with ADD and a dad who'll throw you out there to either sink or swim with a legal babysitter. And the last guy just mowed down somebody, clearly supervised driving on public roads is the problem... not. Uber is the
Re: (Score:2)
Testing is also done (or should be done) in controlled environments until you get way past the alfa and beta stages. Putting the autonomous car on the road can be justified when the car doesn't need human supervision and it can deal with normal traffic conditions in day and night with the same performance as that of a human driver.
I seriously have no idea why autonomous cars in pre-alfa stage are on the roads.
Unfortunately most people are treating autonomous cars as software. And we know how software engineers think. Throw the alfa software to the public and fix mistakes afterwards. Oh, and we're not responsible for anything the software might do that brings down your house, empties your bank account etc....
Speaking of testing in a controlled environment: run that video in front of 100 human drivers in a simulator, with some variable amount of the video from the few minutes beforehand, and a control group whose footage cuts out a few seconds before the collision, and see if they can react in time to avoid the collision. I would be willing to bet that many won't be able to react in time to avoid that collision either.
To be realistic give a few of them a beer, make a few not get enough sleep the night before and cho
Re: (Score:2)
I seriously have no idea why autonomous cars in pre-alfa stage are on the roads.
They're SUPPOSED to be safe with a single safety driver present to take over the moment the self-driver fails to react or does something wrong. That's a bad approach because of how a person fatigues when they're JUST WATCHING for long periods of time.
There may be other viable solutions but the one-safety-driver solution doesn't work.... maybe they'd be better off with 2 safety drivers holding a big red "Stop
Re: (Score:2)
Oh dear, it killed a pedestrian, how inconvenient, that'll play hell with our testing schedule!
WTF, are we China now? Human life is cheap, throw-away? Someone being KILLED can't stand in the way of progress? Fuck that. Good on you, Arizona, ACTUALLY looking out for public safety.
Re: (Score:2)
It might also explain why we haven't seen a self driving Alfa yet. It's stuck in early testing :)
Re: (Score:2)
That's a dangerous slope.
He didn't watch well so another Hawtch-Hawtcher had to come in as a watch-watcher-watcher! And now all the Hawtchers who live in Hawtch-Hawtch are watching on watch watcher watchering watch...
Re: (Score:2)
the problem they found was that the cars wouldn't force the driver to take over if their systems are fucked.
lidar didn't work. apparently the driver wasn't aware that it wasn't working, for whatever reason.
and look, it's pretty easy to make a test track that should have exposed this problem long before driving on public roads.
Re: (Score:2)
Test tracks are useless here. That's the problem with machine learning - if you test too much in the same environment, the algorithms will "memorize" the test data and still be useless in the real world.
What these companies need to do is put all the needed sensors on millions of cars and collect the data, without actually being self-driving. Problem is, what consumer would pay for that?
Re: (Score:2)
Oh it's worse than that. These algorithms aren't ready; all this testing leads to tuning and adjustment of the code.
Testing them in a closed environment will build a learning algorithm more-refined to work on the fundamental assumptions of that closed environment.
Re: (Score:2)
First problem they found - the backup driver wasn't engaged.
Since this 'driver' was being monitored, albeit perhaps incidentally to video recording, they will be going back and determining how many other drivers were not 'engaged' during testing.
Fix that, and then we can talk about software, sensors, and features of the autonomous-mode vehicle...
Re: (Score:2)
Re:Big mistake! (Score:5, Interesting)
SAE Level 3 automation should be illegal. Period. The backup driver simply cannot be expected to go from "no interaction with the vehicle for 1500 miles on average" to "rescuing the vehicle from an emergency".
Automation should be locked at Level 2 (ProPilot, Autopilot, Supercruise, etc, etc) until vehicles are at least ready for Level 4, if not Level 5. Level 2 = hands on the wheel, ideally with eye tracking, ideally making the user drive for themselves for at least a couple minutes every hour to stay alert.
And it should not be up to companies when their vehicles are ready to put them on the roads, as they're in a race to be seen as first movers. Governments should have their own review and testing processes, which involve both code audits (in the case of neural nets, audits of the net core and how the nets are trained) and real-world testing of simulated hazard scenarios.
Re:Big mistake! (Score:4, Informative)
Level 3 is fine if there is plenty of warning to take over, say a minimum of 30 seconds. Time to put away your phone, pack away the sandwich and slip your shoes back on.
Unfortunately what we have is crap like this from Audi: https://youtu.be/WsiUwq_M8lE [youtu.be]
Note the way it disengages suddenly and the guy has to instantly take over. That's dangerous and unacceptable.
Re: (Score:3)
Level 3 is fine if there is plenty of warning to take over, say a minimum of 30 seconds.
In other words level 3 is never fine.
Re: (Score:3)
Re: (Score:3)
But that more or less gets you to Level 4, which IIRC is just Level 3 + pull over safely if something goes wrong.
Re: (Score:2)
With level 4 you could potentially have someone unqualified to drive take a pre-planned trip, as long as the whole trip was on roads that the car can handle. That could be most roads, in fact, with a few exceptions for things like driving on to ferries, unpaved roads, certain types of car park etc.
With level 3 you would always need a qualified driver behind the wheel. Say Audi did their system properly, it would never disengage suddenly in traffic but you would have to be prepared to take over when traffic
Re: (Score:2)
One of the most frequent minor accident scenarios in freeway driving is when stop-and-go congested traffic comes to a sudden stop. Often we get a series of rear-end collisions at this point after drivers' attention has drifted. Does Audi have any tests on how traffic behaves when all of the cars in a congested area are on Level 3 autopilot?
Re: (Score:2)
Yes that is literally the use case for emergency braking assistance that has been a feature of upper market cars for many years.
Re: (Score:2)
Level 3 is fine if there is plenty of warning to take over, say a minimum of 30 seconds.
Emergencies develop in a fraction of a second, not half a minute. If a human is needed, a human is needed NOW. But you are right that people can't realistically take over in such a small amount of time. That's why level 3 is not fine. There's really no reason why a car should need a human to take a car over in 30 seconds, so at that point, why even have that level of automation?
Re: (Score:3)
Having a human that can get bored babysitting automation is fundamentally the wrong way to do it anyway. Until the automation is reliable to not need human supervision, it should be the other way around: the automation monitoring the human to provide an extra safety net, so that the human has to be actively driving and alert rather than getting bored and not monitoring the automation properly.
Not that hard to engage human (Score:5, Insightful)
Flash a dim light on the windscreen at random intervals, and ensure the human responds. No mobile phone usage then. This sort of thing has been done for trains for ages (I'm not talking about a dead man's switch, but attention monitors).
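A sketch of what that attention monitor might amount to (all timings here are illustrative, not taken from any real train or car system):

    import random, time

    def vigilance_loop(show_prompt, acknowledged, escalate,
                       min_gap=60, max_gap=180, response_window=3.0):
        while True:
            time.sleep(random.uniform(min_gap, max_gap))   # random interval
            show_prompt()                                  # dim light on windscreen
            deadline = time.time() + response_window
            while time.time() < deadline:
                if acknowledged():                         # e.g. a button press
                    break
                time.sleep(0.05)
            else:
                escalate()  # alarm, log the lapse, or bring the vehicle to a stop

If the driver is staring at a phone, they miss the prompt and the lapse gets logged or escalated; you don't have to trust them to stay alert, you check.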
Uber did not care. And that video they released was dubious; someone else took dash-cam footage of the same area and it was reasonably well lit, even if there was no Lidar.
Uber were totally negligent. And I suspect they did not pay that driver very much. Monitoring a test car should not be a minimum wage job.
Re: (Score:2)
Yeah, Uber were negligent, but I wouldn't use the video as evidence of this. Regardless of how well lit the area was, there's a shitload of crap cameras out there. For every well-lit street I can show you a dash cam that would be worthless for recording footage there.
More likely a car loaded with sensors didn't seem like it needed any investment in a camera pointed at the driver. ... Right until they did
SIL Level (Score:2)
More importantly, the systems should be required to be certified throughout to a functional safety level of SIL 3 or SIL 4.
Re: (Score:2)
Re: (Score:2)
SAE Level 3 automation should be illegal. Period. The backup driver simply cannot be expected to go from "no interaction with the vehicle for 1500 miles on average" to "rescuing the vehicle from an emergency".
I agree for normal usage. For testing, however, it should be allowed, but the driver should probably be an engineer working on the program; they're much more likely to pay attention and understand what the system needs.
Re: (Score:2)
Governments should have their own review and testing processes, which involve both code audits (in the case of neural nets, audits of the net core and how the nets are trained) and real-world testing of simulated hazard scenarios.
Government already has a test for driver readiness. It involves a guy in the passenger seat telling a teenage kid to perform some simple maneuvers on some side street while following the rules of the road and making notes on a notepad while nodding or shaking his head... then hopefully that kid only gets into a few accidents before he or she actually learns how to drive. And re-certifications require showing up to prove you are still alive.
I think before you start idealizing human drivers and raising the
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
I didn't see any system to keep the driver's eyes on the road and hands and feet on the controls. You can't rely on the human to remain vigilant; you have to force him to be vigilant. Just like in trains.
Re: (Score:2)
An ordinary car won't go very far without an active driver.
Re: (Score:2)
He's not driving (Score:4, Insightful)
Because the driver in this instance is a bogus legal device to pretend that self driving cars running self driving tests are somehow not doing the driving.
He can only judge the car's choices by the outcome of those choices, and that would be too late to intervene. He is not making the driving decisions; he is not driving the car.
Instead of testing self-driving cars until they are safe to unleash on public roads, they let car makers get away with this bogus legal device.
More troubling in this case is the doctored video they released. A normal car-recorder video in the dark would have the gain turned up, and would be bright but grainy. That video was dark, suggesting it had been darkened. I ask again: did the police obtain the video from the car recorder themselves, or did they ask for Uber's technical assistance? Because the video shouldn't be dark - especially darker around the edges, which suggests intentional vignetting.
Re: (Score:3)
The gain difference isn't necessarily artificially darkened. It is speculated it was tuned so that it wouldn't be blown out by headlights of oncoming traffic. Without high dynamic range cameras, you have tradeoffs for what lighting conditions your camera is adapted for.
Re: He's not driving (Score:4, Informative)
Re: (Score:2)
If anyone gets prosecuted it will likely be the woman in the car. Her job was to pay attention and keep the system safe.
Uber will be in trouble if it turns out that they didn't have any system in place to detect inattentive workers. It seems like the car wasn't monitoring her attention, but they should also have had someone reviewing the video of her work on previous days.
I wonder what she was being paid to do that job. I'm betting minimum wage.
Re: (Score:3)
which presumably he was there for, and trained to do,
. . . or he was some poor guy, so desperate for money, that he was willing to work for Uber, despite their stellar record of how they treat their employees.
With Uber, don't presume anything that will cost them money.
why is this not simply a case of dangerous driving/driving without due care and attention?
Because Uber had a status in Arizona of being "more equal than others." The cops originally wanted to sweep this under the rug . . . after all . . . it was just a homeless bag lady who died, right . . . ?
Someone at the police needs to call up Facebook and take a look at the driver's call data
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Waymo cars in Arizona are often without any driver. They use plan B, which is "design cars which won't hit people, no matter what the idiot in the car does or the idiot walking on the road does".
Re: (Score:3)
Imagine if they used Plan C: Vehicle-to-Vehicle communication to share sensor data. Crack open a VTV module, take the private key, spoof people everywhere suddenly to get cars to slam brakes and cause collisions all over the place.
VTV is an excuse for municipal governments to not invest in infrastructure (sensors on posts and poles), but rather to make people pay for it with not-tax-money.
I'm surprised Google's gotten to that phase already, in any case.
Re:typical, predictable move (Score:5, Insightful)
I guess you're not aware that the governor of Arizona is a Republican? I don't know much about him, but I'd presume that means he leans conservative, not liberal. Self-driving cars are being tested in California as well, and now perhaps you see why you can't always trust a corporation to self-regulate in the absence of government oversight. This is, sadly, how government regulations tend to come about.
The pedestrian was technically in the wrong, but we've heard a lot of rumors recently that Uber's self-driving software was being pushed forward recklessly. And given the wonderful people at Uber and their stellar track record, this isn't exactly hard to believe. A competently programmed car with a properly functioning lidar probably should have seen that woman on the bike and reacted by braking far earlier than it did.
Yes, deaths will inevitably occur, but let's at least try to make sure there are as few as possible going forward. This is a good reminder that machines can be just as fallible as the humans who create them.
Re: (Score:2)
There are several different self-drive systems in beta in Arizona. The suspension only applies to Uber, and because there appears to be something strikingly wrong with its implementation. The car should have seen a slow-walking pedestrian in a well-lit area.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
"it wasn't particularly dark."
Exactly:
https://arstechnica.com/cars/2... [arstechnica.com]
Uber released a video with a misleading contrast level, they also decided to show only a couple of seconds of video before the crash.
Re: (Score:2)
Yeah - the headline "expectations" made me laugh - good luck with charging them for failing to meet expectations. Totally irresponsible.
Re: (Score:2)
On what basis do you determine these vehicles need large amounts of memory? Self-learning systems actually function with surprisingly-little memory; and special-purpose ANN hardware uses little power and small amounts of embedded RAM. Visual training doesn't recognize by comparing an image library or anything like that; it scans an image for attributes, feeds those into a processor, produces a result, and then validates the result and adjusts. That is: all images are compressed into one fixed-size bloc
Re: (Score:2)
Re: (Score:3)
In most resorts, the class action suite is designed to be a rugged and easily cleanable venue for homecomings and spring break: vomit proof furniture, a punch bowl full of condoms, quick-change sheets, and the usual stack of brochures for 'safety' schools to apply to after your kid gets expelled.
Re: (Score:3)
Re: (Score:2)
You can't blame the automation for this.
No human could have missed her.
Having seen actual video of the place this happened, I agree that no human could have missed her. I don't see how that supports your contention that you can't blame automation for it. Either the sensors failed to detect what a human driver easily could - ie that there was an obstacle in the road - or it failed to respond as any human driver would (by some combination of braking, changing lanes, and honking the horn). In either case, the automation failed to prevent an accident that a human driver would e
Re: (Score:2)
If the camera footage released resembles what a human would have seen, sure, I agree. I would have hit the person. I doubt I would have killed them, though - I think I would have been able to react in some fashion to minimize the impact.
But the footage released is fucking shitty as hell, and the human eye can see much better than what that camera shows.
If that's the kind of footage they're relying on at night then they need to be stopped from all activities regardless.
The car is supposed to have LIDAR on i
Re: (Score:2)
Re: (Score:2)
Yeah. Makes sense.
It does make sense if the vehicle was supposed to be equipped with LIDAR, because even with that person in the shadows in the middle of the night, LIDAR should have had no trouble seeing them. We already have the LIDAR manufacturer on record as saying it wasn't their fault. Either the Uber car didn't use the LIDAR sensor it had, or it used it and the software failed hard.