Waymo CEO Expresses Confidence Its Cars Wouldn't Have Killed Elaine Herzberg (washingtonpost.com) 141
theodp writes: Nearly a week after an autonomous Uber SUV claimed the first life in testing of self-driving vehicles, The Washington Post reports that Waymo CEO John Krafcik says he is confident its cars would have performed differently under the circumstances (Warning: source may be paywalled; alternative source), since they are intensively programmed to avoid such calamities. "I can say with some confidence that in situations like that one with pedestrians -- in this case a pedestrian with a bicycle -- we have a lot of confidence that our technology would be robust and would be able to handle situations like that," Krafcik said Saturday when asked if a Waymo car would have reacted differently than the self-driving Uber.
In explaining its since-settled lawsuit against Uber last year, Google charged that Uber was "using key parts of Waymo's self-driving technology," and added it was "seeking an injunction to stop the misappropriation of our designs." In announcing the settlement of the lawsuit last month, Uber CEO Dara Khosrowshahi noted, "we are taking steps with Waymo to ensure our LIDAR and software represents just our good work." A Google spokesperson added, "We have reached an agreement with Uber that we believe will protect Waymo's intellectual property now and into the future. We are committed to working with Uber to make sure that each company develops its own technology. This includes an agreement to ensure that any Waymo confidential information is not being incorporated in Uber Advanced Technologies Group hardware and software." All of which might prompt some to ask: was Elaine Herzberg collateral damage in Google and Uber's IP war? "I want to be really respectful of Elaine [Herzberg], the woman who lost her life and her family," Krafcik continued. "I also want to recognize the fact that there are many different investigations going on now regarding what happened in Tempe on Sunday." His assessment, he said, was "based on our knowledge of what we've seen so far with the accident and our own knowledge of the robustness that we've designed into our systems."
Seems legit (Score:3, Funny)
Re: (Score:1)
Re: (Score:2)
While the way that was expressed was moderately reprehensible, the underlying facts aren't wrong. You can't hold self-driving vehicle technology to a higher standard than human drivers yet. Now, how visible Ms. Herzberg was to a human eye is a matter of debate. A human eye can discern far more contrast differences than a camera does, so whether she was visible to a human eye while in the shadow is questionable. There can be no real dispute, though, that she was careless, didn't watch where she was going
Re: (Score:2)
That's arguable. You'll often read people claiming that the human eye has a dynamic range of 20 stops. As I understand it, that's technically correct (the best kind of correct), in that the human eye has a range of 20 stops if you include everything from fully dark-adapted vision with the iris wide open all the way to full day vision with the iris as close
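For scale: a "stop" is a doubling of luminance, so a dynamic range quoted in stops converts to a contrast ratio of 2^stops. A quick sketch of the gap being discussed (the 12-stop camera figure is an assumed typical consumer-sensor value, not a spec for the dash cam involved):

```python
# Convert a dynamic range expressed in photographic stops into a
# contrast ratio: each stop is a doubling of luminance.
def stops_to_contrast_ratio(stops: float) -> float:
    return 2.0 ** stops

eye_total = stops_to_contrast_ratio(20)  # fully adapted human vision (claimed figure)
camera = stops_to_contrast_ratio(12)     # assumed typical consumer sensor

print(f"Eye, full adaptation: {eye_total:,.0f}:1")  # 1,048,576:1
print(f"Camera, ~12 stops:    {camera:,.0f}:1")     # 4,096:1
```

Even granting that the eye never has all 20 stops available at one instant, the point stands that a cheap camera's simultaneous range is far narrower.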
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Soo.. (Score:2, Insightful)
Waymo killed Elaine by forcing Uber to take out the parts that worked..
Re: (Score:2, Insightful)
Re: (Score:1)
For profit this morning I had a bagel, then I took my dogs for a profit-making opportunity. It was unseasonably profitable for March, and the dogs had a profitable cash return.
Fixed that for you.
Re: (Score:2, Interesting)
Intellectual property kills. Drug patents kill millions every year. Self driving car patents will kill people too. The only solution would be legislatively-enforced openness/collaboration, done in a careful way that ensures there's enough profit incentive left for developing the tech.
What amazes me (Score:5, Insightful)
I think some of this is the media at large siding with corporations to our detriment. The big outlets (CNN, Fox, MSNBC) have long since stopped covering the story on their front-page websites, even as a single link. There's a little bit of left-wing press coverage, but I heard about those videos showing how well lit the road was from a post on Ars Technica that was in my feed.
Based on this I'm guessing that most people who don't read
Re: (Score:2)
No, not really like that at all.
Re: (Score:3, Insightful)
This is less about Uber's damage control and more about how moronic and whipped the US public is. It doesn't matter how dark the road was; even with no lights at all, the car should have been able to see her using those other sensors. These AI cars are supposed to be better at driving than we are because they can monitor what's going on 360 degrees around the car and aren't subject to the same light limitations that we are.
I do wonder a bit if the dashboard cam wasn't deliberately set for day and left
Re:What amazes me (Score:5, Insightful)
Re: (Score:2)
Why does a self-driving car not have to make a basic demonstration of visual/sensor skills before being put on the road?
What test do you propose? I'm willing to bet they will all pass with flying colours. That's the problem with standardized testing. A friend of mine is in a similar but human scenario: he is blind as a bat, yet his licence has no glasses restriction. Why? He remembered what the bottom few lines of the eye chart said and just recited them.
Just look at how good we are at emission controls.
Re: (Score:2)
That's what I don't get. It would be so simple to certify for these kinds of things before allowing them on public roads. Everyone acts like it would be such an impediment.
I think you'd just be wasting a ton of resources testing a non-problem. The sensors are designed to be capable enough, and they throw errors if they malfunction, since they either go dark or produce gibberish. It seems extremely unlikely that a sensor glitch would hide a person crossing the road while everything else looks normal. It's the logic layer that's difficult, and formally retesting that for every tweak would be an impediment; it'd be like retesting a human driver every time he learns something new. At b
Re: (Score:2)
The sensors are designed to be capable enough and throw errors if they malfunction
You're still using that line even after sensors have clearly failed and the driver was none the wiser.
Re: (Score:2)
Re:What amazes me (Score:5, Insightful)
is how good the damage control from Uber was. They got videos out fast with pitch-black cameras that made it look like she came out of nowhere. Several days later videos popped up from locals showing the stretch of road was actually well lit. Even now I'm having a tough time finding those videos. There are stories now saying Uber's cars are behind Waymo's, but I'm only just now seeing stories that say Uber should have avoided the crash. The first several /. posts about this story were riddled with comments from folks saying the crash was unavoidable and the pedestrian was completely at fault.
I'm not sure it was that good. The point of damage control is to find the least damning narrative and make that the one that sticks. Here Uber's first narrative was:
"Not our fault! The pedestrian came from the next lane and appeared out of nowhere! Totally unavoidable!!"
The moment they put out the crappy video a bunch of us could tell Uber was lying [slashdot.org]. Now their corporate credibility has taken another hit and they don't control the narrative anymore.
Now imagine Uber's narrative was more like:
"While the pedestrian was legally at fault our vehicle should have avoided the accident, and barring that, the safety driver should have been more attentive and avoided the situation. We are suspending all tests until we have determined the nature of the failure and taken steps to make sure it won't be repeated."
The beauty of that narrative is it's consistent with an accident that's almost impossible to avoid, it just sounds like Uber is being really accepting of blame. And then when it comes out they really should have avoided the accident... well the statement is still true, so it doesn't really trigger another news cycle or destroy their credibility.
I think some of this is the media at large siding with corporations to our detriment. The big outlets (CNN, Fox, MSNBC) have long since stopped covering the story on their front-page websites, even as a single link. There's a little bit of left-wing press coverage, but I heard about those videos showing how well lit the road was from a post on Ars Technica that was in my feed.
It's more to do with the news cycle; news outlets only do investigative reporting when their viewers really care, otherwise they just cover events. A self-driving Uber killing a pedestrian is an event, and they covered it with the sources that were available, Uber and the PD, both of which backed Uber's narrative. Now, for the investigative portion, only the technical press really cares (Ars Technica, Slashdot). But if that investigation turns into another event, i.e. the police making another announcement or a lawsuit on behalf of the victim, that's an event again and Uber's BS gets called out by the mainstream headlines.
Re: (Score:2)
The victim's stepdaughter is retaining lawyers regarding the death.
Re: What amazes me (Score:2)
Re: (Score:2)
But that was their narrative, almost word for word. They immediately suspended all tests and put out a press release saying the above. Uber is a horrible c
Re: (Score:2)
But that was their narrative, almost word for word. They immediately suspended all tests and put out a press release saying the above. Uber is a horrible company, but in this case they've done exactly what you suggested. The problem is that they could've easily foreseen this accident if they hadn't been cutting corners and trying to pretend their tech was better than it was for the sake of the next round of funding.
I think it's a bit of both. Uber was mostly silent while the police put out a very Uber-friendly (or homeless-pedestrian-hostile) statement about the crash [theverge.com], which became the only narrative.
I can accept that Uber got a bit unlucky with how the PR played out. You don't want to say a lot during the investigation, and if the cops are being nice it doesn't really occur to you that you need to speak up to lower expectations.
Plus, for whatever reason the video is so crappy, it makes it look like Uber is trying to pull a fast
Re: (Score:2)
The thing is:
"While the pedestrian was legally at fault".
In some countries (e.g. NSW, Australia) the driver would be totally at fault in this particular case. Pedestrians have right of way, period. The only case where it's not cause for very stiff penalties is when the person walks out from the kerb right in front of the vehicle. Even if the pedestrian is not supposed to be there.
This is something I'm not completely clear on.
It's one thing to say the vehicle has right of way. But realistically, if someone is in the middle of the road with high visibility and I just plough through them without even slowing down then I'm pretty sure I'm getting charged.
If not I feel like we would have heard of a few more cases of sociopaths trolling the streets after the bars close so they can mow down pedestrians.
Re: (Score:2)
I'd think most people would assume that a human driver would have a better view. That the accident was unavoidable would be due to Uber relying exclusively on such poor video (if that indeed was what they did). Most people would also put some blame on the "crazy [or inebriated] old homeless woman" who couldn't be bothered to cross in a legal fashion and wouldn't yield to
Video links here (Score:1)
Video links and a technical discussion here Engineering Tips - Engineering Failures & Disasters - Self Driving Uber Fatality [eng-tips.com]
Re: (Score:2)
I haven't read the news on this very closely, so I have no idea whether the woman was crazy, old, homeless, high, or whatever. When I watched the video, I did assume that she had some level of cognitive impairment for one reason or another because otherwise she probably would have checked for traffic before walking in front of it.
Doesn't mean she *deserves* death, but regardle
Re: (Score:2)
The fact that she was attempting to cross a road isn't in dispute; whether or not the autonomous vehicle could have stopped is in dispute. Although pretty much every expert has said the car should have stopped, and I very much agree, given that the road was well lit and the woman crossed several lanes and was nearly finished crossing when the vehicle hit her without braking. And the car also had radar and lidar.
Re: (Score:2)
Waymo CEO John Krafcik says he is confident its cars would have performed differently
Really? I'll believe him when I see him in this [youtu.be]. Strongly suggest you waste 30s viewing it; I wish a lot MORE people would stand behind (or in front of) their work.
Re: (Score:2)
Except the video you cite was released by the police, not by Uber. Maybe Uber requested that it be released, but it seemed like the police were just trying to fill the information vacuum with what little information they had. Since the video is half-damning (the safety driver appeared to be distracted) and half-vindicating (the pedestrian was not easily visible), it's not really a great thing for Uber. The better PR came from initial reports, like that she "stepped out from behind a bush in the median" or s
Re: (Score:2)
Part of the problem is that by the time the information comes out it's not newsworthy anymore. It's a traffic accident with an experiment that was supposed to fail and rely on a safety driver. That's worth a news article on the day it happens, but it's not worth mainstream news updating with details as they develop. Makes it easy for Uber's spin to be the only thing most people see.
Re:Vids or it didn't happen (Score:4, Informative)
https://arstechnica.com/cars/2... [arstechnica.com]
After looking at that it's pretty obvious Uber's video is very very misleading.
Guy (Score:1)
Re: (Score:2)
I wonder how soon (Score:2)
will suicide-by-self-driving-car deaths exceed suicide-by-cop deaths?
Easy to verify (Score:1)
Re: (Score:2)
This is exactly what I was thinking. So take the very top-ranking exec at a self-driving car company and have them walk in front of their car. Put your life where your mouth is!
Re: (Score:2)
So take the very top-ranking exec at a self-driving car company and have them walk in front of their car.
I wouldn't be surprised if they actually did this as part of an advertising gag:
"Our CEO is so confident of our car's safety that he will walk out in front of a speeding car!"
Of course, the car will have been customized, with special "high level exec" detection sensors . . .
Re: (Score:2)
This is actually exactly what the founder of Otis lifts did: he demonstrated his safety elevator by standing on a raised platform and having the rope severed, showing that the safety system would stop the fall. [https://www.britannica.com/technology/elevator-vertical-transport#ref90006]
The only important thing, IMO.... (Score:2)
Figure that out, program it into the car's repertoire of situations to handle appropriately, and at the very least you'll have made future cars that much safer.
If software was supposed to do X, and didn't do X, then the designers need to find out what is wrong with their assumptions about what the software is doing, and come to a resolution, so that software can behave as intended by it
Re: (Score:2)
Re:The only important thing, IMO.... (Score:5, Informative)
Uber cars can't manage 13 miles between interventions. Whoever is doing the debugging must be buried under tens of thousands of reports.
THAT is why they are to blame. An automatic car can mess up. It will happen once in a blue moon, and someone will die. Too bad, but that is the price of progress and you cannot really blame anyone, you can just compensate the family.
Sending cars out on the road that demonstrably cannot function is different. That is reckless manslaughter. There is no way that Dara Khosrowshahi was unaware of the (lack of) performance of the Uber cars. He needs to be prosecuted.
Re:Monday-morning quarterbacking and spin control (Score:4, Insightful)
NO MORE DEATHS AT THE HANDS OF SELF DRIVING CARS! GET THEM OFF THE ROADS!
You don't actually give a good goddamn about deaths due to cars, or you'd be agitating to get rid of cars, period. We do have alternatives, like PRT. Self-driving cars will kill people, but human-driven cars already kill people.
Re: (Score:1, Troll)
Putting that into perspective, the CDC estimates that there are 45,000 deaths a year from *second hand* exposure to cigarette smoke.
Re: (Score:1)
Re:Monday-morning quarterbacking and spin control (Score:5, Insightful)
Maybe that is something we should tackle first
Maybe there is more than one department / corporation in the world and we should tackle multiple problems at the same time.
Re: (Score:2)
Re: (Score:2)
Negative. If you only ever look for a 100% perfect solution to a problem, you will never solve any problem. You don't need to never kill anyone; you just have to prove you're better than a human driver. And in many scenarios this is already the case: e.g. NHTSA's report on the Tesla fatality found that, despite the death, drivers letting the automated system do the work were 40% less likely to end up in an accident.
Sign me up.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Making a conservative assumption that 90% of the deaths are licensed drivers, you have a 0.013% chance of dying in a car accident.
About 13% of the people involved in fatality collisions are licensed drivers. About 19% of fatality collisions are caused by unlicensed drivers. But what does the percentage of licensed drivers involved in fatality collisions have to do with my personal chance of dying in a car accident? Nothing whatsoever, that's what.
Drivers that remain alert, look ahead and use good practices such as slowing down to appropriate speeds for conditions decrease their odds even more.
Yeah, so? I'm worried about all the drivers, not just the good ones.
Putting that into perspective, the CDC estimates that there are 45,000 deaths a year from *second hand* exposure to cigarette smoke. Maybe that is something we should tackle first, and then come back to self driving when the technology is fully baked.
How about we tackle both things at once? What are we, Windows 3.1?
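For what it's worth, the per-capita arithmetic tossed around above can be sanity-checked from round national totals. A crude sketch (the inputs below are assumed approximations of recent US figures, not numbers from either poster):

```python
# Crude annual per-capita risk of dying in a US road crash,
# computed from round, assumed figures (not exact statistics).
road_deaths_per_year = 37_000    # rough recent US annual total (assumption)
us_population = 325_000_000      # rough US population (assumption)

annual_risk = road_deaths_per_year / us_population
print(f"Annual per-capita risk: {annual_risk:.4%}")
```

That lands near the order of magnitude quoted in the thread, though lifetime risk is much higher than the annual figure.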
Re: (Score:2)
How about we tackle both things at once? What are we, Windows 3.1?
Because we have solutions to second hand smoke deaths that don't themselves kill people. We don't have that yet with self driving.
Re: (Score:2)
Because we have solutions to second hand smoke deaths that don't themselves kill people. We don't have that yet with self driving.
Perfect is the enemy of good. If fewer people are killed by AV systems than by human drivers, then it's still a win, even if they kill people.
We DO have a self-driving solution that won't kill people. It's called PRT. So let's return to my earlier point.
Re: (Score:1)
Re: (Score:2)
> and then come back to self driving when the technology is fully baked.
Unfortunately, on-road testing is how the technology becomes fully baked. I expect at least 1,000 deaths over the next several years as the technology gets put into real use and accumulates billions of miles of experience.
Whether Uber's tech is baked enough that they should be allowed to test on the road will only become apparent once this accident is investigated.
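The "billions of miles" figure has a statistical basis. Human drivers in the US see roughly one fatality per 100 million vehicle miles, so even demonstrating parity takes an enormous amount of fatality-free driving. A back-of-envelope sketch using the rule of three (both the baseline rate and the confidence heuristic are rough, assumed figures, not data from this story):

```python
# With zero fatalities observed over n miles, the "rule of three" puts
# a ~95% upper confidence bound on the fatality rate at about 3/n.
# To claim the fleet is no worse than the human baseline, that bound
# must drop below the baseline rate. All figures are rough assumptions.
human_fatality_rate = 1 / 100_000_000  # ~1 death per 100M miles (rough US figure)

miles_needed = 3 / human_fatality_rate
print(f"Fatality-free miles needed for parity at ~95%: {miles_needed:,.0f}")
```

And that is only parity; showing a meaningful safety improvement pushes the requirement into the billions of miles.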
Re: (Score:2)
Re: (Score:2)
Drivers that remain alert, look ahead and use good practices such as slowing down to appropriate speeds for conditions decrease their odds even more.
True. However, I'm a pretty good driver; not reckless, leave good distance to the car in front, speed appropriate to conditions, etc. Just last night I was coming home on a route I have driven at least a hundred times and missed a turn because I was thinking of various things and my wife, who is usually aware of where we are too also failed to notice. These kinds of lapses happen to everyone at some point, and sometimes they have consequences worse than driving a couple of miles out of the way.
the CDC estimates that there are 45,000 deaths a year from *second hand* exposure to cigarette smoke. Maybe that is something we should tackle first
This incident
Re: (Score:2)
Re: (Score:2)
You can REASON with a human being.
Says who? You'd rather humans drive cars whether they're safer than software or not. That doesn't seem reasonable.
Re: (Score:2)
Re: (Score:2)
Enact reforms of how drivers are trained, tested, and held accountable, and these problems will go away. This may mean some people will not be allowed to drive ever again due to a consistent pattern of incompetence in spite of training and education; too bad,
Ah, but there's a problem with that notion. The auto companies have been crapping on public transportation with the aid of the federal government. The interstate highway system is a prime example; the nation would have been better served by further development of the rail network. Outside of certain urban areas, people who don't own a car are second-class citizens, and that situation was deliberately created. Now you want to say that a bunch of people should be deprived of their right to drive, and up their
Re: (Score:2)
Re: (Score:2)
People do not want to live their lives with nothing but public transportation.
That's what they've been told, yeah. But they've also been told that public transportation can't work for them, and it can.
Re: (Score:2)
Re: (Score:2)
No, it really can't.
See? You're one of the people telling them. I rest my case.
Re: (Score:2)
I can't reason with another driver. I'm in my car and the other driver is in their car, so there's no way to communicate with the bandwidth needed. I don't know what's going on with other drivers either. They may be tired, depressed, angry, having a bad side effect from a prescription drug, on something illegal, drunk, or distracted. Nor do I understand brains nearly as well as I understand artificial neural nets and the like.
Re: (Score:2)
The cars would likely pass a driving test. Why impose a double standard, especially for technology that's likely much safer than a human driver?
This exact type of fatal crash happens all the time, hence signs like "BRAKE FOR MOOSE".
Re: (Score:2)
Re: (Score:1)
Titanic (Score:2)
"This ship is unsinkable!"
History says NO ONE should make statements like this.
Re: (Score:3)
There is a difference between saying "This ship is unsinkable" and "This ship would not have been sunk by that particular iceberg."
Killed by IP (Score:2)
I kind of thought this would be the end result of only one company seriously doing any work on self-driving cars. What we have here is a typical capitalist worst-case scenario: every company doing its own thing and starting from square one, because each is unwilling to license IP from the other, and each is unwilling to let the fruits of its labour be used for the general good.
Waymo could have avoided the accident? As far as I'm concerned they are culpable.
Re: (Score:2)
If you can't make a self driving car that drives safely, the answer is not to make a self driving car. "We couldn't make safe cars on our own, so we just put dangerous ones on the road" is not an excuse. This is totally Uber's fault. Waymo started working on this long before Uber did, and they've been a lot more cautious about deploying it because they don't want to kill anyone.
Re: (Score:2)
If you can't make a self driving car that drives safely, the answer is not to make a self driving car.
You just completely missed the point in a most spectacular fashion.
Re: (Score:2)
I think the point is that it's entirely possible for anyone to come out with open source self driving car software. No one has. Google has dumped billions of dollars into testing and developing these things. If they didn't have the assurance that they could use the stuff they developed as they wished, they wouldn't have done it in the first place.
Will there be some sort of framework developed over time where edge cases can be shared between platforms? Probably. But we are a long way off from that.
Re: (Score:2)
And that was my point. Yeah, Google is only doing what's best for its shareholders; that doesn't change the fact that this problem could potentially have been avoided if there hadn't been an IP spat between two money-hungry corporations.
The industry can't continue this way. Without the gifting of IP to save lives, we wouldn't have seat-belts: the three-point belt was famously developed by Volvo and purposely patented for the sole purpose of publicly opening the patent and ensuring no single company could lock down innovation in s
Re: (Score:2)
Re: (Score:2)
On the other hand, research can be really expensive, as you've pointed out. To make this worthwhile, they have to protect their ideas and implementations as best they can. The alternative is large government grants to develop self-driving cars, and that's less efficient than the private sector. Government research grants are a good idea if the private sector isn't going to do the research, but not otherwise.
Re: (Score:2)
Then perhaps you'd care to clarify. What is your point? That Waymo is evil because they're investing billions of dollars into developing technology and not giving it away for free? That capitalism is evil because it leads to companies investing in technology and not giving it away for free? Clearly not that Uber is evil for choosing to develop their own technology rather than licensing another company's superior technology, though that seems to me a more obvious conclusion. Of course, an even more obvi
Put up or shut up Challenge (Score:2)
At least (Score:2)
Prove it (Score:1)
All he has to do is step out in front of traffic consisting of his own cars in the same conditions. I think the typical statistical standard is about 30 samples, right?
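Thirty crossings would be weaker evidence than it sounds, incidentally. A hedged back-of-envelope sketch (the per-crossing failure probabilities below are purely illustrative assumptions, not measured rates for any vehicle):

```python
# Chance the exec survives all 30 crossings, assuming the car fails to
# avoid a pedestrian with per-crossing probability p. The p values are
# purely illustrative, not measured failure rates.
trials = 30
survival = {p: (1 - p) ** trials for p in (0.001, 0.01, 0.05)}

for p, s in survival.items():
    print(f"p = {p:.3f}: survives all {trials} crossings with probability {s:.1%}")
```

Even a car that fails one crossing in a hundred would let the exec walk away from all 30 trials about three times out of four, so surviving the demo proves very little.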
Re: (Score:2)
If I were in his position, and if somebody offered me big enough of an incentive (perhaps a massive fine on Uber), I would. Seriously, the conditions were such that any driver who wasn't DUI'ing wouldn't have hit the pedestrian, much less the "superior" autonomous car that can see in wavelengths we can't see.
Uber really messed up here, and Waymo is doing a good (corporate) job kicking Uber while it's down.
The safety driver must have been insane (Score:3)
No Surprise. Uber's tech sucks. (Score:3)
Google's self-driving car technology has been around longer and has probably done far more miles than Uber's tech ever has, and we have yet to hear of it running over a pedestrian. Even Tesla's Autopilot technology, despite missing trucks and killing the driver, hasn't run over pedestrians yet. Plus, the statistics that the makers are required to provide show that the Uber self-driving tech has an alarmingly large number of required driver interventions. Heck, it was speeding to begin with! And we know Uber's in a rush to get this tech to work because they hope to IPO in about a year, so at the end of the day, can't say I'm surprised they'd be the first to kill a pedestrian.
Re: (Score:2)
Re: (Score:2)
Exactly my thought.