Waymo’s “Rogue” Car Did A Lot Better Than HAL 9000 (That’s A Good Thing)


Last week, my colleague Johnna Crider covered a really bad experience one YouTuber had with Waymo’s robotaxi in Chandler, Arizona. This obviously was far from his first time using Waymo; he had quite a bit of experience with how the system works, what to expect in different situations, and so on. While I wouldn’t say that Johnna trashed Waymo (we try not to do that here unless someone really, really deserves it), I do think the video could use some more context for readers to better understand what happened.

With some more information I found, I think this actually shows that Waymo has a pretty solid test program going, even if this particular ride was pretty subpar. The way it handled conflicting directives was particularly impressive.

For reference, here’s the video of the ride (the problems start out at about 16 minutes in):

The first thing I noticed was what the car did when it wasn’t sure what to do. While “Do something, even if it’s wrong” (or “move fast and break things”) works in many circumstances, this isn’t one of those. With a customer in the car and a public that didn’t consent to getting smashed into on the road, the default action of the vehicle should be to wait for assistance if it’s not sure how to proceed. It’s an inconvenience, but that’s better by far than tragedy.

The question that remains is, what next?

How This Is Handled Will Vary

Looking at the SAE J3016 document (popularly known as the “levels of driving automation”), we find a set of industry definitions people can use to describe these operations consistently. Not all autonomous vehicle “robotaxi” operations will be the same, as SAE points out, but the document gives us clues about the different approaches people in the industry are taking and how they’re described.

Some operations will mostly be confined to the car, with the person interacting with the car (perhaps one they or a friend own) to set destinations, etc. Others will be dispatched, either by an app or through people who arrange the ride (or some mix of the two). For dispatched vehicles, there are various methods that can be used to handle the situation.

For vehicles that have to truly be on their own, there needs to be a fallback system of some sort that can safely bring the vehicle to a stop. On low-speed city streets, just stopping can probably work in most cases. On freeways, coming to a sudden stop and just sitting there will get people killed, so there needs to be a way to get the vehicle safely to the shoulder (handled, for example, by a less sophisticated backup computer or a duplicate of the main computer).
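To make that idea concrete, here’s a rough sketch (in Python) of how a fallback decision like that might look. This isn’t Waymo’s code or anyone else’s; the names and the 35 mph threshold are invented purely for illustration.

```python
# A minimal sketch (not any real system's code) of choosing a "minimal risk
# maneuver" when the main driving stack can't proceed. All names and numbers
# here are hypothetical.

from dataclasses import dataclass
from enum import Enum, auto


class Maneuver(Enum):
    STOP_IN_LANE = auto()      # tolerable on low-speed city streets
    PULL_TO_SHOULDER = auto()  # needed on freeways
    REQUEST_REMOTE_HELP = auto()


@dataclass
class RoadContext:
    speed_limit_mph: int
    shoulder_available: bool


def choose_fallback(ctx: RoadContext) -> Maneuver:
    """Pick the safest way to stop when the vehicle can't continue on its own."""
    if ctx.speed_limit_mph <= 35:
        # Low-speed street: stopping in place is usually good enough.
        return Maneuver.STOP_IN_LANE
    if ctx.shoulder_available:
        # Freeway: a sudden stop in a travel lane is dangerous, so a simpler
        # backup computer (or a duplicate of the main one) steers to the shoulder.
        return Maneuver.PULL_TO_SHOULDER
    # No safe place to stop unaided: slow down and ask a human for help.
    return Maneuver.REQUEST_REMOTE_HELP
```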

Another option is for the vehicle to ask a human for help. If a vehicle isn’t dispatched, it will need to ask a human there in the car to intervene and get the vehicle through a rough patch, or wait for a suitable human to move the vehicle. If it’s dispatched, then a human can remotely give the vehicle additional guidance (“follow this line through the construction zone” or “take this different route”) rather than taking full, direct control. Alternatively, a remote driver with simulated controls and enough camera views can drive the vehicle through the rough patch until the system can take over again.
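Those options amount to a simple escalation choice. Here’s another illustrative sketch, again with made-up names and no connection to any company’s real software:

```python
# An illustrative sketch of the assistance options described above.
# Names are invented for this example.

from enum import Enum, auto


class AssistMode(Enum):
    IN_CAR_HUMAN = auto()      # a person in (or sent to) the car takes over
    REMOTE_GUIDANCE = auto()   # a remote specialist sends hints; the car keeps driving
    REMOTE_DRIVING = auto()    # a remote driver with simulated controls drives


def pick_assist_mode(is_dispatched: bool, guidance_is_enough: bool) -> AssistMode:
    """Decide how a stuck vehicle gets human help."""
    if not is_dispatched:
        # A privately owned car falls back on whoever is in or near it.
        return AssistMode.IN_CAR_HUMAN
    if guidance_is_enough:
        # Dispatched fleets can prefer hints ("follow this line") over takeover.
        return AssistMode.REMOTE_GUIDANCE
    return AssistMode.REMOTE_DRIVING
```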

How Waymo Is Handling These Situations

Waymo is taking a hybrid approach. When a vehicle gets stuck, they can both send humans and issue some sort of remote guidance to keep the trip going. Their statement tells us what approach the company is taking:

“While driving fully autonomously through an extended work zone, the Waymo Driver detected an unusual situation and requested the attention of a remote Fleet Response specialist to provide additional information. During that interaction, the Fleet Response team provided incorrect guidance, which made it challenging for the Waymo Driver to resume its intended route, and required Waymo’s Roadside Assistance team to complete the trip. While the situation was not ideal, the Waymo Driver operated the vehicle safely until Roadside Assistance arrived. Throughout, Waymo’s team was in touch with the rider, who provided thoughtful and helpful feedback that allows us to continue learning and improving the Waymo Driver. Our team has already assessed the event and improved our operational process.” (emphasis added)

This tells us that Waymo’s approach to a confused computer is to pause the ride (to prevent an accident) and to ask dispatch how to get through. Dispatch then provides additional information (probably something like “follow this line through”), but does not take remote control of the vehicle. However, the vehicle doesn’t just trust whatever it’s been told, and still keeps some sort of collision avoidance and other safety limits (don’t drive on the wrong side of the cones) going.
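A toy sketch of that “trust, but verify” behavior might look like the following. The cone-boundary check is deliberately simplistic and hypothetical; the point is only that remote guidance is advisory and gets checked against onboard safety limits before the car acts on it.

```python
# A minimal sketch (not Waymo's actual software) of treating remote guidance
# as advisory: the vehicle checks it against its own safety limits first.
# Every name and number here is hypothetical.

from dataclasses import dataclass


@dataclass
class Guidance:
    # A suggested path from a remote specialist, e.g. "follow this line."
    waypoints: list[tuple[float, float]]


def violates_safety_limits(guidance: Guidance, cone_boundary_x: float) -> bool:
    """Toy check: reject guidance that crosses to the wrong side of the cones."""
    return any(x > cone_boundary_x for x, _ in guidance.waypoints)


def handle_guidance(guidance: Guidance, cone_boundary_x: float) -> str:
    if violates_safety_limits(guidance, cone_boundary_x):
        # Conflicting directives: the human said "go," onboard limits say "no."
        # The safe default is to pause and ask again, not to obey blindly.
        return "pause_and_request_help"
    return "resume_route"


# Example: guidance that strays past the cone line gets rejected.
print(handle_guidance(Guidance([(1.0, 0.0), (3.5, 2.0)]), cone_boundary_x=2.0))
# -> pause_and_request_help
```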

When Waymo’s team gave the vehicle incorrect guidance, it created a HAL 9000 type of situation. With conflicting programming, the car’s behavior got pretty erratic. One set of instructions from humans told it to do something unsafe (go through the construction zone incorrectly), while other programming basically said “don’t do this unsafe thing.” So, the car paused again, because the two directives were in conflict.

“I’m sorry Dave, I’m afraid I can’t do that.”

If Waymo had done this completely wrong, the car would have just followed the human advice and driven on the wrong side of the cones, which, as we know, is where construction workers and equipment tend to be. Given that it’s less than ideal to plow a vehicle through a construction zone, it’s good that it stopped instead of going into murder mode the way HAL did in the movie.

I’m Not Saying This Is Perfect

Obviously, Waymo’s fleet can’t keep doing this in the long run. Its reliability needs to be a lot better, and stuff like this needs to be super extra rare. The average person would stop using the service if this happened to them even once, so it can’t be common. From what the video says, this was the YouTuber’s third time getting stuck and needing human intervention.

However, not all failures are the same. A bad system fails spectacularly. A good system is resilient, and when resilient systems fail, they fail gracefully. The fact that the system didn’t do anything amazingly stupid when given the wrong information by dispatch shows that it’s a resilient one.


