AUSTIN, Tex.—When one of Google’s (GOOGL) self-driving cars encounters a woman on a self-propelled wheelchair chasing a duck with a broom, it knows what to do. “Our cars have seen that now, and they know how to deal with it,” Chris Urmson, the head of Mountain View’s project to create vehicles that can drive themselves, said at South by Southwest (SXSW).
Everyone there awaited the story of the car’s first at-fault collision, this past Valentine’s Day. Mr. Urmson didn’t shy away from it, describing in detail how the incident took place because two drivers (one robot and one human) read street conditions slightly differently. Displaying a visualization of the vehicle’s sensor readings just before the impact, Mr. Urmson walked the audience through the car’s decision-making. The consequences of the collision were minor, yet the company has since run the car through approximately 3,500 tests based on the lessons learned on February 14.
The duck story and the crash story both helped Mr. Urmson make the same point: every driver learns lessons from a collision or near-collision, but those lessons stay with that driver alone. When a human reads a familiar turn wrong in the rain and nearly loses control of the car, only that one human learns to respect that bit of road. When one of Mr. Urmson’s four-wheeled robots experiences a similar situation, the entire fleet learns it.
Similarly, when one of Google’s cars misjudges whether a bus will yield, or waits as a woman chases a duck in the street, all of the cars log that observation forever in their shared database and can better judge similar situations in the future.
“To make this happen we had to build an unprecedented level of sensors into the vehicle,” he explained, showing how lasers, cameras and GPS enable the vehicles to have a field of vision that goes right up to the cars’ very skin.
Driving 10,000 miles per week, Google’s self-driving cars have covered 1.4 million miles in the real world so far, and along the way they have seen a lot of weird stuff, complex conditions and profoundly irresponsible decisions by other drivers.
The car constantly notes and tracks every possible actor and object in its space. “About 10 times per second we make a prediction about what those actors are going to do,” Mr. Urmson explained.
He revisited one moment twice: an incident in which a car had to navigate a pedestrian, an indecisive bicyclist and other cars all at once. Suddenly, another bicyclist shot across the car’s planned route, coming from a blind turn. The robot driver successfully applied the brakes and avoided the second cyclist, though a human driver might have been so focused on the first cyclist that he or she would not have noticed the other rider in time. Computer-driven cars can pay attention to more than one object at a given moment, including objects farther away, and they learn from all of them.
“All of this is powered by the fact that we share data between the vehicles,” Mr. Urmson explained.
It’s not just gathering data, though. The cars work with humans to develop a sense and feel for that data. Besides the 1.4 million miles they have driven in the real world, they drive millions of miles every day in simulation. The abundance of practice helps the cars turn observations into good decisions. It’s something of a marriage of left and right brains.
All of which makes the cars, Mr. Urmson contends, vastly safer. Further, with people wasting 162 lifetimes’ worth of time each day sitting in traffic, he said, “We are literally killing people with gridlock.”
He argued that the product can’t come to market fast enough, but he also believes that, the social good aside, it has been shown to be a compelling product. Once the cars were ready to expand their field tests beyond pre-planned routes, the company invited Google staff to borrow a car for a week or so, use it for their daily business and let it collect data on natural, unplanned routes.
One Googler came to the team and told them that he was a Porsche driver who loved driving. He had dismissed the whole project as stupid and volunteered to try it out only because he was a fan of gadgets.
When he came back at the end of his trial, though, he said he didn’t want to return the car. The Porsche-owning driving aficionado went on to tell Mr. Urmson that the self-driving car had made him realize what a bad driver he was and what bad drivers we all are.
In other words, technology taught a lesson in humility. It only did so for one guy, though, and as we’ve already discussed, human brains aren’t yet equipped to put that lesson up in the cloud and share it.
As far as we know, Google hasn’t yet taken that one on.