It’s 10:45am on December 28th 2016, and I am driving south on Alma St in Palo Alto, CA to see my friend Ian. On the other side of the road, coming toward me, is what looks like a modern-day BMW Isetta, but without a driver or passengers.
I experience the kind of fear jockeys have when a loose, riderless horse tries to keep up with the herd. The owner of this loose horse? Google. The horse’s name? Waymo. Without a driver, could I trust Google’s Waymo technology to keep me and other drivers safe?
What if the car were made by a less reliable software company? Would I trust its riderless machine any more, or less, not to crash into me?
In 2009 Google began researching autonomous driving. Waymo has driven 1 million miles so far, first in open country and now in cities. Zhaoyin Jia, the project’s technical leader, spoke at the AI Frontiers conference in Santa Clara today. He described the research that informed two design assumptions underlying the safety principles of their design.
Design Assumption 1 – Let the car do the driving, not a human. Cars are the leading cause of death for people between ages 15 and 25 in the US, and 94% of car accidents are caused by human error.
Design Assumption 2 – Don’t design a human-assisted driverless car. Design the car from the ground up to work on its own. From watching drivers multitasking behind the wheel — for example, reaching into the back seat to retrieve a phone charger from a bag, then plugging their phone in to charge it — the team realized that people put too much trust in the machinery, and they get into trouble.
This case study in identifying assumptions, informed by data and ethnographic research, has made me more confident in Waymo, the riderless horse. Next time I see him galloping along the leafy streets of Palo Alto, I won’t be as alarmed.