If We’re All Getting Robot Chauffeurs, We Need Robot Ethics

On behalf of school buses full of children everywhere.

(Source: Steve Jurvetson via Wikipedia)

Get ready for the day when you sip mimosas and curl your eyelashes as you commute, because the driverless car revolution is upon us. These futuristic machines are now legal in three states, and Google (GOOGL) is working hell-for-leather to make them part of regular life. But, as this essay in the New Yorker points out, such a technology carries thorny ethical implications.


When we turn our shiny metal death machines over to computers, how are they going to make the right decisions?

The advent of driverless cars isn’t as simple as providing another option for those who prefer to sleep and ride. As NYU psych professor Gary Marcus writes:

Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work.

Somebody’d better break the news to Bruce Springsteen.

But besides robbing speed demons of their chance to drive like Dale Earnhardt on the interstate, this gets complicated when you drag out the old Trolley Problem from Philosophy 101. The proverbial school bus full of children pulls out in front of your car; swerving means you're likely to be hurt. The computer has to decide. How's an algorithm supposed to parse an ethical quandary even college freshmen stumble over? (If you're skeptical that this matters, read this New York piece about the spike in traffic deaths.)

That means we're going to have to develop machines that not only operate on our ethical level, but can address these questions on their own:

What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.

Ethical machines, no big thing, Google will probably have it solved next week. It’s not like Sergey has anything else to do besides go skydiving.
