Self-driving taxis launched in Singapore last week, and Uber has announced that it will have self-driving cars available for passengers in Pittsburgh within days. Now that self-driving cars are a reality, what are some of the ethical and legal issues surrounding cars driven by artificial intelligence?
The classic problem is often called the “trolley problem.” I’m going to modify the usual trolley problem to better fit cars and the issues we face today.
Imagine this scenario. A self-driving car is on a winding road doing the speed limit. Suddenly, the car’s computer sees four people in its path and a fork in the road, but it’s too late to stop. If the computer does nothing, the car will kill all four people. If the computer chooses the fork, the car will kill only two people.
How will the computer decide which way to go? Does the car take the fork, killing two people instead of four? Or better yet, does the car self-destruct by running into a wall, killing only the car’s single passenger? What if the car has two young children in it? Will age determine the car’s decision?
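To make the point concrete, here is a deliberately simplified, entirely hypothetical sketch of how a programmer might encode such a choice. Every name and number in it is invented for illustration; no real vehicle works this way. The interesting part is how much ethics hides in one line of weighting.

```python
# Hypothetical, deliberately simplified sketch of a harm-minimizing rule.
# All names and numbers are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    pedestrians_at_risk: int
    occupants_at_risk: int

def choose_path(paths: list[Path]) -> Path:
    """Pick the path that puts the fewest people at risk overall.

    Note what this single line decides for society: pedestrians and
    occupants are weighted equally, and age is ignored entirely.
    """
    return min(paths, key=lambda p: p.pedestrians_at_risk + p.occupants_at_risk)

# The modified trolley scenario described above:
options = [
    Path("straight ahead", pedestrians_at_risk=4, occupants_at_risk=0),
    Path("take the fork", pedestrians_at_risk=2, occupants_at_risk=0),
    Path("hit the wall", pedestrians_at_risk=0, occupants_at_risk=1),
]
print(choose_path(options).name)  # -> "hit the wall" under this weighting
```

Change the weighting, or add an age factor, and the car makes a different choice. Someone has to pick those numbers.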
Maybe we shouldn’t allow self-driving cars until we have a fleet of drones or real-time satellite feeds available to all self-driving cars. The drones and satellite feeds would increase situational awareness so that the self-driving car completely avoids the trolley problem by “seeing” the problem with enough time to stop. However, not even that solution avoids the problem if children step in front of the car after it is too late to avoid a collision. And what happens if the children are under an overpass? (I have the answer. The drones must be able to see through the overpass. Really?)
Now here’s another scenario for you to consider. I don’t know about you, but I don’t consider myself much of a speeder. Still, I often drive slightly over the speed limit. What’s the big deal about doing five miles per hour over the limit? If no one is around, why not punch it up to 10 over and do 75 in a 65? No biggie. Or it’s a deserted straightaway in the middle of nowhere. Why not bump it up to 90 in a 75? We all “know” that cops don’t write tickets for a few (whatever that means) miles over the limit.
It comes down to this. Must a self-driving car limit itself to exactly 55 in a 55? What if “HAL” were driving? May the computer “choose” to break the law by exceeding the speed limit? Would we permit computer code that essentially says, “Computer: You may break the law on purpose”?
Let’s change the scenario a bit again. Would you allow a computer to “choose” to bump its speed up to 60 in a 55 to safely pass a car doing 45 on a two-lane highway? What if the computer determines that it would have to do 70 in a 55 to pass safely on that two-lane highway? And if it’s okay to speed on a two-lane highway, why not on an Interstate? Maybe the law should allow for a manual override by a “mere mortal” (that’s you) to overrule the programming that says never exceed the speed limit.
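Put in code, the whole debate collapses into a single configuration value. Here is a hypothetical sketch, with function and parameter names invented purely for illustration, showing how a passing policy might look:

```python
# Hypothetical sketch: a speed policy with an explicit "lawbreaking" knob.
# Function and parameter names are invented for illustration only.

def may_pass(speed_limit_mph: float,
             speed_needed_mph: float,
             overspeed_tolerance_mph: float = 0.0) -> bool:
    """Decide whether the car may exceed the limit to complete a pass.

    overspeed_tolerance_mph is the ethically loaded value: any number
    above zero is code that says "you may break the law on purpose."
    """
    return speed_needed_mph <= speed_limit_mph + overspeed_tolerance_mph

print(may_pass(55, 60))                              # False -- strict policy, no passing
print(may_pass(55, 60, overspeed_tolerance_mph=5))   # True  -- 60 in a 55 allowed
print(may_pass(55, 70, overspeed_tolerance_mph=5))   # False -- 70 in a 55 refused
```

Whoever sets that tolerance, whether a programmer, a manufacturer, or a regulator, is answering the legal question for everyone on the road.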
As you can sense, the scenarios are infinite. What does the law say about these and the endless variations? Nothing. Sure, we can work by analogy from current traffic laws and the case law around driving negligence and liability, but I would suggest that it’s time to consider a comprehensive regulatory scheme for self-driving cars. Without this regulation, you are leaving it up to the programmers to make these decisions for society. Can you imagine a BMW ad campaign that says something like, “With our sporty brand, our computer will drive your car faster around the curves. Why let a measly speed limit slow you down?”
Then come the accident scenarios. Who is responsible when a self-driving car hits something or kills someone? Is the “driver” sitting in the driver’s seat off the hook because the computer was in control? What if there is no one in the driver’s seat, just passengers in the car? What if nobody is in the car at all because it is en route to pick me up? Is the company that wrote the software responsible?
Regulations and laws for self-driving cars are in their infancy. In April, a group of companies including Ford, Uber, Google, Lyft and Volvo formed a coalition to advocate for safety regulations for self-driving cars. I’m a bit cynical about the coalition, though. My gut tells me that this is more about eliminating any laws that stand in the way of self-driving cars and less about addressing the ethical dilemmas. It’s a start, though.
The snowball of self-driving cars is growing and coming faster than you think. Remember, Uber said its tests with live passengers start this month.
While it is a truism in the world of technology that first we develop the new technology and then figure out how to regulate it, it’s time to acknowledge that self-driving cars are here and here to stay. Now we must begin to discuss and reach legislative consensus on the myriad ethical and legal issues self-driving cars thrust in front of us.