Driverless cars are no longer a fantasy. Though still far from general-purpose use, the technology is evolving by leaps and bounds, thanks to players like Google and Tesla making steady progress. As the technology matures and enters public life, several legal and moral issues are going to crop up.
The recent issue of Communications of the ACM carries a nice article describing the moral challenges of driverless cars. In this thought-provoking article, the author raises scenarios that pose ethical and moral questions. To quote from the article,
However, should an unavoidable crash situation arise, a driverless car’s method of seeing and identifying potential objects or hazards is different and less precise than the human eye-brain connection, which likely will introduce moral dilemmas with respect to how an autonomous vehicle should react …
Driverless cars have the potential to fare better than humans 90% (or more) of the time. But the remaining small percentage of situations brings in the ethical and legal dilemmas where humans would fare vastly better than the technologies used in driverless cars. In these situations, human drivers are usually faced with multiple choices that vary in the amount of impact or destruction to property or humans. The sensors and algorithms used in driverless cars (as they stand for the next few years) may have limitations in identifying the course that leads to the least impact or least destruction. When the system operating a driverless car makes a suboptimal decision, there could be several legal and ethical ramifications.
As discussed in the above-mentioned article, handing over control to a human driver in emergency situations is far from reality, given the response times needed by a disengaged human. Even the automation around a fully engaged driver's actions is being subjected to several legal questions around responsibility. For example, this article on WSJ discusses how Tesla's autonomous car-passing feature intends to pass the responsibility on to the driver, by making it a driver-initiated (e.g., turning on the turn signal) automation. Given that the same action by the driver in a car with and without these autonomous features results in drastically different ramifications, states like CA, NV and FL are mandating special registrations for drivers of autonomous vehicles. The registration is based on the level of autonomous features of the vehicle.
Beyond the responsibility question that touches the legal aspects, driverless car technology needs to continually improve on the ethical questions that come up during an emergency. For example, is it okay to crash into the car in the next lane to avoid a bicyclist who is jumping a pedestrian signal?
Then comes the integrity question around the autonomous features. What is the possibility of these features being tampered with or becoming outdated? Is Tesla's over-the-air update going to become the standard for automakers across the globe?
In a nutshell, the legal aspects of driverless cars can best be handled by training drivers for those specific features. However, the ethical aspects require more maturity of the technology. Add the complexity of driving rules that change across geographic regions (states, countries) and we are going to see a lot of technology evolution in this space.
Here are a few lingering thoughts that I have regarding driverless cars. I am anxious to find the answers sooner rather than later.
- What happens when road sign standards change across borders? E.g., colors and sizes of signs across states, speed limits posted in miles vs. kilometers across countries. We may soon see a few settings on the dashboard to let the car know (or confirm) that you are driving in New Jersey or Maine or Canada.
- Cars may be certified to run autonomously only in certain areas, like: “This car can use its autonomous features in CA and NV, but not in AZ.”
- Cars would need to identify the speed limit on a signpost and ignore a similar-looking sign on a billboard next to a freeway. Will they do this by improving their sensors, or by depending on a networked repository (say, Google Maps) of speed limits in that area?
- Visually identifying congestion and taking alternate routes. This is pretty simple given the current advances in maps technology.
- In situations where disengaged drivers have no awareness of the circumstances that led to an accident, cars may be required to keep legally acceptable sensor information logs. In other words, cars would carry scaled-down versions of the black boxes found in aircraft.
- What if someone hacks the “car stack”? How does one get to know? Do we get to do a periodic (smog-check-like) stack check and certification? If this sounds like fantasy, check out the Tesla hack and fix from a couple of days ago.
And here is an extreme one:
- If it turns out that the damages caused in an accident by an autonomous car with a disengaged driver are much higher than the damages if an engaged driver were operating the car without autonomous features, what are the insurance ramifications? Would insurance companies track maturity levels of the autonomous features and charge accordingly for insurance?
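As a thought experiment on the signpost-vs-billboard question above, here is a minimal sketch of how a car might cross-check a camera-detected speed limit against a networked map repository. Every name, threshold, and rule here is hypothetical; this is not any vendor's actual algorithm.

```python
def resolve_speed_limit(camera_reading, map_limit, camera_confidence):
    """Cross-check a camera-detected speed limit against a map-provided one.

    camera_reading: limit (mph) read from a roadside sign, or None
    map_limit: limit (mph) from a networked repository, or None
    camera_confidence: detector confidence in [0, 1]

    Returns (limit, source), where source explains the decision.
    The thresholds below are made-up illustrations, not real policy.
    """
    if camera_reading is None and map_limit is None:
        return None, "unknown"  # caller falls back to a conservative default
    if camera_reading is None:
        return map_limit, "map-only"
    if map_limit is None:
        return camera_reading, "camera-only"
    if camera_reading == map_limit:
        return camera_reading, "agree"
    # Disagreement: a billboard that merely *looks* like a sign should yield
    # low detector confidence or conflict wildly with the map; trust the map.
    if camera_confidence < 0.8 or abs(camera_reading - map_limit) > 15:
        return map_limit, "map-overrides-camera"
    # High-confidence, plausible reading: the sign may be newer than the map.
    return camera_reading, "camera-overrides-map"
```

The interesting design choice is the tie-breaker: a stale map and a spoofed sign fail in opposite directions, so real systems would likely weigh both sources rather than trusting either one outright.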
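And on the black-box thought: a scaled-down flight recorder could be little more than a fixed-size ring buffer of timestamped sensor snapshots, frozen the moment a crash is detected. A toy sketch, with made-up field names and capacity:

```python
import json
import time
from collections import deque


class SensorBlackBox:
    """Keep only the last N sensor snapshots, like a scaled-down flight recorder."""

    def __init__(self, capacity=600):  # e.g., 60 seconds of data at 10 Hz
        # deque with maxlen evicts the oldest entry automatically when full
        self.buffer = deque(maxlen=capacity)

    def record(self, snapshot, timestamp=None):
        """Append one sensor snapshot (a plain dict) with a timestamp."""
        t = timestamp if timestamp is not None else time.time()
        self.buffer.append({"t": t, "data": snapshot})

    def dump(self):
        """Serialize the buffer, e.g., on airbag deployment, for investigators."""
        return json.dumps(list(self.buffer))
```

Whether such a log is "legally acceptable" is exactly the open question: it would need tamper-evident storage and an agreed retention window, neither of which this sketch addresses.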
I do live in interesting times.