In the wake of the recent fatal Tesla Model S autopilot crash, media and public attitudes toward self-driving car technology have cooled. For some, the vision of a utopian future with zero automobile fatalities has come crashing down, replaced with the stark reality that the technology is not yet perfect. Still, pessimists should note that Tesla's autopilot is doing better than U.S. drivers in terms of fatalities per mile driven and that the driver may not have followed Tesla's explicit instructions to remain ready to take control of the vehicle at all times. This situation should not be used as evidence to condemn autonomous vehicle technology, or to argue prematurely for stricter regulation that would make it more difficult for truly autonomous (not merely autopilot) technologies to make our roads safer.
Instead, this reality check presents an opportunity to introduce one of the less widely discussed dimensions of autonomous cars: that we, either implicitly or explicitly, must give them a moral compass. This compass will dictate how these cars behave in the event of an unavoidable accident. For example, what should our cars do when faced with a choice between hitting a pedestrian and risking harm to the driver by swerving into a wall? Such a situation is described by the "Trolley Problem." Basically, the problem asks whether you would choose to divert a runaway trolley toward a set of tracks on which one person is tied or allow it to remain on a set on which five people are tied. In its simplest form, it is a question of whether you favor a utilitarian view, meaning you would sacrifice the one to save the five, or a "do-no-harm" view, by which it is immoral to make an active choice regardless of how many you save.
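The contrast between the two views can be made concrete in a few lines of code. This is a toy sketch with hypothetical function names, not any real vehicle control policy; it only encodes the decision rule each framework implies for the classic trolley setup:

```python
def utilitarian_choice(stay_casualties: int, divert_casualties: int) -> str:
    """Minimize total casualties: act whenever diverting saves lives."""
    return "divert" if divert_casualties < stay_casualties else "stay"

def do_no_harm_choice(stay_casualties: int, divert_casualties: int) -> str:
    """Refuse any active choice that causes harm, regardless of the count."""
    return "stay"

# Classic setup: five people on the current track, one on the siding.
print(utilitarian_choice(stay_casualties=5, divert_casualties=1))  # divert
print(do_no_harm_choice(stay_casualties=5, divert_casualties=1))   # stay
```

A real autonomous vehicle would of course face probabilities of injury rather than certain casualty counts, which is part of what makes translating either framework into software so contentious.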
A study performed at Michigan State University found that 90.5 percent of participants would reroute the trolley to save the five when placed in a virtual reality environment complete with an actual pullable lever and screaming pedestrians. Complementary studies have shown that willingness to sacrifice the one increases as the number of people on the other set of tracks increases, but that participants are generally against rerouting the trolley if there is one person on each track. Whether through survey or virtual reality simulation, people demonstrate a clear preference for the utilitarian response.
Opinions change, however, when the discussion turns to autonomous vehicles in real-world scenarios. Only 50 percent of respondents reported they would consider purchasing a car that was programmed to sacrifice the driver to minimize overall casualties. This dropped to 19 percent when the hypothetical car was programmed to sacrifice a family member. This paradox is a common social phenomenon: people often do not want to abide by the moral code they believe society as a whole should follow.
This disconnect indicates there will need to be a regulatory framework for this technology that sets universal standards for how these systems behave but still allows companies to innovate and compete on new safety technologies. For example, Chevrolet could offer a Suburban that prioritizes passengers in the back of the car if children are on board and it is possible and reasonable to do so given standardized behavioral constraints and the circumstances of the crash. This is but one example of a feature that may benefit certain consumers and could be included for a premium in certain models but does not need to be standard on all cars. This style of regulation would allow car manufacturers to offer a diverse range of models based on consumer preference at lower costs while preserving overall safety. However, companies should not be free to offer cars that prioritize passengers over all else in every situation, as this could lead to unnecessarily drastic action, such as killing a pedestrian to avoid a one percent chance of passenger injury. If companies are left entirely to their own devices, consumer preferences suggest that passenger-preserving cars would dominate the market, leading to unnecessary damage and death.
We do not yet live in a world with zero automobile accidents, and the first generation of autonomous vehicles will not make that world a reality. We must therefore craft regulation that ensures both drivers and pedestrians will be as safe as possible. This ultimately means recognizing our paradoxical preference for self-preservation and understanding that we cannot avoid choosing a moral framework for autonomous vehicles, be it utilitarian, "do-no-harm," or otherwise.