Technology

The Role of Data in Autonomous Vehicle Crashes

In the next few years, autonomous vehicles will become mainstream.  While the policy infrastructure has a long way to go, companies like Google and Tesla are getting close to perfecting the technology that will allow cars to safely drive themselves.  As a result, the yearly accident frequency per vehicle is estimated to decline significantly, from 0.043 in 2013 to 0.010 by 2040.  However, as long as that number is above zero, there will be a need for transparent crash data so that proper action can be taken in a crash's aftermath.

Autonomous vehicles have the data collection part down.  They are equipped with a variety of sensors that track nearly anything you could want to know about a car.  In the event of a crash, these data can be used to help determine how the crash happened and who is at fault.  Recently, a Tesla Model X owner claimed their vehicle crashed itself by suddenly accelerating and turning without driver input.  Tesla denied the owner's claim, citing proprietary diagnostic data showing that neither Autopilot nor cruise control was engaged at the time and that the accelerator pedal was suddenly pressed down seconds before the collision.
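
To make the idea concrete, here is a minimal sketch of how pre-crash telemetry might be summarized in a dispute like this one.  The record format and field names (autopilot_engaged, accel_pedal_pct, and so on) are hypothetical illustrations, not any manufacturer's actual diagnostic schema.

```python
# A sketch of examining post-crash telemetry.  Field names and the log
# format are hypothetical, not Tesla's actual diagnostic data.
from dataclasses import dataclass

@dataclass
class TelemetryRecord:
    seconds_before_impact: float   # 0.0 = moment of impact
    autopilot_engaged: bool
    cruise_control_engaged: bool
    accel_pedal_pct: float         # accelerator pedal position, 0-100
    speed_mph: float

def summarize_pre_crash(records: list[TelemetryRecord], window: float = 5.0) -> dict:
    """Summarize driver and system inputs in the final `window` seconds before impact."""
    recent = [r for r in records if r.seconds_before_impact <= window]
    return {
        "autopilot_ever_engaged": any(r.autopilot_engaged for r in recent),
        "cruise_ever_engaged": any(r.cruise_control_engaged for r in recent),
        "max_accel_pedal_pct": max((r.accel_pedal_pct for r in recent), default=0.0),
        "max_speed_mph": max((r.speed_mph for r in recent), default=0.0),
    }

# Example log: pedal pressed hard two seconds before impact, no automation engaged.
log = [
    TelemetryRecord(4.0, False, False, 5.0, 12.0),
    TelemetryRecord(2.0, False, False, 92.0, 18.0),
    TelemetryRecord(0.5, False, False, 95.0, 27.0),
]
print(summarize_pre_crash(log))
```

A summary like this is only as trustworthy as the log it reads, which is exactly the issue discussed below.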

In this case, whether Tesla's software or the human driver crashed the car is irrelevant.  Tesla makes it very clear to its consumers that Autopilot is not a substitute for human awareness, so the human driver is at fault regardless.  But what happens once "autopilot" mode becomes "autonomous" mode, and no driver input is required?

Earlier this year, the National Highway Traffic Safety Administration answered a piece of that question when it notified Google that the law will treat Google's software as the vehicle's driver.  Google will be treated differently than Tesla because it plans to build its cars without a steering wheel or foot pedals, making human input impossible.  This begins the process of developing a legal framework in which Google's software can be found at fault in the case of an accident.  Although it took 1.3 million miles of driving over seven years for Google's car to cause an incredibly minor accident, the fact remains that no software is perfect.  At least for the foreseeable future, we must understand that software, while superior to human drivers in many ways, can still cause accidents.

So how do we determine who is at fault when an autonomous car crashes into a human-driven car?  Or when two autonomous cars crash into each other?  We obviously cannot interview the software to get its side of the story as we would a human, at least not yet.  We can, however, look at the data collected by each car prior to the crash.  Data is actually far more effective in determining the facts of a case than eyewitness testimony, which has been shown time and time again to be unreliable.

This all sounds great so far.  Smarter cars will crash less often and, when they do crash, the data will be more effective in identifying who or what is at fault.  The potential problem, however, is that this data is solely under the control of the same companies that could be liable if their technology causes a crash.  As it stands, these companies could abuse that control and report false data that constructs a crash narrative suiting their interests, whether to assure consumers their technology is safe or to avoid penalties from being found at fault.  We have already seen how far a car company will go to manipulate regulators and consumers in the Volkswagen emissions scandal.  Reporting false data is much easier.
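
One way a future policy framework might reduce that temptation is to require tamper-evident logging, so that telemetry cannot be quietly rewritten after a crash.  The toy sketch below chains each record to the previous one with a cryptographic hash; it illustrates the general technique only and is not any company's actual system.

```python
# Toy tamper-evident log: each entry hashes the previous entry plus its own
# payload, so altering any past record breaks every hash that follows it.
import hashlib
import json

def append_record(chain: list[dict], record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(chain: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

chain: list[dict] = []
append_record(chain, {"t": -2.0, "accel_pedal_pct": 92.0})
append_record(chain, {"t": -0.5, "accel_pedal_pct": 95.0})
print(verify_chain(chain))                     # True
chain[0]["record"]["accel_pedal_pct"] = 5.0    # rewrite history after the crash
print(verify_chain(chain))                     # False
```

In practice this only helps if the verification keys or hashes are held by someone other than the manufacturer, such as a regulator or insurer, which is precisely the kind of arrangement policy would need to specify.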

As Google continues testing its fully autonomous vehicles and other companies enter the space, we should be mindful of the trust we will be placing in these companies to act in consumers' best interests when other incentives might induce undesirable data practices.  Since technology tends to move faster than policy, we should start this process sooner rather than later.  Moving forward, we should develop a policy framework that allows autonomous vehicle companies to collect data to help mediate crash disputes while preventing the misuse of that data.