
Who's Liable For A Crash When It's The Car Driving?

Apr 10, 2018
Law360

Recently, an Uber Technologies Inc. autonomous car struck and killed an Arizona woman who was crossing a multilane road at night, far from the nearest crosswalk. While the accident was tragic, it was not a complete surprise. The major players in the development of autonomous vehicles have long known a pedestrian fatality was inevitable at some point, even if self-driving technology worked perfectly.

Uber has reportedly already settled with the woman’s family, but the accident still provided hints of the transformative legal issues that will arise from the widespread adoption of autonomous vehicles. For example, should the Uber car’s lidar detection system have spotted the danger in time to avoid hitting the woman? Would a human driver have done any better?

These are merely two of the many issues that will confront companies, lawyers and courts once self-driving technology expands on U.S. roads, which will almost certainly happen more rapidly than most people believe. Self-driving technology will transform large segments of auto liability law and the automobile insurance industry.

To understand just how revolutionary self-driving vehicles will be with respect to automobile liability law, consider a hypothetical: It is 11:00 p.m. on New Year’s Eve. You are in a car on a busy city street, lined on the right side with multiple restaurants. You watch from the car’s window as pedestrians move in waves from the restaurants to the sidewalks, jockeying for space to watch the annual countdown to midnight, which a local radio personality hosts in a park across the street.

Despite the pedestrian crowds, traffic is moving smoothly, and you are only minutes from the restaurant where you will meet your friends. But then a flash of light hits your face as a pickup truck veers into your lane, speeding directly toward your car.

Options flash through your mind: veer left and slam into oncoming traffic; stay in your lane and brace for a head-on impact with the pickup; veer right onto the sidewalk, where you will likely be safe, but your car will mow down a dozen pedestrians. But here is the rub: None of these choices are yours, because you are not driving. In fact, your car has no driver.

Ten minutes earlier, you used your phone to summon an autonomous ride-hailing vehicle to get to your destination. Your reactions to the traffic emergency are as irrelevant as your ethical calculations about potential injuries to yourself and others. The only thing that matters is how the machine in which you are riding will choose to react — and you have no idea what it will do.

The fates of you, the wrong-way driver, the other oncoming drivers and passengers and the pedestrians on the sidewalk were sealed two years earlier, when the autonomous fleet car left the factory. Its software includes decision rules that guide its artificial intelligence in prioritizing the factors presented by the emergency you now face. The car might prioritize the safety of its own occupants, the minimization of physical damage to itself, the safety of pedestrians or other factors. You have no idea.
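To make the idea concrete, consider a deliberately simplified sketch of how such decision rules might be encoded. Everything here is a hypothetical illustration: the maneuver names, the outcome estimates and the weight tables are invented for this article and do not reflect any manufacturer's actual software.

```python
# A deliberately simplified, hypothetical sketch of a collision "priority
# policy." Real autonomous-driving software is vastly more complex; the
# maneuvers, outcome estimates and weights below are invented for
# illustration only.

from dataclasses import dataclass

@dataclass
class Outcome:
    occupant_injuries: float      # expected serious injuries to the car's occupants
    pedestrian_injuries: float    # expected serious injuries to pedestrians
    other_driver_injuries: float  # expected serious injuries to other motorists
    vehicle_damage: float         # expected damage to the car itself (0-1 scale)

# The weight tables are where the "decision priorities" live. An occupant-first
# car weights harm to its own passengers far more heavily; a minimize-harm
# car weights every human equally.
OCCUPANT_FIRST = {"occupant_injuries": 20.0, "pedestrian_injuries": 1.0,
                  "other_driver_injuries": 1.0, "vehicle_damage": 0.5}
MINIMIZE_HARM = {"occupant_injuries": 1.0, "pedestrian_injuries": 1.0,
                 "other_driver_injuries": 1.0, "vehicle_damage": 0.1}

def cost(outcome: Outcome, weights: dict) -> float:
    """Weighted cost of a predicted outcome; lower is preferred."""
    return sum(w * getattr(outcome, field) for field, w in weights.items())

def choose_maneuver(predicted: dict, weights: dict) -> str:
    """Pick the maneuver whose predicted outcome has the lowest weighted cost."""
    return min(predicted, key=lambda m: cost(predicted[m], weights))

# Hypothetical predictions for the New Year's Eve emergency.
predicted = {
    "swerve_right_onto_sidewalk": Outcome(0.05, 6.0, 0.0, 0.2),
    "brake_and_hold_lane":        Outcome(0.80, 0.0, 0.6, 0.9),
    "swerve_left_into_traffic":   Outcome(0.60, 0.0, 0.5, 0.8),
}

print(choose_maneuver(predicted, OCCUPANT_FIRST))  # swerve_right_onto_sidewalk
print(choose_maneuver(predicted, MINIMIZE_HARM))   # swerve_left_into_traffic
```

The two weight tables produce different maneuvers in the same emergency, which is precisely the legal point: the driver's split-second judgment has been replaced by a design decision made years earlier.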

Furthermore, you have no clue who decided on those priorities in the first place. Are they the same priorities you would have applied? It is too late for wondering. You are merely along for the ride.

In the vast majority of current-day vehicle accidents, a human is negligent in some way, and most often the negligent human is one of the drivers. An autonomous vehicle, by contrast, is a vastly superior executor: once it determines how to respond to a situation, it will usually carry out that response flawlessly. The determination of how to respond is the key, and that is where the artificial intelligence driving autonomous vehicles will change everything for automobile liability law.

What decision rules and action priorities will govern the artificial intelligence functions of driverless cars? What are the legal implications of each? To expand upon the novel legal issues that will arise, let’s explore our New Year’s Eve hypothetical — a classic no-win crash situation.

First scenario: Your car might choose to save itself, and you, at all costs. This choice results in its swerving onto the sidewalk and injuring or killing a dozen pedestrians, because pedestrians are less likely than oncoming vehicles to cause serious damage to your car. To whom would those pedestrians and their heirs look for a legal remedy?

The decision priority baked into the car was the ultimate cause of its hitting the pedestrians, so presumably the litigation investigation would focus on who was responsible for setting those priorities and why. But certainly you, a mere passenger, could not be liable... right?

Well, not so fast. You are a subscriber to a ride-hailing service for which you pay a monthly fee for access to a certain number of vehicle miles in autonomous cars of a certain quality (e.g., luxury, full-sized, economy). What if your subscription agreement with the ride-hailing company disclosed to you that the vehicles to which you would have access would prioritize your safety, as a passenger, over all other considerations? And what if alternative prioritization options were available at different prices? Could you be culpable by virtue of your choice?

If so, would that culpability take the form of simple negligence, or would it be willful? After all, you made a conscious decision that pedestrians should be sacrificed for your own safety. You might think it unlikely that a ride-hailing company would ever offer consumers choices regarding collision priorities, but should companies be required to do so? Wouldn't you want to know before you got into a driverless car whether it prioritized your safety or the safety of strangers?
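If such a menu of priorities ever existed, it might be as simple as the following hypothetical sketch. The tier names and prices are invented, and the policy labels refer to the weight tables in the earlier sketch; no ride-hailing service is known to offer anything like this.

```python
# Hypothetical subscription tiers pairing a monthly fee with a disclosed
# collision policy. Purely illustrative: no ride-hailing service is known
# to offer such a menu.

SUBSCRIPTION_TIERS = {
    "economy": {"monthly_fee_usd": 99,  "collision_policy": "minimize_harm"},
    "premium": {"monthly_fee_usd": 249, "collision_policy": "occupant_first"},
}

def disclosed_policy(tier: str) -> str:
    """The collision priority the subscriber knowingly accepted at signup."""
    return SUBSCRIPTION_TIERS[tier]["collision_policy"]

print(disclosed_policy("premium"))  # occupant_first
```

If a plaintiff could show you paid a premium specifically for the occupant-first policy, the argument that you consciously chose to shift risk onto pedestrians becomes uncomfortably concrete.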

Second scenario: Your car might choose to prioritize the minimization of serious injury to humans. This choice results in its veering into the oncoming lane of traffic and sideswiping two cars, resulting in a severely sprained back for you but no other injuries. To whom would you look for a remedy?

How about the ride-hailing subscription company? If the company failed to disclose to you that your safety was not the car’s top priority, you might be justified in believing it should have. But what if it did disclose its priorities, and, with full knowledge, you still chose to subscribe? Can you really complain once the vehicle executes those priorities to perfection, but to your detriment? Where is the negligence, and where is the product defect?

Third scenario: Your car might choose to maximize risk to the party most at fault in creating the emergency: the head-on truck driver. This choice results in sharp braking and a last-minute turn that reduces the risk to you and maximizes the risk to the head-on driver. The head-on driver dies, and you sustain only minor injuries.

But what if the head-on driver wasn’t really at fault at all? What if he veered into your lane because he had been struck in the arm by a stray bullet fired by a drunken New Year’s Eve reveler (one of the pedestrians the car decided to avoid)? To whom would the head-on driver’s heirs look for a remedy?

The shooter, of course. But on the vehicular side of the accident, the head-on driver had no idea how your driverless car was programmed, and he had no ability to choose how it would prioritize its responses to an emergency. Furthermore, the car's artificial intelligence drew a logical, but wrong, conclusion about fault. Could the car's artificial intelligence itself have been negligent in its interpretation of the situation? If so, where does responsibility for that negligence come to rest? With the car's programmers?

Fourth scenario: Forget what choice the car makes. What if the car's manufacturer, just that afternoon, had distributed a software update to enhance the car's ability to react to emergencies? After all, the car's software, just like your cellphone's software, continues to develop throughout the life of the car. And what if the subscription ride-hailing company intended to push the update to its fleet on New Year's Day, a much slower day than New Year's Eve in the ride-hailing business? What if the upgrade would have changed the outcome of the accident?
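The timing question can be framed as a simple scheduling rule, sketched below with invented names, dates and classifications; the legal question is who bears the risk while a ready update sits in the queue.

```python
# Hypothetical sketch of a fleet update-scheduling rule like the one in the
# fourth scenario: an available update is deferred past a high-demand day.
# The dates, names and policy are illustrative assumptions only.

from datetime import date

HIGH_DEMAND_DATES = {date(2018, 12, 31)}  # e.g., New Year's Eve

def install_today(today: date, safety_critical: bool) -> bool:
    """Install immediately if safety-critical; otherwise skip peak days."""
    if safety_critical:
        return True  # taking cars off the road is worth it for a safety fix
    return today not in HIGH_DEMAND_DATES

# A routine update waits until Jan. 1...
print(install_today(date(2018, 12, 31), safety_critical=False))  # False
# ...but should an update that changes emergency behavior ever wait?
print(install_today(date(2018, 12, 31), safety_critical=True))   # True
```

Whether an emergency-handling improvement counts as safety-critical, and who gets to decide, is exactly the kind of classification a jury may one day be asked to second-guess.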

By now you have undoubtedly noticed that this article is peppered with question marks. If you were expecting answers, forget it. No one knows the answers yet. We can all just be happy we are not insurance companies. They have to figure out how to underwrite all of this. But that is the subject of a whole different article. Welcome to the wild new world of autonomous vehicle law. I hope you enjoy the ride.


Jim Jordan is a shareholder in Munsch Hardt’s Dallas, Texas office. He focuses his practice on commercial litigation, trade secrets, warranties, covenants not to compete and complex litigation. In addition to his litigation docket, Jim regularly provides experienced counsel to companies seeking practical business solutions to difficult legal problems, and he has tried cases in federal and state courts and before arbitration panels from coast to coast.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.