Autonomous Vehicles and Products Liability

Anastasia Kontaxi

Autonomous vehicles have moved rapidly from a theoretical future technology to a present reality, and autonomous cars are poised to break into the mainstream. Their societal value, which includes, among other things, increased safety, improved mobility for persons with reduced mobility, and environmental efficiency, encourages their broad production and use. The Department of Transportation recently examined the legal questions raised by autonomous vehicles at an Autonomous Vehicles Summit Event on March 1, 2018, at which Transportation Secretary Elaine Chao raised the possibility that updated federal guidance would be released by the end of this summer.

As their numbers grow, the question of how autonomous vehicles should be treated under the law becomes harder to ignore. This article reviews the current products liability framework and its application to autonomous vehicles.

The Inadequacy of the Current Legislative Framework

One prerequisite for applying the current products liability scheme to autonomous vehicles is that they qualify as “products.” That characterization is not legally indisputable, however, because autonomous vehicles can make independent decisions with little or no human intervention. Liability resulting from automation is not unknown in the current legal regime; aircraft, for example, have used autopilot technology for years. What distinguishes autonomous cars from other products that use autopilot technology, however, is the level and necessity of human control. Human pilots are expected to exercise continuous oversight and control of the airplane during flight and to monitor the autopilot to determine whether an override is necessary. Furthermore, autopilots on ships and airplanes are usually designed to navigate the vehicle along a predefined course, whereas autonomous cars will be required to react autonomously to diverse and rapidly evolving circumstances on the road. These differences set existing autopilots on other forms of vehicles apart from autonomous cars, rendering the law developed for the former inapplicable.

Even if autonomous vehicles are conclusively established to be products, a variety of legal concerns arise under the current framework. Consider the very plausible example of an accident caused solely by a faulty operating system. Courts could follow one of two approaches: they could determine that the operating system itself can give rise to a products liability action (even though intangible products generally do not), or they could treat the autonomous vehicle as the ultimate product into which the defective operating system is incorporated. In either case, it will be difficult to argue that the defect is a manufacturing defect, because a faulty operating system most likely affects the entire production line rather than one specific vehicle. It is therefore doubtful whether a person injured in an accident caused by such a defect could recover under a manufacturing-defect theory of products liability.

The definitions of “defect” in both the Second and Third Restatements are also likely to present challenges in the context of autonomous vehicles. Under the Second Restatement of Torts, a product is defective when, at the time it leaves the seller’s hands, it is unreasonably dangerous to the ultimate consumer. Under the “consumer expectations test,” a product is unreasonably dangerous if it is dangerous to an extent beyond that which would be contemplated by the ordinary consumer with ordinary knowledge of its characteristics. For an autonomous car alleged to bear a design defect, courts will likely struggle to determine the standard of common knowledge with regard to the highly sophisticated algorithms incorporated into the vehicle’s operating system. An easier alternative, rather than resorting to computer science expertise, would be for courts to adopt a minimum consumer expectation standard built on the principle that “all consumers expect that the product will not malfunction.” Although the simplicity of this solution might seem attractive, it would likely expose producers to excessive liability, with a dramatic impact on innovation and purchase prices.

The “risk-utility balancing test” used by the Third Restatement of Torts will likely pose similar concerns. Under that test, a product is defective when a reasonable alternative design, available at the time of sale or distribution, would have reduced the foreseeable risks of harm at reasonable cost, and the omission of that alternative design rendered the product not reasonably safe. To identify a “reasonable alternative design,” courts will need to compare the vehicle either to existing conventional vehicles or to other autonomous vehicles on the road. A comparison to a conventional vehicle would likely require a combined assessment of that vehicle’s design and the contributing driving behavior of the average or perfect driver. By contrast, a comparison to other operating autonomous vehicles would require courts to gain both an understanding of the embedded algorithm and knowledge of the relevant available technology, which is self-learning and continuously improving, in order to apply the risk-utility balancing test.

As an alternative approach, it has been argued that the adequacy of prior testing could serve as the sole criterion for determining whether a fully functioning operating system contains a design defect. Under this approach, the vehicle is considered reasonably safe if testing demonstrates that the operating system reduces the occurrence of crashes by at least 50% relative to non-autonomous vehicles. However, because that 50% baseline depends on the relative number of autonomous and non-autonomous vehicles on the road, it will be subject to constant change as consumer preferences shift toward autonomous vehicles. The lack of a firm standard for determining what is reasonably safe would thus only add to the existing legal uncertainty.
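To make the instability of this benchmark concrete, the testing criterion can be sketched as a simple comparison of crash rates; the notation is purely illustrative and does not appear in the proposal itself:

\[
\text{reasonably safe} \iff r_{AV} \le \tfrac{1}{2}\, r_{HV},
\]

where \(r_{AV}\) is the crash rate the operating system demonstrates in testing and \(r_{HV}\) is the observed crash rate of non-autonomous vehicles. Because \(r_{HV}\) is measured over whatever human-driven traffic remains on the road, the benchmark on the right-hand side moves whenever the mix of autonomous and non-autonomous vehicles changes.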

Implications of Legal Uncertainty

The legal uncertainty created by the absence of a concrete legislative regime for products liability and autonomous vehicles will have several implications. Until federal or state legislatures or the courts take a stand on the precise treatment of autonomous vehicles under tort law, manufacturers will fear the outcomes of jury verdicts and the size of damages awards in potential products liability lawsuits against them. This fear will either inhibit creativity, innovation, and continuous improvement in autonomous driving technology, or increase the cost of autonomous vehicles as manufacturers try to offset the potential cost of liability. Manufacturers may seek to mitigate this risk by insuring against products liability. Even that scenario, however, will most likely raise the prices of autonomous vehicles, because legal uncertainty will probably be treated as a systemic risk. Insurance companies, unable to assess the risk on the basis of reliable probabilities, are likely to charge high premiums, which will ultimately be passed on to consumers through higher purchase prices.

Similarly, legal uncertainty may affect potential plaintiffs’ decisions to bring a products liability lawsuit in the first place. In the event of a crash, for instance, a large amount of data regarding the circumstances of the accident would be stored digitally. The injured party would therefore have to file a motion to compel discovery in order to gain access to data stored on the autonomous vehicle’s server, cloud service, or GPS unit, which translates into high pretrial investigation expenses. Potential plaintiffs might thus be discouraged from bringing a lawsuit for damages under products liability law if they are unsure which legislative scheme the courts will apply to resolve the legal issues such a claim raises. Only in the case of a clearly faulty operating system or other evident defect would plaintiffs benefit from the electronic storage of data, since concrete evidence against manufacturers could then be sufficient to compel more settlements.

Moving Forward

The wide spectrum of implications flowing from these concerns about the current legislative regime calls for a critical review of the application of products liability law to autonomous vehicles. One possible approach would be for Congress to pass federal legislation preempting state remedies in order to form a comprehensive, uniform legislative framework. In September 2017, for example, the House of Representatives passed H.R. 3388, the SELF DRIVE Act of 2017, paving the way for increased research on autonomous vehicles and possibly signaling a change in the direction of future liability law. Such legislation, however, will likely be challenged by defenders of state courts’ authority over products liability, an area of law traditionally regulated at the state level.

An interesting approach that U.S. legislators could consider is the European Parliament’s Resolution on Civil Law Rules on Robotics. The European Parliament made two sets of recommendations in the 2017 Resolution. First, the Resolution emphasized the importance of creating a compulsory insurance scheme under which producers or owners of robots would cover damages potentially caused by their robots. This scheme could be supplemented by a compensation fund, which would guarantee compensation for damages not covered by insurance and would allow those who would otherwise be held liable to enjoy limited liability if they contribute to the fund. Second, the Resolution raised the possibility of creating a specific legal status for sophisticated autonomous robots, such as a form of electronic personhood, which would allow robots to be held directly responsible for their autonomous decisions. Overall, the European Parliament highlighted the importance of establishing a system that in no way limits liability or impedes the ability to recover damages simply because a non-human actor is involved.

Anastasia Kontaxi is a Quorum Editor and an LL.M. candidate, Class of 2018, at N.Y.U. School of Law.