
Self-Driving Cars and the Trolley Problem: The Possible Future of Accident Liability

Self-driving cars are the leading edge of a new wave of automotive technology. Artificial intelligence, or AI, now governs how these autonomous vehicles navigate the roads. The software inside the car handles decisions that would normally rest with the driver, but AI does not come with a conscience of its own. In the moment before a crash, who should the car save: its own occupants or the people in the other vehicle? This raises a dilemma, known as the Trolley Problem, that even human beings have never been sure how to resolve. If an out-of-control trolley is barreling down the tracks and can be diverted so that it kills either five people or one, who should live and who should die?

How Should Self-Driving Cars Be Programmed to React to Accidents?

Self-driving cars face this issue now, as engineers write the software that determines how a car should react when a head-on collision is imminent. Lacking a conscience, the car will act exactly as its software dictates. Because that software is created and coded by engineers, it falls to the engineers to answer this moral and ethical question. However, some of the AI technology being built into self-driving cars allows the computer to learn from its surroundings and from accumulated data. With time and experience, the system gathers enough data to identify objects and people, analyze situations, and act based on the probability of success and survival. In this way, the car is no longer dependent on the conscience of its engineers but becomes its own agent in ethical matters. Its decisions rest on accumulated data rather than on what has so often proven right: human intuition.
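To make that abstraction concrete, one can imagine the "save the most lives" logic as a simple scoring rule over possible maneuvers. The Python sketch below is purely illustrative: no manufacturer's actual decision code is public, and every class, number, and rule in it is an invented stand-in for the kind of probability-weighted choice described above.

# A purely hypothetical sketch, not any manufacturer's actual software:
# every class, number, and rule below is invented to illustrate a
# probability-weighted, "save the most lives" decision rule.

from dataclasses import dataclass

@dataclass
class Maneuver:
    description: str
    people_at_risk: int          # occupants plus anyone else in the path
    survival_probability: float  # estimate drawn from accumulated data

def expected_fatalities(m: Maneuver) -> float:
    # Expected number of deaths if this maneuver is chosen.
    return m.people_at_risk * (1.0 - m.survival_probability)

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # The "greater good" rule: pick the option with the fewest
    # expected fatalities, regardless of who those people are.
    return min(options, key=expected_fatalities)

# A trolley-style dilemma with invented numbers: swerving endangers
# one pedestrian; braking in lane endangers five people in the other car.
options = [
    Maneuver("swerve onto the shoulder", people_at_risk=1, survival_probability=0.4),
    Maneuver("brake in lane", people_at_risk=5, survival_probability=0.7),
]
print(choose_maneuver(options).description)  # -> "swerve onto the shoulder"

Under these invented numbers the rule chooses to swerve, because 0.6 expected deaths is lower than 1.5. Whether buyers would accept a car that reasons this way, and who answers for its choice, is exactly the question the rest of this article takes up.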

The Difference Between Self-Learning Systems and Pre-Programmed Cars

The difference between a self-learning system and a pre-programmed AI may determine the future of liability in accidents involving self-driving cars. If engineers program the AI to act in whatever way saves the most lives, the engineers could face liability for choosing that path: it may preserve more lives, but with different consequences than the choice a human driver would make in the moment. In other words, the law accounts for the fact that human drivers generally act on instinct at the moment of an accident by providing some legal reprieve, such as eliminating criminal liability altogether or reducing a charge to manslaughter. A self-driving car, by contrast, must have its response to an impending accident predetermined before it ever reaches the road.

Would Drivers Buy Cars that Protect the “Greater Good” Over Their Own Safety?

Drivers will want to know exactly how a self-driving car has been programmed before purchasing it. No one wants a car hardwired to promote “the greater good” over his or her own life; people buy cars that will best protect themselves and their passengers, regardless of how that choice affects those in other vehicles.

For now, the ethics of this software and the path AI engineers will take remain uncertain. The U.S. National Highway Traffic Safety Administration is expected to release regulations for self-driving cars in July 2016 that will set the standards these vehicles must meet, in particular the safety standards they must satisfy before they can be manufactured and sold to consumers.

Charles County, MD Personal Injury Lawyers that Fight for You

If you or a loved one has been injured in an automobile accident, please call the Law Office of Robert R. Castro at (301) 804-2312 for a confidential consultation.
