
Autonomous Vehicles: Morality Meets Machinery


By Larry E. Coben (lcoben@anapolweiss.com)


Motor vehicles in all shapes and sizes are being developed to function autonomously. Autonomous vehicles (AVs) are in the news every day.

Vehicle manufacturers, giants in the computer industry, government agencies and consumer-safety advocates alike have been touting the advent of AVs and their potential to reduce motor vehicle accidents, injuries and deaths. What lingers like a dark storm cloud is the question of how these machines should be regulated to assure the public that they are safe and fool-proof.

Because AVs are programmed to behave and react to circumstances their “creators” foresee, we are only now beginning to ask: how should AVs be programmed to respond to emergent conditions? This question requires programmers, manufacturers, governmental agencies and consumer-safety advocates to define the choices these machines will be required to make when emergencies arise. As just one example, consider how an AV should be programmed to respond when the only way to avoid striking several pedestrians is to swerve into a barrier, sacrificing its own occupant. [Bonnefon, et al., “The social dilemma of autonomous vehicles,” Science, Vol. 352, Issue 6293, pp. 1573-1576 (2016).]

In 1942, science fiction author Isaac Asimov wrote a short story called “Runaround,” in which he postulated that when robotic machines populate the world in the decades to come, they must be programmed with “the Three Laws”:


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov later added a fourth law, which he called the “Zeroth Law” because it takes precedence over the other three:

“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”

Over the years, science fiction has blended with physical science, and a host of robotic devices have been designed, built and deployed, from machines with “human-like” appearance to automated aircraft; now we are on the brink of the release of the robotic machines we call “autonomous vehicles.” The evolution of science’s ability to build robotic devices has prompted ethicists to refine the “four laws” and to remind their creators that these machines are just that: machines. Between 2010 and 2015, a UK government agency known as the Engineering and Physical Sciences Research Council convened several conferences and published a list of more contemporary design rules governing the conduct of robotic devices [https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activites/principlesofrobotics]:

1. Robots are multi-use tools. Robots should not be designed as weapons, except in the interests of national security.
2. Humans, not robots, are responsible agents. Robots must be designed to comply with existing law and fundamental rights and freedoms, including privacy.
3. Robots are products. They should be designed using processes which assure their safety and security.
4. Robots are manufactured artifacts; robot intelligence is artificial.
5. Responsibility for a robot’s actions lies with its designer/manufacturer, and with the user when the robot is designed to allow human intervention. Legal liability may be shared or transferred; for example, both designer and user might share fault where a robot malfunctions during use due to a mixture of design problems and user-control failures.

Developing programs and algorithms so that AVs “know” how to perform in real-world circumstances may turn out to be the most difficult challenge that must be met before these robots are allowed to populate our roads and highways. Moral algorithms for AVs create social dilemmas. [Van Lange, et al., “The psychology of social dilemmas: A review,” Organizational Behavior and Human Decision Processes, 120 (2013), pp. 125-141.] The robot’s designer will be called upon to program the machine in a way that could create conflict between long-term collective interests (reducing the human toll of motor vehicle crashes) and the self-interest of the vehicle’s occupants, whom the machine will be expected to protect when choosing a path or course of action. AV proponents stress the utilitarian value of these machines (e.g., minimizing the number of casualties on the road), while others value what they perceive as the central promise of AVs: protecting the occupants at all costs.

Regulation or industry standardization may provide a solution to this problem, but regulators and the auto industry will be faced with the public’s distrust of having a collective value system imposed upon it. Moral algorithms for AVs will need to tackle a very complex fault tree that presents the machine with wide-ranging choices requiring instantaneous decision-making, which may prompt significant conflicts between the safety of one and the safety of many. For example, suppose your AV is driving you to work on the freeway in four lanes of bumper-to-bumper traffic that is slowing because of gridlock. Your AV begins to stop and then detects a tractor-trailer approaching from the rear at collision speed. What will your AV do? Will it swerve out of your lane of travel, exposing the vehicles ahead of you to this collision? And, in making this avoidance maneuver, how many other vehicles will your AV collide with, causing harm to others? Or suppose you are heading to work and your AV is traveling at 50 mph, with a line of cars and trucks behind and alongside, when a traffic signal 70 feet away turns yellow and then red in 1.5 seconds. At 50 mph, your AV will need to choose almost instantly between performing an emergency stop, triggering a succession of potential crashes behind you, and speeding up through the intersection, risking a collision within it.
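
The arithmetic behind that second scenario shows why the choice is forced. Here is a minimal sketch; the speed, distance and signal timing come from the hypothetical above, while the 0.8 g braking rate is an assumption added for illustration:

```python
# Back-of-the-envelope check of the yellow-light scenario above.
# The 0.8 g braking rate is an assumption for illustration; the speed,
# distance, and signal timing come from the article's hypothetical.

FT_PER_S_PER_MPH = 5280 / 3600     # 1 mph = ~1.467 ft/s
G = 32.2                           # gravitational acceleration, ft/s^2

speed = 50 * FT_PER_S_PER_MPH      # ~73.3 ft/s
distance_to_line = 70.0            # ft to the intersection
yellow_phase = 1.5                 # s until the light turns red
braking = 0.8 * G                  # ~25.8 ft/s^2, a hard assumed stop

time_to_line = distance_to_line / speed            # ~0.95 s
stopping_distance = speed ** 2 / (2 * braking)     # ~104 ft

print(f"Time to reach the stop line: {time_to_line:.2f} s "
      f"(yellow phase lasts {yellow_phase} s)")
print(f"Stopping distance at 0.8 g: {stopping_distance:.0f} ft, "
      f"but only {distance_to_line:.0f} ft are available")
```

Even under this aggressive braking assumption, the vehicle needs roughly 104 feet to stop but has only 70, so any attempt to stop still carries it into the intersection; the algorithm’s only real choice is how fast it will be traveling when it gets there.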

Statistically, will the AV be programmed to stop because the risk of serious injury is lower in a rear-end collision than in a side-impact collision? Or will that depend on the AV sensing what types of vehicles are poised to enter the intersection when the light turns green for cross traffic? Likewise, will the AV be programmed to choose a course of action by first “deciding” whether driving into the intersection is less risky for the AV’s occupants or for the occupants of the other vehicles with which it is on a collision path? Further, is it acceptable for an AV to avoid a motorcycle by swerving into a wall or onto a sidewalk lined with trees, considering that the probability of survival is greater for the passenger of the AV than for the rider of the motorcycle? And finally, should the AV be programmed to weigh, in any type of potential crash, the probabilities of harm to passengers based on age, seated position, restraint usage, and so on?

These simple scenarios feature uncertainty about decision outcomes and therefore require moral algorithms encompassing concepts of expected risk, expected value, and blame assignment. Figuring out how to build ethical autonomous machines is one of the thorniest challenges in artificial intelligence today, and innumerable AI systems are already in use on vehicles.
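
To make the “expected risk” and “expected value” framing concrete, here is a minimal sketch of a purely utilitarian chooser. Every maneuver, probability and harm score below is a hypothetical placeholder, not a figure from any real system:

```python
# A minimal, illustrative sketch of expected-harm minimization.
# All maneuvers, probabilities, and harm scores are hypothetical;
# a production moral algorithm would be vastly more complex.

from dataclasses import dataclass

@dataclass
class Outcome:
    probability: float  # chance this outcome occurs if the maneuver is taken
    harm: float         # harm score, e.g. expected serious injuries

def expected_harm(outcomes: list[Outcome]) -> float:
    """Probability-weighted total harm for one candidate maneuver."""
    return sum(o.probability * o.harm for o in outcomes)

# Candidate responses to the tractor-trailer scenario (invented numbers).
maneuvers = {
    "brake in lane": [Outcome(0.7, 3.0), Outcome(0.3, 0.0)],
    "swerve left":   [Outcome(0.5, 1.0), Outcome(0.5, 2.0)],
    "swerve right":  [Outcome(0.9, 0.5), Outcome(0.1, 4.0)],
}

# A strictly utilitarian algorithm minimizes expected harm overall,
# regardless of whether the harm falls on occupants or on others.
choice = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
for name, outs in maneuvers.items():
    print(f"{name}: expected harm {expected_harm(outs):.2f}")
print(f"Utilitarian choice: {choice}")
```

Note that this sketch is occupant-neutral; a “protect the occupants at all costs” policy would simply weight harm to the AV’s own passengers more heavily, which is precisely the value judgment that industry, regulators and the public have yet to agree upon.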

As we prepare to endow millions of vehicles with autonomy, serious consideration of algorithmic morality has never been more important. While there is no easy way to design algorithms that reconcile big-picture societal interests with personal self-interest, let alone account for different value systems across demographics, it is essential that the automotive industry, government and consumer-safety advocates seek consensus on as many of these conflict decisions as can be imagined. And regardless of the ultimate conclusions drawn, we cannot forget the basic principle that the creator of the robot remains liable for the choices programmed into the AV.


ABOUT THE AUTHOR

Anapol Weiss

Anapol Weiss is a top-rated national personal injury firm with a reputation for winning big. Our trial attorneys are leaders in medical malpractice, women's health litigation, personal injury, and mass torts cases. As a female majority-owned firm with a deep bench of experienced, determined trial attorneys, we are compassionate with our clients and fierce in the courtroom.