Translated by Miguel Fernández Garrido.

In 1942, in one of his science-fiction stories, Isaac Asimov formulated the three laws that must be "embedded" in a robot's brain and that constitute its ethical code. How do these three laws relate to the autonomous car? This appears to be the biggest issue for the fully fledged implementation of the self-driving automobile.

The main purpose of the Three Laws of Robotics as formulated by Asimov is to ward off the human fear of a rebellion of the machines against their creators, which Asimov dubbed the "Frankenstein complex". The three laws are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What is the main purpose of the autonomous car?


All car manufacturers are betting on autonomous driving technology. In recent years, car designs have been adapted so that vehicles can interact with one another more effectively. You may have noticed how modern cars' rear ends are full of angles: huge exhaust outlets shaped like rectangles or triangles, bodies and bumpers full of edges… The reason is that these elements make the car more visible to obstacle-detection systems, adaptive cruise controls, collision-warning systems and so on.

In this race to bring the first self-driving car to market, most manufacturers are targeting the year 2020. Some companies, such as Volvo, have stated that their goal is that not a single person should die in one of their vehicles by that year.

Little by little, car manufacturers are re-educating us so that we no longer view driving as an enjoyable activity. They overwhelm us with a plethora of distracting gadgets (which seems to contradict one of the most important safety principles, namely that the driver should focus on driving): internet connectivity, ever more complex infotainment systems, even the possibility of checking our e-mail from the car. This way we will not miss driving when the time comes that we are no longer allowed to, for this is the ultimate aim, even if nobody will openly admit it: one of the keys to having cars that drive 100 per cent autonomously is to remove the human factor from the safety equation, and thus to rid the roads of all vehicles driven by humans, since only humans react unpredictably. Only if the car knows with 100 % accuracy what other cars are going to do will it be able to drive autonomously in a safe way.

The technology for cars to drive autonomously already exists, but there are two main problems standing in the way of the self-driving car's implementation:

  • The existing infrastructure must be adapted to autonomous driving. For cars to drive without human input, they must be able to communicate with traffic lights and road signs, and they must be aware of traffic conditions: whether there is ice on the road or roadwork ahead, for example (a rough sketch of such an exchange follows this list). To achieve this, the infrastructure must be adapted and then maintained by the competent authorities.
  • The current legislation must be changed to allow for autonomous driving. Today the law forbids and penalizes any distraction on the driver's part, so a police officer would be within their rights to give us a ticket if we were caught using a self-driving car in autonomous mode. And if that happened, who would be responsible for the offence, the car or the "driver"? Nonsensical as it sounds, it is a matter that needs to be legally defined.
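
As a rough illustration of the kind of car-to-infrastructure exchange the first point refers to, here is a minimal sketch in Python. The message fields and the `can_stay_autonomous` check are hypothetical assumptions made for this example; they do not follow any real V2X standard or manufacturer's software.

    # Hypothetical sketch of an infrastructure-to-vehicle status message.
    # Field names and the decision rule are invented for illustration;
    # they do not represent any real V2X standard.
    from dataclasses import dataclass


    @dataclass
    class RoadsideMessage:
        signal_state: str        # e.g. "red", "amber", "green"
        ice_on_road: bool        # surface condition reported by road sensors
        roadwork_ahead: bool     # temporary works flagged by the authority
        speed_limit_kmh: int     # current (possibly temporary) speed limit


    def can_stay_autonomous(msg: RoadsideMessage) -> bool:
        """Decide whether the car may keep driving itself or must hand
        control back to the human, based on the roadside data."""
        if msg.ice_on_road or msg.roadwork_ahead:
            return False  # conditions the software is not certified to handle
        return True


    msg = RoadsideMessage(signal_state="green", ice_on_road=True,
                          roadwork_ahead=False, speed_limit_kmh=80)
    print("Autonomous mode allowed:", can_stay_autonomous(msg))

Whatever the exact format ends up being, someone has to install, standardize and maintain the roadside equipment that produces data like this, which is why infrastructure sits alongside legislation as a prerequisite.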
The main purpose of autonomous driving is to reduce deaths in traffic accidents.


Aside from these two problems, several authors, including Jean-François Bonnefon, Azim Shariff and Iyad Rahwan, have pointed out another important hurdle that the development and eventual implementation of the self-driving car must overcome: its ethical and moral dimension. Would you drive a car that might decide to sacrifice its driver in order to save other lives? Or would you drive a car that, in order to save its driver, might kill other people?

Unfortunately, the creator of the Three Laws of Robotics died on 6 April 1992, long before he could take part in this fascinating man-versus-machine moral debate.

In actuality, the autonomous car was born with the objective of reducing the number of deaths in car crashes; incidentally, Albert Camus described dying in a car accident as "the dumbest of all deaths" only months before he was killed in one himself. In truth, fatalities are inevitable, no matter how insistently car makers promise that the autonomous car will prevent people from being killed in accidents. After all, the unsinkable Titanic did sink, and airplanes fall from the sky despite their autopilots and scrupulous safety protocols.

So how should the self-driving car act in a situation where a crash is unavoidable? The answer is quite simple: it should choose the option that causes the least harm. And herein lies the moral issue, and its resemblance to Asimov's Three Laws of Robotics.

Self-driving cars: to kill or to commit suicide

Autonomous driving raises legal and moral problems.


Different scenarios are always brought forward to illustrate the ethical paradox that arises when developing the software that must govern the autonomous car, but they all come down to the same thing: in a situation where an accident is unavoidable, should the self-driving car choose to kill someone else for the safety of its passengers, or should it sacrifice its occupants in order to save more lives?

This is one of the proposed examples:

A lone occupant is cruising in his self-driving car and, for whatever reason, a group of people is standing in the vehicle's path. The car cannot prevent an accident and is confronted with two alternatives: it can either run over the group of pedestrians, thus protecting the life of its passenger, or it can swerve off the road, potentially "sacrificing" its passenger in order to save more lives. If we follow the rule of minimum harm, the car should sacrifice its occupant to minimize the loss of human life. But would you buy, or ride in, a car that you know may regard you as dispensable? And if the manufacturer decided to give priority to the life of the car's occupants, would you buy, or ride in, a car that could choose to kill other people in order to save your life?… A scary conundrum indeed!
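
To see the dilemma in software terms, here is a minimal sketch of a naive "minimum harm" rule, assuming a purely hypothetical setup: the `Maneuver` class, the casualty estimates and the `choose_maneuver` function are invented for illustration and do not represent any manufacturer's actual logic.

    # Hypothetical sketch of a naive "minimum harm" decision rule.
    # The maneuvers and casualty estimates are invented for this example.
    from dataclasses import dataclass


    @dataclass
    class Maneuver:
        name: str
        expected_casualties: int  # estimated lives lost if this maneuver is taken
        occupant_at_risk: bool    # does it endanger the car's own passenger?


    def choose_maneuver(options: list[Maneuver]) -> Maneuver:
        """Pick the option with the fewest expected casualties, regardless
        of whether the victims are pedestrians or the occupant."""
        return min(options, key=lambda m: m.expected_casualties)


    options = [
        Maneuver("stay on the road and hit the group of pedestrians",
                 expected_casualties=4, occupant_at_risk=False),
        Maneuver("swerve off the road",
                 expected_casualties=1, occupant_at_risk=True),
    ]
    decision = choose_maneuver(options)
    print("Minimum-harm rule chooses:", decision.name)
    # With these numbers the rule "sacrifices" the occupant; prioritizing
    # the occupant instead would mean running over the pedestrians.

Flip the priority so that the occupant always comes first and the same two options yield the opposite decision; the entire moral controversy described above lives in that single selection criterion.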

