Even today, non-military AI can kill you. Here's how

Pretty soon, autonomous cars (AI and driverless vehicles) will become part of the urban landscape everywhere in the world. At the moment, AI-driven autonomous cars are used as delivery vehicles and taxis, partly under test conditions and partly on real roads. In the US and China the process is well advanced, while in the UK, Sweden, Finland, Spain and elsewhere it is in final testing. Infrastructure and reliable communications are being built to support this AI capability.

Imagine a huge truck speeding along close behind your autonomous car. Caught up in a game, three children suddenly jump out in front of the car. Your car must react automatically, according to the algorithm of the decision model embedded in it. Yes, but here the decision must be a moral one.

We tend to link the danger of AI to weapons, future AGI or criminal misuse. But did you know that peaceful AI is already quite openly programmed to kill its user? This is a topic that is avoided. Not in the distant future but today, if a situation arises that fits the model, the AI not only can but necessarily will kill you!

🔻Where is the devil, and is there anything to fear?

If the car stops abruptly to save the children from being run over, you will be crushed by the truck. If your car's AI decides to protect you, it will have to continue on its way and run over the children. The curious thing is that different manufacturers today (and this will remain true in the future) embed different response models in their AI, each following a preferred ethical paradigm for resolving this moral dilemma. The topic is avoided, and you won't find information about it in the glovebox or in the list of extras. You won't know which ethical model is built into your car. Will you even think to ask what awaits you?

Manufacturers hide how they resolve moral dilemmas and how the car will automatically act in a critical situation. The goals are to match the cultural expectations of the region where the cars are sold, to avoid judicial investigations into who is responsible for the automotive AI's "decisions", and to avoid the reputational damage of a divisive public debate.

If there is no good maneuver available, do you know that the autonomous car may decide to sacrifice you? There are extreme situations in which the programmed responses have a moral justification, and what's more, those responses differ between autonomous car models!

To be as helpful as possible for your future choices, we will outline what to expect from current and future autonomous car manufacturers, according to the cultural patterns of their regions.

If your car comes from Germany, France or the EU in general, expect it to protect you, because the ethical framework is Kantian and no discriminatory priority is given to anyone's life. Let's put it bluntly: the car will not sacrifice your life to save several people, even children. Interestingly, in some strongly religious Islamic societies there is also a preference for avoiding deliberate choices to take lives, which means the AI would prefer not to actively harm anyone, whether another person or you. So the AI will choose nothing, and the consequences will be whatever chance brings.

There are also attempts in the EU to delegate more of the choice to the human, especially in moral dilemmas, but this is not a workable solution in sudden situations, especially when the human is in a passive position and not monitoring the road.

The solution most widely preferred in other parts of the world is different: save more people and reduce suffering for more people. This is the utilitarian ethic of the "lesser evil", or the benefit of the many, and under it the car's AI will prefer to react so that the risk of death falls on you rather than on the several children in front of your car.
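To make the contrast concrete, here is a minimal, purely hypothetical sketch of how such ethical policies could be encoded in a collision-response module. Every name, structure and number below is our own illustrative assumption, not any manufacturer's actual algorithm.

```python
# Hypothetical sketch: choosing a maneuver under two ethical policies.
# All names and casualty estimates are illustrative assumptions.

def choose_maneuver(maneuvers, policy):
    """Pick a maneuver according to a configured ethical policy.

    Each maneuver is a dict with estimated fatalities for the occupant
    and for pedestrians, plus a flag for whether it is an active
    intervention (swerve/brake) or simply staying the course.
    """
    if policy == "utilitarian":
        # Lesser evil: minimize the total expected number of deaths,
        # regardless of who dies.
        return min(maneuvers,
                   key=lambda m: m["occupant_deaths"] + m["pedestrian_deaths"])
    if policy == "no_active_harm":
        # Deontological leaning: never actively choose to harm anyone.
        harmless = [m for m in maneuvers
                    if m["occupant_deaths"] + m["pedestrian_deaths"] == 0]
        if harmless:
            return harmless[0]
        # If every option harms someone, stay the course and let
        # circumstance decide.
        return next(m for m in maneuvers if not m["active"])
    raise ValueError(f"unknown policy: {policy}")

# The scenario from the article: braking saves the children but exposes
# the occupant to the truck behind; continuing protects the occupant.
options = [
    {"name": "brake_hard", "active": True,  "occupant_deaths": 1, "pedestrian_deaths": 0},
    {"name": "continue",   "active": False, "occupant_deaths": 0, "pedestrian_deaths": 3},
]

print(choose_maneuver(options, "utilitarian")["name"])     # brake_hard (1 death < 3)
print(choose_maneuver(options, "no_active_harm")["name"])  # continue (no active harm)
```

The point of the sketch is that the very same situation yields opposite outcomes depending on a policy setting the buyer never sees.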

If your car is American, expect more variability, because each state imposes its own regulations and preferences. Clear moral commitments are also avoided, and responsibility for the algorithm is shifted to the manufacturer. Still, here too there is a tendency to train the AI of autonomous cars on people's statistical moral preferences. That means a preference for saving the greater number of people. Let's say it plainly: here the AI will kill you to save several people.

In Western cultures, preference is also given to the young, to children and to women at the expense of adults, so if there is fine-tuning in the algorithm, you can expect surprises of that nature as well.

If your car is Japanese, Chinese or from East Asia more broadly, you may also be surprised by its decisions. Given the collectivist culture, expect your car, if it has no option that prevents a collision, to choose not to act. That will protect you, even if it means more casualties. But there is also a good chance that a particular manufacturer leans more heavily on traditional local ethics. For example, it may choose to sacrifice you even if there is only one person in front of the car, when that person is elderly or an authority figure. In the Far East, however, regulation is still evolving, and local rules may yet align with global changes.

💬 Would you be afraid to ride in an autonomous vehicle?
💬 What road regulations for autonomous cars would you recommend?
💬 Do you think it’s fair to not know what moral framework the manufacturer has put into the car you’ve entrusted your life to?
💬 What kind of morality do you think AI should have?


Authors: Ivan Sapundzhiev and Ralitsa Atanasova