Making autonomous driving a reality is not only a significant technical challenge; it also raises numerous ethical and legal concerns. Who makes the decision in delicate situations where other road users may be endangered or harmed? Who bears responsibility and is held accountable for the consequences? Before autonomous vehicles can be deployed on public roads everywhere, more than the technical implementation must therefore be mastered. Ethical issues must also be considered when developing the algorithms: in the event of an impending accident, the software must be able to deal with unpredictable situations and make the necessary decisions.
For the first time, researchers at the Technical University of Munich (TUM) have developed an ethical algorithm that does not follow an either/or maxim, but instead distributes risk fairly. Around 2,000 scenarios involving critical situations were reportedly tested on various types of roads in Europe, as well as in the United States and China. The Chairs of Automotive Engineering and Business Ethics at TUM’s “Institute for Ethics in Artificial Intelligence” (IEAI) collaborated on the research, which was published in the journal “Nature Machine Intelligence”. The ethical framework guiding the software’s risk assessment was defined in recommendations published in 2020 by an expert group commissioned by the EU Commission, which include principles such as the protection of vulnerable road users and the sharing of risk across all modes of transportation.
Maximilian Geißlinger, a scientist at the Chair of Automotive Engineering, explains the approach: “In the past, when faced with an ethical dilemma, autonomous vehicles were forced to choose between two options. However, road traffic cannot be divided into black and white, but must also take into account the numerous shades of grey. In a fraction of a second, our algorithm weighs various risks and makes an ethical decision from thousands of possible behaviours.” Franziska Poszler, a researcher at the Chair of Business Ethics, adds: “Until now, traditional ethical thought patterns were frequently used to justify autonomous vehicle decisions. This eventually led to a dead end, because in many traffic situations, the only option was to violate an ethical principle. We, on the other hand, approach traffic from the standpoint of risk ethics. This allows us to work with probabilities and weigh options in a more differentiated manner.”
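To make the risk-ethics idea concrete, the following is a minimal illustrative sketch of how an expected-risk comparison over candidate manoeuvres could look. It is not the TUM implementation: all names, probabilities, harm values and the extra weighting for vulnerable road users are invented assumptions for demonstration only.

```python
# Illustrative sketch of risk-ethical trajectory selection.
# NOT the TUM algorithm; all numbers and the weighting scheme are assumptions.

from dataclasses import dataclass

@dataclass
class Hazard:
    road_user: str          # e.g. "pedestrian", "cyclist", "car"
    collision_prob: float   # estimated probability of a collision
    expected_harm: float    # estimated severity if the collision occurs (0..1)

def trajectory_risk(hazards, vulnerable_weight=2.0):
    """Sum of probability-weighted harm; vulnerable road users count extra."""
    risk = 0.0
    for h in hazards:
        weight = vulnerable_weight if h.road_user in ("pedestrian", "cyclist") else 1.0
        risk += weight * h.collision_prob * h.expected_harm
    return risk

def choose_trajectory(candidates):
    """Pick the candidate manoeuvre with the lowest aggregated risk."""
    return min(candidates, key=lambda c: trajectory_risk(c["hazards"]))

# Three hypothetical manoeuvres in a critical situation:
candidates = [
    {"name": "brake_hard",  "hazards": [Hazard("car", 0.30, 0.2)]},
    {"name": "swerve_left", "hazards": [Hazard("cyclist", 0.05, 0.8)]},
    {"name": "keep_lane",   "hazards": [Hazard("pedestrian", 0.10, 0.9)]},
]

best = choose_trajectory(candidates)
print(best["name"])  # prints "brake_hard" for these invented numbers
```

Rather than forbidding any manoeuvre outright, each option is scored as a continuous quantity (probability times expected harm, with vulnerable road users weighted more heavily), so the software can compare many "shades of grey" instead of facing a binary dilemma.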
The researchers emphasise that even risk-averse algorithms cannot guarantee accident-free road traffic, and that further distinctions (such as cultural differences) would have to be considered in future ethical decisions. The software, which has so far been validated in simulation, will now be tested on the road with the EDGAR research vehicle. Its source code is available as open source.