
Autonomous driving: Algorithm allocates risks fairly
Researchers at the Technical University of Munich (TUM) have developed software for autonomous driving that distributes the risk of an accident fairly. It is considered the first algorithm that takes into account all 20 ethics recommendations of the EU Commission and thus makes more differentiated decisions than previous algorithms.
Before autonomous vehicles can take to the roads nationwide, more than the technical implementation must be in place. Ethical issues also play an important role in the development of the algorithms. For example, the software must be able to deal with unpredictable situations and make the necessary decisions in the event of an impending accident. Scenarios circulating in public suggest that, faced with an unavoidable collision, the algorithms of such a vehicle would always prefer the “softer” crash, even at the expense of vulnerable road users such as pedestrians. Such scenarios – whether grounded in reality or not – hamper the acceptance of autonomous driving.
As part of the ANDRE (AutoNomous DRiving Ethics) project, researchers at TUM have now for the first time developed an ethical algorithm that does not follow an either/or maxim but instead distributes risk fairly. Around 2,000 scenarios with critical situations were tested, spread across different road types and regions including Europe, the USA and China. The research, published in the journal Nature Machine Intelligence, was carried out in cooperation with the chairs of Automotive Engineering and Business Ethics at the Institute for Ethics in Artificial Intelligence (IEAI) at TUM.
More options in critical situations
The ethical framework that guides the software’s risk assessment was defined in a 2020 recommendation by an expert group commissioned by the EU Commission. It includes principles such as the protection of weaker road users and the sharing of risk across all road traffic. To translate these rules into mathematical calculations, the research team classified vehicles and people moving in traffic according to the risk they pose to others and their differing willingness to take risks. A lorry can cause severe damage to other road users, while in many scenarios it will suffer only minor damage itself; the opposite is true for a bicycle. The algorithm was therefore instructed not to exceed a maximum acceptable risk in the various traffic situations. In addition, the team calculated variables that reflect the responsibility of road users, for example their obligation to obey traffic rules.
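As a rough illustration of how such rules might be expressed in code, the sketch below combines the three ideas mentioned above: risk as collision probability times expected harm, a maximum acceptable risk per class of road user, and a simple responsibility term. All names, numbers and thresholds are illustrative assumptions and are not taken from the published TUM code.

```python
# Illustrative sketch only -- names and thresholds are assumptions, not the
# actual code of the TUM EthicalTrajectoryPlanning repository.
from dataclasses import dataclass

@dataclass
class RoadUser:
    kind: str                      # e.g. "pedestrian", "cyclist", "car", "truck"
    collision_probability: float   # estimated for a candidate manoeuvre
    expected_harm: float           # expected severity if a collision occurs (0..1)
    follows_rules: bool            # crude stand-in for "responsibility"

# Hypothetical per-class caps: vulnerable road users get stricter limits.
MAX_ACCEPTABLE_RISK = {
    "pedestrian": 0.01,
    "cyclist": 0.02,
    "car": 0.05,
    "truck": 0.08,
}

def risk(user: RoadUser) -> float:
    """Risk as probability times expected harm, slightly reduced when the
    other party knowingly violates traffic rules (responsibility term)."""
    responsibility_factor = 1.0 if user.follows_rules else 0.8
    return user.collision_probability * user.expected_harm * responsibility_factor

def manoeuvre_is_acceptable(affected: list[RoadUser]) -> bool:
    """A manoeuvre is admissible only if no road user's risk exceeds the
    maximum acceptable risk for its class."""
    return all(risk(u) <= MAX_ACCEPTABLE_RISK[u.kind] for u in affected)
```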
Previous approaches handled critical situations on the road with only a small number of possible manoeuvres; in case of doubt, the vehicle simply stopped if possible. The risk assessment now built into the code creates more degrees of freedom with less risk for everyone.
An example illustrates the approach: an autonomous vehicle wants to overtake a bicycle while a truck approaches in the oncoming lane. All available data about the surroundings and the individual road users is consulted. Can the bicycle be overtaken without swerving into the oncoming lane while still keeping enough distance from it? What risk does each manoeuvre pose to which vehicle, and what risk do those vehicles pose to the autonomous vehicle itself? In case of doubt, the vehicle running the new software will always wait until the risk is acceptable for everyone. Aggressive manoeuvres are avoided, but the vehicle also does not freeze and brake abruptly; instead, a balancing process weighs up multiple options.
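Building on the sketch above, the selection step for the overtaking example could look roughly like this: each candidate manoeuvre is checked against the acceptable-risk limits, and if none passes, the vehicle simply waits behind the bicycle. The function and manoeuvre names are hypothetical and only illustrate the balancing process described here.

```python
# Continuation of the sketch above (hypothetical helper names): pick among
# candidate manoeuvres, falling back to waiting if none is acceptable.
def choose_manoeuvre(candidates: dict[str, list[RoadUser]]) -> str:
    """candidates maps a manoeuvre name (e.g. "overtake_now", "wait_behind")
    to the road users it would put at risk, including the ego vehicle."""
    acceptable = {
        name: sum(risk(u) for u in affected)   # aggregate risk as a simple cost
        for name, affected in candidates.items()
        if manoeuvre_is_acceptable(affected)
    }
    if not acceptable:
        return "wait_behind"                   # no aggressive manoeuvre, no panic braking
    return min(acceptable, key=acceptable.get)

# Toy usage with invented numbers for the bicycle/truck scenario:
candidates = {
    "overtake_now": [
        RoadUser("cyclist", 0.03, 0.6, True),
        RoadUser("truck", 0.02, 0.3, True),
    ],
    "wait_behind": [RoadUser("cyclist", 0.001, 0.1, True)],
}
print(choose_manoeuvre(candidates))  # -> "wait_behind" in this toy setting
```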
Traditional ethical thought patterns led to a dead end
“So far, traditional ethical thought patterns have often been used to justify the decisions of autonomous vehicles. This ultimately led to a dead end, because in many traffic situations there was nothing left to do but violate an ethical principle,” says Franziska Poszler, researcher at the Chair of Business Ethics at TUM. “We, in contrast, look at traffic with risk ethics as a central starting point. This allows us to work with probabilities and weigh things up in a more differentiated way.”
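A toy comparison, with invented numbers, makes the contrast Poszler describes concrete: a binary rule treats two imperfect manoeuvres as equally forbidden, whereas probability-weighted risk can rank them. The two-rule framing and the figures below are purely illustrative and do not come from the paper.

```python
# Toy comparison (invented numbers): a binary rule set cannot rank two
# imperfect manoeuvres, while probability-weighted risk can.
options = {
    # (collision probability, expected harm) for the single affected pedestrian
    "swerve_slightly": (0.02, 0.3),
    "brake_hard":      (0.10, 0.2),
}

# Rule-based view: both options "endanger a pedestrian", so both violate the
# same principle and the rules give no further guidance.
rule_based_verdict = {name: "violates principle" for name in options}

# Risk-ethics view: weigh probability against harm and rank the options.
risk_of = {name: p * harm for name, (p, harm) in options.items()}
best = min(risk_of, key=risk_of.get)   # -> "swerve_slightly" (0.006 < 0.020)

print(rule_based_verdict)
print(risk_of, "->", best)
```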
The researchers emphasise that although algorithms acting according to risk ethics can weigh up every possible situation and make a decision based on ethical principles, they still cannot guarantee accident-free road traffic. In future, further differentiations, such as cultural differences, would also have to be taken into account in ethical decisions.
The algorithm developed at TUM has so far been validated in simulations. In future, the software will also be tested on the road with the EDGAR research vehicle. The code, which incorporates the findings of the research work, is available as open source.
More information: https://www.ieai.sot.tum.de/research/andre-autonomous-driving-ethics/
Code: https://github.com/TUMFTM/EthicalTrajectoryPlanning
Related articles:
Fail-operational architecture for automated driving covers L3, L4
Continental drives Automated Valet Parking with AI startup investment
Software for autonomous vehicles looks into the future
Research project to strengthen confidence in automated systems
“ISO 26262 is not perfectly designed for AI”
Ford develops technology to predict potential accidents
