Transforming uncertain control systems
Researchers in Japan and Canada have developed a formal technique to automatically transform traditional control software into models that satisfy safety requirements even when there is uncertainty in sensing the state of the environment.
This is vital for autonomous systems such as drones, driverless cars and satellites.
The team at the National Institute of Informatics (NII) in Japan and the University of Waterloo in Canada developed an automated method that also generates formulas representing the degree of uncertainty that the controller software can tolerate.
The controller software in autonomous systems determines its actions in response to the state of the environment, which is perceived with sensors. In reality, however, the systems may perceive values that differ from the true values. This can cause safety violations if the controller software acts on the incorrectly perceived values. For example, if the sensor of an autonomous vehicle can misperceive the positions of other cars by up to 1 m, the vehicle should operate with a safety margin of at least 1 m.
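The margin idea can be sketched in a few lines. This is a hypothetical illustration, not code from the paper: the names, the 10 m safe gap, and the 1 m error bound are all assumptions chosen for the example.

```python
# Illustrative sketch of a controller that compensates for a known
# worst-case sensor error. Values are assumptions, not from the paper.

SAFE_GAP_M = 10.0      # true gap the vehicle must always maintain
SENSOR_ERROR_M = 1.0   # worst-case misperception of other cars' positions

def should_brake(perceived_gap_m: float) -> bool:
    """Brake when the worst-case *true* gap could violate the safe gap.

    The true gap may be up to SENSOR_ERROR_M smaller than the perceived
    gap, so the controller must keep a margin at least that large.
    """
    worst_case_true_gap = perceived_gap_m - SENSOR_ERROR_M
    return worst_case_true_gap < SAFE_GAP_M
```

A controller written against the perceived value alone would only brake below 10 m perceived; this version brakes below 11 m perceived, absorbing the 1 m error.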
Developing an automated transformation process is complicated as developers need to verify that safety is guaranteed for every possible behaviour while taking into consideration differences between true values and perceived values. It is also difficult to estimate the degree of the uncertainty. The perceptual uncertainty depends on the situation in which the controller system is deployed, such as whether or not it is foggy.
“Controller systems are crucial because most software systems’ usefulness is due to their interactions with external environments,” said Tsutomu Kobayashi at NII. “This research aims to help developers apply formal modeling approaches to realistic software by addressing the inevitable problem of controller systems regarding the gap between the perception and reality. Thus, developers can focus on the essence of controller behaviour. We believe that the method is valuable and can be extended in various ways. We will continue working towards the systematic and easy application of rigorous mathematical methods to ensure a safe environment for everyone.”
The method consists of two steps.
The first step, uncertainty injection, transforms the input model of an uncertainty-unaware controller into an intermediate model. The intermediate model makes the gap between perceived and true values explicit, but its behaviour is unchanged from the input model, so it is still unsafe under uncertainty.
The second step, robustification, converts the intermediate model into one that is uncertainty-aware and safe: the behaviour of the controller is updated so that it operates safely even under uncertainty.
Constraints on the behaviour of the generated controller are specified so that it is guaranteed to operate safely even under uncertainty. Whether such constraints are satisfiable, however, depends on the magnitude of the uncertainty. As an extreme example, if a sensor misperceives the positions of other cars by up to 100 km, then guaranteeing safety is impossible in many situations.
The proposed approach also generates the limit as a formula of uncertainty. Developers can choose appropriate sensors from a given catalogue by using the formula as the criterion. In addition, the formula can be used to analyze uncertainty, such as how the uncertainty will be propagated if the controller is combined with other components.
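As a concrete (and entirely hypothetical) illustration of using such a limit as a selection criterion, suppose the generated formula reduces to "sensor error must not exceed the slack between the enforced gap and the required gap". A developer could then filter a sensor catalogue against that bound; the catalogue entries and numbers below are invented for the example:

```python
# Hypothetical use of a derived tolerance bound to pick sensors.
# The bound and the catalogue are illustrative assumptions.

def max_tolerable_uncertainty(required_gap_m: float,
                              enforced_gap_m: float) -> float:
    # Toy stand-in for a generated formula: safety holds as long as the
    # sensor error does not exceed the slack the controller enforces.
    return enforced_gap_m - required_gap_m

# Assumed catalogue: sensor name -> worst-case position error in metres.
CATALOGUE = {"lidar_a": 0.2, "radar_b": 0.8, "camera_c": 2.5}

def usable_sensors(required_gap_m: float, enforced_gap_m: float):
    limit = max_tolerable_uncertainty(required_gap_m, enforced_gap_m)
    return [name for name, err in CATALOGUE.items() if err <= limit]
```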
The method makes the construction of uncertainty-aware, safe controllers more systematic and less labour-intensive. Moreover, it enables developers to flexibly analyze various situations of perceptual uncertainty. In this way, the method improves the safety of the real-world environments in which controller systems are ubiquitously deployed.
In addition to autonomous vehicles, the proposed method can also be applied to various other controller systems that interact with external environments.
The team is working to generalize the method so that it can deal with a broader range of uncertainty. For instance, it aims to tackle misclassification problems, such as an object classifier module assigning an object to the wrong class.