Understanding Artificial Intelligence


Symbolic Image; Source: DLR, Institute for AI Safety and Security


Artificial intelligence and autonomous systems have made groundbreaking progress in recent years and are advancing further and further into safety-critical applications such as traffic systems, spaceflight, and robotics. However, certification methods for these systems are still lacking, as many deep learning methods remain a poorly understood black box. In the SKIAS project (Safe AI for Autonomous Systems), the Institute for AI Safety and Security is working on fundamental methods to better understand artificial intelligence systems and make them safer. Starting in November 2021, we will work with the DLR institutes of Data Science, Flight Systems Engineering, Optical Sensor Systems, Terrestrial Infrastructure Protection, Robotics and Mechatronics, and Transport Systems Engineering to make the predictions of AI models more reliable, even in critical environments, and to establish quality metrics. Our Institute is primarily involved in the conceptual design of sensor-based AI.

Safe AI through traceability of its results

By the end of the project, the goal of reliable use of artificial intelligence should be achieved not by dissecting the internal processes of, for example, deep neural networks, but by focusing on the robustness of the result. In this way, we enable the definition of approval criteria for a wide range of autonomous systems and intelligent components that are independent of the method used and, ultimately, their use in safety-relevant infrastructures.

Explainable and reliable machine learning within the SKIAS concept consists of three sub-aspects:

  • Prediction
  • Sensor-based processing
  • Quality estimation

The project team focuses on image analysis tasks, in particular object and environment recognition and situation estimation. Based on uncertainties in the data and in the model, the reliable-prediction team is developing an approach for risk assessment and decision support for operators in extreme situations outside the algorithm's operational design domain.
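As an illustration (not taken from the project itself), a simple decision-support rule of this kind can be sketched with ensemble predictive entropy: when several model passes disagree about an input, the prediction is treated as uncertain and the decision is deferred to the operator. The function names and the threshold below are assumptions made for this sketch.

```python
import numpy as np

def predictive_uncertainty(probs):
    """Mean predictive entropy over an ensemble of softmax outputs.

    probs: array of shape (n_members, n_classes) with class probabilities
    from an ensemble (or repeated stochastic passes) for a single input.
    High entropy signals low confidence, e.g. for inputs outside the
    operational design domain.
    """
    mean = np.asarray(probs).mean(axis=0)
    entropy = -np.sum(mean * np.log(mean + 1e-12))
    return mean, float(entropy)

def risk_flag(probs, threshold):
    """Toy decision-support rule: defer to the operator when the
    ensemble disagrees (high predictive entropy)."""
    _, h = predictive_uncertainty(probs)
    return "defer_to_operator" if h > threshold else "accept"

# An ensemble that agrees is accepted; one that disagrees is deferred.
probs_agree = [[0.98, 0.02], [0.97, 0.03], [0.99, 0.01]]
probs_disagree = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
print(risk_flag(probs_agree, 0.3))      # low entropy -> "accept"
print(risk_flag(probs_disagree, 0.3))   # high entropy -> "defer_to_operator"
```

In practice the threshold would be calibrated on validation data; the point here is only that the operator-facing decision depends on the spread of the ensemble, not on inspecting the network internals.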

Contribution of the Institute for AI Safety and Security

The Institute for AI Safety and Security contributes to the project in the field of sensor-related AI. By implementing AI methods directly at the sensor, data that is available on site but not normally read out or stored can be used for initial pre-processing steps and for assessing the reliability of training or operational data. This could be sensor location information, carrier-system information, or raw data directly from the sensor components. This is combined with the incorporation of physical knowledge about the sensor architecture into hybrid AI models. The sensor will then not only generate direct estimates of the expected data quality, for example in bad weather conditions, but also output a broader set of information, for example automatic depth estimation or optical flow for optical sensors.
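To make the idea concrete, a minimal sketch of such an on-sensor quality estimate might combine a data-driven cue from the raw frame with physical knowledge about the sensor. Everything below is a hypothetical illustration, not the project's method: the contrast statistic, the exposure-time penalty, and all names are assumptions.

```python
import numpy as np

def frame_quality(raw_frame, exposure_ms, max_exposure_ms=50.0):
    """Toy on-sensor quality score in [0, 1] (illustrative only).

    Combines a data-driven cue (image contrast, low in fog or heavy rain)
    with physical knowledge about the sensor: long exposure times suggest
    low light and likely motion blur. Downstream models could weight or
    reject low-quality frames based on such a score.
    """
    contrast = float(np.asarray(raw_frame).std()) / 128.0  # normalised spread
    exposure_penalty = min(exposure_ms / max_exposure_ms, 1.0)
    score = np.clip(contrast * (1.0 - 0.5 * exposure_penalty), 0.0, 1.0)
    return float(score)

# A completely flat frame carries no usable contrast; a textured frame
# scores lower at long exposures than at short ones.
rng = np.random.default_rng(0)
textured = rng.integers(0, 256, size=(8, 8)).astype(float)
flat = np.full((8, 8), 120.0)
print(frame_quality(flat, 10.0))     # 0.0
print(frame_quality(textured, 10.0) > frame_quality(textured, 50.0))  # True
```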

Finally, the third part of the project involves evaluating AI models in the context of defined situations and deriving metrics for the quality of the models produced. This will be done using environments for which a detailed description is already available as a comparison - for example, in the context of a digital twin - and synthetic data that allow direct control of the situation and reduce the use of real data.
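One way such a comparison could be scored, sketched here purely as an illustration (the metrics and names are assumptions, not the project's definitions), is to compute error statistics between corresponding points of the AI-built model and the reference description:

```python
import numpy as np

def model_quality(reconstructed, reference, tolerance=0.1):
    """Toy quality metrics for comparing an AI-built environment model
    against a reference such as a digital twin (illustrative only).

    reconstructed, reference: arrays of corresponding point values,
    e.g. terrain heights in metres.
    Returns (rmse, fraction of points within the given tolerance).
    """
    err = np.asarray(reconstructed, dtype=float) - np.asarray(reference, dtype=float)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    within = float(np.mean(np.abs(err) <= tolerance))
    return rmse, within

# Two of three points agree with the reference to within 0.1 m.
rmse, within = model_quality([1.0, 2.05, 3.2], [1.0, 2.0, 3.0], tolerance=0.1)
print(round(rmse, 3), round(within, 3))
```

Because a digital twin or synthetic scene gives full control over the ground truth, metrics like these can be evaluated exactly, without collecting real data for every situation.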

The SKIAS concepts will be demonstrated on two multi-sensor platforms in operational environments. Together with the DLR Institute of Optical Sensor Systems, we are implementing AI on the sensor carrier of a track condition measurement system - mounted on a shunting locomotive - to jointly optimize the interaction of all sensors. Our partners are also working with an unmanned aerial vehicle (UAV) as a test platform. The UAV's sensor data is used to create an environmental model within a previously surveyed environment. By comparing it with an existing digital twin, the quality of the model can be evaluated and metrics derived for this comparison.


Dr. Arne Raulf

Head of Department
German Aerospace Center (DLR)
Institute for AI Safety and Security
Algorithms & Hybrid Solutions
Rathausallee 12, 53757 Sankt Augustin

Karoline Bischof

Consultant Public Relations
German Aerospace Center (DLR)
Institute for AI Safety and Security
Business Development and Strategy
Rathausallee 12, 53757 Sankt Augustin