Safe AI Engineering
Trustworthy autonomy: new approaches to AI safety and security in vehicle traffic.
The development of automated driving functions presents complex challenges for researchers and industry experts alike. While advances in artificial intelligence (AI) are opening up new possibilities, the demands on safety and reliability are rising as well. This is precisely where the joint project Safe AI Engineering comes in: the ambitious research project aims to fundamentally improve the safety of AI-based mobility solutions.
The project brings together 24 partners from science and industry, including renowned car manufacturers, suppliers, research institutions, universities, and technology companies. Its goal is to develop a comprehensive AI engineering methodology that safeguards AI functions for automated driving across their entire life cycle, from initial conception and development to testing, monitoring, and continuous improvement.
One key area of innovation is the standardisation of training and validation data. Project participants are working to structure databases so that they can be reused independently of the system for which they were originally created, enabling more sustainable and cost-effective use of data. In parallel, the researchers are developing robust and explainable validation methods that make the performance of AI systems transparent and comprehensible.
Practical testing is carried out using an AI-based perception function for pedestrian recognition as an example. The methods developed are systematically tested and validated in three successive use cases, ranging from static scenarios to complex, dynamic traffic situations. The scientists draw on international safety standards such as ISO 26262, SOTIF (ISO 21448), and ISO/PAS 8800, and integrate them into a comprehensive overall approach.
Safe AI Engineering, funded by the German Federal Ministry for Economic Affairs and Energy (BMWE), is pursuing the overarching goal of establishing a new standard for safeguarding AI-based vehicle functions. The project's outcomes will not only benefit developers and manufacturers but also support authorities in approval processes. Ultimately, the aim is to increase acceptance of and trust in automated mobility solutions.
The project is being coordinated by Dr Ulrich Wurstbauer from DXC Luxoft and Professor Dr Frank Köster from the DLR Institute for AI Safety and Security. Work at the institute is based in the AI Engineering department, which is headed by Dr Sven Hallerbach. Spanning three years with a total budget of €34.5 million (including €17.2 million in funding), Safe AI Engineering will make a decisive contribution to the future of safe and intelligent mobility.
Contribution Institute for AI Safety and Security
As overall coordinator alongside Luxoft, the Institute for AI Safety and Security plays a central role in orchestrating the project. It is also actively involved in further developing the basic AI engineering methodology, defining the use cases, and implementing them across the entire life cycle of an AI component.
The institute also focuses on standardising data and evaluating the associated information quality, particularly with regard to domain adaptation and transformation.
Furthermore, the institute is developing methods for AI monitoring, in particular for monitoring explainable uncertainty estimates and for conducting risk assessments based on uncertainty quantification.
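To illustrate the general idea behind uncertainty-based monitoring, the following is a minimal sketch, not the project's actual method: it flags a classifier's prediction as uncertain when the entropy of its class probabilities exceeds a threshold. The function names and the threshold value are illustrative assumptions.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a class-probability vector."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return float(-np.sum(p * np.log(p)))

def monitor(probs, threshold=0.5):
    """Illustrative runtime monitor: flag predictions whose entropy
    exceeds the (assumed) threshold as candidates for risk assessment."""
    h = predictive_entropy(np.asarray(probs, dtype=float))
    return {"entropy": h, "flagged": h > threshold}

# A confident detection concentrates probability on one class (low entropy);
# an ambiguous one spreads it across classes (high entropy) and is flagged.
confident = monitor([0.97, 0.02, 0.01])
ambiguous = monitor([0.40, 0.35, 0.25])
```

In a real system, such a flag would feed into a downstream risk assessment rather than act as a hard decision; the entropy here stands in for the richer, explainable uncertainty estimates the project investigates.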
Participating DLR institutes and facilities