Safety-critical AI applications


Transport, energy, aerospace: what do AI systems in these areas need in order to be protected against cyber-attacks while continuing to function reliably?

What innovative methods are we using in the development and certification of AI-based solutions for systemically important areas?

Artificial intelligence is becoming increasingly prevalent in areas of society that are essential to its functioning, including traffic control, energy grids and aerospace. The Institute for AI Safety and Security is developing innovative methods and technologies to ensure the security and reliability of AI-based solutions in these demanding fields of application.

Our research focuses on developing attack-resistant, robust AI systems by securing the entire execution environment, from hardware to software. We also develop advanced methods and algorithms for secure, standard-compliant AI; cybersecurity in open data and service ecosystems; and automation in mobility and logistics.
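To give a flavour of what robustness against attacks means in practice, the following is a minimal sketch, not the institute's actual tooling: it trains a toy logistic-regression component on synthetic data, perturbs the inputs with the fast gradient sign method (FGSM), and compares accuracy before and after. The model, data and perturbation budget are illustrative assumptions.

```python
# Illustrative robustness check: clean vs. FGSM-perturbed accuracy.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy two-class data standing in for sensor readings (assumed, not real data).
X = rng.normal(size=(200, 2)) + np.array([[1.5, 1.5]])
X[:100] -= 3.0
y = np.concatenate([np.zeros(100), np.ones(100)])

# Train a simple logistic-regression "AI component" by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def accuracy(X_eval):
    return np.mean((sigmoid(X_eval @ w + b) > 0.5) == y)

# FGSM: push each input in the direction that increases its loss most.
p = sigmoid(X @ w + b)
grad_x = np.outer(p - y, w)   # gradient of the cross-entropy loss w.r.t. each input
eps = 0.5                     # perturbation budget (assumed)
X_adv = X + eps * np.sign(grad_x)

print(f"clean accuracy:       {accuracy(X):.2f}")
print(f"adversarial accuracy: {accuracy(X_adv):.2f}")
```

The gap between the two numbers is one simple way to quantify how much an attacker with a bounded perturbation budget could degrade such a component.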

Secure, decentralised data infrastructures and innovative encryption technologies form a key part of our work. Privacy-Enhancing Technologies (PETs) facilitate the secure exchange of data between industry and research. Post-quantum cryptography (PQC) will protect data against future attacks by quantum computers. Our work is also complemented by research into failure- and attack-resistant AI hardware, the resilience of distributed systems, and the secure use of innovative computing approaches, such as quantum computing and neuromorphic AI.
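As one concrete example of a privacy-enhancing technique, the sketch below applies the Laplace mechanism of differential privacy: an organisation releases a noisy aggregate statistic instead of raw records. The dataset, value bounds and privacy budget epsilon are illustrative assumptions, not a description of the institute's specific PET stack.

```python
# Minimal differential-privacy sketch: release a noisy mean instead of raw data.
import numpy as np

rng = np.random.default_rng(42)

def private_mean(values, lower, upper, epsilon):
    """Differentially private mean of `values`, assumed to lie in [lower, upper]."""
    clipped = np.clip(values, lower, upper)
    true_mean = clipped.mean()
    # Sensitivity of the mean when one record changes within [lower, upper].
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

# Hypothetical energy-consumption readings (kWh) shared as one noisy statistic.
readings = rng.uniform(10.0, 40.0, size=1_000)
print(f"true mean:    {readings.mean():.2f} kWh")
print(f"private mean: {private_mean(readings, 10.0, 40.0, epsilon=0.5):.2f} kWh")
```

Smaller values of epsilon add more noise and give stronger privacy guarantees, at the cost of less precise shared statistics.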

The institute benefits from DLR's decades of experience in developing, approving and operating safety-critical systems. In this way, we help to ensure the operational and security resilience of AI in transport, energy and aerospace, paving the way for a secure digital future.
