This DLR institute carries out interdisciplinary research and development activities in the field of artificial intelligence (AI) safety and security within the wider context of DLR’s research into aeronautics, space, energy, transport, security and the cross-sectoral field of digitalisation. The focus is on maintaining the security of safety-critical systems during operations and in the event of attack. The Institute balances cross-sectoral fundamental research into artificial intelligence with application-oriented, practical development. It also addresses the ethical, legal and societal aspects associated with the use of AI. The DLR Institute for AI Safety and Security is thus making a significant contribution towards the implementation of the national AI strategy and has set itself the task of putting AI developments at the service of the economy and society.
Departments and research topics
- AI Engineering
The integration of AI methods and technologies into the engineering models used to build safety-critical systems. This topic covers reliable assessment and testing procedures for AI systems, as well as the ways humans interact with these systems.
- Algorithms & Hybrid Solutions
The systematic, continuous development of AI methods and technologies, either to make them accessible to security-related verification methods or to synthesise them in a verifiable manner.
- Security-critical Data
The development of secure data infrastructure. The focus here is on combining methods and technologies to enable both the protection and use of sensitive data in distributed data infrastructures.
- Execution Environments
The development of principles for the software and hardware of AI execution environments that can be easily applied to specific implementations with different performance requirements while always ensuring a high degree of reliability. Here, the opportunities and impacts of innovative computing approaches for AI, such as quantum computing, are also taken into consideration.
- Business Development & Networks
The ongoing analysis of the AI research field, including the associated road-mapping activities. This encompasses interdisciplinary discussion of the ethical, legal and social aspects (ELSA) of AI, as well as scientific contributions to inter- and transdisciplinary research and development projects.
The cross-sectoral Institute for AI Safety and Security carries out fundamental AI research both internally and with external partners. The primary focus is on ensuring the reliability of data and the ability to verify the security of AI processes and methods. The work is carried out in close collaboration with DLR’s primary research areas of aeronautics, space, transport and energy, and with the cross-sectoral areas of security and digitalisation, and strengthens these core research areas as a result. This fundamental, results-driven work then enables the individual DLR institutes to build secure AI applications.