Safe and standard-compliant AI

How can AI innovation be made compatible with safety certification, and how can AI be used in safety-critical environments?
What conditions need to be in place for international AI safety standards and norms to be implemented in practice?
Trustworthy AI systems must possess several key properties, including explainability, interpretability, fairness and transparency. Developing such systems requires both technical excellence and compliance with international standards and norms. At the DLR Institute for AI Safety and Security, our research and development focuses on AI-related methods, processes, algorithms, infrastructures, technologies, execution environments and system environments, with the aim of creating a solid foundation for realising standard-compliant AI systems.
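To make one of the properties named above concrete, the following minimal sketch illustrates how "fairness" can be quantified in practice, here via the demographic parity difference between two groups. This is an illustrative example only, not a DLR method; the data, group labels and metric choice are all hypothetical assumptions.

```python
import numpy as np

def demographic_parity_difference(predictions: np.ndarray,
                                  group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    predictions: binary model outputs (0/1), shape (n,)
    group:       binary protected attribute (0/1), shape (n,)
    A value near 0 suggests both groups receive positive predictions
    at a similar rate under this (deliberately simple) criterion.
    """
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical example: predictions for eight cases across two groups.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
grp   = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(preds, grp))  # 0.5 -> large disparity
```

Demographic parity is only one of many fairness criteria; which metric is appropriate, and what threshold counts as acceptable, depends on the application and on the applicable standards.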
This research is crucial because it helps to ensure the operational safety of AI-based applications and systems, and to protect them against attacks, in safety-relevant areas such as transport, energy and aerospace. As AI-based systems will play an increasingly important role in everyday life, ensuring security in all its forms is essential. In addition, the heightened requirements of areas with high security needs, such as critical infrastructure, demand special consideration.
To meet these diverse requirements, we take an interdisciplinary approach that combines technical innovation with regulatory compliance, creating the scientific basis for the responsible use of AI technologies in safety-critical sectors.