Artificial intelligence (AI) is a key technology. As digitalization advances, we will encounter it more and more often and in an increasing number of areas. These include situations with high safety and security requirements, or high demands regarding the protection of the underlying data, which may, for example, contain personal information or intellectual property (IP). Accordingly, AI safety and security – operating flawlessly and reliably (safe) as well as withstanding external attacks (secure) – will become increasingly relevant.
Only a comprehensive examination of the safety and security of AI-based technologies and applications can enable their practical use in the interest of the economy and society.
At the Institute for AI Safety & Security, we research and develop AI-related methods, processes, algorithms, technologies, and execution and system environments. In doing so, we also consider the possibilities of innovative computing methods, such as those emerging in the field of quantum computing. The organisation, storage and exchange of security-relevant and sensitive data in distributed data infrastructures are an important basis and accordingly belong to our field of work. Data sovereignty, transparency and the trusted, distributed use of data are important fields of research and action in this context.
With our work, we contribute to ensuring operational safety and attack security for AI-based solutions in demanding application domains. These include aerospace, transport, energy and other areas of application that are important for Germany as a business and science location, such as Industry 4.0 and other digitalization-driven fields of innovation based on distributed data and service ecosystems or a future platform and data economy.
Our research fields at a glance:
- AI Engineering - Increasing the connectivity, resilience and cost-effectiveness of AI; improving the ability of humans and AI to interact.
- Algorithms & Hybrid Solutions - Research into and further development of safe and secure AI, taking into account society, human beings and technology.
- Safety-critical Data Infrastructure - Requirements for secure data exchange and robust AI applications in industry and research.
- Execution Environments & Innovative Computing Methods - Attack-proof and robust AI by securing the entire execution system.
- Business Development and Strategy - Interdisciplinary foundations for trustworthy AI, social acceptance and technology transfer.
DLR offers optimum framework conditions
DLR's research and development activities – shaped by aeronautics, space travel, energy and transport – and the associated research infrastructures provide an optimal framework for inter- and transdisciplinary research. For example, our researchers can access extensive data sets from DLR's large-scale research facilities in order to analyse them using AI or to utilise them in the construction of AI systems.
The work of the AI Institute is cross-sectoral and therefore actively contributes towards a unique constellation at DLR. Broadly diversified expertise from various scientific disciplines and in-depth application knowledge are merged with fundamental AI research. In this way, we can advance the field of AI in a subject-specific manner while, at the same time, maintaining a high level of practical and application relevance. Our research and development activities are complemented by AI-related networking activities on the topics of ethics, law and society.
At the Institute for AI Safety and Security, we want to enable reliable AI through fundamental research and new methods, and thereby overcome barriers to the success of new technologies. Specifically, we aim to make new contributions towards the responsible introduction of AI in practical applications with high security requirements.