November 7, 2023

AI Safety Summit 2023

Prof. Dr. Frank Köster, Founding Director of the Institute for AI Safety and Security
  • DLR research looks not only at individual aspects, but at the entire life cycle of AI.
  • Automated driving, automated flying and robotics require advanced AI systems.
  • Focal points: Digitalisation, safety
  • "Safe AI is important because we will find AI-based systems in all areas of life in the future. This also includes areas in which high safety requirements must be met," says Prof Frank Köster, founding director of the Institute of AI Safety and Security at the German Aerospace Centre (DLR). Last week, representatives from almost 30 countries and technology companies met in England to discuss the safety of artificial intelligence (AI). In a joint declaration, they pledged to work together, among other things. The establishment of an "AI Safety Institute" was also announced.

    The requirements for AI systems are also an issue at DLR, where the Institute for AI Safety and Security was officially opened about a year ago. Artificial intelligence, a key technology, is hardly conceivable without safety and security considerations: "This is particularly evident in advanced assistance and automation functions - such as automated driving and flying, as well as in robotics. But a high level of security is also essential in areas where sensitive personal or company data is processed. Furthermore, it is fundamentally important that we always utilise AI in such a way that clear added value is created for the benefit of society and risks can be appropriately mitigated," adds Frank Köster.

    Considering the entire life cycle of AI

    "The biggest challenge in AI safety and security I see at the moment is that we don't just generate safety and security in a single component of a system. We have to consider the entire life cycle of an AI and how it is embedded," says Frank Köster. This includes data management for training and validation as well as the operational data supply. The result is a comprehensive view of risks and their potential negative effects. At the same time, the researchers identify different approaches to minimising risks. "At DLR in particular, we are very well positioned for this, as we are developing all of our knowledge on the development of safety-critical solutions at the Institute for AI Safety and Security. We use the term safety here in the sense of safety and security." This means that both operational safety and protection against external attacks (security) are taken into account.

    For example, the DLR Institute for AI Safety and Security is currently working on ensuring that assistance and automation systems for ground-based transport and aviation applications correctly recognise their surroundings. This is achieved by safeguarding AI modules for object recognition and object classification - an essential step for an AI to be able to plan appropriate behaviour. The scientists also ensure that the methods are compatible with established standards in order to facilitate the transfer to industry.
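
    As a simplified illustration of what safeguarding a perception module can mean in practice - a minimal sketch, not DLR's actual method - a runtime monitor can reject classifier outputs that are low in confidence or physically implausible before they reach behaviour planning. All names and thresholds below (Detection, PLAUSIBLE_HEIGHT_M, min_confidence) are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class Detection:
            label: str          # predicted object class, e.g. "pedestrian"
            confidence: float   # model confidence in [0, 1]
            height_m: float     # estimated physical height of the object

        # Hypothetical plausibility limits per class; a real system would derive
        # these from validated domain knowledge and applicable standards.
        PLAUSIBLE_HEIGHT_M = {
            "pedestrian": (0.5, 2.5),
            "car": (1.0, 2.5),
            "truck": (2.0, 4.5),
        }

        def safeguard(detection: Detection, min_confidence: float = 0.9) -> bool:
            """Accept a detection only if it passes confidence and plausibility checks.

            Rejected detections would be handed to a fallback strategy (for
            example, a conservative manoeuvre) rather than silently used.
            """
            if detection.confidence < min_confidence:
                return False
            low, high = PLAUSIBLE_HEIGHT_M.get(detection.label, (0.0, float("inf")))
            return low <= detection.height_m <= high

        # A confident, physically plausible pedestrian detection passes the check.
        assert safeguard(Detection("pedestrian", confidence=0.97, height_m=1.8))

    The point of such a wrapper is architectural: the planner never consumes raw model output, only output that has survived an independent, auditable check.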

    In the GAIA-X 4 Future Mobility project group and in Catena-X, the institute is focusing on decentralised data and service ecosystems. In the future, these will be an important operational environment for various AI-based applications, including the condition-based maintenance of vehicles and machines. Research focuses on data provenance as well as the protection of personal and sensitive data. The institute also addresses the ethical, legal and social aspects of digitalisation, and of AI in particular.
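
    To make the idea of data provenance more concrete - again a minimal, hypothetical sketch rather than the mechanisms actually specified in GAIA-X or Catena-X - each data record can carry tamper-evident metadata that names its source and links it to the processing step it was derived from:

        import hashlib
        import json
        import time

        def provenance_record(payload: dict, source: str, parent_hash: str = "") -> dict:
            """Attach provenance metadata to a data record.

            Each record stores its source and a hash chained to its
            predecessor, so the origin and processing history of the
            data can be audited later.
            """
            body = {
                "payload": payload,
                "source": source,          # who produced the data
                "timestamp": time.time(),  # when it was produced
                "parent": parent_hash,     # link to the previous step
            }
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            return {**body, "hash": digest}

        # Example: a raw sensor reading, then a maintenance indicator derived from it.
        raw = provenance_record({"vibration_rms": 0.42}, source="sensor-07")
        derived = provenance_record(
            {"wear_level": "elevated"}, source="cbm-model-v1", parent_hash=raw["hash"]
        )

    Chaining hashes in this way does not protect the data itself, but it makes any later tampering with the recorded processing history detectable.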

    AI risks and their social impact

    "The declaration of the AI Safety Summit points in an important direction, as it links AI risks with their potential social impact. Understanding the risks is of great value, as this is the only way to make rational decisions on the use or regulation of AI," says Frank Köster. "The statement also shows that approaches to managing AI safety and security can vary depending on national circumstances and legal frameworks. This is very important, because we are still at the beginning of the use of AI. With different approaches, societies can learn more quickly how to deal with AI - if findings are shared with others from the outset."