April 15, 2024

Safety of AI Training Data

Visit by members of NRW State Parliament Julia Höller and Tim Achtermeyer to the Institute for AI Safety and Security in Sankt Augustin.

During their visit to the Institute for AI Safety and Security in Sankt Augustin, MPs Julia Höller and Tim Achtermeyer discussed safety-critical data infrastructures and the important role that artificial intelligence plays in them.
In addition to giving an insight into the institute's work and structure, we were able to bring our research to life through live demonstrations.

In data poisoning attacks, attackers deliberately inject manipulated data into an AI system's training set in order to compromise the predictions of the resulting model.
Our research into data poisoning, using the example of images hidden inside images used for AI training, highlights why this is a safety issue: if, for example, a right-of-way sign is hidden in an image of a STOP sign in the training data, the trained model may misinterpret the sign, with potentially fatal consequences for autonomous vehicles. We are developing methods to validate and clean such images so that the training data remains trustworthy. The method we have developed is already in use at the German Aerospace Center.
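To make the attack concrete, the following is a minimal sketch of the basic mechanics, not the institute's actual method: a hypothetical `poison_image` function blends a faint hidden pattern into a clean training image, and a hypothetical `residual_energy` heuristic flags images that carry unusual high-frequency content after a simple blur. The function names, the blending factor `alpha`, and the detection heuristic are all illustrative assumptions.

```python
import numpy as np

def poison_image(clean, hidden, alpha=0.05):
    """Blend a faint 'hidden' image into a clean training image.

    A small blending factor keeps the manipulation nearly invisible
    to a human labeller while still influencing what a model learns.
    (Illustrative sketch; real attacks are far more sophisticated.)
    """
    poisoned = (1.0 - alpha) * clean + alpha * hidden
    return np.clip(poisoned, 0.0, 1.0)

def residual_energy(image, kernel=3):
    """Crude validation heuristic (an assumption, not DLR's method):
    measure the energy left after subtracting a box-blurred copy.
    Structured hidden patterns tend to add high-frequency residue."""
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    h, w = image.shape
    blurred = np.zeros_like(image)
    for dy in range(kernel):          # accumulate the kernel window
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel
    return float(np.mean((image - blurred) ** 2))

# Smooth "clean" image vs. the same image with a hidden checkerboard.
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
hidden = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
poisoned = poison_image(clean, hidden)
```

A validation pipeline along these lines would compare `residual_energy(poisoned)` against the statistics of known-clean data and quarantine outliers before they ever reach training.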

The training of our AI-equipped telepresence robots in our "living lab" is primarily aimed at gaining insights into their learning behaviour, insights that can later be scaled up and applied at a higher level of maturity. The focus here is on the safety and reliability of AI applications and on how adaptations and optimisations can be rolled out more quickly in the operational environment.

Many thanks to Tim Achtermeyer and Julia Höller for their visit, their interest in our topics and the good discussions we had. We will continue working on solutions to keep our data and AI systems safe.