Robustness of AI Systems (Rob AI)

The 'Robustness of AI Systems' (Rob AI) project is a collaboration between Albstadt-Sigmaringen University of Applied Sciences and the DLR Institute for AI Safety and Security. The project researches the security of AI systems, investigating possible attacks and ways to prevent them.
Contribution Institute for AI Safety and Security
The project aims to protect AI systems used in sensitive areas, such as facial recognition and autonomous driving, from cyberattacks. Specifically, it investigates how attackers can steal or manipulate training data or AI models in order to compromise these systems. The project team is also developing methods to ward off such attacks and enhance system security. The effectiveness of these protective measures will then be tested in distributed AI applications and adapted where necessary. A demonstrator will be used to visualise the attacks and countermeasures. The goal is to develop new security concepts that enable the construction of trustworthy and reliable AI systems.
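To make the kind of attack described above concrete, the sketch below shows an evasion attack on a deliberately simple linear classifier: a tiny, targeted perturbation of the input flips the model's prediction. This is an illustrative toy example, not the project's actual models or methods; the weights and the input are made up for the demonstration.

```python
import math

# Toy linear classifier: predicts class 1 if w·x + b > 0, else class 0.
# Weights and bias are arbitrary values chosen for illustration.
w = [2.0, -1.0]
b = 0.5

def score(x):
    return w[0] * x[0] + w[1] * x[1] + b

def predict(x):
    return 1 if score(x) > 0 else 0

# A benign input, confidently classified as class 1.
x = [1.0, 0.5]

# Evasion attack: step against the gradient of the score (direction -w),
# just far enough to cross the decision boundary.
norm_w = math.hypot(w[0], w[1])
margin = score(x) / norm_w          # distance of x to the decision boundary
eps = margin + 1e-6                 # slightly more than that distance
x_adv = [x[0] - eps * w[0] / norm_w,
         x[1] - eps * w[1] / norm_w]

print(predict(x), predict(x_adv))   # the small perturbation flips the label
```

The same principle scales to deep networks (e.g. gradient-based attacks on image classifiers), where the perturbation can be imperceptible to humans while still changing the prediction, which is precisely why such systems need the defences the project is developing.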
Institutes and facilities involved (DLR & external)
- Institute for AI Safety and Security
- Albstadt-Sigmaringen University of Applied Sciences