Cybersecurity and Resilience for AI

How can AI systems be developed to recognise cyberattacks and other disruptions while ensuring the most stable operation possible?

How can we demonstrate the security of AI systems, and how can we adapt design processes to incorporate security from the outset?

The rapid development of artificial intelligence (AI) brings a host of new opportunities for society. However, it also creates new cybersecurity challenges. At the DLR Institute of AI Safety and Security, we are developing innovative approaches to make AI systems resistant to cyberattacks and ensure their resilience in critical application areas.

This work is based on our holistic research approach, which combines the security of the algorithms in use with the resilience of distributed system infrastructures and of their associated development processes.

As part of this approach, we analyse the impact of both intentional and unintentional manipulation on AI systems and then derive appropriate security measures. These measures allow such complex ecosystems to adapt quickly to new and unexpected conditions while retaining as much of their functionality as possible.
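
As one concrete illustration of what intentional manipulation of an AI system can look like, the following minimal sketch uses the well-known Fast Gradient Sign Method (FGSM) to craft an adversarial input against a neural classifier. PyTorch and the names `model`, `x`, `y` and `epsilon` are assumptions chosen for the example; this illustrates the general attack class, not the institute's specific methods.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial example with the Fast Gradient Sign Method.

    model:   a differentiable classifier returning logits
    x, y:    an input batch and its true labels
    epsilon: maximum size of the per-element perturbation
    """
    x = x.clone().detach().requires_grad_(True)
    # The loss the attacker wants to increase.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One step in the sign of the input gradient: a small, often
    # imperceptible change that can flip the model's prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Analysing how strongly such small perturbations degrade a model's output is one way to quantify its robustness and to choose appropriate countermeasures.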

Specifically, we focus on human-machine interaction, on information and processing sovereignty, and on the verifiability of AI algorithms and their execution environments.
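
One simple building block for such verifiability, sketched below under the assumption that a trusted reference digest of the model artifact is published alongside it, is checking a deployed model file against that digest before loading it. The file path, function names and digest value are hypothetical placeholders, not part of any specific DLR tooling.

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical reference value, e.g. published with the model release.
EXPECTED_DIGEST = "0f1e2d..."  # placeholder, not a real digest

def verify_model_artifact(path, expected=EXPECTED_DIGEST):
    """Refuse to use a model file whose content does not match
    the trusted reference digest."""
    if sha256_of_file(path) != expected:
        raise RuntimeError(f"Integrity check failed for {path}")
```

Digest checks of this kind only verify that an artifact is unmodified; establishing trust in the execution environment itself requires further mechanisms.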

Projects on the topic of Cybersecurity and Resilience for AI
