Safe AI Methods

From design to operation: What is important to ensure that AI systems remain secure throughout their entire lifecycle?
How can the robustness and traceability of AI-based decisions in critical infrastructures be guaranteed?
The development of safe, secure and trustworthy artificial intelligence (AI) lies at the heart of our research activities. We develop AI-related methods, processes, algorithms, technologies and system environments to ensure the operational safety and cybersecurity of AI-based solutions in demanding application areas.
Our interdisciplinary approach includes developing robust evaluation and testing methods for AI-based components in safety-critical applications. We integrate our results into decentralised DevOps environments (AI-in-the-loop) to enhance the connectivity, resilience and cost-effectiveness of AI.
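One common building block of such evaluation methods is a perturbation-based robustness check: the AI component is queried on slightly perturbed versions of its test inputs, and the fraction of unchanged decisions is reported. The sketch below illustrates the idea with a hypothetical toy classifier standing in for the component under test; the function names, threshold and noise level are illustrative assumptions, not part of any specific DLR toolchain.

```python
import random

def model(x):
    # Toy stand-in for the AI component under test:
    # label 1 if the feature sum exceeds a threshold, else 0.
    return 1 if sum(x) > 1.0 else 0

def robustness_score(inputs, epsilon=0.01, trials=100, seed=0):
    """Fraction of perturbed inputs whose prediction matches the
    prediction on the unperturbed input (1.0 = fully stable)."""
    rng = random.Random(seed)
    stable = 0
    total = 0
    for x in inputs:
        baseline = model(x)
        for _ in range(trials):
            # Add bounded uniform noise to every feature.
            perturbed = [xi + rng.uniform(-epsilon, epsilon) for xi in x]
            stable += (model(perturbed) == baseline)
            total += 1
    return stable / total

# Example test inputs; the last one lies near the decision boundary.
inputs = [[0.2, 0.3], [0.8, 0.9], [0.5, 0.5]]
print(robustness_score(inputs))
```

A score well below 1.0 flags inputs near the decision boundary, where small disturbances flip the decision; real evaluation pipelines apply the same principle with domain-specific perturbation models.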
We focus on ensuring that AI is safe and standard-compliant, and that cybersecurity is in place for open data and service ecosystems, as well as for automation in mobility and logistics. From aerospace and transport systems to the energy industry, the highest safety standards are essential wherever AI systems make critical decisions.
In our DLR research, we consider not only individual aspects but the entire lifecycle of an AI system. By collaborating with industry and research partners, we lay the foundations for the practical, safe use of AI technologies and contribute to a trustworthy digital transformation.