Artificial intelligence and autonomous systems have made groundbreaking progress in recent years and are advancing ever further into safety-critical applications such as traffic systems, spaceflight and robotics.
Getting everyone on board: developing AI methods and technologies that are demonstrably safe and secure and that embed themselves in distributed data and service ecosystems, while advancing social, human and technological AI research in synergy.
This topic area encompasses research into the comprehensive verifiability of secure AI, in particular its operational safety and its security against attacks. Specifically, we systematically develop artificial-intelligence methods and technologies further so that they either become accessible to security-related detection methods or can be synthesised in a verifiable manner. This addresses the formal verification of the correctness of AI algorithms as well as the predictability and explainability of their components: how does the algorithm arrive at this solution, and why does it propose it? In securing strong AI, these questions take on particular complexity, and their answers particular significance. Our goal is to get everyone on board: the AI community, its supporters and its critics.
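The question "why does the algorithm propose this solution?" can be made concrete with additive feature attribution: for a linear decision model, each input's contribution to the score can be read off directly. The following is a minimal, hypothetical sketch; the feature names and weights are illustrative assumptions, not a DLR model.

```python
# Hypothetical sketch of explainability via additive attribution:
# for a linear scoring model, each feature's contribution to the final
# score is simply weight * value, so the decision is fully inspectable.
# All names and numbers below are illustrative assumptions.

def explain_linear_decision(weights, features):
    """Return per-feature contributions and the total decision score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return contributions, score

# Illustrative inputs for an obstacle-avoidance-style decision
weights = {"distance_to_obstacle": -0.8, "speed": -0.5, "visibility": 0.3}
features = {"distance_to_obstacle": 2.0, "speed": 1.0, "visibility": 1.0}

contributions, score = explain_linear_decision(weights, features)
```

For non-linear models this exact decomposition no longer holds, which is precisely why explainability of sub-symbolic components is a research question rather than a bookkeeping exercise.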
Hybrid solutions are characterised by the fact that they not only draw conclusions from big data but also incorporate human observations and experience: knowledge from data and human expertise complement one another. Moreover, hybrid solutions do not only combine humans and AI; they also merge sub-symbolic (difficult or impossible to interpret) and symbolic (readily interpretable) AI methods and technologies.
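One common way such a hybrid can be structured is to let a sub-symbolic component produce a score and have an interpretable symbolic rule layer, encoding human expertise, decide what to do with it. The sketch below assumes this architecture; the functions and thresholds are invented for illustration.

```python
# Minimal sketch of a hybrid sub-symbolic + symbolic pipeline.
# The structure, names and thresholds are illustrative assumptions.

def subsymbolic_confidence(sensor_value):
    """Stand-in for a learned (hard-to-interpret) model:
    maps a raw reading to a confidence in [0, 1]."""
    return max(0.0, min(1.0, sensor_value / 100.0))

def symbolic_rules(confidence, human_override):
    """Interpretable rule layer encoding expert knowledge."""
    if human_override:            # a human observation always takes priority
        return "manual_review"
    if confidence >= 0.9:         # rule: only act autonomously on high confidence
        return "accept"
    return "defer_to_operator"    # otherwise, keep the human in the loop

decision = symbolic_rules(subsymbolic_confidence(95.0), human_override=False)
```

The rule layer is trivially auditable, which is the point: the uninterpretable part is fenced in by a part whose behaviour can be read and verified.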
“Safety and security by design” is a defining concept in our development of algorithms: operational safety and security against attacks are treated as essential quality attributes from the very beginning and throughout the entire process. This is especially important because we at DLR primarily conduct research for ambitious application classes, meaning the AI must withstand particularly safety-critical requirements such as those commonplace in aeronautics, spaceflight, energy and transport.
Concrete examples of our work include projects in air traffic control and in the control of aircraft components. The aim is to embed AI as a reliable and robust component in the respective process chain while aligning human and technological development.