The increasing global demand for automation leads to ever larger code bases and new applications of artificial intelligence (AI). More and more frequently, domain-specific skills have to be supplemented with software engineering and data analysis. This trend of "citizen development" is enabled by easy-to-use tools such as low-code/no-code platforms and business intelligence suites. However, a side effect is a growing number of software vulnerabilities, exacerbated by modularity and code reuse. The attack surface of applications grows, and new risks emerge, e.g. when AI replaces traditional rule-based automation solutions.
The Secure Software Engineering group monitors these developments with regard to safety and security and considers AI a source of risks, but also of opportunities. Our goals are to manage risks intelligently and to benefit from opportunities safely and securely - in space travel, there is no margin for errors or attacks.
AI offers many ways to improve the security of software engineering processes. To this end, we research reliable vulnerability detection in source code, code clone detection, automatic type inference and code quality assessment. When software engineering is regarded as merely a means to an end, as is common in scientific settings at the DLR, security considerations take a back seat. Our automated solutions can improve security and safety without demanding much of developers' time.

Beyond software engineering, AI can also improve safety. It cannot become distracted, is more adaptive than rule-based systems and does not make careless mistakes. We leverage these properties in developing AI applications that safeguard the robustness and reliability of systems through monitoring and anomaly detection.

However, operating AI systems also raises novel security and safety considerations, which we address in our research. These include adversarial examples: inputs that are deliberately manipulated to force certain predictions, yet whose manipulation is imperceptible to humans. We further address privacy and data protection with privacy-preserving learning algorithms and investigate the overall information security of AI systems.
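To make the notion of adversarial examples concrete, the following minimal sketch applies the fast gradient sign method (FGSM) to a toy logistic classifier. The weights, input and epsilon here are hypothetical stand-ins for a trained model and real data, chosen only to illustrate how a small, per-feature-bounded perturbation can shift a prediction; it is not the method used in our research.

```python
import numpy as np

# Toy linear classifier standing in for a trained model (hypothetical weights).
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # model weights (assumed fixed, as after training)
b = 0.1                  # model bias

def predict_prob(x):
    """Probability that input x belongs to class 1 (logistic model)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input that the model assigns to class 1.
x = rng.normal(size=8)
if predict_prob(x) < 0.5:
    x = -x  # flip the sample so the clean prediction is class 1

# FGSM: step in the direction that increases the loss for the true label,
# bounded per feature by a small epsilon. For a logistic model with true
# label y = 1, the gradient of the cross-entropy loss w.r.t. x is (p - 1) * w.
epsilon = 0.3
grad = (predict_prob(x) - 1.0) * w
x_adv = x + epsilon * np.sign(grad)

print(f"clean prediction:       {predict_prob(x):.3f}")
print(f"adversarial prediction: {predict_prob(x_adv):.3f}")
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.3f}")
```

Although each feature changes by at most epsilon, the perturbation accumulates across features and systematically lowers the predicted probability. In high-dimensional domains such as images, this is what allows an attacker to alter a model's decision while the change remains imperceptible to humans.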