The department Systems Theory and Design (TD) researches methodologies, methods, and tools of systems engineering for the requirements analysis of automated and autonomous systems. These new systems also require that their functional properties and limitations be reconciled with demands on quality and efficiency, including integrity, accountability, and certifiability. In this context, concepts for trustworthiness and self-reflection are formalized. Based on these requirements, architecture patterns are designed during software development that anchor integrity, self-reflection, and certifiability in system architectures at different levels of abstraction. The architecture patterns are supplemented by analysis and optimization methods for the resulting architectures, which form the basis for verification and early validation. Comprehensive research in this area will extend already established approaches, including testing methods for virtual integration, trusted-by-construction techniques, self-X and fail-operational mechanisms, and modeling frameworks for human-machine cooperation. This work will be advanced towards system configuration and variant handling. For the verification phase, verification and validation methods and tools for automated and autonomous systems will be developed, addressing simulation-based, statistical methods as well as formal verification methods. These methods will form the basis for the virtual and physical certification of new systems in the respective testbeds, which will also be developed by this department.
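The simulation-based, statistical verification methods mentioned above can be illustrated with a minimal statistical model checking sketch. The toy random-walk "system", the safety bound, and all function names below are illustrative assumptions, not the department's actual tooling:

```python
import math
import random

def simulate_once(rng):
    """Toy stand-in for one system run: a random walk that must stay
    below a safety bound. Returns True if the property holds."""
    position = 0.0
    for _ in range(50):
        position += rng.gauss(0.0, 1.0)
        if position > 12.0:  # safety bound violated in this trace
            return False
    return True

def statistical_check(n_runs, delta=0.01, seed=0):
    """Monte Carlo estimate of P(property holds), with a Hoeffding bound:
    the true probability lies within +/- eps of the estimate with
    probability at least 1 - delta."""
    rng = random.Random(seed)
    successes = sum(simulate_once(rng) for _ in range(n_runs))
    estimate = successes / n_runs
    eps = math.sqrt(math.log(2.0 / delta) / (2.0 * n_runs))
    return estimate, eps

estimate, eps = statistical_check(5000)
```

Such sampling-based checks scale to systems too complex for exhaustive formal analysis, at the price of a probabilistic rather than absolute guarantee.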
System Concepts and Design Methods
The group "System Concepts and Design Methods" develops methods, tools, and processes for realizing trustworthy systems in the area of systems engineering. In particular, approaches for capturing and formalizing functional as well as non-functional requirements and for risk assessment are investigated. In addition, the group investigates architectural patterns and design principles for security-relevant systems.
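One way to make the formalization of requirements concrete is a simple trace monitor for a bounded-response pattern ("every trigger event is followed by a response event within k steps"). The sketch below, including the brake-warning example, is an illustrative assumption and not a tool of the group:

```python
def bounded_response(trace, trigger, response, within):
    """Check a bounded-response requirement over a trace, where each
    step is a set of event names: every occurrence of `trigger` must be
    followed by `response` within `within` steps."""
    for i, events in enumerate(trace):
        if trigger in events:
            window = trace[i:i + within + 1]
            if not any(response in step for step in window):
                return False
    return True

trace = [{"brake_requested"}, set(), {"brake_engaged"}, set()]
bounded_response(trace, "brake_requested", "brake_engaged", within=3)  # True
```

Formalized in this way, a natural-language requirement becomes an executable check that can be run against simulation traces or logs.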
Human-Centered Engineering
The research of the group "Human-Centered Engineering" focuses on the question of how systems can be developed that humans understand and with which they can interact intuitively. In particular, various methods and techniques of human modeling are explored for this purpose. With the help of human models, insights into the way humans interact with machines can be verifiably taken into account in the development process. Models that can serve as "virtual testers" of system designs or as "virtual assistants" play an important role.
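As a toy illustration of a human model used as a "virtual tester": a minimal driver model with a fixed reaction delay can probe whether a warning design leaves the modelled human enough time to act. All parameters and names here are hypothetical placeholders:

```python
def driver_model(warning_time, reaction_delay=1.2):
    """Hypothetical minimal human model: the driver starts braking
    `reaction_delay` seconds after the warning is issued."""
    return warning_time + reaction_delay

def virtual_test(time_to_collision, warning_time, braking_margin=0.5):
    """Use the driver model as a 'virtual tester': does the design leave
    the modelled human at least `braking_margin` seconds to brake
    before the collision point?"""
    braking_start = driver_model(warning_time)
    return time_to_collision - braking_start >= braking_margin
```

Real human models are far richer, but even this sketch shows how a design parameter (the warning time) can be tested against a model of human behavior instead of a live subject.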
Evidence for Trustworthiness
The "Evidence for Trustworthiness" group is dedicated to research questions concerning the proof of technical aspects of the trustworthiness of systems, especially with regard to their safety and security properties. Among other things, research will be conducted into how extremely rare events can be efficiently simulated and how analyses of the effects of errors and attacks in systems can be automated. Another core topic is proving the validity of the models and simulations used in the analyses in order to establish a consistent chain of reasoning.
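Efficient simulation of extremely rare events can be sketched with importance sampling, a textbook technique chosen here purely for illustration (the group's actual methods are not specified): to estimate a Gaussian tail probability, samples are drawn from a distribution shifted into the rare region and reweighted by the likelihood ratio.

```python
import math
import random

def is_tail_probability(threshold, n_samples, seed=0):
    """Estimate P(Z > threshold) for Z ~ N(0,1) via importance sampling.

    Samples come from the proposal N(threshold, 1), which concentrates
    mass in the rare region; each hit is reweighted by the likelihood
    ratio phi(x) / phi(x - threshold) = exp(threshold**2/2 - threshold*x).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(threshold, 1.0)   # proposal centred on the rare region
        if x > threshold:               # indicator of the rare event
            total += math.exp(threshold ** 2 / 2 - threshold * x)
    return total / n_samples

estimate = is_tail_probability(4.0, 100_000)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2))  # P(Z > 4) ~ 3.17e-5
```

Naive Monte Carlo would see only a handful of such events in 100,000 runs; the shifted proposal yields a usable estimate at the same sample budget.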