Current cooperations, projects and research projects


Figure: Human in a highly automated vehicle (symbolic image), source: DLR

Methods, Processes and Tool Chains for the Validation & Verification of the Next Generation Car (V&V4NGC)

The development and introduction of future highly and fully automated road vehicles require new verification and validation (V&V) methods that cover the entire product life cycle. The aim is to continuously improve automation functions not only during development but also during operation, supported by field data. This raises issues that only arise in a dynamically changing environment and therefore represent traffic situations unknown to the automation.

This is where the V&V4NGC project comes in, creating the conditions for simulation-based verification and validation of new driving functions for highly automated vehicles during development and operation.

V&V4NGC will develop a mixed virtual/physical simulation environment for the Next Generation Car (NGC) concept and all future vehicle concepts. The simulation environment will be used for continuous design, verification and validation, up to the virtual certification of automated driving functions and fully automated road vehicles. It will also provide a modelling and testing environment for integrating methods and tools for developing migration strategies and for monitoring and evaluating them.

Contribution by the Institute for AI Safety and Security

The Institute for AI Safety and Security is leading the sub-project 'TP 5000 Online Diagnosis and Online Verification' and is researching how to ensure safe system behaviour of the driving functions of automated and networked road vehicles during operation. A common use case from the KoKoVI project of the project family is studied in more detail here: based on data collected from the vehicle, the feedback into the development process is examined, e.g. via a digital twin/shadow and defined scenario descriptions.

Other key research topics at the Institute for AI Safety and Security include the development of a DevOps IT infrastructure concept, the associated data management and a suitable scenario representation for verification.

Figure: DevOps (= Development + Operations) cycle with the repeating phases of continuous system development, source: DLR
The diagram shows that the DevOps cycle draws both on experiments and tests in the simulator (particularly in the 'Dev' design phase) and on data and information from operation in the test field or in real traffic (Operations). Linking data collected in the field with corresponding simulation runs is a particular challenge here.
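To illustrate how field data can be linked to simulation runs via scenario descriptions, the following is a purely illustrative sketch: the scenario names, parameters and matching logic are hypothetical and not taken from the project. It maps a field-recorded driving situation to logical scenarios by checking its parameters against catalogue ranges, so that corresponding simulation runs could then be scheduled.

```python
# Hypothetical sketch: matching a field-recorded driving situation to
# logical scenario descriptions by parameter ranges. All names and
# parameter values here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LogicalScenario:
    name: str
    ego_speed_range: tuple  # (min, max) ego speed in m/s
    gap_range: tuple        # (min, max) distance to lead vehicle in m

# A minimal, made-up scenario catalogue.
SCENARIO_CATALOGUE = [
    LogicalScenario("cut_in", (15.0, 40.0), (5.0, 30.0)),
    LogicalScenario("slow_lead_vehicle", (10.0, 35.0), (30.0, 120.0)),
]

def match_field_record(ego_speed: float, gap: float) -> list:
    """Return names of catalogue scenarios whose parameter ranges cover the record."""
    return [
        s.name
        for s in SCENARIO_CATALOGUE
        if s.ego_speed_range[0] <= ego_speed <= s.ego_speed_range[1]
        and s.gap_range[0] <= gap <= s.gap_range[1]
    ]

# A field record with ego speed 22 m/s and a 12 m gap falls into the
# cut-in scenario's parameter ranges.
print(match_field_record(22.0, 12.0))  # -> ['cut_in']
```

In practice, scenario representations for automated driving are far richer (e.g. standardised description formats with road layout, actors and manoeuvres); the sketch only shows the basic idea of parameter-based matching between field records and a scenario catalogue.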

The project is scheduled to run from January 2022 to December 2025, with a project volume of €34 million, and is commissioned by DLR's Program Directorate Transport.

DLR institutes and facilities involved in the project:

  • Institute of Systems Engineering for Future Mobility
  • Institute of Vehicle Concepts
  • Institute of Flight Systems
  • Microwaves and Radar Institute
  • Institute for AI Safety and Security
  • Institute of Communications and Navigation
  • Remote Sensing Technology Institute
  • Institute of Optical Sensor Systems
  • Institute for Software Technology
  • Institute of System Dynamics and Control
  • Institute of Transportation Systems
  • Systemhaus Technik
  • Flight Experiments

Figure: Networks of digital twins, source: DLR
Using mobile robots as an example, the illustration shows how networks of digital twins can be organised to support validation and verification. The digital twins of the drives can be analysed and verified both with the digital twins of the displays and with those of other mobile robots during operation.


Dr.-Ing. Sven Hallerbach

Head of Department
German Aerospace Center (DLR)
Institute for AI Safety and Security
AI Engineering
Wilhelm-Runge-Straße 10, 89081 Ulm

Karoline Bischof

Consultant Public Relations
German Aerospace Center (DLR)
Institute for AI Safety and Security
Business Development and Strategy
Rathausallee 12, 53757 Sankt Augustin