AI-supported civil protection and disaster prevention
The ACEAD (Autonomous Civil protection Emergency Aid Devices) project brings together DLR developments and technologies for various disaster scenarios and prepares them for operational use in acute emergencies.
The flooding of the Ahr valley in 2021 highlighted the difficulties of reaching and supplying people in such disasters and showed the complexity of such an event. Among other things, the affected areas have to be identified, the emergency forces have to be coordinated, and critical infrastructures such as hospitals or the power supply have to be kept in operation and secured.
Cutting-edge technologies such as drone, aerial and satellite imagery, together with AI-based applications to analyse them, are needed to gain a quick overview in such a disaster. However, reaching and supplying certain areas often proves particularly challenging.
The challenge for the ACEAD project is therefore to use technological innovations to keep communication links available, collect up-to-date situation data, and maintain access to the disaster area despite destroyed infrastructure. Technologies to be used include the following:
- Simulation of hazard/disaster situations
- Remote and semi-autonomous driving
- Advanced driver assistance systems
- Intermodal global telecommunication solutions
- Subsystem communications for long- and short-range intelligence
- Situational awareness for positioning, navigation, route planning and drone reconnaissance
These technologies have been developed in the previous projects MUSERO, AHEAD, KI4HE and MaiSHU. They will now be adapted and tested in ACEAD for civil protection and disaster management. To this end, the robustness of the solutions and components must be increased, the communication options must become more multimodal, the data must be transparent and traceable, and the trustworthiness of the AI must be ensured. The requirements of civil protection and disaster management must also be implemented and tested in relevant environments with stakeholders.
Contribution of the Institute for AI Safety and Security
The Institute for AI Safety and Security focuses primarily on developing and investigating methods for data quality assessment and on protecting data integrity through traceability and provenance in both data creation and data processing. In addition, the institute researches the validation of non-technical requirements in the context of the campaign and the derivation of recommendations for action on legal and ethical acceptance issues connected with technology development (e.g. the ethics of sensitive data, or the anonymisation of camera data such as images of persons).
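One common way to make data processing traceable, which could serve as a minimal illustrative sketch of the provenance idea mentioned above (not the institute's actual implementation), is a hash-chained provenance log: each processing step records a hash of its data together with a hash of the previous entry, so any later tampering with the record becomes detectable.

```python
import hashlib
import json


def _digest(data: bytes) -> str:
    """SHA-256 hex digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()


class ProvenanceLog:
    """Hash-chained record of data-processing steps (illustrative sketch)."""

    def __init__(self):
        self.entries = []

    def record(self, step: str, payload: bytes) -> None:
        # Link to the previous entry (or a zero hash for the first one).
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {"step": step, "data_hash": _digest(payload), "prev": prev}
        # The entry hash chains this record to all earlier records.
        entry["entry_hash"] = _digest(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False if any entry was altered or reordered."""
        prev = "0" * 64
        for e in self.entries:
            expected = _digest(json.dumps(
                {"step": e["step"], "data_hash": e["data_hash"], "prev": e["prev"]},
                sort_keys=True).encode())
            if e["prev"] != prev or e["entry_hash"] != expected:
                return False
            prev = e["entry_hash"]
        return True


log = ProvenanceLog()
log.record("capture", b"raw drone image bytes")
log.record("anonymise", b"blurred image bytes")
print(log.verify())  # True: chain intact
```

In a real system the log entries would additionally be signed and stored separately from the data, but even this simple chain shows how provenance makes each processing step, such as the anonymisation of camera data, verifiable after the fact.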