MaiSHU

The project MaiSHU (Multimodal Perception and Human-Machine Interfaces of Semi-Autonomous Intelligent Systems for Humanitarian Assistance in Uncertain and Unstructured Environments) aims to develop innovative technologies that enable assisted teleoperation of vehicles for humanitarian assistance and delivery of goods in challenging, unstructured environments.

Runtime:
2022-01-01 to 2024-12-31
Project partners:
DLR, Blickfeld GmbH, SENSODRIVE, United Nations World Food Programme (WFP) Innovation Accelerator
Fields of application:
Teleoperation of vehicles for the transport of humanitarian goods

Project details

The two projects KI4HE and MaiSHU continue the work of the AHEAD project and extend it with modern technologies for environment perception, localization, semantic scene analysis, human-machine interfaces, and teleoperation. At the core of MaiSHU is the functional integration of a robust multimodal perception strategy, complemented by AI-based semantic environment reasoning, with a powerful human-machine interface to enable reliable remote vehicle control. Based on satellite imagery as well as vehicle and perception data, a global control center provides high-level route plans to ensure safe passage over rough terrain. The technologies developed in this project complement current developments in on-road autonomous driving and are intended to reach high technology readiness levels (TRL) for future commercialization across diverse mobility tasks in partially structured or unstructured outdoor terrain.
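To make the described data flow concrete, the following minimal Python sketch shows one way such a pipeline could be wired together: route candidates (e.g. derived from satellite imagery) are filtered by a semantic risk rating, and the operator's command is blended with the planner's suggestion in a simple shared-autonomy step. All class names, fields, and the blending heuristic are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch of the pipeline described above. Names and the
# blending heuristic are illustrative only; they do not reflect the
# actual MaiSHU codebase.

from dataclasses import dataclass, field


@dataclass
class Waypoint:
    x: float     # metres, local terrain frame
    y: float
    risk: float  # 0.0 (clear) .. 1.0 (impassable), from semantic scene analysis


@dataclass
class RoutePlan:
    """High-level plan as issued by a global control center."""
    waypoints: list[Waypoint] = field(default_factory=list)


def plan_route(candidates: list[Waypoint], max_risk: float = 0.5) -> RoutePlan:
    # Keep only waypoints the semantic environment reasoning rates as safe.
    safe = [wp for wp in candidates if wp.risk <= max_risk]
    return RoutePlan(waypoints=safe)


def blend_commands(operator_steer: float, planner_steer: float, autonomy: float) -> float:
    # Shared autonomy: weight the planner's suggestion against the operator's
    # input; autonomy=0 is pure teleoperation, autonomy=1 is fully assisted.
    return (1.0 - autonomy) * operator_steer + autonomy * planner_steer


if __name__ == "__main__":
    # Candidate waypoints, e.g. extracted from satellite imagery.
    candidates = [Waypoint(0, 0, 0.1), Waypoint(5, 2, 0.8), Waypoint(10, 1, 0.3)]
    plan = plan_route(candidates)
    print(f"{len(plan.waypoints)} of {len(candidates)} waypoints deemed safe")

    # Blend a remote operator's steering command with the planner's.
    cmd = blend_commands(operator_steer=0.2, planner_steer=-0.1, autonomy=0.6)
    print(f"blended steering command: {cmd:+.2f}")
```

The linear blend stands in for whatever arbitration the human-machine interface actually performs; the point is only that operator input and AI-based planning contribute jointly to the final vehicle command.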

The work is distributed among the following work packages:

  • Simulation
  • Diversity for robust perception and navigation
  • Artificial intelligence for shared autonomy
  • Human-machine interface, control and visualization
  • Mission planning, operation and evaluation using a multimodal 3D situational awareness system
  • Validation of the developed work

This collaborative project brings together DLR with the industrial partners Blickfeld GmbH and SENSODRIVE, as well as the Innovation Accelerator of the United Nations World Food Programme (WFP).