LOKI
Because safety is a top priority in aviation, trustworthy collaboration between humans and AI systems is of central importance. The European Union Aviation Safety Agency (EASA) and EUROCONTROL are currently developing a roadmap for the introduction of AI systems.
According to this roadmap, both the results of the AI system and the collaboration between the AI system and its human users must be safe, reliable, predictable and transparent. In the LOKI project, scientists research and develop the fundamentals for introducing AI systems in air traffic control and flight control. The results of the four-year DLR project LOKI (04/2022-03/2026) will be guidelines, prototypes and research insights for the human-centered design of trustworthy collaboration between human users and AI systems.
As the DLR Institute of Data Science, we contribute to two different areas. Together with the Institute of Aerospace Medicine, we ...
- consider AI-powered systems that are able to recognize intentional states such as fatigue and attention in aviation operators. To enhance trust in AI systems, the system's evaluations and decisions need to be explainable to the human users. One goal is to create a prototypical explainable classification algorithm that classifies these states based on behavioral and gaze data from aviation operators (an illustrative sketch follows this list).
- lay the groundwork for AI-based concepts and tools for the decision-making process on who is suitable, for example, to become a pilot or air traffic controller. The aim is to (partially) automate and optimize decision-making in aptitude diagnostics.
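To illustrate what an intrinsically explainable classifier of operator states could look like, the following Python sketch trains a shallow decision tree on synthetic gaze features and prints its decision rules in human-readable form. All feature names, the labelling rule and the data are hypothetical placeholders for illustration only; they are not results or design choices of the LOKI project.

```python
# Minimal, hypothetical sketch: explainable classification of operator states
# ("alert" vs. "fatigued") from gaze features. Features and labels are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)

# Synthetic gaze/behavioral features: blink rate [1/min], mean fixation
# duration [ms], saccade amplitude [deg] -- purely illustrative placeholders.
n = 400
blink_rate = rng.normal(15, 5, n)
fixation_ms = rng.normal(300, 80, n)
saccade_deg = rng.normal(5, 2, n)
X = np.column_stack([blink_rate, fixation_ms, saccade_deg])

# Toy labelling rule: higher blink rate and longer fixations -> "fatigued" (1).
y = ((blink_rate > 18) & (fixation_ms > 320)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow decision tree is intrinsically interpretable: its decision path
# can be presented to the operator as a small set of human-readable rules.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
print(export_text(clf, feature_names=["blink_rate", "fixation_ms", "saccade_deg"]))
```

In practice, such a transparent model could serve as a baseline against which more complex, post-hoc explained models are compared.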
In the LOKI team, scientists from six DLR institutes collaborate on an interdisciplinary basis. In addition, LOKI is supported by the following partners from practice and research outside DLR: