For a human being it is completely natural; for a technical system it has not yet been achieved: the ability to listen to what another person says and to adapt one's actions accordingly. Highly complex assistance systems, such as approach, surface movement, turnaround or takeoff management systems, support operators without having this ability, which means that the support these systems provide lags far behind what is technically possible. Simply by recognising the operator's intentions, a user-friendly assistance function should, like a human colleague, offer suggestions for solving further tasks and warn of the negative consequences of the stated intention.
This is brilliantly achieved for communication between air traffic controllers and pilots as part of the AcListant project (Active Listening Assistant). Present-day aviation assistance systems such as arrival managers (AMAN) can already plan efficient approach sequences and suggest flight path instructions for optimal approaches. Deviations from the plan, such as changes to the landing sequence, are exchanged between controllers and pilots over a voice channel. The electronic system only learns of them through changes to the aircraft's flight path derived from the radar data (blue arrow in the picture), with a delay of up to 30 seconds. The assistance system's planning can therefore only catch up with a corresponding delay.
The AcListant system eliminates this disadvantage by analysing the communication between air traffic controller and pilot and taking it into account as an additional source of information (red arrow in the picture). This improvement is made possible by adding a speech processing component to the assistance system. The planning component informs the speech processing system about expected air traffic control instructions, i.e. it constantly delivers up-to-date context information. Using this externally provided context, the speech processing system recognises the spoken commands. The assistance system is thus able to promptly and continuously improve the data on which its planning is based, i.e. its knowledge of possible future system states.
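To illustrate this feedback loop, the following minimal Python sketch shows how a planner-supplied set of expected instructions (the context) could be used to select among speech recognition hypotheses, and how the recognised command is fed straight back into planning. All names, data structures and scores are illustrative assumptions, not the actual AcListant interfaces.

```python
from dataclasses import dataclass

# Hypothetical command representation; field names are illustrative only.
@dataclass(frozen=True)
class AtcCommand:
    callsign: str
    command: str   # e.g. "DESCEND", "REDUCE", "TURN_LEFT_HEADING"
    value: int

def plausible_commands(plan_state):
    """Planner side: enumerate the instructions the arrival manager currently expects.

    In the real system this would be derived from the arrival plan; here we
    simply return a fixed illustrative set.
    """
    return {
        AtcCommand("DLH123", "DESCEND", 8000),
        AtcCommand("DLH123", "REDUCE", 220),
        AtcCommand("BAW456", "TURN_LEFT_HEADING", 210),
    }

def recognise(utterance_hypotheses, context):
    """Speech side: pick the best hypothesis that matches the planner context.

    `utterance_hypotheses` is an n-best list of (command, acoustic_score) pairs
    from a generic recogniser; the context narrows the search space and thereby
    reduces recognition errors.
    """
    in_context = [(cmd, score) for cmd, score in utterance_hypotheses if cmd in context]
    candidates = in_context or utterance_hypotheses          # fall back if nothing matches
    return max(candidates, key=lambda pair: pair[1])[0]

def update_plan(plan_state, command):
    """Feed the recognised instruction back into planning immediately,
    instead of waiting for the deviation to appear in the radar track."""
    plan_state.setdefault("issued", []).append(command)
    return plan_state

# Illustrative round trip: context -> recognition -> plan update.
plan = {}
context = plausible_commands(plan)
n_best = [
    (AtcCommand("DLH123", "DESCEND", 8000), 0.72),   # acoustically likely and expected by the planner
    (AtcCommand("DLH723", "DESCEND", 8000), 0.70),   # similar-sounding callsign, not expected
]
print(update_plan(plan, recognise(n_best, context)))
```

The design point of the sketch is the direction of the arrows: the planner restricts what the recogniser needs to consider, and the recognised command updates the plan without waiting for the radar data.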
AcListant relieves the user of an assistance system of the task of manually entering data into the system via a possibly complex user interface. Large parts of human interaction are coordinated with the help of speech, particularly where complex contexts such as meta-concepts are concerned. The assistance system thus supports the operator instead of demanding support from the operator, which would be the far less effective arrangement.
Follow-up project AcListant-Strips
In the AcListant project, the performance of approach management systems (AMAN) was improved through the use of assistant-based speech recognition (ABSR). ABSR is, however, also considered to have great potential as an input method that replaces the mouse and keyboard commands given by air traffic controllers.
This was the reason for starting the follow-up project AcListant-Strips in May 2015. Air traffic control centres have increasingly replaced, or are in the process of replacing, paper flight progress strips with electronic solutions. However, this can increase the controller's workload if the new electronic flight progress strips and radar labels have to be kept up to date using mouse and keyboard. AcListant-Strips describes a concept that uses ABSR to minimise the time the controller spends on inputs when working with electronic flight progress strips, and it quantifies the benefits of using speech recognition as the primary input device for maintaining them. Mouse and keyboard inputs are then only necessary in cases where speech recognition does not work.
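The fallback logic described above can be sketched as follows: the recognition result is applied to the electronic flight strip whenever it is trusted, and conventional mouse and keyboard input is requested only when recognition is missing or uncertain. The strip representation, field names and confidence threshold are assumptions made for illustration, not the actual AcListant-Strips implementation.

```python
# Assumed threshold below which the controller corrects the strip manually.
CONFIDENCE_THRESHOLD = 0.85

def update_strip(strip, recognised, manual_input):
    """Apply an ABSR result to a flight strip, falling back to mouse/keyboard.

    `recognised` is a (field, value, confidence) tuple from speech recognition,
    or None if no result is available; `manual_input` is a callable that asks
    for a conventional input and is used only on the fallback path.
    """
    if recognised is not None:
        field, value, confidence = recognised
        if confidence >= CONFIDENCE_THRESHOLD:
            strip[field] = value          # primary path: no mouse or keyboard needed
            return strip
    field, value = manual_input()         # fallback path: conventional input
    strip[field] = value
    return strip

# Example: recognition succeeds for one instruction and is unavailable for the next.
strip = {"callsign": "DLH123"}
update_strip(strip, ("cleared_altitude", 8000, 0.93),
             manual_input=lambda: ("cleared_altitude", 8000))
update_strip(strip, None,
             manual_input=lambda: ("cleared_speed", 220))
print(strip)
```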
DLR-Institut für Flugführung (Lead)
Universität des Saarlandes