“Say again” is what air traffic controllers and aircraft pilots must ask when they have not fully understood a radio telephony utterance. Automatic speech recognition systems convert speech to text for display in a human machine interface. Controllers can be supported by Assistant Based Speech Recognition (ABSR) with command error rates below 2%, as developed in the projects AcListant® and AcListant®-Strips. However, transferring ABSR research prototypes to different operational environments, e.g. different approach areas, incurs high costs, because each ABSR system must be adapted to local conditions such as speaker accents, special airspace characteristics, or local deviations from ICAO standard phraseology.
The Horizon 2020 SESAR project MALORCA (Machine Learning of Speech Recognition Models for Controller Assistance) aims to reduce the cost of adapting ABSR systems to local needs. Machine learning algorithms analyse large amounts of recorded speech from real controllers to automatically adapt ABSR for operational use. There is no need to “say again” controller commands to train systems or staff.
The German Aerospace Center (DLR), Saarland University (USAAR), Idiap Research Institute (Idiap), Austro Control and the Air Navigation Services of the Czech Republic (ANS CR) are working together to improve speech recognition models automatically and more efficiently for assistance at different controller working positions.
This project has received funding from the SESAR Joint Undertaking under Grant Agreement No. 698824, within the European Union’s Horizon 2020 Research and Innovation programme.