Operational processors, which are part of the ground segment of an Earth observation mission, have the purpose of continuously and rapidly generating precisely defined products from measurement data. They work in a configuration-controlled environment and have to function almost entirely without interactive intervention. These requirements place high demands on the software employed, which is never off-the-shelf but rather the end product of a long and unique development process. This process begins with algorithms which must be adapted to the requirements of the particular ground segment. If a Level 1 processor is to be developed, the algorithms often come from a ground calibration environment; for a Level 2 processor, a scientific retrieval code is the starting point. In both cases the algorithm, which is often quite complex, must be optimized so that it still delivers results of the required precision but does so with far fewer resources (time and service personnel). In addition, for both processor types the interfaces between communicating modules have to be designed effectively and must also allow control and monitoring functions to be carried out by external ground segment components. These aspects lead to different development paths for operational processors and for data processing systems used in science applications. In the ideal case, new calibration or retrieval methods are first employed as parts of scientific systems and, once their worth has been confirmed, are implemented in an operational environment.
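The structure described above — a chain of processing levels with well-defined interfaces and hooks for external control and monitoring — can be sketched as follows. This is a minimal illustrative sketch, not the actual ground segment software; all class and field names (`LevelProcessor`, `ProcessorStatus`, `Level1Processor`) are assumptions for illustration.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ProcessorStatus:
    """State that external ground segment components could poll (hypothetical)."""
    state: str = "idle"            # e.g. "idle", "processing"
    products_generated: int = 0

class LevelProcessor(ABC):
    """One stage of the processing chain (e.g. a Level 1 or Level 2 processor)."""

    def __init__(self) -> None:
        self.status = ProcessorStatus()

    @abstractmethod
    def process(self, granule: dict) -> dict:
        """Transform an input granule into a higher-level product."""

    def run(self, granule: dict) -> dict:
        # Monitoring hook: status is updated around each processing step,
        # so an external component observing it sees progress without
        # any interactive intervention.
        self.status.state = "processing"
        product = self.process(granule)
        self.status.products_generated += 1
        self.status.state = "idle"
        return product

class Level1Processor(LevelProcessor):
    def process(self, granule: dict) -> dict:
        # Placeholder for calibration-derived algorithms.
        return {"level": 1, "radiance": granule["raw_counts"]}

l1 = Level1Processor()
product = l1.run({"raw_counts": [10, 20, 30]})
print(product["level"], l1.status.products_generated)  # → 1 1
```

The key design point the sketch mirrors is that each processor exposes a fixed data interface (`process`) separately from its control/monitoring surface (`status`), so the two can evolve independently.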
Operational processors are required to generate data which are as precise as possible, either in near-real-time (NRT) or in offline mode. Sometimes the algorithmic core of both processor types is almost identical, but there are also cases in which specially customized algorithms must be used to meet the stringent time demands of NRT operation. In addition to the specifications for the operational processors, a prototype is created at every processing level for testing and development purposes; it permits interactive control and, where necessary, detailed investigation of individual processing steps. Which algorithms are used in the processors is described in detail in the Algorithm Theoretical Basis Document (ATBD). For the mission operator, these ATBDs, along with other specifications, are the foundation for the industrial implementation of the processor software which is finally installed in the ground segment. Achieving this outcome requires close cooperation among the team's algorithm experts, mission operators and industry, and includes processor testing, for which test scenarios and data need to be created in the course of the development work. The final step is the industrial implementation, which ends with a Factory Acceptance Test (FAT) before the products generated by the newly installed processors are verified and validated with actual measurement data.
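The distinction between an offline algorithm and a customized NRT variant can be made concrete with a small sketch. The function names and the toy "retrievals" below are purely hypothetical assumptions; the point is only the pattern of selecting a faster, coarser variant when NRT time limits apply.

```python
# Hedged sketch (all names assumed): choosing between a full-precision
# offline retrieval and a customized fast variant for NRT operation.

def retrieve_offline(spectrum):
    # Stands in for a precise but slower retrieval (here: a mean).
    return sum(spectrum) / len(spectrum)

def retrieve_nrt(spectrum):
    # Stands in for a customized fast variant that trades some
    # precision for speed (here: a single sample).
    return spectrum[0]

def process(spectrum, mode="offline"):
    """Dispatch to the algorithm variant matching the processing mode."""
    algorithm = retrieve_nrt if mode == "nrt" else retrieve_offline
    return algorithm(spectrum)

print(process([1.0, 2.0, 3.0], mode="nrt"))      # → 1.0
print(process([1.0, 2.0, 3.0], mode="offline"))  # → 2.0
```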
Processors and the algorithms which form their basis are improved further in the course of a mission. Calibration of the instrument improves with the length of the mission, as does the capability to extract already defined or even new geophysical parameters from the measurement data. For this reason, ideally all measurement data are routinely reprocessed, each time employing the state-of-the-art algorithms implemented in the operational software at that point. This means that the responsibility for algorithms and processors assigned to the Atmospheric Processors department and its Processor Development team is a long-term commitment which accompanies a mission from Phase C/D (design and development) through operation in orbit and beyond.
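The reprocessing policy described above — regenerating the entire archive whenever a new algorithm baseline is installed — can be sketched in a few lines. The version tuples and baseline labels below are illustrative assumptions, not actual mission baselines.

```python
# Illustrative sketch (names and versions assumed): reprocess all archived
# granules with the newest installed processor version, so every product
# in the archive reflects the current state-of-the-art algorithms.

def reprocess_archive(granules, processor_versions):
    """Apply the highest-numbered processor version to every granule."""
    latest = max(processor_versions)          # version tuples sort naturally
    algorithm = processor_versions[latest]
    return [algorithm(g) for g in granules]

# Two hypothetical algorithm baselines; the newer one refines calibration.
versions = {
    (1, 0): lambda g: {"product": g * 1.00, "baseline": "1.0"},
    (2, 1): lambda g: {"product": g * 1.02, "baseline": "2.1"},
}

products = reprocess_archive([100.0, 200.0], versions)
print(products[0]["baseline"])  # → 2.1
```

This mirrors why reprocessing is a long-term commitment: the archive is only as consistent as the most recent full pass with the current baseline.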