Team: Trustworthy AI4EO
The Team Trustworthy AI4EO focuses on research into novel AI-based methods for remote sensing and earth observation. Our aim is to use the physical properties of remote sensing data and the mathematical properties of AI, in particular those of Deep Learning (DL), to make the latter safe to use, which, in general, it is not. For example, DL-based image reconstructions can suffer from AI hallucinations: tiny, realistic-looking artefacts that are not detectable even by experts in the respective field of application. A potential worst-case scenario could be an artefact resembling a nuclear missile station appearing in the DL-based super-resolution of multi-spectral satellite data, not real but "hallucinated" by the DL-based method. Another example could be an artefact in DL-based aerosol retrieval from hyper-spectral satellite data that yields locally erroneous predictions. Overall, cross-validating DL-based predictions against other data sources and methods in order to flag erroneous or uncertain predictions is important in many earth observation applications.

Another relevant application of trustworthy AI4EO is agriculture. Degrading soil health, hazards, unpredictable weather conditions and increased fuel costs are only a few of the problems farmers are facing. Remote sensing and trustworthy AI technologies can help address them. For example, food security could be improved by collectively providing AI-based open-source software for crop monitoring from remote sensing data, developed using in-situ data provided by farmers, satellite data and modern computer vision methods. To cope with the increasingly fast changes of ecosystems and the resulting distribution shifts in the data, and to provide risk assessments, adaptable and uncertainty-aware DL models are a key tool (see the first sketch below). Additionally, to enable autonomous decisions by end-users, explanations of the analysis results based on modern explainable AI methods are paramount (see the second sketch below).

Providing physically consistent explanations of AI methods, preventing and detecting AI hallucinations and, in general, relieving AI- and DL-based methods of their black-box nature is inherently difficult. It entails interpreting non-linear and potentially stochastic functions (deep neural networks, convolutional neural networks and many more) with high-dimensional inputs and outputs and millions of parameters, obtained from optimization procedures that see only part of the data of the problem at hand. By combining central research areas of Trustworthy AI, such as Uncertainty Quantification, Explainable AI and Robustness, with expert domain knowledge, the team aims to equip AI-based methods for remote sensing with mechanisms that allow AI to be used responsibly in earth observation.
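As a concrete illustration of uncertainty-aware prediction, the following is a minimal sketch of Monte Carlo dropout, one common way to flag inputs whose predictions should not be trusted. The tiny CNN, the band count and the flagging threshold are hypothetical stand-ins, not the team's actual models; only the general technique is intended.

```python
# Minimal sketch: uncertainty-aware classification via Monte Carlo dropout.
# TinyCNN, the 4-band input and the threshold are hypothetical placeholders.
import torch
import torch.nn as nn


class TinyCNN(nn.Module):
    def __init__(self, in_bands: int = 4, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Dropout2d(p=0.2),       # kept stochastic at test time for MC dropout
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 32):
    """Return mean class probabilities and predictive entropy per input."""
    model.train()  # keep dropout layers active, so each pass is a sample
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                   # shape: (n_samples, batch, classes)
    mean = probs.mean(dim=0)
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=-1)
    return mean, entropy


model = TinyCNN()
patch = torch.randn(8, 4, 32, 32)       # batch of 4-band image patches
mean_probs, uncertainty = mc_dropout_predict(model, patch)
flagged = uncertainty > 1.0              # hypothetical threshold for expert review
print(mean_probs.argmax(dim=-1), uncertainty, flagged)
```

Predictions whose entropy exceeds the threshold would be routed to cross-validation against other data sources or to an expert, in the spirit described above.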
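For explanations, even a simple gradient-based saliency map can indicate which pixels drive a prediction. The sketch below uses a hypothetical stand-in classifier; real applications would use more advanced attribution methods, and producing physically consistent explanations remains the open research question noted above.

```python
# Minimal sketch: gradient-based saliency map as a basic explainable-AI tool.
# The Sequential classifier below is a hypothetical placeholder model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 5),
)


def saliency_map(model: nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Absolute input gradient of the target logit, reduced over spectral bands."""
    model.eval()
    x = x.clone().requires_grad_(True)   # leaf tensor, so x.grad is populated
    model(x)[:, target_class].sum().backward()
    return x.grad.abs().amax(dim=1)      # (batch, H, W) pixel-importance map


patch = torch.randn(8, 4, 32, 32)        # batch of 4-band image patches
heat = saliency_map(model, patch, target_class=2)
print(heat.shape)                        # torch.Size([8, 32, 32])
```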