<p "="" style="text-align: justify;">Today's artificial intelligence systems are often based on machine learning (ML) approaches. Thanks to a large amount of data, large computers and sophisticated algorithms, one is nowadays able to solve some demanding tasks, for example classification tasks in image processing. Such systems are large optimizers that can be traced back to simple basic functions. The parameterization of such systems takes place in a complex learning process. However, ML systems are black boxes in which only inputs and outputs are known, but the inner workings remain hidden. We do not sufficiently understand why decisions are made. This is what distinguishes them from classical physical models. This hurdle has to be overcome on the way to really intelligent systems - AI systems have to become interpretable, transparent and thus linked to the physical world in order to ultimately make them safe and reliable to use on a large scale in different areas.
In the SKIAS project, we research and develop selected fundamental methods and technologies to better understand artificial intelligence systems, make them safer, and prepare their transfer to industry and society.
Our machine learning group focuses in particular on the explainability of deep-learning-based predictions and on the quantification of the uncertainties that can influence such predictions. For both aspects, we develop new methods with a special focus on their use in real-world, safe applications.
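To illustrate these two research directions, the sketch below shows two common generic techniques, not necessarily the project's own methods: an input-gradient saliency map as a simple explainability tool and Monte Carlo dropout as a simple uncertainty estimate, applied to a toy PyTorch classifier. The model, data and parameters are placeholders chosen purely for illustration.

```python
# Illustrative sketch only: generic explainability (input-gradient saliency) and
# uncertainty quantification (Monte Carlo dropout) on a toy classifier.
# Model, data, and hyperparameters are placeholders, not the SKIAS methods.
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes=3, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),   # kept active at inference time for MC dropout
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def saliency(model, x, target_class):
    """Input-gradient saliency: how strongly each input feature
    influences the score of the target class."""
    model.eval()
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze(0)

def mc_dropout_predict(model, x, n_samples=50):
    """Monte Carlo dropout: keep dropout enabled at inference and average
    several stochastic forward passes; the spread estimates uncertainty."""
    model.train()  # enables dropout layers
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyClassifier()
    x = torch.randn(1, 32)

    mean_probs, std_probs = mc_dropout_predict(model, x)
    pred = mean_probs.argmax(dim=-1).item()
    print("predicted class:", pred)
    print("predictive std per class:", std_probs.squeeze(0))

    sal = saliency(model, x, pred)
    print("top-5 most influential inputs:", sal.topk(5).indices.tolist())
```

The two functions mirror the two project themes: the saliency map attributes a prediction to individual input features, while the spread of the dropout samples indicates how confident the model is in that prediction.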
Project duration: 11/2021 - 12/2024
Spokesperson: Jakob Gawlokowski