Supervised Machine Learning, and Deep Learning in particular, relies on large labeled datasets for training. Erroneous labels are common in such datasets, so it is crucial to understand the impact they have on model outcomes. For this purpose, benchmark datasets with controlled label noise are designed and used to empirically assess the effect of label noise on the performance of Deep Learning models.
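One common way to build such benchmarks is to start from a clean dataset and inject synthetic noise by flipping a fraction of the labels. The sketch below illustrates this idea for symmetric (uniform) label noise; the function name and interface are illustrative assumptions, not the project's actual tooling.

```python
import random

def inject_label_noise(labels, num_classes, noise_rate, seed=0):
    """Flip a fraction `noise_rate` of labels to a different class,
    simulating symmetric label noise (illustrative sketch only)."""
    rng = random.Random(seed)
    noisy = list(labels)
    n_flip = int(round(noise_rate * len(noisy)))
    for i in rng.sample(range(len(noisy)), n_flip):
        # Replace the label with a class drawn uniformly from the
        # other classes, so the flipped label is always wrong.
        noisy[i] = rng.choice([c for c in range(num_classes) if c != noisy[i]])
    return noisy

clean = [0, 1, 2] * 10  # 30 labels across 3 classes
noisy = inject_label_noise(clean, num_classes=3, noise_rate=0.2)
changed = sum(c != n for c, n in zip(clean, noisy))
print(changed)  # → 6 (20% of 30 labels flipped)
```

Training a model on such controlled noisy variants and comparing against the clean baseline isolates the contribution of label noise to performance degradation.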
Project runtime: 10/2019 - 10/2022
Partner: TUM - Data Science in Earth Observation
Spokesperson: Jonas Aaron Gütter