Deep learning combines automatic feature extraction from data with model-free function approximation. Unfortunately, a trade-off arises between the interpretability of the extracted features and their usefulness for prediction in the function approximation. This trade-off calls for new methods to understand the features and their dependence on the various hyperparameters of deep learning methods. To achieve this goal, we are currently investigating adversarial examples: tiny perturbations of the input that lead to misclassifications. Adversarial examples demonstrate a flaw in deep learning, namely that the classifier relies on seemingly meaningless features for classification.
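The idea of such a perturbation can be illustrated with a minimal sketch, assuming a toy logistic-regression classifier and a fast-gradient-sign-method (FGSM) style attack; the weights, input, and step size below are illustrative values, not taken from our experiments:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against logistic regression.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w @ x + b) - y) * w; stepping in its sign direction
    increases the loss, i.e. pushes x toward misclassification.
    """
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

# Hypothetical classifier and an input correctly labeled as class 1.
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([0.6, -0.2, 0.3])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
print(sigmoid(w @ x + b) > 0.5)      # True: original input is class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # False: small perturbation flips the label
```

A perturbation of magnitude 0.5 per coordinate suffices to flip the decision here; for deep networks the same effect occurs with perturbations far too small for a human to notice.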
Christian Reimers (FSU Jena)
Joachim Denzler (FSU Jena)