
Right for the Wrong Scientific Reasons: Revising Deep Networks by Interacting with their Explanations

  • Author(s): Schramowski, P., W. Stammer, S. Teso, A. Brugger, H.-G. Luigs, A.-K. Mahlein, K. Kersting
  • Year: 2020
  • Pages: 1-22, https://arxiv.org/pdf/2001.05371.pdf


Deep neural networks have shown excellent performance in many real-world applications such as plant phenotyping. Unfortunately, they may exhibit "Clever Hans"-like behaviour, exploiting confounding factors within datasets to achieve high prediction rates. Rather than discarding the trained models or the dataset, we show that interactions between the learning system and the human user can correct the model. Specifically, we revise the model's decision process by adding annotated masks during the learning loop and penalizing decisions made for the wrong reasons. In this way, the machine's decision strategies can be improved to focus on relevant features, without a considerable drop in predictive performance.
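The core idea of penalizing decisions made for the wrong reasons can be sketched as a loss that combines the usual prediction error with a penalty on input gradients at features a human has annotated as irrelevant. The sketch below uses plain logistic regression so the input gradient has a closed form; the function name, signature, and the value of `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rrr_loss(w, X, y, mask, lam=10.0):
    """Sketch of a "right for the right reasons"-style loss:
    cross-entropy plus a penalty on input gradients at features
    the annotation mask marks as irrelevant (mask == 1).
    Names and defaults here are illustrative assumptions."""
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))            # predicted probabilities
    eps = 1e-12                             # numerical safety for log
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # For logistic regression, the gradient of the log-likelihood
    # w.r.t. input feature x_j is (y - p) * w_j per example.
    input_grad = (y - p)[:, None] * w[None, :]
    # Penalize only the gradient mass falling on masked (confounding)
    # features, so the model is pushed to ignore them.
    penalty = np.mean((mask * input_grad) ** 2)
    return ce + lam * penalty
```

Minimizing this loss drives the weights on masked features toward zero, steering the model away from the confounder while the cross-entropy term preserves predictive accuracy on the remaining features.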