Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples

Publication date
2020-07
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
The last decade has witnessed the proliferation of Deep Learning models in many applications, achieving unrivaled levels of predictive performance. Unfortunately, the black-box nature of Deep Learning models has left open questions about what they learn from data. Certain application scenarios have highlighted the importance of assessing the bounds under which Deep Learning models operate, a problem addressed by assorted approaches aimed at audiences from different domains. However, as the focus of the application shifts toward non-expert users, it becomes mandatory to provide them with the means to trust the model, just as a human becomes familiar with a system or process: by understanding the hypothetical circumstances under which it fails. This is indeed the cornerstone of this research work: to undertake an adversarial analysis of a Deep Learning model. The proposed framework constructs counterfactual examples while ensuring their plausibility, i.e., that there is a reasonable probability a human could have generated them without resorting to a computer program. Therefore, this work should be regarded as a valuable auditing exercise of the usable bounds within which a given model is constrained, thereby allowing for a much greater understanding of the capabilities and pitfalls of a model deployed in a real application. To this end, a Generative Adversarial Network (GAN) and multi-objective heuristics are used to mount a plausible attack on the audited model, efficiently trading off the confusion of this model against the intensity and plausibility of the generated counterfactual. The framework's utility is showcased on a human face classification task, unveiling the enormous potential of the proposed approach.
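The abstract outlines a search over a GAN's latent space guided by three competing objectives: classifier confusion, perturbation intensity, and plausibility. Below is a minimal sketch of that idea, assuming pretrained `generator`, `classifier`, and `discriminator` callables; the weighted-sum scalarization and the simple random search stand in for the paper's actual multi-objective heuristic, so every name and parameter here is an illustrative assumption rather than the authors' implementation.

```python
# Hypothetical sketch only: all models and weights below are assumptions,
# not the method described in the paper.
import numpy as np

def counterfactual_scores(z, z0, generator, classifier, discriminator, target):
    """Return the three competing objectives for a candidate latent code z."""
    x = generator(z)                      # candidate counterfactual image
    confusion = classifier(x)[target]     # probability assigned to the target class
    intensity = np.linalg.norm(z - z0)    # distance from the anchor latent code z0
    plausibility = discriminator(x)       # GAN critic's realism score
    return confusion, intensity, plausibility

def random_search(z0, generator, classifier, discriminator, target,
                  steps=1000, sigma=0.1, weights=(1.0, 0.1, 0.5)):
    """(1+1)-style random search; the weighted sum is a stand-in for the
    paper's multi-objective heuristic."""
    best_z, best_f = z0.copy(), -np.inf
    for _ in range(steps):
        z = best_z + sigma * np.random.randn(*z0.shape)   # mutate latent code
        c, i, p = counterfactual_scores(z, z0, generator,
                                        classifier, discriminator, target)
        f = weights[0] * c - weights[1] * i + weights[2] * p
        if f > best_f:                    # keep the best trade-off found so far
            best_z, best_f = z, f
    return generator(best_z)              # most plausible counterfactual found
```

Under these assumptions, higher `weights[2]` pushes the search toward counterfactuals a human could plausibly have produced, at the cost of confusing the audited classifier less.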
Description
Publisher Copyright: © 2020 IEEE.
Citation
Barredo-Arrieta, A. & Del Ser, J. 2020, Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples, in 2020 International Joint Conference on Neural Networks, IJCNN 2020 - Proceedings, 9206728, Proceedings of the International Joint Conference on Neural Networks, Institute of Electrical and Electronics Engineers Inc., 2020 International Joint Conference on Neural Networks, IJCNN 2020, Virtual, Glasgow, United Kingdom, 19/07/20. https://doi.org/10.1109/IJCNN48605.2020.9206728
conference