Abstract
Explainable Artificial Intelligence (XAI) aims to help humans better understand machine learning decisions and has been identified as a critical component for increasing the trustworthiness of complex black-box systems such as deep neural networks (DNNs). In this paper, we propose a generic and comprehensive framework named SNIPPET and create a user interface for the subjective evaluation of visual explanations, focusing on finding human-friendly explanations. SNIPPET considers human-centered evaluation tasks and incorporates the collection of human annotations. These annotations can serve as valuable feedback to validate the qualitative results obtained from the subjective assessment tasks. Moreover, we consider different user background categories during the evaluation process to ensure diverse perspectives and a comprehensive evaluation. We demonstrate SNIPPET on a DeepFake face dataset. Distinguishing real from fake faces is non-trivial even for humans, as it depends on rather subtle features, which makes it a challenging use case. Using SNIPPET, we evaluate four popular XAI methods that provide visual explanations: Gradient-weighted Class Activation Mapping (GradCAM), Layer-wise Relevance Propagation (LRP), attention rollout (rollout), and Transformer Attribution (TA). Based on our experimental results, we observe preference variations among the different user categories. We find that most participants favor the explanations produced by rollout. Moreover, when it comes to XAI-assisted understanding, participants who lack relevant background knowledge often consider the visual explanations insufficient to help them understand the model's decisions.
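To illustrate the kind of visual explanation being evaluated, the sketch below computes a GradCAM-style heatmap for a CNN classifier. This is a minimal, self-contained example and not the paper's pipeline: the ResNet-18 backbone, the two-class head, the chosen target layer, and the random input are all illustrative assumptions.

```python
# Minimal GradCAM sketch for a CNN-based real/fake classifier.
# Model, target layer, and input are illustrative assumptions, not the paper's setup.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)                          # placeholder backbone
model.fc = torch.nn.Linear(model.fc.in_features, 2)     # assumed real/fake head
model.eval()

activations, gradients = {}, {}
target_layer = model.layer4[-1]                          # last conv block, a common GradCAM target

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                          # stand-in for a preprocessed face image
logits = model(x)
score = logits[0, logits[0].argmax()]                    # score of the predicted class
model.zero_grad()
score.backward()

# GradCAM: weight each activation map by the spatial mean of its gradient,
# sum over channels, apply ReLU, then upsample and normalize to [0, 1].
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap to overlay on the face image
```

The resulting normalized map can be overlaid on the input face image; participants in a subjective study would then judge whether the highlighted regions make the classifier's decision understandable.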
| Original language | English |
| --- | --- |
| Article number | 253 |
| Pages (from-to) | 1-29 |
| Number of pages | 29 |
| Journal | ACM Transactions on Multimedia Computing, Communications, and Applications |
| Volume | 20 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - 2024 |
Bibliographical note
Funding Information: This research received funding from the FWO (grants G0A4720N and 1SB5721N), the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme, and imec through the AAA project Trustworthy AI Methods (TAIM).
Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
Projects
- VLAAI1: Grant: Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen
  1/07/19 → 31/12/24
  Project: Applied