Abstract
Explainable AI is important for improving transparency, accountability, trust, and ethical considerations in AI systems, and for enabling users to make informed decisions based on the outputs of these systems. It provides insight into the factors that drove a particular machine learning model prediction. In the context of deep learning models, invariance refers to the property whereby diverse input transformations, such as data augmentations, result in similar feature spaces and predictions. The aim of this work is to unveil which invariant features the model has learned. We propose a method, coined Pixel Invariance, which measures the invariance of each pixel of the input. Our investigation analyzes four self-supervised models, as these models are pre-trained to learn invariance to input transformations. We additionally apply quantitative evaluation measures to assess the faithfulness, reliability, and confidence of the explanation maps, and analyze the four self-supervised models both qualitatively and quantitatively.
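As a rough illustration of the invariance notion described above (not the paper's Pixel Invariance method itself), the sketch below scores how similar a model's features remain under a data augmentation. The toy encoder, the horizontal-flip augmentation, and the cosine-similarity score are all assumptions chosen for illustration, standing in for a pre-trained self-supervised backbone and its augmentation pipeline.

```python
# Minimal sketch (assumed toy encoder, flip augmentation, cosine-similarity
# score). This is NOT the Pixel Invariance method; it only illustrates the
# idea of feature invariance under an input transformation.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder standing in for a self-supervised backbone.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

x = torch.randn(1, 3, 64, 64)       # dummy input image
x_aug = torch.flip(x, dims=[-1])    # augmentation: horizontal flip

with torch.no_grad():
    z, z_aug = encoder(x), encoder(x_aug)

# Invariance score: 1.0 means the features are unchanged by the augmentation.
invariance = F.cosine_similarity(z, z_aug, dim=1).item()
print(f"feature invariance under flip: {invariance:.3f}")
```

The paper itself goes further by attributing such invariance to individual input pixels, producing an explanation map per image.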
| Original language | English |
| --- | --- |
| Title of host publication | 2023 24th International Conference on Digital Signal Processing (DSP) |
| Publisher | IEEE |
| Pages | 1-5 |
| Number of pages | 5 |
| ISBN (Electronic) | 979-8-3503-3959-8 |
| DOIs | |
| Publication status | Published - 5 Jul 2023 |
Publication series
| Name | International Conference on Digital Signal Processing, DSP |
| --- | --- |
| Volume | 2023-June |
Bibliographical note
Funding Information: This research received funding from the FWO (Grants G014718N, G0A4720N and 1SB5721N) and from imec through the AAA project Trustworthy AI Methods (TAIM).
Publisher Copyright:
© 2023 IEEE.