Visualizing Invariant Features in Vision Models

Research output: Chapter in Book/Report/Conference proceeding › Conference paper

1 Citation (Scopus)

Abstract

Explainable AI is important for improving transparency, accountability, trust, and ethical considerations in AI systems, and for enabling users to make informed decisions based on the outputs of these systems. It provides insights into the factors that drive a particular machine learning model's prediction. In the context of deep learning models, invariance refers to the property whereby diverse input transformations, such as data augmentations, result in similar feature spaces and predictions. The aim of this work is to unveil which invariant features the model has learned. We propose a method, coined Pixel Invariance, which measures the invariance of each pixel of the input. Our investigation involves an analysis of four self-supervised models, as these models are pre-trained to learn invariance to input transformations. We additionally apply quantitative evaluation measures to assess the faithfulness, reliability, and confidence of the explanation maps, and analyze the four self-supervised models both qualitatively and quantitatively.
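
The abstract does not include implementation details, but the general idea of scoring per-pixel invariance to input transformations can be illustrated with a small, hypothetical sketch in PyTorch. The encoder choice, the augmentations, and the gradient-based sensitivity proxy below are illustrative assumptions, not the paper's actual Pixel Invariance method:

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def pixel_invariance_map(encoder, image):
    """Hypothetical per-pixel invariance proxy (not the paper's method).

    encoder: frozen feature extractor mapping (N, 3, H, W) -> (N, D)
    image:   a single image tensor of shape (1, 3, H, W), values in [0, 1]
    """
    # Differentiable tensor transforms standing in for the model's training augmentations.
    augment = T.Compose([T.ColorJitter(0.4, 0.4, 0.4),
                         T.GaussianBlur(kernel_size=3)])

    image = image.clone().requires_grad_(True)
    view = augment(image)

    # Cosine similarity between the features of the original and augmented views.
    z1 = F.normalize(encoder(image), dim=-1)
    z2 = F.normalize(encoder(view), dim=-1)
    similarity = (z1 * z2).sum()
    similarity.backward()

    # Pixels to which the similarity is least sensitive are treated as invariant.
    sensitivity = image.grad.abs().sum(dim=1, keepdim=True)   # (1, 1, H, W)
    return 1.0 - sensitivity / (sensitivity.max() + 1e-8)
```

Any frozen backbone that returns a feature vector (for example, a ResNet with its classification head replaced by an identity layer) could be plugged in as the encoder in this sketch.
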
Original language: English
Title of host publication: 2023 24th International Conference on Digital Signal Processing (DSP)
Publisher: IEEE
Pages: 1-5
Number of pages: 5
ISBN (Electronic): 979-8-3503-3959-8
DOIs
Publication status: Published - 5 Jul 2023

Publication series

Name: International Conference on Digital Signal Processing, DSP
Volume: 2023-June

Bibliographical note

Funding Information:
This research received funding from the FWO (Grants G014718N, G0A4720N and 1SB5721N) and from imec through AAA project Trustworthy AI Methods (TAIM).

Publisher Copyright:
© 2023 IEEE.

