Crafting effective visual explanations by attributing the impact of datasets, architectures, and data compression techniques

Research output: PhD Thesis

Abstract

Explainable Artificial Intelligence (XAI) plays an important role in modern AI research, motivated by the desire for transparency and interpretability in AI-driven decision-making. As AI systems become more advanced and complex, it becomes increasingly important to ensure they are reliable, responsible, and ethical. These imperatives are particularly acute in high-stakes domains such as medical diagnostics, autonomous driving, and security frameworks.
In computer vision, XAI aims to provide understandable, straightforward explanations for AI model predictions, allowing users to grasp the decision-making processes of these complex systems. Visualizations such as saliency maps are frequently employed to identify the input regions that most strongly influence model predictions, thereby enhancing user understanding of how AI systems analyze visual data. However, concerns remain about the effectiveness of visual explanations, especially regarding their robustness, trustworthiness, and human-friendliness.
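To make the idea of a saliency map concrete, the sketch below shows a minimal occlusion-based attribution in Python. It is an illustration only, not one of the specific attribution methods studied in the thesis; `model_fn`, the patch size, and the fill value are hypothetical placeholders.

```python
import numpy as np

def occlusion_saliency(image, model_fn, target_class, patch=16, stride=8, fill=0.0):
    """Occlusion-based saliency: slide a patch over the image and record how much
    the target-class score drops when that region is hidden.

    image:     H x W x C float array
    model_fn:  callable mapping an image batch (N, H, W, C) to class scores (N, K)
    """
    h, w, _ = image.shape
    base_score = model_fn(image[None])[0, target_class]
    saliency = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)

    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = fill
            score = model_fn(occluded[None])[0, target_class]
            # A larger score drop means the occluded region mattered more.
            saliency[y:y + patch, x:x + patch] += base_score - score
            counts[y:y + patch, x:x + patch] += 1

    # Average overlapping contributions; uncovered border pixels stay at zero.
    return saliency / np.maximum(counts, 1.0)
```

Regions with high values in the resulting map are those whose removal most reduces the model's confidence in the target class.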
Our research aims to advance this field by evaluating how various factors, such as dataset diversity, model architecture, and data compression, influence the performance of visual explanations. Through thorough analysis and careful refinement, we seek to enhance these explanations, ensuring they are both highly informative and accessible to users across diverse XAI applications.
In our evaluation, we conduct a detailed investigation using both automatic metrics and subjective evaluation methods to assess the effectiveness of visual explanations. Automatic metrics, such as task performance and localization accuracy, provide quantifiable measures of how well these explanations perform in real-world scenarios. For subjective evaluation, we have developed a framework named SNIPPET, which enables a detailed, user-oriented assessment of visual explanations. Additionally, our research explores how these objective metrics correlate with subjective human judgments, aiming to integrate quantitative data with the more nuanced, qualitative feedback from users. Ultimately, our goal is to provide comprehensive insights into the practical aspects of XAI methodologies, with a particular focus on their implementation in computer vision.
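As an illustration of the kind of automatic metric referred to above, the sketch below computes a simple pointing-game style localization accuracy (the fraction of saliency maps whose peak falls inside an annotated object box) and correlates per-image metric scores with human ratings using Spearman's rank correlation. The function names and the numeric values are placeholders, not the thesis's actual evaluation code or data.

```python
import numpy as np
from scipy.stats import spearmanr

def pointing_game_hit(saliency, box):
    """Return 1 if the saliency map's maximum falls inside the ground-truth box.

    saliency: H x W array of attribution scores
    box:      (y0, x0, y1, x1) bounding box of the annotated object
    """
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    y0, x0, y1, x1 = box
    return int(y0 <= y < y1 and x0 <= x < x1)

def localization_accuracy(saliency_maps, boxes):
    """Fraction of images whose most salient pixel lies inside the object box."""
    hits = [pointing_game_hit(s, b) for s, b in zip(saliency_maps, boxes)]
    return float(np.mean(hits))

# Hypothetical comparison of per-image metric scores with averaged human ratings
# (e.g. collected with a SNIPPET-like user study protocol).
metric_scores = np.array([0.9, 0.4, 0.7, 0.2, 0.8])   # placeholder values
human_ratings = np.array([4.5, 2.0, 3.5, 1.5, 4.0])   # placeholder values
rho, p_value = spearmanr(metric_scores, human_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

A high rank correlation would suggest that the automatic metric tracks the qualities users actually value in an explanation; a low one would indicate the two capture different aspects.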
Original language: English
Awarding institution
  • Vrije Universiteit Brussel
Supervisor(s)/Advisor
  • Deligiannis, Nikolaos, Promotor
Date of award: 25 Oct 2024
Status: Published - 2024
