GF-LRP: A Method for Explaining Predictions Made by Variational Graph Auto-Encoders

Research output: Article (peer-reviewed)

Abstract

Variational graph autoencoders (VGAEs) combine the strengths of graph convolutional networks (GCNs) and variational inference and have been used to address tasks such as node classification and link prediction. However, their lack of explainability is a limiting factor when trustworthy decisions are required. In this paper, we present a novel post-hoc explainability framework for VGAEs that accounts for their encoder-decoder architecture. Specifically, we propose a layer-wise-relevance-propagation-based (LRP-based) explanation technique coined GF-LRP which, to our knowledge, is the first explanation method for VGAEs. GF-LRP goes beyond existing LRP techniques for GCNs by taking into account, in addition to the input features and the graph structure of the data, the branch-specific architecture of the VGAE. The explanations are branch-specific in the sense that we explain the mean and standard deviation branches of the Gaussian distribution learned by the model. For a node's prediction, GF-LRP identifies the most relevant features, nodes, and edges. To assess the effectiveness of our explanation method, we compute fidelity, sparsity, and contrastivity, which are commonly employed evaluation metrics. Extensive experiments and visualizations on two real-world datasets demonstrate the effectiveness of the proposed explanation method.
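As a rough illustration of the layer-wise relevance propagation the abstract refers to, the sketch below back-propagates relevance through a single GCN-style layer with the standard LRP-epsilon rule. The function name, the epsilon stabilizer, and the reduction to one layer are illustrative assumptions for this minimal example; they are not the paper's actual branch-specific GF-LRP rules.

```python
# Minimal LRP-epsilon sketch for one GCN-style layer (Z = A_hat @ X @ W).
# Illustrative assumption only; not the GF-LRP rules from the paper.
import numpy as np

def lrp_epsilon_gcn_layer(a_hat, x, w, relevance_out, eps=1e-6):
    """Redistribute the relevance of the layer output onto the node features.

    a_hat:         (N, N) normalized adjacency with self-loops
    x:             (N, F_in) input node features
    w:             (F_in, F_out) layer weights
    relevance_out: (N, F_out) relevance assigned to the layer output
    returns:       (N, F_in) relevance assigned to the layer input
    """
    h = a_hat @ x                                      # neighbourhood aggregation
    z = h @ w                                          # pre-activations
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)     # avoid division by zero

    # Step 1: linear map h -> z; share relevance in proportion to h_ij * w_jk.
    r_h = h * ((relevance_out / z_stab) @ w.T)

    # Step 2: aggregation x -> h; share relevance over neighbouring nodes.
    h_stab = h + eps * np.where(h >= 0, 1.0, -1.0)
    r_x = x * (a_hat.T @ (r_h / h_stab))
    return r_x

# Toy usage: 3 nodes, 2 input features, 2 output features.
rng = np.random.default_rng(0)
a_hat = np.array([[0.5, 0.5, 0.0],
                  [0.5, 0.5, 0.0],
                  [0.0, 0.0, 1.0]])
x = rng.normal(size=(3, 2))
w = rng.normal(size=(2, 2))
r_out = np.abs(rng.normal(size=(3, 2)))   # relevance at the layer output
print(lrp_epsilon_gcn_layer(a_hat, x, w, r_out))
```

In GF-LRP, such a propagation would be applied separately to the mean and standard deviation branches of the VGAE encoder, yielding per-branch relevance for features, nodes, and edges.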
Original language: English
Pages (from-to): 1-11
Number of pages: 11
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
ISSN: 2471-285X
DOIs
Status: Published - 2024
