GF-LRP: A Method for Explaining Predictions Made by Variational Graph Auto-Encoders

Research output: Contribution to journal › Article › peer-review

Abstract

Variational graph autoencoders (VGAEs) combine the strengths of graph convolutional networks (GCNs) and variational inference and have been used for tasks such as node classification and link prediction. However, their lack of explainability is a limiting factor when trustworthy decisions are required. In this paper, we present a novel post-hoc explainability framework for VGAEs that accounts for their encoder-decoder architecture. Specifically, we propose a layer-wise-relevance-propagation-based (LRP-based) explanation technique, coined GF-LRP, which, to our knowledge, is the first explanation method for VGAEs. GF-LRP goes beyond existing LRP techniques for GCNs by taking into account not only the input features and the graph structure of the data but also the branch-specific architecture of the VGAE: the explanations separately cover the mean and standard-deviation branches of the Gaussian distribution learned by the model. For a node's prediction, GF-LRP identifies the most relevant features, nodes, and edges. To assess the method, we compute commonly employed evaluation metrics such as fidelity, sparsity, and contrastivity. Extensive experiments and visualizations on two real-world datasets demonstrate the effectiveness of the proposed explanation method.
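To give a flavor of the relevance-propagation idea underlying LRP-based explanations, the sketch below implements the generic LRP epsilon rule for a single linear layer. This is an illustrative, standard LRP rule only, not the paper's GF-LRP method; the function name `lrp_epsilon` and the toy weights are hypothetical.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Redistribute the relevance R_out of a linear layer's outputs
    back to its inputs using the LRP epsilon rule (illustrative only).

    a     : input activations, shape (n_in,)
    W     : weight matrix, shape (n_in, n_out)
    R_out : relevance assigned to the outputs, shape (n_out,)
    """
    z = a @ W                     # pre-activations of the layer
    z = z + eps * np.sign(z)      # epsilon stabilizer avoids division by zero
    s = R_out / z                 # relevance per unit of pre-activation
    return a * (W @ s)            # input relevance; total relevance is
                                  # (approximately) conserved

# Toy example: 3 inputs, 2 outputs (hypothetical numbers)
a = np.array([1.0, 2.0, 0.5])
W = np.array([[0.3, -0.1],
              [0.2,  0.4],
              [-0.5, 0.6]])
R_out = np.array([1.0, 0.5])
R_in = lrp_epsilon(a, W, R_out)
print(R_in, R_in.sum())  # sum(R_in) is close to sum(R_out)
```

Applied layer by layer from the output back to the input, this rule yields per-feature relevance scores; GF-LRP extends this style of propagation to the graph structure and the mean and standard-deviation branches of a VGAE.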
Original language: English
Pages (from-to): 1-11
Number of pages: 11
Journal: IEEE Transactions on Emerging Topics in Computational Intelligence
ISSN: 2471-285X
DOIs
Publication status: Published - 2024
