TY - JOUR
T1 - GF-LRP: A Method for Explaining Predictions Made by Variational Graph Auto-Encoders
AU - Rodrigo-Bonet, Esther
AU - Deligiannis, Nikos
PY - 2024
Y1 - 2024
AB - Variational graph autoencoders (VGAEs) combine the strengths of graph convolutional networks (GCNs) and variational inference and have been used to address various tasks such as node classification and link prediction. However, their lack of explainability is a limiting factor when trustworthy decisions are required. In this paper, we present a novel post-hoc explainability framework for VGAEs that takes their encoder-decoder architecture into account. Specifically, we propose a layer-wise-relevance-propagation-based (LRP-based) explanation technique, coined GF-LRP, which, to our knowledge, is the first explanation method for VGAEs. GF-LRP goes beyond existing LRP techniques for GCNs by accounting not only for the input features and the graph structure of the data but also for the branch-specific architecture of the VGAE. The explanations are branch-specific in the sense that we explain the mean and standard deviation branches of the Gaussian distribution learned by the model. For a given node's prediction, GF-LRP identifies the most relevant features, nodes, and edges. To assess the effectiveness of our explanation method, we compute fidelity, sparsity, and contrastivity, as well as other commonly employed evaluation metrics. Extensive experiments and visualizations on two real-world datasets demonstrate the effectiveness of the proposed explanation method.
UR - https://doi.org/10.1109/TETCI.2024.3419714
DO - 10.1109/TETCI.2024.3419714
M3 - Article
SP - 1
EP - 11
JO - IEEE Transactions on Emerging Topics in Computational Intelligence
JF - IEEE Transactions on Emerging Topics in Computational Intelligence
SN - 2471-285X
ER -