Depth Scaling in Graph Neural Networks: Understanding the Flat Curve Behavior

Diana Sousa Gomes, Kyriakos Efthymiadis, Ann Nowe, Peter Vrancx

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)
31 Downloads (Pure)

Abstract

Training deep Graph Neural Networks (GNNs) has proved to be a challenging task. A key goal of many new GNN architectures is to enable the depth scaling seen in other types of deep learning models. However, unlike deep models in other domains, deep GNNs do not show significant performance gains over their shallow counterparts, yielding a flat curve of performance over depth. In this work, we investigate some of the reasons why depth scaling still eludes GNN researchers. We also question the effectiveness of current methods for training deep GNNs and present evidence of several types of pathological behavior in these networks. Our results suggest that current approaches hide the problems of deep GNNs rather than solve them, as current deep GNNs are only as discriminative as their respective shallow versions.
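The flat-curve phenomenon can be illustrated with a minimal sketch (this is not the paper's experiment): stacking GCN-style propagation steps, each of which multiplies node features by the symmetric-normalized adjacency, smooths node representations together as depth grows, so additional layers add little discriminative power. The graph, features, and depth below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch, not the authors' setup: repeated GCN-style
# propagation with a symmetric-normalized adjacency. As depth grows,
# node representations converge toward one another, so deeper stacks
# are no more discriminative than shallow ones.

rng = np.random.default_rng(0)

# Small undirected graph: a 5-cycle with self-loops (A + I).
A = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1],
], dtype=float)

# Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in a GCN layer.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt

X = rng.standard_normal((5, 8))  # random node features

def spread(X):
    # Mean pairwise distance between node representations:
    # a crude proxy for how discriminative the features are.
    diffs = X[:, None, :] - X[None, :, :]
    return np.linalg.norm(diffs, axis=-1).mean()

spreads = []
for depth in range(30):
    spreads.append(spread(X))
    X = A_hat @ X  # one (linear) propagation step, i.e. one "layer"

# Pairwise distances collapse with depth: the representations of
# distinct nodes become nearly indistinguishable.
print(spreads[0], spreads[-1])
```

Because the second-largest eigenvalue of the normalized adjacency has magnitude below one, repeated propagation projects the features onto its dominant eigenvector, which is exactly the oversmoothing pathology commonly cited as one reason depth fails to help GNNs.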
Original language: English
Number of pages: 22
Journal: Transactions on Machine Learning Research (TMLR)
Publication status: Published - 2024

Bibliographical note

Publisher Copyright:
© 2024, Transactions on Machine Learning Research. All rights reserved.

Keywords

  • graph neural networks
  • depth evaluation
