Graph Neural Networks (GNNs) have achieved great success in semi-supervised learning. Existing GNNs typically aggregate features via message passing guided by abundant labels. However, labels in real-world graphs are often scarce, and with insufficient supervision GNNs overfit and their classification ability degrades. Moreover, traditional message passing is sensitive to structural noise, such as perturbations on edges, so the performance of GNNs drops sharply when they are trained on such graphs. To mitigate these issues, we present a noise-resistant framework based on contrastive message passing. In addition to the limited labelled nodes widely used as supervision in GNNs, we model the topological structure through a graph likelihood that serves as extra supervision. Specifically, we first propose the contrastive graph likelihood, defined as a product of edge likelihoods over all connected node pairs. We then unfold its optimization into two update steps derived from descent iterations. The first step updates the features in a single view, aided by an initial estimate of the edge probability. The second step assigns each edge a binary label to form homophily and heterophily views: the homophily view applies an attractive force that pulls positively connected nodes together, while the heterophily view applies a repulsive force that pushes negatively connected nodes apart. Extensive experiments show that our method achieves superior performance on semi-supervised node classification with sparse labels and excellent robustness under structural perturbations.
Keywords: Contrastive message passing; Graph neural networks; Objective function; Sparse labels.
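For concreteness, the contrastive graph likelihood described above could be written as follows. The notation here (edge probability $p_{ij}$, binary edge label $y_{ij}$, node features $\mathbf{h}_i$, and the sigmoid edge model) is ours, not the paper's, and is only one plausible instantiation of "a product of edge likelihoods over all connected node pairs":

```latex
\mathcal{L}(\mathbf{H})
  = \prod_{(i,j)\in\mathcal{E}} p_{ij}^{\,y_{ij}} \,\big(1-p_{ij}\big)^{1-y_{ij}},
\qquad
p_{ij} = \sigma\!\big(\mathbf{h}_i^{\top}\mathbf{h}_j\big),
\quad y_{ij}\in\{0,1\},
```

where $y_{ij}=1$ marks a homophilous (positive) edge and $y_{ij}=0$ a heterophilous (negative) one. Under the same assumptions, a minimal NumPy sketch of the two unfolded update steps might look like the following; every name is hypothetical, and this is one reading of the abstract rather than the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def contrastive_message_passing(H, edges, edge_sign, lr=0.1, steps=10):
    """Sketch of the two unfolded update steps (our reading).

    H         : (n, d) array of node features
    edges     : list of (i, j) connected node pairs
    edge_sign : {(i, j): 1 or 0} binary edge label
                (1 = homophilous view, 0 = heterophilous view)
    """
    H = H.copy()
    for _ in range(steps):
        grad = np.zeros_like(H)
        for (i, j) in edges:
            # Step 1 analogue: edge probability from the current features.
            p = sigmoid(H[i] @ H[j])
            # Step 2 analogue: the binary edge label selects the view.
            if edge_sign[(i, j)] == 1:
                # Homophily view: attractive force (gradient of log p)
                # pulls positively connected nodes together.
                coeff = 1.0 - p
            else:
                # Heterophily view: repulsive force (gradient of log(1 - p))
                # pushes negatively connected nodes apart.
                coeff = -p
            grad[i] += coeff * H[j]
            grad[j] += coeff * H[i]
        # Ascent on the log-likelihood of all connected pairs
        # (equivalently, descent on its negative).
        H += lr * grad
    return H

# Toy usage on a 4-node path graph with one heterophilous edge.
rng = np.random.default_rng(0)
H0 = rng.normal(size=(4, 8))
edges = [(0, 1), (1, 2), (2, 3)]
sign = {(0, 1): 1, (1, 2): 0, (2, 3): 1}
H1 = contrastive_message_passing(H0, edges, sign)
```

The sign of the per-edge gradient coefficient is what realizes the attractive and repulsive forces: positive edges move connected features toward each other to raise $p_{ij}$, while negative edges move them apart to lower it.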