Abstract
Graph Contrastive Learning (GCL) is a self-supervised learning paradigm for graph data that has attracted wide attention in network representation learning. However, most existing GCL-based methods adopt a single-branch contrastive scheme, which limits their ability to capture deeper semantic relationships and leaves the embedding of global structural information vulnerable to noisy connections. This paper therefore proposes a network representation learning method based on a dual-branch contrastive scheme. First, a clustering step is introduced into the embedding of global structural information: irrelevant nodes are identified and removed according to the clustering results, which effectively reduces noise during embedding. Second, a dual-branch contrastive scheme in the spirit of ensemble learning is proposed, in which each of the two generated views is contrasted with the original graph separately, and joint optimization continuously updates both views, enabling the model to learn more discriminative feature representations. The proposed method was evaluated with node classification and dimensionality-reduction visualization experiments on three datasets: Cora, Citeseer, and Pubmed. The results show that it outperforms existing baseline models.
Project supported by the National Natural Science Foundation of China (62176145).
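As a rough illustration of the two ideas in the abstract, the sketch below shows (i) a clustering-based filter that drops nodes judged irrelevant to the global structural embedding and (ii) a dual-branch objective in which each generated view is contrasted with the original graph and the two branch losses are optimized jointly. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the one-layer GCN encoder, the NT-Xent contrastive loss, the distance-to-centroid filtering rule, and all names (GCNLayer, cluster_noise_mask, dual_branch_loss) are illustrative choices, since the chapter body is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

class GCNLayer(nn.Module):
    """Minimal one-layer GCN encoder: H = ReLU(A_hat X W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # adj: normalized adjacency with self-loops (dense, for brevity)
        return F.relu(adj @ self.lin(x))

def cluster_noise_mask(z, n_clusters=7, keep_quantile=0.9):
    """One plausible reading of the clustering step (an assumption):
    cluster the node embeddings, then flag as 'irrelevant' the nodes
    farthest from their assigned centroid so they can be excluded from
    the global summary. Returns a boolean keep-mask over nodes."""
    z = z.detach()
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(z.cpu().numpy())
    centers = torch.as_tensor(km.cluster_centers_, dtype=z.dtype)
    labels = torch.as_tensor(km.labels_, dtype=torch.long)
    dist = (z - centers[labels]).norm(dim=1)
    return dist <= dist.quantile(keep_quantile)

def nt_xent(z1, z2, tau=0.5):
    """InfoNCE-style loss: the same node in the two embedding sets is
    the positive pair; all other nodes serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def dual_branch_loss(encoder, x, adj, adj_v1, adj_v2, alpha=0.5):
    """Dual-branch objective: each generated view is contrasted with
    the original graph separately, and the two branch losses are
    combined so both branches are optimized jointly."""
    z  = encoder(x, adj)      # original graph
    z1 = encoder(x, adj_v1)   # branch 1: augmented view 1
    z2 = encoder(x, adj_v2)   # branch 2: augmented view 2
    return alpha * nt_xent(z, z1) + (1.0 - alpha) * nt_xent(z, z2)

# Toy usage with random data (shapes only; real runs would use Cora etc.)
if __name__ == "__main__":
    n, d = 32, 16
    x, adj = torch.randn(n, d), torch.eye(n)  # identity as stand-in adjacency
    enc = GCNLayer(d, 32)
    keep = cluster_noise_mask(enc(x, adj), n_clusters=4)
    loss = dual_branch_loss(enc, x, adj, adj, adj)
    loss.backward()
    print(f"kept {int(keep.sum())}/{n} nodes, loss = {loss.item():.4f}")
```

In this sketch the two branch losses are weighted equally (alpha = 0.5) and the augmented views are passed in as precomputed adjacencies; how the paper generates and jointly updates the two views may well differ.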
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Zhang, H., Cao, J., Li, K., Wang, Y., Li, R. (2024). Dual-Branch Contrastive Learning for Network Representation Learning. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1966. Springer, Singapore. https://doi.org/10.1007/978-981-99-8148-9_15
DOI: https://doi.org/10.1007/978-981-99-8148-9_15
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8147-2
Online ISBN: 978-981-99-8148-9
eBook Packages: Computer Science, Computer Science (R0)