Reading List for Graph Representation Learning

O-Joun Lee · April 1, 2024

Reading Lists


I. Shallow models

  1. Bryan Perozzi, Rami Al-Rfou, Steven Skiena: DeepWalk: online learning of social representations. KDD 2014: 701-710
  2. Aditya Grover, Jure Leskovec: node2vec: Scalable Feature Learning for Networks. KDD 2016: 855-864
  3. Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, Qiaozhu Mei: LINE: Large-scale Information Network Embedding. WWW 2015: 1067-1077
  4. Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, Wenwu Zhu: Asymmetric Transitivity Preserving Graph Embedding. KDD 2016: 1105-1114
  5. Daixin Wang, Peng Cui, Wenwu Zhu: Structural Deep Network Embedding. KDD 2016: 1225-1234
  6. Shaosheng Cao, Wei Lu, Qiongkai Xu: GraRep: Learning Graph Representations with Global Structural Information. CIKM 2015: 891-900
  7. Bryan Perozzi, Vivek Kulkarni, Haochen Chen, Steven Skiena: Don't Walk, Skip!: Online Learning of Multi-scale Network Embeddings. ASONAM 2017: 258-265
  8. Leonardo Filipe Rodrigues Ribeiro, Pedro H. P. Saverese, Daniel R. Figueiredo: struc2vec: Learning Node Representations from Structural Identity. KDD 2017: 385-394.
  9. Annamalai Narayanan, Mahinthan Chandramohan, Lihui Chen, Yang Liu, Santhoshkumar Saminathan: subgraph2vec: Learning Distributed Representations of Rooted Sub-graphs from Large Graphs. CoRR abs/1606.08928 (2016)
  10. Yuxiao Dong, Nitesh V. Chawla, Ananthram Swami: metapath2vec: Scalable Representation Learning for Heterogeneous Networks. KDD 2017: 135-144
  11. Aleksandar Bojchevski, Stephan Günnemann: Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking. ICLR 2018
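Most of the shallow models above (DeepWalk, node2vec, metapath2vec) share one mechanism: generate truncated random walks over the graph, then treat each walk as a "sentence" for a skip-gram model such as word2vec. A minimal sketch of the walk-generation step, on a hand-made toy graph (the graph and hyperparameters are illustrative, not taken from any of the papers):

```python
import random

def random_walks(adj, walks_per_node=2, walk_length=5, seed=0):
    """Generate truncated random walks from every node (DeepWalk-style;
    node2vec with p = q = 1 reduces to the same uniform walk)."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in adj:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break  # dead end: stop the walk early
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy undirected graph as an adjacency list.
adj = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1, 3],
    3: [2],
}
walks = random_walks(adj)
# Each walk is a node sequence; the shallow models feed these sequences
# to a skip-gram model to obtain node embeddings.
```

The differences between the methods largely live in how the walks are biased (node2vec's return/in-out parameters, metapath2vec's metapath constraints) rather than in this skeleton.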

II. Deep models

1. Recurrent GNNs

  • Li, Y.; Tarlow, D.; Brockschmidt, M.; Zemel, R. Gated Graph Sequence Neural Networks. arXiv preprint arXiv:1511.05493, 2015
  • Gilmer, J.; Schoenholz, S.S.; Riley, P.F.; Vinyals, O.; Dahl, G.E. Neural Message Passing for Quantum Chemistry. In Proceedings of the 34th International Conference on Machine Learning (ICML 2017); PMLR: Sydney, NSW, Australia, 2017; Vol. 70, Proceedings of Machine Learning Research, pp. 1263–1272.
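Gilmer et al. frame a wide family of GNNs as message passing: each node aggregates messages from its neighbors, then updates its own state. A minimal sketch of one round of that data flow, with hand-fixed toy message and update functions standing in for the learned networks a real MPNN would use:

```python
def message_passing_step(adj, h, message, update):
    """One synchronous message-passing round over all nodes."""
    new_h = {}
    for v in adj:
        # Collect a message from every neighbor u of v.
        msgs = [message(h[u], h[v]) for u in adj[v]]
        # Aggregate by elementwise sum (a common permutation-invariant choice).
        agg = [sum(col) for col in zip(*msgs)] if msgs else [0.0] * len(h[v])
        new_h[v] = update(h[v], agg)
    return new_h

# Toy path graph 0 - 1 - 2 with 2-dimensional node states.
adj = {0: [1], 1: [0, 2], 2: [1]}
h = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}

message = lambda hu, hv: hu                            # identity message
update = lambda hv, m: [a + b for a, b in zip(hv, m)]  # additive update

h = message_passing_step(adj, h, message, update)
print(h[1])  # [2.0, 2.0]: node 1 adds the states of nodes 0 and 2 to its own
```

Gated graph sequence networks (Li et al.) fit the same skeleton with a GRU as the update function.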

2. Graph autoencoders

  • Wang, D.; Cui, P.; Zhu, W. Structural Deep Network Embedding. In Proceedings of the 22nd International Conference on Knowledge Discovery and Data Mining (SIGKDD 2016); Association for Computing Machinery: New York, NY, USA, 2016; KDD ’16, p. 1225–1234. https://doi.org/10.1145/2939672.2939753.
  • Tu, K.; Cui, P.; Wang, X.; Wang, F.; Zhu, W. Structural Deep Embedding for Hyper-Networks. In Proceedings of the 32nd Conference on Artificial Intelligence (AAAI 2018); AAAI Press: New Orleans, Louisiana, USA, 2018; pp. 426–433.

3. Spatial ConvGNNs

  • Hamilton, W.L.; Ying, Z.; Leskovec, J. Inductive Representation Learning on Large Graphs. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems (NIPS 2017); 2017; pp. 1024–1034
  • Chen, J.; Ma, T.; Xiao, C. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018); OpenReview.net: Vancouver, BC, Canada, 2018
  • Chiang, W.; Liu, X.; Si, S.; Li, Y.; Bengio, S.; Hsieh, C. Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks. In Proceedings of the 25th International Conference on Knowledge Discovery & Data Mining (KDD 2019); ACM: Anchorage, Alaska, USA, 2019; pp. 257–266. https://doi.org/10.1145/3292500.3330925.
  • Li, G.; Xiong, C.; Thabet, A.K.; Ghanem, B. DeeperGCN: All You Need to Train Deeper GCNs. CoRR 2020, abs/2006.07739 [2006.07739].
  • Chen, M.; Wei, Z.; Huang, Z.; Ding, B.; Li, Y. Simple and Deep Graph Convolutional Networks. In Proceedings of the 37th International Conference on Machine Learning (ICML 2020); PMLR: Virtual Event, 2020; Vol. 119, Proceedings of Machine Learning Research, pp. 1725–1735
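Several of these spatial methods scale by not touching full neighborhoods: GraphSAGE (Hamilton et al.) samples a fixed number of neighbors per node so each batch has bounded cost. A sketch of that sampling plus a plain mean aggregator (the paper also proposes LSTM and pooling aggregators; the learned weight matrix and nonlinearity are omitted here):

```python
import random

def sample_neighbors(adj, v, k, rng):
    """Return at most k neighbors of v, sampled without replacement."""
    nbrs = adj[v]
    return list(nbrs) if len(nbrs) <= k else rng.sample(nbrs, k)

def sage_mean_step(adj, h, k=2, seed=0):
    """One GraphSAGE-style layer: concatenate each node's state with the
    mean of k sampled neighbor states."""
    rng = random.Random(seed)
    new_h = {}
    for v in adj:
        sampled = sample_neighbors(adj, v, k, rng)
        pooled = [sum(h[u][d] for u in sampled) / len(sampled)
                  for d in range(len(h[v]))]
        new_h[v] = h[v] + pooled  # list concatenation = [self || pooled]
    return new_h

# Toy star graph: node 0 has three neighbors, each with a scalar state.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
h = {v: [float(v)] for v in adj}
out = sage_mean_step(adj, h, k=2)
# out[0] = [own state, mean of 2 sampled neighbor states]
```

FastGCN and Cluster-GCN attack the same cost problem at the layer and subgraph level, respectively, rather than per-node.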

4. Attentive GNNs

  • Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, Yoshua Bengio: Graph Attention Networks. CoRR abs/1710.10903 (2017)
  • Yang Ye, Shihao Ji: Sparse Graph Attention Networks. IEEE Trans. Knowl. Data Eng. 35(1): 905-916 (2023)
  • Haonan, L.; Huang, S.H.; Ye, T.; Xiuyan, G. Graph Star Net for Generalized Multi-Task Learning. arXiv e-prints 2019, p.arXiv:1906.12330, [arXiv:cs.SI/1906.12330].
  • Kim, D.; Oh, A. How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision. In Proceedings of the 9th International Conference on Learning Representations (ICLR 2021); OpenReview.net: Virtual Event, Austria, 2021
  • Wang, G.; Ying, R.; Huang, J.; Leskovec, J. Improving Graph Attention Networks with Large Margin-based Constraints. CoRR 2019, abs/1910.11945, [1910.11945].
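The common pattern in these attentive GNNs is GAT-style neighborhood attention (Velickovic et al.): pairwise scores over a node's neighborhood are normalized with a softmax, and the node's new state is the attention-weighted sum of neighbor states. A self-contained sketch, with a plain dot-product score standing in for the paper's learned LeakyReLU(a^T [Wh_i || Wh_j]) scoring function:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gat_like_step(adj, h):
    """One attention layer: softmax-normalized scores over each
    neighborhood (self-loop included, as in GAT), then a weighted sum."""
    new_h = {}
    for i in adj:
        nbrs = adj[i] + [i]
        scores = [dot(h[i], h[j]) for j in nbrs]
        m = max(scores)                            # stabilize the softmax
        exp = [math.exp(s - m) for s in scores]
        z = sum(exp)
        alpha = [e / z for e in exp]               # attention coefficients
        new_h[i] = [sum(a * h[j][d] for a, j in zip(alpha, nbrs))
                    for d in range(len(h[i]))]
    return new_h

adj = {0: [1, 2], 1: [0], 2: [0]}
h = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
out = gat_like_step(adj, h)
# Coefficients over each neighborhood sum to 1, so every new state is a
# convex combination of the (self-inclusive) neighbor states.
```

The papers above mostly vary the scoring function and how it is regularized (sparsity, margins, self-supervision), not this aggregation skeleton.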

5. GNN limitations and solutions

  • Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, Martin Grohe: Weisfeiler and Leman Go Neural: Higher-Order Graph Neural Networks. AAAI 2019: 4602-4609.
  • Hualei Yu, Jinliang Yuan, Yirong Yao, Chongjun Wang: Not all edges are peers: Accurate structure-aware graph pooling networks. Neural Networks 156: 58-66 (2022)
  • Juan Shu, Bowei Xi, Yu Li, Fan Wu, Charles A. Kamhoua, Jianzhu Ma: Understanding Dropout for Graph Neural Networks. WWW (Companion Volume) 2022: 1128-1138

6. Transformers with GNNs

  • Shi, Y.; Huang, Z.; Feng, S.; Zhong, H.; Wang, W.; Sun, Y. Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI 2021); Zhou, Z., Ed.; ijcai.org: Virtual Event / Montreal, Canada, 2021; pp. 1548–1554. https://doi.org/10.24963/ijcai.2021/214
  • Nguyen, D.Q.; Nguyen, T.D.; Phung, D. Universal Graph Transformer Self-Attention Networks. In Companion Proceedings of the Web Conference 2022; Association for Computing Machinery: New York, NY, USA, 2022; WWW ’22, pp. 193–196. https://doi.org/10.1145/3487553.3524258

7. Structure-aware Graph Transformers

  • Dexiong Chen, Leslie O'Bray, Karsten M. Borgwardt: Structure-Aware Transformer for Graph Representation Learning. ICML 2022: 3469-3489
  • Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong: Pure Transformers are Powerful Graph Learners. CoRR abs/2207.02505 (2022)
  • Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen, Tie-Yan Liu: Do Transformers Really Perform Badly for Graph Representation? NeurIPS 2021: 28877-28888.
  • Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson:
    Graph Neural Networks with Learnable Structural and Positional Representations. ICLR 2022
  • Md. Shamim Hussain, Mohammed J. Zaki, Dharmashankar Subramanian:
    Global Self-Attention as a Replacement for Graph Convolution. KDD 2022: 655-665
  • Haiteng Zhao, Shuming Ma, Dongdong Zhang, Zhi-Hong Deng, Furu Wei: Are More Layers Beneficial to Graph Transformers? ICLR 2023
  • Liheng Ma, Chen Lin, Derek Lim, Adriana Romero-Soriano, Puneet K. Dokania, Mark Coates, Philip H. S. Torr, Ser-Nam Lim: Graph Inductive Biases in Transformers without Message Passing. ICML 2023: 23321-23337
  • Xiaojun Ma, Qin Chen, Yi Wu, Guojie Song, Liang Wang, Bo Zheng: Rethinking Structural Encodings: Adaptive Graph Transformer for Node Classification Task. WWW 2023: 533-544
  • Qiheng Mao, Zemin Liu, Chenghao Liu, Jianling Sun: HINormer: Representation Learning On Heterogeneous Information Networks with Graph Transformer. WWW 2023: 599-610
  • Grégoire Mialon, Dexiong Chen, Margot Selosse, Julien Mairal: GraphiT: Encoding Graph Structure in Transformers. CoRR abs/2106.05667 (2021)
  • Zhanghao Wu, Paras Jain, Matthew A. Wright, Azalia Mirhoseini, Joseph E. Gonzalez, Ion Stoica: Representing Long-Range Context for Graph Neural Networks with Global Attention. NeurIPS 2021: 13266-13279
  • Ladislav Rampásek, Michael Galkin, Vijay Prakash Dwivedi, Anh Tuan Luu, Guy Wolf, Dominique Beaini: Recipe for a General, Powerful, Scalable Graph Transformer. NeurIPS 2022
  • Hamed Shirzad, Ameya Velingker, Balaji Venkatachalam, Danica J. Sutherland, Ali Kemal Sinop: Exphormer: Sparse Transformers for Graphs. ICML 2023: 31613-31632
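A recurring idea in this section is injecting graph structure into plain self-attention. Graphormer (Ying et al.), for instance, adds a learned scalar, indexed by the shortest-path distance between two nodes, to their attention score before the softmax. A sketch of that distance-indexed additive bias, with a hand-set bias table and raw dot-product scores in place of the learned components:

```python
import math
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path (hop) distance from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def biased_attention(adj, h, bias):
    """Full self-attention scores with an additive bias bias[d] for node
    pairs at hop distance d (assumes a connected graph)."""
    att = {}
    for i in sorted(adj):
        dist = bfs_distances(adj, i)
        raw = {j: dot_ij + bias[dist[j]]
               for j, dot_ij in ((j, sum(a * b for a, b in zip(h[i], h[j])))
                                 for j in sorted(adj))}
        m = max(raw.values())
        exp = {j: math.exp(s - m) for j, s in raw.items()}
        z = sum(exp.values())
        att[i] = {j: e / z for j, e in exp.items()}
    return att

# Path graph 0 - 1 - 2 with identical node features, so any difference in
# attention comes purely from the structural bias.
adj = {0: [1], 1: [0, 2], 2: [1]}
h = {0: [1.0], 1: [1.0], 2: [1.0]}
bias = {0: 0.0, 1: 0.0, 2: -2.0}  # hand-set: penalize distant pairs
att = biased_attention(adj, h, bias)
# Node 0 attends less to node 2 (distance 2) than to node 1 (distance 1).
```

GraphGPS, GRIT, and Exphormer vary where the structure enters (positional encodings, attention sparsity patterns) but share the goal of making full attention graph-aware.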

8. Transformers and sampling strategies

  • Jianan Zhao, Chaozhuo Li, Qianlong Wen, Yiqi Wang, Yuming Liu, Hao Sun, Xing Xie, Yanfang Ye: Gophormer: Ego-Graph Transformer for Node Classification
  • Coarformer: Transformer for large graph via graph coarsening. OpenReview 2021.
  • Zaixi Zhang, Qi Liu, Qingyong Hu, Chee-Kong Lee: Hierarchical Graph Transformer with Adaptive Node Sampling. NeurIPS 2022
  • Jinsong Chen, Kaiyuan Gao, Gaichao Li, Kun He: NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs. ICLR 2023