Let Your Features Tell The Differences: Understanding Graph Convolution By Feature Splitting
Authors: Yilun Zheng, Xiang Li, Sitao Luan, Xiaojiang Peng, Lihui Chen
ICLR 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that after applying GFS to 8 baseline and state-of-the-art (SOTA) GNN architectures across 10 datasets, 90% of the GFS-augmented cases show significant performance boosts. |
| Researcher Affiliation | Academia | ¹Nanyang Technological University, Centre for Info. Sciences and Systems; ²College of Big Data and Internet, Shenzhen University; ³Mila Quebec Artificial Intelligence Institute. Email addresses: EMAIL, 2210413014@email.szu.edu.cn, EMAIL, EMAIL, EMAIL. |
| Pseudocode | No | The paper describes the Graph Feature Selection (GFS) method and Topological Feature Informativeness (TFI) using textual descriptions and mathematical formulas, but it does not include a distinct pseudocode block or algorithm section. |
| Open Source Code | Yes | To facilitate reproducibility and further research, we have made our code publicly available at https://github.com/KTTRCDL/graph-feature-selection. |
| Open Datasets | Yes | The datasets used in our experiments include Children, Computers, Fitness, History, and Photo from Yan et al. (2023), and Amazon-Ratings, Minesweeper, Questions, Roman Empire, and Tolokers from Platonov et al. (2023). |
| Dataset Splits | Yes | For all datasets, we randomly split the data into training, validation, and test sets in a ratio of 50%/25%/25% for 10 runs. |
| Hardware Specification | Yes | We run experiments on a machine with 4 NVIDIA RTX A6000 GPUs, each with 48GB of memory. |
| Software Dependencies | No | All models are implemented using the PyTorch (Ansel et al., 2024) framework and the DGL (Wang et al., 2019) library. Specific version numbers for PyTorch and DGL are not provided. |
| Experiment Setup | Yes | During training, we utilize the Adam optimizer (Kingma & Ba, 2014) to update model parameters. Each model is trained for 1000 epochs. ... The search space for the ratio r is {0.1, 0.2, ..., 0.9} in GFS, and k = 1 for the number of hops of neighbors in TFI. ... Number of layers: {2, 3}; Hidden dimension: {128, 256, 512}; Learning rate: {3×10⁻⁵, 10⁻⁴, 3×10⁻⁴, 10⁻³, 3×10⁻³, 10⁻²}; Weight decay: {0, 10⁻⁵, 10⁻³}; Dropout rate: {0.1, 0.2, 0.4, 0.6, 0.8} |
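For readers attempting to reproduce the setup, the hyperparameter grid quoted in the Experiment Setup row can be written out explicitly. This is an illustrative sketch assembled only from the values reported in the table (the dictionary keys are assumed names, not identifiers from the authors' code; their actual tuning scripts live in the linked repository):

```python
from itertools import product

# Search space as reported in the paper's experiment setup.
# Key names are illustrative; they do not come from the authors' code.
search_space = {
    "num_layers": [2, 3],
    "hidden_dim": [128, 256, 512],
    "learning_rate": [3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2],
    "weight_decay": [0, 1e-5, 1e-3],
    "dropout": [0.1, 0.2, 0.4, 0.6, 0.8],
    "gfs_ratio_r": [round(0.1 * i, 1) for i in range(1, 10)],  # {0.1, ..., 0.9}
}

# Enumerate all configurations of a full grid search over this space.
keys = list(search_space)
configs = [dict(zip(keys, combo)) for combo in product(*search_space.values())]
print(len(configs))  # 2 * 3 * 6 * 3 * 5 * 9 = 4860
```

A full sweep over this grid is 4860 configurations per dataset, which is why papers in this setting typically sample the grid rather than exhaust it; the table above does not state which strategy the authors used.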