Generate or Re-Weight? A Mutual-Guidance Method for Class-Imbalanced Graphs
Authors: Zhongying Zhao, Gen Liu, Qi Meng, Chao Li, Qingtian Zeng
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on five class-imbalanced datasets demonstrate the superiority of the proposed method. The source codes are available at https://github.com/ZZY-GraphMiningLab/GraphMuGu. Section 4 (Experiments): In this section, we conduct experiments to verify the effectiveness of our proposed method. |
| Researcher Affiliation | Academia | College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China (author emails redacted) |
| Pseudocode | Yes | Algorithm 1 The Proposed GraphMuGu Method |
| Open Source Code | Yes | The source codes are available at https://github.com/ZZY-GraphMiningLab/GraphMuGu. |
| Open Datasets | Yes | We evaluate the effectiveness of our proposed GraphMuGu on five widely-used datasets, including three citation datasets (Cora, Citeseer, and Pubmed) and two co-purchase datasets (Amazon-Photo and Amazon-Computers). |
| Dataset Splits | No | The paper mentions 'long-tailed class-imbalance experiments on the citation datasets and set the imbalance ratio ρ = 100' and 'step class-imbalance experiments are implemented on the co-purchase datasets with an imbalance ratio ρ = 20'. It also states 'We refer to the reference [Li et al., 2023] to construct the class-imbalance datasets.' However, it does not explicitly provide specific percentages, sample counts, or a detailed methodology for train/validation/test splits within the paper itself. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments, such as GPU or CPU models. It mentions leveraging GNNs as backbones but provides no hardware details. |
| Software Dependencies | No | The paper mentions using GCN, GAT, and GraphSAGE as backbones, but does not provide specific version numbers for these or any other ancillary software components (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We leverage various GNNs (i.e., GCN [Kipf and Welling, 2017], GAT [Veličković et al., 2018], and GraphSAGE [Hamilton et al., 2017]) as the backbones. All of them are set to a 2-layer pattern. The number of attention heads is set to 8 for GAT. The dimension of the hidden layer is set to 64. Specifically, for our proposed GraphMuGu, α is set to 0.05 to measure the diffusion matrix, K is set to 128 to sparsify the diffusion matrix, and δ is sampled from Beta(1, 100). |
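The long-tailed construction quoted in the Dataset Splits row (imbalance ratio ρ = 100 on the citation datasets) can be sketched as below. This is a common long-tailed recipe, not verified against the paper or [Li et al., 2023]: per-class training counts decay geometrically from `n_max` so that the largest class is ρ times the smallest; the function name and rounding choice are illustrative assumptions.

```python
def longtail_counts(n_max, num_classes, rho):
    """Per-class training-node counts for a long-tailed split.

    Counts decay geometrically from n_max down to roughly n_max / rho,
    so the head-to-tail imbalance ratio equals rho. A hypothetical
    helper, not the authors' actual split code.
    """
    return [max(1, round(n_max * rho ** (-i / (num_classes - 1))))
            for i in range(num_classes)]


# Example: Cora has 7 classes; with rho = 100 the head class keeps
# 100 training nodes and the tail class keeps 1.
print(longtail_counts(100, 7, 100))
```

A step imbalance (ρ = 20 on the co-purchase datasets, per the paper) would instead give half the classes `n_max` nodes and the rest `n_max / ρ`.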
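The hyperparameters quoted in the Experiment Setup row (α = 0.05 for the diffusion matrix, top-K = 128 sparsification, δ ~ Beta(1, 100)) can be sketched as follows. The dense NumPy formulation, function names, and the personalized-PageRank form S = α(I − (1 − α)Â)⁻¹ are assumptions following common graph-diffusion practice, not the authors' code:

```python
import numpy as np

def ppr_diffusion(adj, alpha=0.05):
    # Personalized-PageRank-style diffusion: S = alpha * (I - (1 - alpha) * A_hat)^-1,
    # with A_hat the symmetrically normalized adjacency (self-loops added).
    n = adj.shape[0]
    a_tilde = adj + np.eye(n)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_tilde.sum(axis=1)))
    a_hat = d_inv_sqrt @ a_tilde @ d_inv_sqrt
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * a_hat)

def topk_sparsify(s, k):
    # Keep only the k largest entries in each row; zero out the rest.
    out = np.zeros_like(s)
    top_idx = np.argsort(s, axis=1)[:, -k:]
    rows = np.arange(s.shape[0])[:, None]
    out[rows, top_idx] = s[rows, top_idx]
    return out

rng = np.random.default_rng(0)
delta = rng.beta(1, 100)  # mixing coefficient delta ~ Beta(1, 100), heavily skewed toward 0
```

In the paper's setting K = 128; a small K is used in the usage check below only because the toy graph has 4 nodes.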