OS-GCL: A One-Shot Learner in Graph Contrastive Learning
Authors: Cheng Ji, Chenrui He, Qian Li, Qingyun Sun, Xingcheng Fu, Jianxin Li
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper includes a dedicated section titled "4 Experiments" with subsections including "4.1 Experimental Setup", "4.2 Main Results", "4.3 Ablation Study", "4.4 Hyperparameter Sensitivity", and "4.5 Training Time Analysis". It presents tables (e.g., Table 1, Table 2) with test accuracy results, standard deviations, and comparisons against various baselines on multiple datasets, which are clear indicators of experimental research. |
| Researcher Affiliation | Academia | The affiliations listed for all authors are universities: "Beihang University", "Beijing University of Posts and Telecommunications", and "Guangxi Normal University". The email domains like `@buaa.edu.cn`, `@bupt.edu.cn`, and `@gxnu.edu.cn` also confirm these are academic institutions. There are no indications of industry affiliations. |
| Pseudocode | No | The paper describes the methodology, including probability distribution estimation, probabilistic message passing, and the ProbNCE loss, through textual explanations and mathematical formulations, along with a framework diagram (Figure 4). However, it does not contain any structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | In Section 4.1, Experimental Setup, a footnote explicitly states: "The code and appendix are available at https://github.com/RingBDStack/OS-GCL." |
| Open Datasets | Yes | In Section 4.1, Datasets, the paper states: "We evaluate OS-GCL on seven node classification benchmarks: (1) Citation networks [Kipf and Welling, 2017]: Cora, CiteSeer, and PubMed. (2) Co-purchase networks [Shchur et al., 2018]: Amazon Photo. (3) Co-author networks [Shchur et al., 2018]: Coauthor CS and Coauthor Physics. (4) Large dataset [Hu et al., 2020]: ogbn-arXiv." These are well-known public datasets with proper citations. |
| Dataset Splits | Yes | The paper refers to using "seven node classification benchmarks" including "Cora, CiteSeer, and PubMed", "Amazon Photo", "Coauthor CS and Coauthor Physics", and "ogbn-arXiv". It also mentions evaluating performance on "validation and test sets" for ogbn-arXiv. While specific percentages are not given, the use of these standard benchmark datasets implies the use of their predefined or commonly accepted train/validation/test splits, which are necessary for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments. It only discusses the experimental setup and results without mentioning the underlying computational resources. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA x.x) needed to replicate the experiment. While the code is open-source, this information is not present in the paper itself. |
| Experiment Setup | Yes | Section 4.4, "Hyperparameter Sensitivity", explicitly discusses hyperparameters like the "Feature-Topology Trade-Off Weight α" (Eq.(6)) and the "Number of Orders k", showing their impact on performance in Figure 5 and Figure 6. This indicates that specific experimental setup details, including hyperparameter values, are provided and analyzed within the paper. |
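As the Pseudocode row notes, the paper gives no algorithm block for its ProbNCE loss. For orientation only, the generic InfoNCE objective that graph contrastive methods of this kind build on can be sketched as below; the function name and shapes are illustrative, and the paper's probabilistic variant differs in how representations and similarities are formed (it operates on probability distributions rather than point embeddings).

```python
# Sketch of a standard InfoNCE-style contrastive loss over two augmented
# views of node embeddings. This is NOT the paper's ProbNCE; it only
# illustrates the generic contrastive objective that family extends.
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss between two views' node embeddings, each (n, d)."""
    # L2-normalize rows so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # (n, n) similarity logits across views
    # Row-wise log-softmax; the positive pair for node i is (i, i)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Aligned views should score a lower loss than independent random views
loss_aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))
loss_random = info_nce(z, rng.normal(size=(8, 16)))
```

Since the row-wise softmax is at most 1, the loss is always non-negative, and nearly identical views drive it toward zero while unrelated views keep it high.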