Multi-Label Node Classification with Label Influence Propagation

Authors: Yifei Sun, Zemin Liu, Bryan Hooi, Yang Yang, Rizal Fathony, Jia Chen, Bingsheng He

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, our framework is evaluated on comprehensive benchmark datasets, consistently outperforming SOTA methods across various settings, demonstrating its effectiveness on MLNC tasks."
Researcher Affiliation | Collaboration | Zhejiang University; National University of Singapore; Capital One; Grab Taxi Holdings Pte. Ltd.
Pseudocode | No | The paper describes its methods using mathematical equations and textual explanations, but it does not contain a clearly labeled "Pseudocode" or "Algorithm" block.
Open Source Code | Yes | "Our code is available at https://github.com/Xtra-Computing/LIP_MLNC."
Open Datasets | Yes | "To comprehensively validate our framework, we conduct experiments on 2 classical MLNC datasets (DBLP (Akujuobi et al., 2019), BlogCat (Shi et al., 2020a)), 1 large-scale OGB dataset (Ogbn-proteins (Hu et al., 2020), OGB-p in short), and 3 new biological datasets (PCG, HumLoc, EukLoc (Zhao et al., 2023)) from different domains."
Dataset Splits | Yes | "To fully evaluate the effectiveness against baselines, we adopt 2 split settings: node split and label split. Refer to App. E.1 to see the details and differences between them. We also evaluate under different training ratio (App. E.2)." The response also quotes the results discussion: "From Tab. 1, we can draw several conclusions. First and foremost, LIP is the most effective one most of the time, which verifies the effectiveness of our approach. Our method outperforms other baselines by 3.06% on AUC and 2.54% on AUC on average."
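The paper defers the exact definitions of the two protocols to its App. E.1, which this report does not reproduce. Under one common interpretation (an assumption, not the authors' code): a node split partitions nodes into train/val/test while every label can appear in training, whereas a label split additionally holds out a subset of label classes from the training labels. A minimal sketch of that interpretation:

```python
import random

def node_split(node_ids, train_ratio=0.6, val_ratio=0.2, seed=0):
    """Node split: randomly partition the nodes into train/val/test.
    All labels may be observed during training."""
    rng = random.Random(seed)
    ids = list(node_ids)
    rng.shuffle(ids)
    n_train = int(len(ids) * train_ratio)
    n_val = int(len(ids) * val_ratio)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

def label_split(label_ids, train_ratio=0.6, seed=0):
    """Label split (assumed protocol): partition the *label set*, so the
    held-out labels are never supervised during training."""
    rng = random.Random(seed)
    labels = list(label_ids)
    rng.shuffle(labels)
    n_train = int(len(labels) * train_ratio)
    return labels[:n_train], labels[n_train:]
```

The ratios and the `node_split`/`label_split` names are illustrative; the paper's App. E.1 is the authoritative definition.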
Hardware Specification | Yes | "Experiments are conducted using 2 NVIDIA 3090 GPUs."
Software Dependencies | No | The paper mentions software such as the Adam optimizer and various GNN backbones (GCN, GAT, GraphSAGE, H2GCN) but does not provide version numbers for any libraries or frameworks (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | "For the training process, we use the Adam optimizer with early stopping at 100 epochs to train LIP. Moreover, other hyper-parameters are decided using a random search strategy, and the ranges of hyper-parameters are listed in Tab. 5. When comparing with other baselines, we set the same number of layers for the backbone if the same backbone is used." [...] Table 5 (hyperparameter settings for all datasets): Hidden size {32, 64, 128, 256, 512}; Learning rate [1e-3, 5e-1]; Weight decay {1e-2, 5e-3, 1e-4, 5e-4, 1e-5, 5e-6, 1e-7, 0}; Dropout rate [0, 0.8]; Optimizer Adam; Epochs 1000; Early-stopping patience 100.
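The random-search strategy described above can be sketched as follows. The ranges are taken from the quoted Table 5; the sampling helper itself (`sample_config`, treating 2-tuples as continuous intervals and lists as discrete choices) is an illustrative assumption, not the authors' implementation:

```python
import random

# Ranges quoted from Table 5 of the paper.
SEARCH_SPACE = {
    "hidden_size": [32, 64, 128, 256, 512],              # discrete choices
    "learning_rate": (1e-3, 5e-1),                       # continuous interval
    "weight_decay": [1e-2, 5e-3, 1e-4, 5e-4, 1e-5, 5e-6, 1e-7, 0],
    "dropout_rate": (0.0, 0.8),                          # continuous interval
}

def sample_config(space, seed=None):
    """Draw one random hyperparameter configuration: lists are sampled
    uniformly as discrete choices, 2-tuples uniformly as intervals."""
    rng = random.Random(seed)
    cfg = {}
    for name, domain in space.items():
        if isinstance(domain, tuple):
            lo, hi = domain
            cfg[name] = rng.uniform(lo, hi)
        else:
            cfg[name] = rng.choice(domain)
    # Fixed settings from Table 5.
    cfg.update(optimizer="Adam", epochs=1000, patience=100)
    return cfg
```

In practice one would draw many such configurations, train each with early stopping (patience 100, as quoted), and keep the configuration with the best validation score.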