Discrete Curvature Graph Information Bottleneck

Authors: Xingcheng Fu, Jian Wang, Yisen Gao, Qingyun Sun, Haonan Yuan, Jianxin Li, Xianxian Li

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on various datasets demonstrate the superior effectiveness and interpretability of CurvGIB. We evaluate CurvGIB on two tasks, node classification and graph denoising, to verify whether CurvGIB can retain critical structures conducive to message passing and improve the efficiency and robustness of graph representation learning. Node Classification: we perform 10-fold cross-validation and report the average accuracy, average F1 score, and the standard deviation across the 10 folds in Table 2.
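The 10-fold protocol quoted above can be sketched as follows. This is a minimal illustration with hypothetical helpers (`kfold_indices`, `summarize`), not the authors' code; the paper does not specify its exact splitting routine.

```python
import random
from statistics import mean, stdev

def kfold_indices(n, k=10, seed=0):
    """Split indices 0..n-1 into k disjoint folds after shuffling
    (a minimal sketch of the cross-validation split, assuming a
    uniform random partition)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def summarize(fold_scores):
    """Aggregate per-fold metrics the way Table 2 reports them:
    mean and standard deviation across the folds."""
    return mean(fold_scores), stdev(fold_scores)
```

Each fold would serve once as the test split while the remaining nine train the model; `summarize` is then applied to the ten per-fold accuracy (or F1) values.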
Researcher Affiliation | Academia | 1. Guangxi Key Lab of Multi-source Information Mining & Security, Guangxi Normal University, Guilin, China; 2. Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, China; 3. Institute of Artificial Intelligence, Beihang University, Beijing, China; 4. School of Computer Science and Engineering, Beihang University, Beijing, China
Pseudocode | Yes | Algorithm 1: The overall process of CurvGIB
Open Source Code | Yes | We evaluate CurvGIB (footnote 1) on two tasks, node classification and graph denoising, to verify whether CurvGIB can retain critical structures conducive to message passing and improve the efficiency and robustness of graph representation learning. Footnote 1: https://github.com/RingBDStack/CurvGIB
Open Datasets | Yes | We conduct experiments on several real-world datasets. (1) Citation networks: Cora and Citeseer (Kipf and Welling 2017) are citation networks of machine-learning academic papers, and PubMed is a citation network of biomedical academic papers. (2) Co-occurrence networks (Shchur et al. 2019): Coauthor CS and Coauthor Physics are co-authorship graphs based on the Microsoft Academic Graph from the KDD Cup 2016 challenge; Amazon Computers and Amazon Photos (Shchur et al. 2019) are segments of the Amazon co-purchase graph.
Dataset Splits | Yes | Node Classification: we perform 10-fold cross-validation and report the average accuracy, average F1 score, and the standard deviation across the 10 folds in Table 2. Graph Denoising: to evaluate the robustness of our framework, we perturb Cora by adding or deleting edges. Specifically, for each graph in the dataset, we randomly remove existing edges, or add edges where none exist, at rates of 10%, 20%, 30%, 40%, and 50%.
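The edge-perturbation setup described above can be sketched as follows. `perturb_edges` is a hypothetical helper, not the authors' implementation; it assumes the stated ratio applies symmetrically to both removals and additions, which the excerpt leaves ambiguous.

```python
import random

def perturb_edges(edges, num_nodes, ratio, seed=0):
    """Randomly remove a fraction `ratio` of existing (undirected) edges
    and add the same number of edges that are absent from the graph.
    A sketch of the graph-denoising perturbation, not the paper's code."""
    rng = random.Random(seed)
    edge_set = {tuple(sorted(e)) for e in edges}
    k = int(len(edge_set) * ratio)

    # Remove k randomly chosen existing edges.
    removed = set(rng.sample(sorted(edge_set), k))
    kept = edge_set - removed

    # Add k random edges that were absent from the original graph.
    added = set()
    while len(added) < k:
        u, v = rng.randrange(num_nodes), rng.randrange(num_nodes)
        if u != v and tuple(sorted((u, v))) not in edge_set:
            added.add(tuple(sorted((u, v))))
    return sorted(kept | added)
```

The total edge count is preserved, so robustness differences come from structural noise rather than graph density.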
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or the computing environment used to run the experiments.
Software Dependencies | No | The paper mentions various baselines (e.g., GCN, GAT, GIN) but does not provide version numbers for any software libraries, frameworks, or programming languages used in the implementation.
Experiment Setup | Yes | Parameter Setting: we set both the information bottleneck size K and the embedding dimension of the baseline methods to 64. For CurvGIB, we perform a depth search over l ∈ {2, 4, 6, 8} and a hyperparameter search over β ∈ {10^-1, 10^-2, 10^-3, 10^-4, 10^-5, 10^-6} for each dataset. Training stability: ... with a learning rate of 0.001.
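The search described above amounts to a 4 × 6 grid per dataset. A minimal sketch, where `validation_score` is a hypothetical stand-in for training CurvGIB with one configuration (K = 64 and learning rate 0.001 fixed, per the excerpt):

```python
from itertools import product

# Grid from the paper: depth l in {2, 4, 6, 8}, IB trade-off
# beta in {1e-1, ..., 1e-6}; K = 64 and lr = 1e-3 are held fixed.
DEPTHS = [2, 4, 6, 8]
BETAS = [10.0 ** -i for i in range(1, 7)]

def validation_score(depth, beta):
    """Hypothetical stand-in: train CurvGIB with this (depth, beta)
    and return its validation accuracy."""
    raise NotImplementedError

def grid_search(score_fn=validation_score):
    """Return the best (depth, beta) over all 4 x 6 = 24 configurations."""
    return max(product(DEPTHS, BETAS), key=lambda cfg: score_fn(*cfg))
```

In practice each configuration would be scored by the 10-fold protocol above, and the best pair selected per dataset.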