Towards Precise Prediction Uncertainty in GNNs: Refining GNNs with Topology-grouping Strategy
Authors: Hyunjin Seo, Kyusung Seo, Joonhyung Park, Eunho Yang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of our framework across diverse datasets on different GNN architectures, achieving up to 13.79% error reduction compared to uncalibrated GNN predictions. |
| Researcher Affiliation | Collaboration | Hyunjin Seo1,3, Kyusung Seo1, Joonhyung Park1, Eunho Yang1,2 — 1Korea Advanced Institute of Science and Technology (KAIST), 2AITRICS, 3Polymerize |
| Pseudocode | No | The paper describes methods using mathematical formulations (e.g., equations 7-10) and prose but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a direct link to a code repository. |
| Open Datasets | Yes | The performance of our SIMI-MAILBOX is evaluated across eight small- and medium-scale benchmark graphs adopted in (Hsu et al. 2022): Cora, Citeseer, Pubmed (Sen et al. 2008), Cora Full (Bojchevski and Günnemann 2017), Coauthor CS, Computers, and Photo (Shchur et al. 2018). To further demonstrate the versatility, we extended our experiments to large-scale graphs, Arxiv (Hu et al. 2020) and Reddit (Zeng et al. 2019). |
| Dataset Splits | No | The paper mentions that the "validation set is used for training to enhance generalization to unseen data" and that experiments follow "the experimental protocols of GATS (Hsu et al. 2022)" with "Details of the experiment configurations are provided in the Appendix." However, the main text itself does not explicitly provide the specific train/validation/test splits (e.g., percentages or counts) for the datasets used in this paper. |
| Hardware Specification | No | The paper does not contain any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions GNN architectures like GCN, GAT, and GraphSAGE, but does not specify any software libraries or frameworks with their version numbers that were used in the implementation. |
| Experiment Setup | No | The paper states, "Details of the experiment configurations are provided in the Appendix." However, no specific hyperparameters (e.g., learning rate, batch size, number of epochs) or other system-level training settings are provided in the main text. |