Training-Free Message Passing for Learning on Hypergraphs
Authors: Bohan Tang, Zexi Liu, Keyue Jiang, Siheng Chen, Xiaowen Dong
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments based on seven real-world hypergraph benchmarks in node classification and hyperlink prediction show that, compared to state-of-the-art HNNs, TF-HNN exhibits both competitive performance and superior training efficiency. Specifically, on the large-scale benchmark, Trivago, TF-HNN outperforms the node classification accuracy of the best baseline by 10% with just 1% of the training time of that baseline. |
| Researcher Affiliation | Academia | ¹University of Oxford, ²Shanghai Jiao Tong University, ³University College London, ⁴Shanghai AI Laboratory |
| Pseudocode | No | The paper describes methods using mathematical formulations and textual descriptions, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor structured code-like procedures. |
| Open Source Code | Yes | See our code here. More details are in Appendix L and S. |
| Open Datasets | Yes | We conduct experiments on seven real-world hypergraphs: Cora-CA, DBLP-CA, Citeseer, Congress, House, Senate, which are from (Chien et al., 2022), and Trivago from (Kim et al., 2023). |
| Dataset Splits | Yes | For node classification, we follow previous works (Wang et al., 2023a; Duta et al., 2023) to use a 50%/25%/25% train/validation/test data split and adapt the baseline classification accuracy from them. [...] We use a 50%/25%/25% train/validation/test data split and ensure each split contains five times as many fake hyperedges as real hyperedges. |
| Hardware Specification | Yes | Experiments were on RTX 3090 GPUs by PyTorch. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number. Other software dependencies with specific version numbers are not provided. |
| Experiment Setup | Yes | For node classification, we follow previous works (Wang et al., 2023a; Duta et al., 2023) to use a 50%/25%/25% train/validation/test data split and adapt the baseline classification accuracy from them. Additionally, similar to these works, we implement the classifier based on MLP for our TF-HNN and report the results from ten runs. [...] More details are in Appendix L and S. Appendix L details hyperparameter settings including 'The number of layers of TF-MP-Module', 'The α of TF-MP-Module', 'The number of layers of the node classifier', 'The hidden dimension of the node classifier', 'The learning rate for node classifier', and 'The dropout rate for node classifier' with specific search spaces. |
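The split protocol quoted above (a 50%/25%/25% train/validation/test partition, plus five fake hyperedges per real hyperedge for hyperlink prediction) can be sketched as follows. This is a minimal illustration, not the paper's code: the function names are hypothetical, and the negative-sampling scheme shown (random node subsets of matching size) is a common convention that may differ from the paper's exact corruption procedure.

```python
import random

def split_indices(n, seed=0):
    """Shuffle n item indices and cut them into a 50%/25%/25%
    train/validation/test split, as described in the paper."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    a, b = int(0.5 * n), int(0.75 * n)
    return idx[:a], idx[a:b], idx[b:]

def fake_hyperedges(real_edges, num_nodes, ratio=5, seed=0):
    """Sample `ratio` fake hyperedges per real one by drawing random
    node subsets of matching cardinality (hypothetical negative
    sampler; the paper only states the 5:1 fake-to-real ratio)."""
    rng = random.Random(seed)
    fakes = []
    for e in real_edges:
        for _ in range(ratio):
            fakes.append(frozenset(rng.sample(range(num_nodes), len(e))))
    return fakes

# Example: split 100 hyperedges and attach 5x negatives to each split.
train, val, test = split_indices(100)
negatives = fake_hyperedges([{0, 1, 2}, {3, 4}], num_nodes=10)
```

In practice, each split would pair its real hyperedges with its own negatives, so the 5:1 fake-to-real ratio holds within every split rather than only globally.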