Let There be Direction in Hypergraph Neural Networks
Authors: Stefano Fiorini, Stefano Coniglio, Michele Ciavotta, Alessio Del Bue
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive computational experiments against state-of-the-art methods on real-world and synthetically-generated datasets demonstrate the efficacy of our proposed HNN. |
| Researcher Affiliation | Academia | Stefano Fiorini, Pattern Analysis & Computer Vision (PAVIS), Italian Institute of Technology (IIT); Stefano Coniglio, Department of Economics, University of Bergamo; Michele Ciavotta, Department of Informatics, Systems and Communication, University of Milano-Bicocca; Alessio Del Bue, Pattern Analysis & Computer Vision (PAVIS), Italian Institute of Technology (IIT) |
| Pseudocode | No | The paper describes the methodology using mathematical formulations and descriptive text, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code written for this research work is available at https://github.com/Stefa1994/GeDi-HNN and freely distributed under the Apache 2.0 license. |
| Open Datasets | Yes | The Texas, Wisconsin, Cornell, WikiCS, and Telegram datasets were obtained from the PyTorch Geometric Signed Directed (He et al., 2022b) library (distributed under the MIT license). The Cora, Citeseer, and PubMed datasets are available at https://linqs.org/datasets/. The email-Eu and email-Enron datasets are available at https://www.cs.cornell.edu/~arb/data/. |
| Dataset Splits | Yes | We adopt the split proposed by Zhang et al. (2021b) for Telegram, Texas, Wisconsin, and Cornell and the split of Chien et al. (2021) for the other ones. All the experiments are conducted using 10-fold cross-validation. ... For these datasets, we implement a 50%/25%/25% split for training, validation, and testing, respectively. The experiments are conducted using 10-fold cross-validation. |
| Hardware Specification | Yes | The experiments were conducted on 2 different machines: 1. An Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz with 380 GB RAM, equipped with an NVIDIA Ampere A100 40GB. 2. A 12th Gen Intel(R) Core(TM) i9-12900KF CPU @ 3.20GHz with 64 GB RAM, equipped with an NVIDIA RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions using Python and PyTorch implicitly through the reference to the 'PyTorch Geometric Signed Directed (He et al., 2022b) library', but it does not specify any version numbers for these or other software components. |
| Experiment Setup | Yes | We trained every learning model considered in this paper for up to 500 epochs. We adopted a learning rate of 5e-3 and employed the optimization algorithm Adam with weight decays equal to 5e-4 (in order to avoid overfitting). For all the models that adopt the classification layer, we set it to 2. ... For GeDi-HNN and GeDi-HNN w/o directionality, the number of convolutional layers is chosen in {1, 2, 3}, the number of filters in {64, 128, 256, 512}, and the classifier hidden dimension in {64, 128, 256}. |
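The dataset-split protocol quoted above (a 50%/25%/25% train/validation/test split, repeated over 10 folds) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `make_splits` and the use of independent random permutations per repetition are assumptions; the paper states only the percentages and the 10-fold repetition.

```python
import numpy as np

def make_splits(n_nodes, n_repeats=10, seed=0):
    """Sketch of a 50%/25%/25% train/val/test protocol repeated
    n_repeats times (hypothetical reconstruction of the paper's setup)."""
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_repeats):
        perm = rng.permutation(n_nodes)
        n_train = n_nodes // 2   # 50% training
        n_val = n_nodes // 4     # 25% validation; remainder is test
        splits.append((perm[:n_train],
                       perm[n_train:n_train + n_val],
                       perm[n_train + n_val:]))
    return splits

splits = make_splits(1000)
train_idx, val_idx, test_idx = splits[0]
```

Each repetition draws a fresh permutation, so every node appears in exactly one of the three index sets per fold.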
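The hyperparameter search described in the Experiment Setup row amounts to a small grid. Below is a hedged sketch of that grid; the dictionary layout and variable names are illustrative assumptions, while the numeric values (500 epochs, Adam with learning rate 5e-3 and weight decay 5e-4, and the three searched ranges) come directly from the quoted text.

```python
from itertools import product

# Fixed training settings stated in the paper.
fixed = {"epochs": 500, "optimizer": "Adam", "lr": 5e-3, "weight_decay": 5e-4}

# Searched settings for GeDi-HNN: convolutional layers,
# number of filters, and classifier hidden dimension.
grid = list(product([1, 2, 3], [64, 128, 256, 512], [64, 128, 256]))
# 3 * 4 * 3 = 36 candidate configurations per dataset.
```

Enumerating the grid this way makes the search cost explicit: 36 configurations, each trained for up to 500 epochs.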