SPHINX: Structural Prediction using Hypergraph Inference Network

Authors: Iulia Duta, Pietro Lio

ICML 2025

Reproducibility assessment (variable: result, followed by the supporting LLM response):
Research Type: Experimental. "Through extensive ablation studies and experiments conducted on four challenging datasets, we demonstrate that our model is capable of inferring suitable latent hypergraphs in both transductive and inductive tasks. Moreover, the inferred latent hypergraphs are interpretable and contribute to enhancing the final performance, outperforming existing methods for hypergraph prediction."
Researcher Affiliation: Academia. "Department of Computer Science, University of Cambridge. Correspondence to: Iulia Duta <EMAIL>."
Pseudocode: No. The paper describes the model architecture and its components (hypergraph predictor, discrete constrained sampling, hypergraph processing) in detail in Section 3, but it presents no explicitly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code: No. "We will soon release the full code associated with the paper."
Open Datasets: Yes. "To evaluate our model on an inductive real-world dataset, where we need to predict a distinct hypergraph for each example, we use the NBA SportVU dataset... ModelNet40 (Wu et al., 2014) contains 12,311 objects of 40 types, while NTU (Chen et al., 2003) contains 2,012 objects with 67 types."
Dataset Splits: Yes. "The dataset contains 1000 trajectories for training, 1000 for validation and 1000 for test. In all experiments we use the split from (Xu et al., 2022). For all transductive experiments we adopt the split from (Zhou et al., 2023)."
Hardware Specification: No. The paper mentions training on 'a single GPU' in Appendix E.2 but does not specify the GPU model, CPU, or memory.
Software Dependencies: No. The paper mentions the Adam optimizer and refers to the code of other methods (AIMLE, IMLE, SIMPLE), but provides no version numbers for the software libraries, frameworks, or programming languages used in its own implementation.
Experiment Setup: Yes. "We use the Adam optimizer for 1000 epochs, trained on a single GPU. For the NBA dataset we train for 300 epochs, using Adam with a learning rate of 0.001, decreased by a factor of 10 when reaching a plateau. We perform Bayesian hyperparameter tuning, setting the base learning rate to 0.001, multiplied by a factor d in {0.1, 1.0, 10.0} when learning the parameters of the hypergraph predictor, with a batch size of 128 and self-loops added to the structure. The hidden dimension is picked from {32, 64, 128, 256}, the number of AllDeepSets layers from {1, 2}, the number of layers of the MLPs from {2, 3, 4}, the number of hyperedges from {1, 2, 3, 5, 7} (except in the synthetic setup, where it is set to 1 and 2 respectively), and the hyperedge dimension from {3, 4, 5, 6} (except in the synthetic setup, where it is set to 3). The nonlinearity used for the similarity score is sigmoid, sparsemax, or softmax, and the k-subset sampling algorithm is AIMLE or SIMPLE, with sum-of-Gamma or Gumbel noise respectively. For the Particle Simulation dataset we train for 1000 epochs."
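To make the reported search space concrete, the hyperparameter grid above can be sketched as a plain configuration dictionary. This is a hypothetical reconstruction for illustration only: the paper does not publish a config file, and all key names here are our own, not the authors'.

```python
import random

# Hypothetical encoding of the SPHINX hyperparameter search space described
# in the quoted setup. List-valued entries are tuning axes; scalar entries
# are fixed. Key names are illustrative, not taken from the paper's code.
search_space = {
    "base_lr": 1e-3,
    "predictor_lr_factor": [0.1, 1.0, 10.0],  # multiplier d for hypergraph-predictor parameters
    "batch_size": 128,
    "hidden_dim": [32, 64, 128, 256],
    "num_alldeepsets_layers": [1, 2],
    "num_mlp_layers": [2, 3, 4],
    "num_hyperedges": [1, 2, 3, 5, 7],     # synthetic setup fixes this to 1 or 2
    "hyperedge_dim": [3, 4, 5, 6],         # synthetic setup fixes this to 3
    "similarity_nonlinearity": ["sigmoid", "sparsemax", "softmax"],
    "k_subset_sampler": ["AIMLE", "SIMPLE"],
    "sampler_noise": ["sum_of_gamma", "gumbel"],
}

def sample_config(space, rng):
    """Draw one configuration by uniform choice over each list-valued axis.

    The paper reports Bayesian tuning; uniform sampling here is only a
    stand-in to show the shape of a drawn configuration.
    """
    return {k: (rng.choice(v) if isinstance(v, list) else v)
            for k, v in space.items()}

cfg = sample_config(search_space, random.Random(0))
```

In practice each drawn `cfg` would be handed to a training run, with the effective predictor learning rate computed as `cfg["base_lr"] * cfg["predictor_lr_factor"]`.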