HYGENE: A Diffusion-Based Hypergraph Generation Method

Authors: Dorian Gailhard, Enzo Tartaglione, Lirida Naviner, Jhony H. Giraldo

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we detail our experimental setup, covering datasets and evaluation metrics. Then, we compare our approach against the following baselines: HyperPA (Do et al. 2020), a Variational Autoencoder (VAE) (Kingma and Welling 2013), a Generative Adversarial Network (GAN) (Goodfellow et al. 2020), and a standard 2D diffusion model (Ho, Jain, and Abbeel 2020) trained on incidence matrix images, where hyperedge membership is represented by white pixels and absence by black pixels. Finally, we ablate on the spectrum-preserving coarsening and the upper bound for the number of hyperedges defined in Proposition 6.
Researcher Affiliation | Academia | Dorian Gailhard, Enzo Tartaglione, Lirida Naviner, Jhony H. Giraldo. LTCI, Télécom Paris, Institut Polytechnique de Paris, France. {name.surname}@telecom-paris.fr
Pseudocode | Yes | The complete coarsening sampling procedure incorporating this approach is detailed in Algorithm 1 of Appendix E.
Open Source Code | Yes | Code: https://github.com/DorianGailhard/HYGENE
Open Datasets | Yes | We evaluate our method on four synthetic hypergraph datasets: Erdős–Rényi (ER) (Erdős and Rényi 1960), Stochastic Block Model (SBM) (Kim, Bandeira, and Goemans 2018), Ego (Comrie and Kleinberg 2021), and Tree (Nieminen and Peltola 1999). Furthermore, we also test HYGENE on topologies of low-poly feature-less versions of three classes of ModelNet40 (Wu et al. 2015) converted to hypergraphs: plant, piano, and bookshelf.
Dataset Splits | Yes | Each dataset is split into 128 training, 32 validation, and 40 test hypergraphs.
Hardware Specification | No | The paper discusses training deep learning models and conducting experiments but does not provide specific details on the hardware used (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions using the EDM denoising diffusion framework and PPGN as the model architecture, but it does not specify any software versions for libraries (e.g., PyTorch, TensorFlow, Python) or other dependencies.
Experiment Setup | No | The paper describes the overall experimental setup, including the methods (coarsening, expansion, refinement), datasets, and evaluation metrics, but it does not provide specific hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings) or detailed system-level training configurations in the main text.
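The 2D diffusion baseline quoted above represents a hypergraph as a binary image of its incidence matrix, with white pixels for hyperedge membership and black pixels for absence. A minimal sketch of that mapping (the function name and the toy hypergraph are illustrative, not from the paper):

```python
import numpy as np

def incidence_to_image(incidence):
    """Map a node-by-hyperedge incidence matrix to a binary image:
    membership (1) becomes a white pixel (255), absence (0) a black
    pixel (0), as in the 2D diffusion baseline's input representation."""
    return np.asarray(incidence, dtype=np.uint8) * 255

# Toy hypergraph: 4 nodes, 2 hyperedges ({0, 1, 2} and {2, 3}).
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]])
img = incidence_to_image(H)
```

The resulting array can be fed to any image diffusion model; the inverse mapping (thresholding generated pixels back to 0/1) recovers a candidate incidence matrix.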
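For intuition about the synthetic Erdős–Rényi dataset, here is one common ER-style sampler for hypergraphs: each of a fixed number of hyperedges includes each node independently with probability p. This is a hedged sketch of the general model family, not the paper's exact generator (whose parameters are not quoted here):

```python
import random

def sample_er_hypergraph(num_nodes, num_edges, p, seed=0):
    """Sample an ER-style hypergraph: each hyperedge contains each
    node independently with probability p; empty draws are resampled."""
    rng = random.Random(seed)
    hyperedges = []
    while len(hyperedges) < num_edges:
        edge = frozenset(v for v in range(num_nodes) if rng.random() < p)
        if edge:  # discard the (rare) empty hyperedge
            hyperedges.append(edge)
    return hyperedges

edges = sample_er_hypergraph(num_nodes=10, num_edges=5, p=0.3)
```

Fixing the seed makes the synthetic dataset reproducible, which matters for the split sizes reported in the table.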
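The 128/32/40 train/validation/test split from the table can be reproduced deterministically with a seeded shuffle; the helper below is an illustrative sketch, not the authors' code:

```python
import random

def split_dataset(hypergraphs, n_train=128, n_val=32, n_test=40, seed=0):
    """Shuffle and partition a dataset into the 128/32/40
    train/validation/test split reported in the paper."""
    assert len(hypergraphs) >= n_train + n_val + n_test
    items = list(hypergraphs)
    random.Random(seed).shuffle(items)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

train, val, test = split_dataset(list(range(200)))
```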