GLAD: Improving Latent Graph Generative Modeling with Simple Quantization

Authors: Van Khoa Nguyen, Yoann Boget, Frantzeska Lavda, Alexandros Kalousis

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We present experiments on a series of graph benchmark datasets that demonstrate GLAD, the first equivariant latent graph generative method, achieves competitive performance with state-of-the-art baselines."
Researcher Affiliation | Academia | Geneva School for Business Administration (HES-SO), University of Geneva, 1214 Geneva, Switzerland; EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Graph Discrete Latent Diffusion Bridge
Open Source Code | Yes | Code: https://github.com/v18nguye/GLAD
Open Datasets | Yes | "We measure GLAD's ability to capture the underlying structures of generic graphs on three datasets: (a) ego-small (Sen et al. 2008), (b) community-small, and (c) enzymes (Schomburg et al. 2004). We conduct experiments on two standard datasets: QM9 (Ramakrishnan et al. 2014) and ZINC250k (Irwin et al. 2012)."
Dataset Splits | Yes | "We use the same train and test splits as the baselines for a fair comparison."
Hardware Specification | No | "The computations were performed at the University of Geneva on the Baobab and Yggdrasil HPC clusters."
Software Dependencies | No | "Following (Jo, Lee, and Hwang 2022), we remove hydrogen atoms and kekulize molecules using RDKit (Landrum et al. 2016)."
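A minimal sketch of the preprocessing step the quote describes, assuming the standard RDKit API; the paper's actual preprocessing code lives in the linked repository, so the function below is illustrative only:

```python
from rdkit import Chem

def preprocess(smiles: str) -> str:
    """Remove explicit hydrogens and kekulize a molecule, as in the quoted setup."""
    mol = Chem.MolFromSmiles(smiles)  # parse; returns None on invalid input
    if mol is None:
        raise ValueError(f"invalid SMILES: {smiles}")
    mol = Chem.RemoveHs(mol)                     # drop explicit hydrogen atoms
    Chem.Kekulize(mol, clearAromaticFlags=True)  # aromatic rings -> alternating single/double bonds
    return Chem.MolToSmiles(mol, kekuleSmiles=True)

# e.g. aromatic benzene "c1ccccc1" becomes a Kekulé form with explicit double bonds
kekulized = preprocess("c1ccccc1")
```

Kekulization matters here because diffusion models over molecular graphs typically use a small discrete bond-type vocabulary (single/double/triple), which aromatic bond annotations would not fit.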
Experiment Setup | No | Algorithm 1 outlines the training procedure, which includes an 'Adam-optim' step, but specific hyperparameters such as learning rate, batch size, or number of epochs are not provided in the main text.
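The 'Adam-optim' step in Algorithm 1 presumably refers to the standard Adam update rule; a self-contained sketch on a scalar parameter, where the hyperparameter values are common defaults and not the paper's (which, as noted, are unspecified):

```python
import math

def adam_minimize(grad, x0, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8, steps=1000):
    """Standard Adam update rule applied to a single scalar parameter."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g      # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g  # second-moment (uncentered variance) estimate
        m_hat = m / (1 - beta1 ** t)         # bias correction for zero initialization
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); converges near x = 3
x_min = adam_minimize(lambda x: 2 * (x - 3), x0=0.0, lr=0.1, steps=500)
```

In practice the paper would apply such updates to the network parameters via an autodiff framework rather than a hand-written loop.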