Directed Graph Generation with Heat Kernels

Authors: Marc T. Law, Karsten Kreis, Haggai Maron

TMLR 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | We provide an experimental analysis of our approach on different types of synthetic datasets and show that our model is able to generate directed graphs that follow the distribution of the training dataset even if it is multimodal.
Researcher Affiliation | Collaboration | Marc T. Law (NVIDIA), Karsten Kreis (NVIDIA), Haggai Maron (NVIDIA; Technion)
Pseudocode | Yes | Algorithm 1 (Generation of digraphs at inference time); Algorithm 2 (Training algorithm, for each mini-batch)
Open Source Code | No | The paper does not provide a link to a source code repository, an explicit statement of code release, or any mention of code in supplementary materials.
Open Datasets | No | The paper describes generating synthetic datasets from models such as Erdős-Rényi (Erdős et al., 1960) and stochastic block models (Holland et al., 1983) with specific parameters, but provides no concrete access information (e.g., links or repositories) for the specific dataset instances used in the experiments.
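For context, the directed Erdős-Rényi graphs the paper trains on can be regenerated in a few lines; the sketch below is a generic G(n, p) digraph sampler, not the paper's code, and the parameters n = 15 and p = 0.3 are illustrative (the paper's exact p is not quoted here).

```python
import random

def sample_er_digraph(n, p, seed=None):
    """Sample a directed Erdos-Renyi graph G(n, p): each ordered pair
    (i, j) with i != j is an edge independently with probability p.
    Returns the edge list."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and rng.random() < p]

# Illustrative call: a 15-node directed graph, as in the paper's setup.
edges = sample_er_digraph(15, 0.3, seed=0)
```

Fixing the seed makes each sampled graph reproducible, which is one simple way a released dataset-generation script could substitute for hosting the dataset instances themselves.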
Dataset Splits | Yes | Each category contains m = 100 training graphs with n = 15 nodes each... We sample 10,000 test graphs per category by using Algorithm 1 with class-conditional generation... The two categories follow the same properties as in Section 6.1, and contain 3,000 non-isomorphic training graphs per category.
Hardware Specification | Yes | We ran all our experiments on a single desktop with an NVIDIA GeForce RTX 3090 GPU (with 24 GB of VRAM) and 64 GB of RAM.
Software Dependencies | No | The paper states that the project was coded in PyTorch but does not specify a version number for PyTorch or any other software library or dependency.
Experiment Setup | Yes | In practice, we set d = 150 and we initialize each element of O by sampling from the normal distribution parameterized by a mean of 1 and standard deviation of 1. We train the model for 60,000 iterations. ... The training algorithm takes about one hour for 10,000 iterations... We use a regularization parameter of γ = 100, and a step size/learning rate of 0.0001.
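The quoted initialization is concrete enough to sketch. Assuming O is an n × d parameter matrix (the paper fixes d = 150; n = 15 matches the synthetic graphs, but the shape of O is our assumption), each entry drawn from N(1, 1) looks like this in plain Python; the paper itself would do the equivalent with a PyTorch tensor.

```python
import random

n, d = 15, 150  # d = 150 as stated in the paper; n = 15 is illustrative
rng = random.Random(0)

# Each element of O is sampled i.i.d. from a normal distribution
# with mean 1 and standard deviation 1.
O = [[rng.gauss(1.0, 1.0) for _ in range(d)] for _ in range(n)]
```

With n × d = 2,250 samples, the empirical mean of the entries concentrates near 1, which is an easy sanity check on the initialization.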