Quality Measures for Dynamic Graph Generative Models

Authors: Ryien Hosseini, Filippo Simini, Venkatram Vishwanath, Rebecca Willett, Henry Hoffmann

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We also provide a comprehensive empirical evaluation of metrics for continuous-time dynamic graphs, demonstrating the effectiveness of our approach compared to existing methods. Our implementation is available at https://github.com/ryienh/jl-metric. ... We now turn to an empirical evaluation of CTDG metrics, comparing several existing metrics with our proposed Johnson-Lindenstrauss-based metric (JL-Metric). The comparison focuses on common desiderata of generative metrics outlined in Section 2.4: fidelity, diversity, sample efficiency, and computational efficiency.
Researcher Affiliation | Academia | 1 Department of Computer Science, University of Chicago; 2 Leadership Computing Facility, Argonne National Laboratory; 3 Department of Statistics, University of Chicago; 4 NSF-Simons National Institute for Theory and Mathematics in Biology. EMAIL, EMAIL
Pseudocode | No | The paper describes the proposed method in detail in Section 3, but it does not present a formal pseudocode block or algorithm.
Open Source Code | Yes | Our implementation is available at https://github.com/ryienh/jl-metric.
Open Datasets | Yes | We evaluate each metric on four real-world datasets and one synthetic dataset. The real-world datasets are adapted from user interactions on online platforms: Reddit, Wikipedia, LastFM, and MOOC. We use a subset of these data (details in Appendix C), which were originally introduced by JODIE (Kumar et al., 2019) and have become standard CTDG benchmarks. ... We refer interested readers to Kumar et al., 2019 for additional dataset details.
Dataset Splits | Yes | We additionally keep all training details the same: we use the Adam optimizer, binary cross-entropy loss, and a 70%/15%/15% chronological train-validation-test split.
Hardware Specification | Yes | All metrics are tested on an AMD EPYC 7713 64-Core Processor and the Red Hat Enterprise Linux 9.3 operating system.
Software Dependencies | No | We primarily rely on the PyTorch Geometric (Fey & Lenssen, 2019) and NetworkX (Hagberg et al., 2008) open-source Python libraries for static graph representations. Given its importance to runtime benchmarking and overall reproducibility, we provide a full list of software libraries used in our experiments, as well as their respective versions, in the Supplementary material.
Experiment Setup | Yes | In our case, we select n = 100 and o = 100. ... memory dimension = 172, node embedding dimension = 100, time embedding dimension = 100, number of attention heads = 2, and dropout = 0.1.
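The JL-Metric referenced above is built on the Johnson-Lindenstrauss lemma: random projection to a lower dimension approximately preserves pairwise distances. The paper's actual metric construction is in its Section 3; the sketch below is only a minimal illustration of the underlying JL projection idea, with arbitrary dimensions and a Gaussian projection matrix chosen for simplicity, not the authors' implementation.

```python
import numpy as np

def jl_project(X, out_dim, seed=0):
    """Project rows of X to out_dim dimensions with a Gaussian random
    matrix, scaled by 1/sqrt(out_dim) so that pairwise distances are
    preserved in expectation (Johnson-Lindenstrauss lemma)."""
    rng = np.random.default_rng(seed)
    in_dim = X.shape[1]
    R = rng.standard_normal((in_dim, out_dim)) / np.sqrt(out_dim)
    return X @ R

def pairwise_dists(A):
    """Dense matrix of Euclidean distances between rows of A."""
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# 50 points in 1000 dimensions, projected down to 128.
X = np.random.default_rng(1).standard_normal((50, 1000))
Y = jl_project(X, out_dim=128)

# Ratio of projected to original pairwise distances concentrates near 1.
orig, proj = pairwise_dists(X), pairwise_dists(Y)
mask = orig > 0
ratio = proj[mask] / orig[mask]
```

With `out_dim = 128` the distance ratios cluster tightly around 1, which is what makes such projections usable as cheap fixed-dimension summaries of high-dimensional representations.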
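The Dataset Splits row cites a 70%/15%/15% chronological train-validation-test split. For CTDG data this means partitioning events by timestamp order rather than at random, so the model never trains on events that occur after the evaluation window. The helper below is a hypothetical sketch of that procedure (the function name and event-tuple layout are assumptions, not the authors' code):

```python
def chronological_split(events, train_frac=0.70, val_frac=0.15):
    """Split (timestamp, ...) event tuples chronologically: the earliest
    train_frac of events for training, the next val_frac for validation,
    and the remainder for testing."""
    events = sorted(events, key=lambda e: e[0])  # order by timestamp
    n = len(events)
    n_train = int(n * train_frac)
    n_val = int(n * (train_frac + val_frac))
    return events[:n_train], events[n_train:n_val], events[n_val:]

# Example: 100 synthetic interaction events with integer timestamps.
events = [(t, f"edge_{t}") for t in range(100)]
train, val, test = chronological_split(events)
```

Every training event precedes every validation event, and every validation event precedes every test event, which is the property a chronological split guarantees.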