TopoDiffusionNet: A Topology-aware Diffusion Model

Authors: Saumya Gupta, Dimitris Samaras, Chao Chen

ICLR 2025

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. Evidence: "Our experiments across four datasets demonstrate significant improvements in topological accuracy." (Section 4, Experiments: "Datasets. We train ADM-T and TDN on four datasets: Shapes, COCO (Caesar et al., 2018), CREMI (Funke et al., 2016), and Google Maps (Isola et al., 2017).")
Researcher Affiliation: Academia. Evidence: "Saumya Gupta, Dimitris Samaras & Chao Chen, Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA"
Pseudocode: No. The paper describes its methodology in narrative text and mathematical formulas, without explicitly formatted pseudocode or algorithm blocks.
Open Source Code: Yes. Code is available at https://github.com/Saumya-Gupta-26/TopoDiffusionNet
Open Datasets: Yes. Evidence: "Datasets. We train ADM-T and TDN on four datasets: Shapes, COCO (Caesar et al., 2018), CREMI (Funke et al., 2016), and Google Maps (Isola et al., 2017)."
Dataset Splits: No. The paper mentions generating samples for evaluation but does not provide specific training/validation/test splits for the datasets used to train the models.
Hardware Specification: No. The paper does not specify the hardware (e.g., GPU models, CPU types, or memory) used to run the experiments.
Software Dependencies: No. Evidence: "To compute persistent homology, we use the Cubical Ripser (Kaji et al., 2020) library." The paper mentions a cosine noise scheduler and DDIM sampling but does not provide version numbers for these or any other software libraries.
Experiment Setup: Yes. Evidence: "For every dataset, we use 256×256 as the image resolution. Our diffusion models use a cosine noise scheduler (Nichol & Dhariwal, 2021), with T = 1000 timesteps for training. During inference, however, we use only 50 steps of DDIM (Song et al., 2020a) sampling. When λ = 1e-5, TDN achieves the best performance."
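The quoted dependency on Cubical Ripser indicates that the paper counts topological structures (connected components and holes) via persistent homology on cubical complexes. As a rough illustration of the quantities involved, the sketch below computes the Betti numbers β0 (components) and β1 (holes) of a binary mask using plain scipy connected-component labeling; this is a simplified stand-in for intuition, not the Cubical Ripser API and not the authors' code.

```python
import numpy as np
from scipy import ndimage

def betti_numbers_2d(mask: np.ndarray):
    """Count beta_0 (foreground components) and beta_1 (enclosed holes)
    of a binary mask. A simplified stand-in for persistent homology on
    cubical complexes."""
    mask = mask.astype(bool)
    # beta_0: foreground connected components (8-connectivity).
    _, b0 = ndimage.label(mask, structure=np.ones((3, 3), dtype=int))
    # Background components (default 4-connectivity, dual to foreground).
    bg_labels, n_bg = ndimage.label(~mask)
    # Background components touching the image border are not holes.
    border = np.unique(np.concatenate([
        bg_labels[0, :], bg_labels[-1, :], bg_labels[:, 0], bg_labels[:, -1]]))
    b1 = n_bg - len(set(border) - {0})
    return b0, b1

# Two filled squares, one with a hole punched in it.
img = np.zeros((32, 32), dtype=np.uint8)
img[2:12, 2:12] = 1      # solid square -> one component
img[16:30, 16:30] = 1    # second square
img[20:26, 20:26] = 0    # punch a hole -> one loop
print(betti_numbers_2d(img))  # -> (2, 1)
```

Persistent homology additionally records a birth/death filtration value per structure, which TDN uses to separate salient topology from noise; the plain component count above captures only the final Betti numbers.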
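The quoted setup combines a cosine noise scheduler with T = 1000 training timesteps and only 50 DDIM steps at inference. The sketch below reproduces the cosine ᾱ(t) schedule of Nichol & Dhariwal (2021) and a uniform DDIM timestep subsampling in plain NumPy; it is a minimal illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def cosine_alpha_bar(T: int = 1000, s: float = 0.008) -> np.ndarray:
    """Cosine noise schedule (Nichol & Dhariwal, 2021): returns alpha_bar[t]."""
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f / f[0]  # normalized so alpha_bar[0] = 1

def ddim_timesteps(T: int = 1000, steps: int = 50) -> np.ndarray:
    """Uniformly subsample `steps` timesteps out of T, highest first,
    for accelerated DDIM sampling."""
    return np.linspace(0, T - 1, steps).round().astype(int)[::-1]

alpha_bar = cosine_alpha_bar(T=1000)
ts = ddim_timesteps(T=1000, steps=50)
print(len(ts), ts[0], ts[-1])  # 50 timesteps, from t=999 down to t=0
```

The schedule decays ᾱ(t) smoothly from 1 toward 0, adding noise more gradually at the start and end than a linear schedule; DDIM then denoises along the 50 subsampled timesteps instead of all 1000.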