Neural Guided Diffusion Bridges
Authors: Gefan Yang, Frank Van Der Meulen, Stefan Sommer
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the method through numerical experiments ranging from one-dimensional linear to high-dimensional nonlinear cases, offering qualitative and quantitative analyses. Section 5. Experiments. |
| Researcher Affiliation | Academia | 1Department of Computer Science, University of Copenhagen, Universitetsparken 1, 2100 København, Denmark 2Department of Mathematics, Vrije Universiteit Amsterdam, De Boelelaan 1111, 1081HV Amsterdam, The Netherlands. |
| Pseudocode | Yes | Algorithm 1 Neural guided bridge training |
| Open Source Code | Yes | The codebase for reproducing all the experiments conducted in the paper is available at https://github.com/bookdiver/neuralbridge |
| Open Datasets | No | The paper's experiments use simulated mathematical models (linear processes, a cell diffusion model, the FitzHugh-Nagumo model, and stochastic landmark matching) that generate data through simulation. The model definitions and parameters are described or referenced in the paper, so there is no separate, pre-existing external dataset requiring a specific link or repository, beyond the open-sourced code that generates the simulation data. |
| Dataset Splits | No | The paper's experiments involve simulating stochastic processes and generating trajectories (e.g., '25,000 independently sampled full trajectories'). It does not use pre-defined external datasets that would require training, validation, or test splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as exact GPU/CPU models, processor types, or memory amounts. |
| Software Dependencies | No | The paper mentions using JAX and Julia for implementation, but it does not specify version numbers for these software components or any other libraries that would be necessary for reproducibility. |
| Experiment Setup | Yes | The map ϑθ is modeled by a fully connected neural network with 3 hidden layers and 20 hidden dimensions for each layer. The model is trained with 25,000 independently sampled full trajectories of X. The batch size was taken to be N = 50 and the time step size δt = 0.002, leading to M = 500 time steps in total. The network was trained using the Adam (Kingma & Ba, 2017) optimizer with learning rate 0.001. |
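The experiment-setup row above fully pins down the network shape and training hyperparameters. As a hedged illustration only (the authors' actual implementation is in JAX and is available in their repository), the reported configuration — a fully connected network with 3 hidden layers of 20 units each, batch size N = 50, step size δt = 0.002 over M = 500 steps, and Adam with learning rate 0.001 — could be sketched in plain numpy as follows; the tanh activation and He-style initialization are assumptions, not stated in the paper:

```python
import numpy as np

# Hyperparameters as reported in the Experiment Setup row.
N_BATCH = 50           # batch size N
DT = 0.002             # time step size δt
M_STEPS = 500          # total number of time steps (δt * M = 1.0)
LEARNING_RATE = 1e-3   # Adam learning rate
HIDDEN = [20, 20, 20]  # 3 hidden layers, 20 units each

def init_mlp(in_dim, out_dim, hidden=HIDDEN, seed=0):
    """Initialize a fully connected network of the reported shape.

    Returns a list of (weight, bias) pairs, one per layer.
    He-style initialization is an assumption for illustration.
    """
    rng = np.random.default_rng(seed)
    sizes = [in_dim] + list(hidden) + [out_dim]
    return [(rng.normal(0.0, np.sqrt(2.0 / a), size=(a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Forward pass with tanh on hidden layers (assumed activation)."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b
```

A batch of N = 50 state/time inputs of dimension d would be passed as an array of shape `(50, d)`, yielding one drift-correction output per sample; the 25,000 training trajectories reported in the paper would be consumed in such batches.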