Topological Schrödinger Bridge Matching
Authors: Maosheng Yang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the theoretical results and demonstrate the practical applications of TSB-based models on both synthetic and real-world networks, emphasizing the role of topology. Additionally, we discuss the connections of TSB-based models to other emerging models, and outline future directions for topological signal matching. |
| Researcher Affiliation | Academia | Maosheng Yang Delft University of Technology EMAIL |
| Pseudocode | No | The paper describes methods and equations in prose and mathematical notation but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | For reproducibility of the experiments, we refer to the GitHub repository at topological SB matching. |
| Open Datasets | Yes | We validate the theoretical results and demonstrate the practical implications of TSB-models on synthetic and real-world networks involved with brain signals, single-cell data, ocean currents, seismic events and traffic flows. We first consider matching two sets of fMRI brain signals from the Human Connectome Project [Van Essen et al., 2013]... We then consider the single-cell embryoid body data that describes cell differentiation over 5 timepoints [Moon et al., 2019]... We model the magnitudes of yearly seismic events from IRIS as node signals... We also consider the traffic flow from the PeMSD4 dataset... We consider the Global Lagrangian Drifter Data, which was collected by the NOAA Atlantic Oceanographic and Meteorological Laboratory. |
| Dataset Splits | Yes | Here we split the data into two parts, before and after 2010 (there is a significant change in the heat flow pattern), to understand the evolution of the heat flow by modeling them as initial and terminal data. We aim to transport the observed data from day 0-3 to day 24-27, i.e., from t = 1 to t = 5... We then have the prediction labels given by St that indicate the nodes supporting the data points predicted at timepoint t. |
| Hardware Specification | Yes | We measure them using SB-VE and TSB-VE models on different-sized 10-nearest neighbour graphs built from Swiss roll point clouds. This comparison is done in a single training stage with 2,000 iterations, running on a single NVIDIA RTX 3080 GPU. |
| Software Dependencies | No | Our implementation is built upon the SB-framework by Chen et al. [2022a]. We use the AdamW optimizer with a learning rate of 10^-4 and Exponential Moving Average (EMA) with a decay rate of 0.99... (here implemented using torch.Tensor.to_sparse). |
| Experiment Setup | Yes | Our implementation is built upon the SB-framework by Chen et al. [2022a]. We use the AdamW optimizer with a learning rate of 10^-4 and Exponential Moving Average (EMA) with a decay rate of 0.99. For the reference processes with BM involved, we treat the noise scale g as a hyperparameter and optimize it by grid search. For the reference processes with VE and VP involved, we grid search the noise scales σmin, σmax and βmin, βmax. For TSB-based models, we grid search the optimal diffusion rate c and the noise scales involved in the TSHeat. |
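The experiment-setup row describes two generic mechanisms: a grid search over reference-process noise scales and an EMA of model parameters with decay 0.99. A minimal pure-Python sketch of both is below; `toy_loss` and the grid values are hypothetical stand-ins, not the paper's actual objective or search ranges.

```python
import itertools

def toy_loss(sigma_min, sigma_max):
    # Hypothetical stand-in for a validation metric of a model trained
    # with a VE reference process; lower is better.
    return (sigma_min - 0.01) ** 2 + (sigma_max - 5.0) ** 2

def grid_search(grid_min, grid_max):
    """Return the (sigma_min, sigma_max) pair with the lowest toy loss."""
    return min(itertools.product(grid_min, grid_max),
               key=lambda pair: toy_loss(*pair))

class EMA:
    """Exponential moving average of scalar parameters,
    with the decay rate 0.99 quoted in the setup row."""
    def __init__(self, params, decay=0.99):
        self.decay = decay
        self.shadow = list(params)

    def update(self, params):
        # shadow <- decay * shadow + (1 - decay) * current params
        self.shadow = [self.decay * s + (1 - self.decay) * p
                       for s, p in zip(self.shadow, params)]

best = grid_search([0.001, 0.01, 0.1], [1.0, 5.0, 10.0])
# → (0.01, 5.0)
```

In the paper's actual pipeline the EMA would track the score network's weights during training, but the update rule is the same scalar recursion shown here.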