DSBRouter: End-to-end Global Routing via Diffusion Schrödinger Bridge

Authors: Liangliang Shi, Shenhui Zhang, Xingbo Du, Nianzu Yang, Junchi Yan

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical results show that it achieves SOTA performance on overflow reduction on ISPD98 and part of ISPD07. Extensive experiments show that DSBRouter significantly improves overflow and achieves state-of-the-art performance on public benchmarks."
Researcher Affiliation | Academia | "(1) Shanghai Institute for Mathematics and Interdisciplinary Sciences, Shanghai, China; (2) School of Artificial Intelligence and School of Computer Science, Shanghai Jiao Tong University, China. Correspondence to: Junchi Yan <EMAIL>."
Pseudocode | Yes | Algorithm 1: Expectation Route Generate; Algorithm 2: Evaluation-Based Guidance; Algorithm 3: RSMT Construct; Algorithm 4: Training of DSB; Algorithm 5: Sampling Routes with evaluation-based guidance.
Open Source Code | Yes | Code available at https://github.com/Thinklab-SJTU/EDA-AI.
Open Datasets | Yes | "For training, we use the ISPD07 benchmarks (Nam et al., 2007) to build the marginal distribution ps and nthurouter (Chang et al., 2008) to perform routing to construct the distribution pr. ... Additionally, we also introduce the ISPD98 routing benchmarks (Alpert, 1998) to perform global routing and compare metrics between different methods."
Dataset Splits | Yes | "In line with (Du et al., 2023), we construct the expert training datasets with low overflow using nthurouter (Chang et al., 2008) to route on parts of the ISPD07 benchmarks, including bigblue4, newblue4, newblue5, newblue6 and newblue7. Each case has about 60k samples, so the training datasets total nearly 300k samples. ... For the tested cases in Tab. 2, we choose newblue1, newblue2, bigblue1 and bigblue2 from ISPD07, outside the training sets, with a total of 10k samples."
Hardware Specification | Yes | "Training of the backbone of DSB is conducted on a machine with an Intel Xeon Platinum 8480+ CPU, 8 NVIDIA H800 GPUs, and 2.0 TB RAM. All other experiments in this work are conducted on a machine with an Intel Xeon Platinum 8480+ CPU, 8 NVIDIA RTX 4090 GPUs, and 460 GB RAM."
Software Dependencies | No | The paper mentions specific architectures such as the "U-ViT-B architecture, based on the ViT structure" and references "SGM (Ho et al., 2020; Song & Ermon, 2019)", but does not provide version numbers for software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | "We use a fixed learning rate of lr = 0.001 and a batch size of 256. During each epoch, we repeat the training process for each batch 4 times. We fixed the training steps to 64 (vs. 192 in all other experiments mentioned above) and set the inference steps to 10, 24 (the default value in all other experiments), and 50, respectively, to train three different DSB backbones."
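The experiment-setup row above can be summarized as a small configuration sketch. This is an illustrative reconstruction, not code from the paper's repository: the class and field names are assumptions, and only the numeric values (learning rate, batch size, per-batch repeats, training and inference step counts) come from the reported setup.

```python
from dataclasses import dataclass

# Hypothetical configuration mirroring the reported DSB training setup.
@dataclass
class DSBTrainConfig:
    lr: float = 0.001            # fixed learning rate
    batch_size: int = 256        # samples per batch
    repeats_per_batch: int = 4   # each batch is trained 4 times per epoch
    train_steps: int = 64        # training steps here (192 in the other experiments)
    infer_steps: int = 24        # sampling steps; 24 is the default elsewhere

# Three backbone variants, differing only in the number of inference steps.
configs = [DSBTrainConfig(infer_steps=s) for s in (10, 24, 50)]
```

The dataclass defaults keep the shared hyperparameters in one place, so the three backbone variants differ only in the field that the ablation actually varies.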