Chip Placement with Diffusion Models
Authors: Vint Lee, Minh Nguyen, Leena Elzeiny, Chun Deng, Pieter Abbeel, John Wawrzynek
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | When trained on our synthetic data, our models generate high-quality placements on unseen, realistic circuits, achieving competitive performance on placement benchmarks compared to state-of-the-art methods. We evaluate the performance of our model on circuits in the publicly available ICCAD04 (Adya et al., 2004) and ISPD2005 (Nam et al., 2005b) benchmarks. |
| Researcher Affiliation | Collaboration | 1Department of EECS, UC Berkeley, CA, USA 2Samba Nova Systems Inc., Palo Alto, CA, USA 3Computer Science Department, Stanford University, CA, USA. Correspondence to: Vint Lee <EMAIL>. |
| Pseudocode | No | The paper describes the steps for generating synthetic data in Section 4.1, accompanied by Figure 2, but these are presented as descriptive text and a visualization diagram, not as structured pseudocode or an algorithm block. |
| Open Source Code | No | The paper does not provide a repository link, an explicit statement of public code release (e.g., 'We release our code'), or code in supplementary materials. Although Appendix A states 'We refer the reader to our code for more details', it does not say how or where that code can be accessed, leaving public availability ambiguous. |
| Open Datasets | Yes | We evaluate the performance of our model on circuits in the publicly available ICCAD04 (Adya et al., 2004) and ISPD2005 (Nam et al., 2005b) benchmarks. |
| Dataset Splits | No | The paper mentions using synthetic datasets for training and fine-tuning, and evaluates on the ICCAD04 and ISPD2005 benchmarks. However, it does not specify any training/test/validation splits for these benchmarks or how the data from these benchmarks was partitioned for experimental reproduction. |
| Hardware Specification | Yes | Our models are implemented using Pytorch (Paszke et al., 2019) and Pytorch-Geometric (Fey & Lenssen, 2019), and trained on machines with Intel Xeon Gold 6326 CPUs, using a single Nvidia A5000 GPU. |
| Software Dependencies | No | The paper mentions 'Pytorch (Paszke et al., 2019) and Pytorch-Geometric (Fey & Lenssen, 2019)' but does not provide specific version numbers for these software libraries, which are required for a reproducible description of ancillary software. |
| Experiment Setup | Yes | We train our models using the Adam optimizer (Kingma & Ba, 2014) for 3M steps, with 250k steps of fine-tuning where applicable. Hyperparameters for our model are listed in Table 7, including model dimensions, input encodings, GNN layers, and guidance parameters such as w_HPWL (0.0001), x_0 optimizer (SGD), x_0 optimizer learning rate (0.008), w_legality optimizer (Adam), w_legality optimizer learning rate (0.0005), w_legality initial value (0), gradient descent steps (10), and ε (0.0001). |
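To make the reported Table 7 values concrete, the sketch below collects them into a config and applies the stated number of gradient-descent steps to a denoised sample. This is a minimal illustration only, not the authors' implementation: the function name `guide_x0` and the `cost_grad` callable (standing in for the gradient of a weighted HPWL-style cost) are hypothetical, and plain Python lists replace the paper's PyTorch tensors.

```python
# Guidance/training hyperparameters as reported in Table 7 of the paper.
GUIDANCE_CONFIG = {
    "train_steps": 3_000_000,      # 3M Adam training steps
    "finetune_steps": 250_000,     # 250k fine-tuning steps where applicable
    "w_hpwl": 1e-4,                # weight on the HPWL guidance term
    "x0_optimizer": "SGD",         # optimizer used for the x_0 update
    "x0_lr": 0.008,                # x_0 optimizer learning rate
    "w_legality_optimizer": "Adam",
    "w_legality_lr": 0.0005,
    "w_legality_init": 0.0,
    "gd_steps": 10,                # gradient descent steps per guidance pass
    "eps": 1e-4,                   # ε from Table 7
}

def guide_x0(x0, cost_grad, cfg=GUIDANCE_CONFIG):
    """Toy sketch: run the reported number of plain SGD steps on a
    denoised sample x0, following the gradient of a weighted cost.
    `cost_grad` is a hypothetical callable returning d(cost)/d(x0)."""
    for _ in range(cfg["gd_steps"]):
        grad = cost_grad(x0)
        x0 = [xi - cfg["x0_lr"] * cfg["w_hpwl"] * gi
              for xi, gi in zip(x0, grad)]
    return x0

# Example: a quadratic surrogate cost with gradient 2*x nudges x0 toward 0.
out = guide_x0([1.0, -2.0], lambda x: [2.0 * xi for xi in x])
```

With the small effective step size (x_0 learning rate times w_HPWL), each of the 10 steps moves the sample only slightly, which matches the role of guidance as a gentle correction rather than a full optimization.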