DeepLayout: Learning Neural Representations of Circuit Placement Layout

Authors: Yuxiang Zhao, Zhuomin Chai, Xun Jiang, Qiang Xu, Runsheng Wang, Yibo Lin

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on large-scale industrial datasets, demonstrating that DeepLayout surpasses state-of-the-art (SOTA) methods specialized for individual tasks on two crucial layout quality assessment benchmarks. The experiment results underscore the framework's robust capability to learn the intrinsic properties of circuits.
Researcher Affiliation | Collaboration | 1 Peking University; 2 National Technology Innovation Center for EDA; 3 Wuhan University; 4 The Chinese University of Hong Kong; 5 Institute of Electronic Design Automation, Wuxi, China; 6 Beijing Advanced Innovation Center for Integrated Circuits. Correspondence to: Yibo Lin <EMAIL>.
Pseudocode | Yes | Algorithm 1: Layout-oriented Masking Algorithm.
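The paper's Algorithm 1 is not reproduced in this report, so its exact procedure is unknown here. As a rough illustration only, a generic masked-autoencoder-style tile masking over a 2-D placement grid might look like the following sketch; the tile size, mask ratio, and function name are assumptions, not the paper's algorithm:

```python
import random

def mask_layout_tiles(grid, tile=4, mask_ratio=0.5, seed=0):
    """Zero out a random subset of tile x tile blocks of a 2-D layout grid.

    A generic MAE-style masking sketch, NOT the paper's Algorithm 1.
    Returns the masked grid and the coordinates of the masked tiles.
    """
    h, w = len(grid), len(grid[0])
    # Enumerate the top-left corner of every tile.
    tiles = [(r, c) for r in range(0, h, tile) for c in range(0, w, tile)]
    rng = random.Random(seed)
    masked = rng.sample(tiles, k=int(len(tiles) * mask_ratio))
    out = [row[:] for row in grid]  # copy so the input grid is untouched
    for r0, c0 in masked:
        for r in range(r0, min(r0 + tile, h)):
            for c in range(c0, min(c0 + tile, w)):
                out[r][c] = 0.0
    return out, masked

# Example: an 8x8 grid of ones has four 4x4 tiles; mask half of them.
g = [[1.0] * 8 for _ in range(8)]
masked_grid, masked_tiles = mask_layout_tiles(g, tile=4, mask_ratio=0.5)
```

The masked regions would then serve as reconstruction targets during pre-training, in the usual masked-modeling setup.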
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository. While it mentions using a public dataset, that availability does not extend to the code.
Open Datasets | Yes | We conduct experiments on CircuitNet (Chai et al., 2022; 2023), a large-scale public dataset of IC designs for real-world industrial applications.
Dataset Splits | Yes | In our experimental setup, the pre-training set contains four designs (RISCY-a, RISCY-b, RISCY-FPU-a, RISCY-FPU-b), totaling over 6,000 samples that only utilize the raw input data to represent a large corpus of unlabeled samples. The fine-tuning and test sets each introduce two additional designs, zero-riscy-a and zero-riscy-b. Specifically, the fine-tuning set comprises a small amount of labeled data configured as 5, 10, or 20 samples, while the test set contains 100 samples.
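The split described above can be captured in a small configuration. The following sketch uses the design names and sample counts quoted from the paper; the dictionary structure itself is an illustrative assumption, not the authors' code:

```python
# Designs used only for unlabeled pre-training (>6,000 samples).
PRETRAIN_DESIGNS = ["RISCY-a", "RISCY-b", "RISCY-FPU-a", "RISCY-FPU-b"]
# Two additional designs introduced at fine-tuning/test time.
EXTRA_DESIGNS = ["zero-riscy-a", "zero-riscy-b"]

SPLITS = {
    "pretrain": {"designs": PRETRAIN_DESIGNS, "labeled": False},
    # Fine-tuning uses a small labeled set of 5, 10, or 20 samples.
    "finetune": {
        "designs": PRETRAIN_DESIGNS + EXTRA_DESIGNS,
        "labeled": True,
        "num_samples_options": (5, 10, 20),
    },
    # The held-out test set contains 100 labeled samples.
    "test": {
        "designs": PRETRAIN_DESIGNS + EXTRA_DESIGNS,
        "labeled": True,
        "num_samples": 100,
    },
}
```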
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments.
Software Dependencies | No | The paper mentions "deep-learning methodologies" and "AI methods" but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | Table 2. Pre-training and downstream learning parameters ("Pred." and "Esti." are abbreviations of "Prediction" and "Estimation"):

Tasks | Lr | Epoch | Weight Decay | Decoder
Pre-training | 4e-3 | 100 | 1e-2 | U-Net + MLP
Congestion Pred. | 3e-4 | 50 | 1e-4 | U-Net
Wirelength Esti. | 4e-4 | 50 | 0 | MLP
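Table 2 maps directly onto a per-task training configuration. A sketch is below; the values come from the table, while the key names and the flat-dictionary layout are assumptions (the paper does not specify the optimizer or code structure):

```python
# Per-task hyperparameters transcribed from Table 2 of the paper.
TRAIN_CONFIG = {
    "pretraining": {
        "lr": 4e-3, "epochs": 100, "weight_decay": 1e-2,
        "decoder": "U-Net + MLP",
    },
    "congestion_prediction": {
        "lr": 3e-4, "epochs": 50, "weight_decay": 1e-4,
        "decoder": "U-Net",
    },
    "wirelength_estimation": {
        "lr": 4e-4, "epochs": 50, "weight_decay": 0.0,
        "decoder": "MLP",
    },
}
```

Note that only the wirelength task disables weight decay entirely, and pre-training uses a learning rate roughly an order of magnitude higher than either downstream task.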