DeepGate4: Efficient and Effective Representation Learning for Circuit Design at Scale

Authors: Ziyang Zheng, Shan Huang, Jianyuan Zhong, Zhengyuan Shi, Guohao Dai, Ningyi Xu, Qiang Xu

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments on the ITC99 and EPFL benchmarks show that DeepGate4 significantly surpasses state-of-the-art methods, achieving 15.5% and 31.1% performance improvements over the next-best models.
Researcher Affiliation | Collaboration | (1) The Chinese University of Hong Kong; (2) Shanghai Jiao Tong University; (3) Infinigence-AI; (4) Shanghai Innovation Institute; (5) National Technology Innovation Center for EDA
Pseudocode | Yes |

Algorithm 1: Graph Partition
Input: AIG G = (V, E), cone depth k, stride δ < k
  L ← max_{v ∈ V} level(v); l ← k
  while l ≤ L do
    cones_l ← list(); i ← 0
    for v in {v ∈ V : level(v) = l} do
      Get sub-graph cone_i^l ← cone_k(v)
      Add cone_i^l to cones_l; i ← i + 1
    end for
    l ← l + δ
  end while
  for v in {v ∈ V : out-degree(v) = 0} do
    Get sub-graph g ← cone_k(v)
    Add g to cones_{level(v)}
  end for
  return cones list [cones_k, cones_{k+δ}, ...]

Algorithm 2: Training Pipeline
Input: cone depth k, stride δ, partitioned cones [cones_k, cones_{k+δ}, ...], mini-batch size B
  for l in [k, k + δ, ...] do
    if l = k then
      Inter-Level Updating on [cones_k, cones_{k+δ}, ..., cones_l]
    end if
    m ← len(cones_l) / B
    for i in range(0, m) do
      Sample mini-batch batch_i^l from cones_l
      Intra-Level Updating on batch_i^l
    end for
  end for
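The partition step of Algorithm 1 can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the function name `partition_into_cones` and the dict-based AIG representation (node → level, node → fan-in list, node → out-degree) are assumptions for the sketch.

```python
from collections import defaultdict

def partition_into_cones(levels, fanins, out_degree, k=8, delta=6):
    """Sketch of Algorithm 1: slice an AIG into k-deep fan-in cones,
    rooted at every node whose level is a multiple of the stride delta
    (starting at k), plus every primary output (out-degree 0)."""

    def cone(root):
        # Gather all nodes within k levels of fan-in from `root`.
        seen, frontier = {root}, [root]
        for _ in range(k):
            frontier = [p for v in frontier for p in fanins[v] if p not in seen]
            seen.update(frontier)
        return seen

    L = max(levels.values())             # maximum level in the graph
    cones = defaultdict(list)            # level -> list of cone node-sets
    l = k
    while l <= L:                        # levels k, k+delta, k+2*delta, ...
        for v in (n for n, lv in levels.items() if lv == l):
            cones[l].append(cone(v))
        l += delta
    for v in (n for n, d in out_degree.items() if d == 0):
        cones[levels[v]].append(cone(v)) # cover primary outputs as well
    return [cones[key] for key in sorted(cones)]
```

With k > δ, consecutive cones overlap by k − δ levels, so no edge between partition boundaries is lost.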
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is publicly available, nor does it provide a link to a code repository.
Open Datasets | Yes | We collect the circuits from various sources, including benchmark netlists in ITC99 (Davidson, 1999) and EPFL (Amarù et al., 2015). All designs are transformed into AIGs by the ABC tool (Brayton & Mishchenko, 2010). The statistical details of the datasets can be found in Section A.1. We collect the circuits from OpenABC-D (Chowdhury et al., 2021). All designs are transformed into AIGs by the ABC tool (Brayton & Mishchenko, 2010).
Dataset Splits | Yes | We trained our model on the ITC99 dataset, following the split outlined in Table 7. During training, the average graph size is 15k, while for evaluation, we used circuits of different scales, as shown in Table 2.
Hardware Specification | Yes | The training is performed with a batch size of 1 and a mini-batch size of 128 on one Nvidia A800 GPU. We utilize the Adam optimizer with a learning rate of 10⁻⁴. All experiments are performed on one L40 GPU with 48 GB maximum memory.
Software Dependencies | No | The paper mentions software like PyTorch Geometric and the CaDiCaL SAT solver, but does not provide specific version numbers for these or other software dependencies. It only mentions using the Adam optimizer and a learning rate without specifying the software environment version.
Experiment Setup | Yes | In Algorithm 1, we set k to 8 and δ to 6. The dimensions of both the structural and functional embeddings are set to 128. The depth of the Sparse Transformer is 12 and the depth of the Pooling Transformer is 2. All training task heads are 3-layer multilayer perceptrons (MLPs). We train all models for 200 epochs to ensure convergence. The training is performed with a batch size of 1 and a mini-batch size of 128 on one Nvidia A800 GPU. We utilize the Adam optimizer with a learning rate of 10⁻⁴.
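The reported setup collapses into a single configuration sketch. The field names below are illustrative (not taken from the authors' code); only the values come from the paper.

```python
# Hypothetical config mirroring the reported experiment setup.
# Field names are assumptions; the numeric values are from the paper.
DEEPGATE4_CONFIG = {
    "cone_depth_k": 8,               # Algorithm 1: k = 8
    "stride_delta": 6,               # Algorithm 1: delta = 6
    "embedding_dim": 128,            # structural and functional embeddings
    "sparse_transformer_depth": 12,
    "pooling_transformer_depth": 2,
    "task_head_mlp_layers": 3,       # 3-layer MLP task heads
    "epochs": 200,
    "batch_size": 1,                 # full graphs per training step
    "mini_batch_size": 128,          # cones per mini-batch (Algorithm 2's B)
    "optimizer": "adam",
    "learning_rate": 1e-4,
}
```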