AnalogCoder: Analog Circuit Design via Training-Free Code Generation

Authors: Yao Lai, Sungyoung Lee, Guojin Chen, Souradip Poddar, Mengkang Hu, David Z. Pan, Ping Luo

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on a benchmark designed to cover a wide range of analog circuit tasks show that AnalogCoder outperforms other LLM-based methods. It has successfully designed 20 circuits, 5 more than standard GPT-4o. We believe AnalogCoder can significantly improve the labor-intensive chip design process, enabling non-experts to design analog circuits efficiently."
Researcher Affiliation | Academia | "1 The University of Hong Kong, Hong Kong; 2 The University of Texas at Austin, Austin, Texas, United States; 3 The Chinese University of Hong Kong, Hong Kong; EMAIL, EMAIL"
Pseudocode | No | The paper describes the "Feedback-Enhanced Design Flow" in Figure 4 and elaborates on "Prompt Engineering" and the "Circuit Tool Library" in text, but it does not present these as structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/laiyao1/AnalogCoder
Open Datasets | No | "Third, we introduce the first benchmark specifically designed to evaluate the ability of LLMs in designing analog circuits. This benchmark comprises 24 unique circuits, three times the number included in the Chip-Chat benchmark (Chang et al. 2023), and offers 40% more circuits than the VeriGen benchmark (Thakur et al. 2023a). It features detailed task descriptions, sample designs, and testbenches, enhancing resources for future research."
Dataset Splits | Yes | "We employed a 3-fold cross-validation for fine-tuning evaluation, using two subsets of design tasks for fine-tuning and the remaining one for testing."
Hardware Specification | Yes | "Open-source models were evaluated on 4 Nvidia A100 GPUs."
Software Dependencies | No | The paper mentions generating Python code compatible with the PySpice library and utilizing the GPT-3.5 API, but it does not provide specific version numbers for Python, PySpice, or other software dependencies required to replicate the experiment.
Experiment Setup | Yes | "Fine-tuning was performed using the GPT-3.5 API with two epochs."
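The 3-fold cross-validation protocol quoted above (two subsets of the 24 benchmark tasks used for fine-tuning, the third held out for testing) can be sketched as follows. This is a minimal illustration, not the authors' code: the task IDs are generic placeholders, and the commented-out `fine_tune`/`evaluate` calls are hypothetical names for the GPT-3.5 fine-tuning and evaluation steps the paper describes.

```python
# Hedged sketch of the paper's 3-fold cross-validation protocol:
# split the 24 benchmark tasks into three subsets, then for each round
# fine-tune on two subsets and test on the remaining one.

def three_fold_splits(task_ids):
    """Yield (train_tasks, test_tasks) pairs for 3-fold cross-validation."""
    folds = [task_ids[i::3] for i in range(3)]  # three equal-size subsets
    for i, test_fold in enumerate(folds):
        train = [t for j, fold in enumerate(folds) if j != i for t in fold]
        yield train, test_fold

tasks = list(range(1, 25))  # 24 circuit-design tasks in the benchmark
for train_tasks, test_tasks in three_fold_splits(tasks):
    assert len(train_tasks) == 16 and len(test_tasks) == 8
    # model = fine_tune(train_tasks, epochs=2)   # hypothetical; paper: GPT-3.5 API, 2 epochs
    # results = evaluate(model, test_tasks)      # hypothetical evaluation step
```

Each of the three rounds trains on 16 tasks and tests on the remaining 8, so every task is tested exactly once across the three rounds.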