LaMAGIC2: Advanced Circuit Formulations for Language Model-Based Analog Topology Generation

Authors: Chen-Chia Chang, Wan-Hsuan Lin, Yikang Shen, Yiran Chen, Xin Zhang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments demonstrate that LaMAGIC2 achieves 34% higher success rates under a tight tolerance of 0.01 and 10X lower MSEs compared to a prior method. LaMAGIC2 also exhibits better transferability to circuits with more vertices, with up to 58.5% improvement. These advancements establish LaMAGIC2 as a robust framework for analog topology generation.
Researcher Affiliation | Collaboration | 1Duke University, 2University of California, Los Angeles, 3MIT-IBM Watson AI Lab, 4IBM T. J. Watson Research Center. Correspondence to: Chen-Chia Chang <EMAIL>, Xin Zhang <EMAIL>.
Pseudocode | No | The paper describes methods and formulations but does not contain a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Code available at https://github.com/turtleben/LaMAGIC.
Open Datasets | Yes | We utilize the same dataset as LaMAGIC (Chang et al., 2024). It contains 3-, 4-, and 5-component circuits with 120k data points for training and 12k for evaluation. To assess the transferability of models to more complex circuits, the dataset also contains 76k unique 6-component circuits, of which 9k data points are split off for evaluation.
Dataset Splits | Yes | It contains 3-, 4-, and 5-component circuits with 120k data points for training and 12k for evaluation. To assess the transferability of models to more complex circuits, the dataset also contains 76k unique 6-component circuits, of which 9k data points are split off for evaluation. In our experiments, we randomly select subsets of 500, 1k, and 2k 6-component circuits to fine-tune models initially trained on the 120k 3-, 4-, and 5-component circuits.
Hardware Specification | Yes | Training runs on one NVIDIA V100 GPU using AdamW with the following hyperparameters: learning rate 3×10^-4, cosine scheduler with 300 warm-up steps, batch size 128, L2 regularization 10^-5, dropout 0.1, and epochs
Software Dependencies | Yes | We run the simulator NGSPICE (Nenzi P, 2011) on each generated circuit to get its actual voltage conversion ratio and efficiency for real-world applications. Nenzi P, V. H. Ngspice Users Manual Version 23, 2011. URL https://pkgs.fedoraproject.org/repo/extras/ngspice/ngspice23-manual.pdf/eb0d68eb463a41a0571757a00a5b9f9d/ngspice23-manual.pdf.
Experiment Setup | Yes | Training runs on one NVIDIA V100 GPU using AdamW with the following hyperparameters: learning rate 3×10^-4, cosine scheduler with 300 warm-up steps, batch size 128, L2 regularization 10^-5, dropout 0.1, and epochs
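The quoted training setup (AdamW, learning rate 3×10^-4, cosine scheduler with 300 warm-up steps) can be sketched as a standalone schedule function. This is a minimal sketch, not the authors' code; `TOTAL_STEPS` is an assumption, since the epoch count is cut off in the source text.

```python
import math

# Hyperparameters quoted from the paper's training setup:
# learning rate 3e-4, cosine scheduler with 300 warm-up steps,
# batch size 128, L2 regularization (weight decay) 1e-5, dropout 0.1.
BASE_LR = 3e-4
WARMUP_STEPS = 300
TOTAL_STEPS = 10_000  # assumed value; the epoch count is truncated in the source


def lr_at(step: int) -> float:
    """Cosine learning-rate schedule with linear warm-up."""
    if step < WARMUP_STEPS:
        # Linear warm-up from 0 to BASE_LR over the first 300 steps.
        return BASE_LR * step / WARMUP_STEPS
    # Cosine decay from BASE_LR down to 0 over the remaining steps.
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return BASE_LR * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The learning rate peaks at exactly 3e-4 when warm-up ends (step 300) and decays to zero at `TOTAL_STEPS`; the weight-decay and dropout values from the quote would be passed to the optimizer and model separately.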
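The dependency row above states that NGSPICE is run on each generated circuit to measure its voltage conversion ratio and efficiency. A minimal sketch of such a batch invocation, assuming a per-circuit netlist file; the wrapper functions and netlist name are hypothetical, while `-b` and `-o` are standard ngspice command-line flags (batch mode and log output) rather than details taken from the paper:

```python
import shutil
import subprocess


def ngspice_cmd(netlist_path: str) -> list[str]:
    """Build a batch-mode NGSPICE command for one generated circuit.

    -b runs ngspice non-interactively; -o writes the simulation log to a
    file. The log would then be parsed for the voltage conversion ratio
    and efficiency mentioned in the paper.
    """
    return ["ngspice", "-b", "-o", netlist_path + ".log", netlist_path]


def simulate(netlist_path: str) -> str:
    """Run the simulation if ngspice is installed; return its stdout."""
    if shutil.which("ngspice") is None:
        raise RuntimeError("ngspice not found on PATH")
    result = subprocess.run(
        ngspice_cmd(netlist_path), capture_output=True, text=True, check=True
    )
    return result.stdout
```

Guarding on `shutil.which` keeps the failure mode explicit when the simulator binary is absent, which matters when reproducing the evaluation pipeline on a fresh machine.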