Computing Circuits Optimization via Model-Based Circuit Genetic Evolution
Authors: Zhihai Wang, Jie Wang, Xilin Xia, Dongsheng Zuo, Lei Chen, Yuzhe Ma, Jianye Hao, Mingxuan Yuan, Feng Wu
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate MUTE on several fundamental computing circuits, including multipliers, adders, and multiply-accumulate circuits. Experiments on these circuits demonstrate that MUTE significantly Pareto-dominates state-of-the-art approaches in terms of both area and delay. (Abstract) and Section 5 EXPERIMENTS |
| Researcher Affiliation | Collaboration | 1 MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China; 2 Noah's Ark Lab, Huawei Technologies; 3 Microelectronics Thrust, Hong Kong University of Science and Technology (Guangzhou); 4 College of Intelligence and Computing, Tianjin University |
| Pseudocode | No | The paper describes methods in prose and figures (e.g., Figure 3 illustrates the MUTE framework) but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | 2. Source Code. To facilitate the evaluation process and support a thorough review, we have released our source code at the following link: https://anonymous.4open.science/r/AI4MUL-4199. |
| Open Datasets | Yes | Throughout our experiments, we utilize the OpenROAD flow (Ajayi & Blaauw, 2019) alongside the NanGate 45nm open-cell library (Nangate Inc., 2008) for circuit synthesis, coupled with OpenSTA (Parallax Software Inc.) for timing analysis. (Section 5.1) and Nangate45 is a widely used standard cell library in the semiconductor industry. It is open source and free, and we can obtain it at https://silvaco.com/services/library-design/ (Appendix F.2) |
| Dataset Splits | No | The paper evaluates MUTE on separate problem instances (e.g., 8-bit, 16-bit, 32-bit, and 64-bit multipliers) rather than on a single dataset with explicit training, validation, and test splits. No dataset split information is provided. |
| Hardware Specification | Yes | Our experiments were executed on a Linux-based system equipped with a 3.60 GHz Intel Xeon Gold 6246R CPU and NVIDIA RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions software such as 'OpenROAD flow', 'NanGate 45nm open-cell library', 'OpenSTA', 'Adam optimizer', and 'PyTorch framework' but does not provide specific version numbers for these software dependencies, only citation years for some. |
| Experiment Setup | Yes | Table 5: Common parameters used in the comparative evaluation and ablation study. Learning-Based Population Initialization Module: environment steps per learning episode 25; policy updates per environment step 1; optimizer Adam; discount factor (γ) 0.8; total learning episodes for initialization 40. Genetic Variation Module: samples generated by the sequential mutation operator per iteration 100; samples generated by the genetic crossover operator per iteration 200; total evolution iterations 400. Model-Based Module: samples selected for circuit synthesis evaluation per iteration 5. |
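The parameter values quoted above (100 mutation samples, 200 crossover samples, 400 iterations, 5 synthesis evaluations per iteration) come from Table 5 of the paper; the loop below is only a minimal illustrative sketch of how such a mutation/crossover/surrogate-ranking pipeline fits together, not the authors' implementation. All helper names (`surrogate_score`, `mutate`, `crossover`, `evolve`) and the bit-vector circuit encoding are invented here for illustration.

```python
import random

# Hyperparameter values from Table 5 of the paper; the surrounding loop
# structure is an illustrative guess, not the released MUTE code.
MUTATION_SAMPLES = 100   # sequential mutation operator samples per iteration
CROSSOVER_SAMPLES = 200  # genetic crossover operator samples per iteration
TOTAL_ITERATIONS = 400   # total iterations for evolution
EVAL_BUDGET = 5          # candidates sent to circuit synthesis per iteration


def surrogate_score(genome):
    """Hypothetical learned model predicting circuit cost (lower = better)."""
    return sum(genome)  # toy stand-in for a real area/delay predictor


def mutate(genome):
    """Flip one random bit of the genome (toy sequential mutation)."""
    g = list(genome)
    i = random.randrange(len(g))
    g[i] ^= 1
    return g


def crossover(a, b):
    """Single-point crossover between two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]


def evolve(population, iterations=TOTAL_ITERATIONS):
    """Run the sketched model-based genetic loop and return the population."""
    for _ in range(iterations):
        pick = lambda: random.choice(population)
        candidates = [mutate(pick()) for _ in range(MUTATION_SAMPLES)]
        candidates += [crossover(pick(), pick())
                       for _ in range(CROSSOVER_SAMPLES)]
        # Model-based module: rank all candidates cheaply with the surrogate;
        # only the top EVAL_BUDGET would go through real synthesis
        # (e.g., an OpenROAD run). Here the surrogate doubles as that score.
        candidates.sort(key=surrogate_score)
        shortlist = candidates[:EVAL_BUDGET]
        # Keep the best individuals from the merged pool (elitist selection).
        population = sorted(population + shortlist,
                            key=surrogate_score)[:len(population)]
    return population
```

Because selection keeps the best of the merged pool each iteration, the best surrogate score in the population is non-increasing over time; in the real system the expensive synthesis call replaces the placeholder score for the shortlisted candidates.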