Intelligent OPC Engineer Assistant for Semiconductor Manufacturing
Authors: Guojin Chen, Haoyu Yang, Bei Yu, Haoxing Ren
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that our methodology can efficiently build OPC recipes on various chip designs with specially handled design topologies, a task that typically requires the full-time effort of OPC engineers with years of experience. |
| Researcher Affiliation | Collaboration | Guojin Chen1*, Haoyu Yang2, Bei Yu1, Haoxing Ren2. 1The Chinese University of Hong Kong, 2NVIDIA. |
| Pseudocode | No | The paper describes the methodology in natural language and mathematical formulas. While it contains diagrams (Figure 5, Figure 7) illustrating the process and examples of generated recipes in JSON and Tcl formats, it does not present a formal pseudocode block or algorithm. |
| Open Source Code | No | The paper states: "The OPC model is built on top of the open-source OPC engine (Zheng et al. 2023), which is widely used in the OPC community." and references "OpenILT: An Open-source Platform for Inverse Lithography Technique Research. https://github.com/OpenOPC/OpenILT/". This refers to a third-party tool the authors used, not the specific implementation code for the methodology described in this paper. The statement "Implementation details and prompt scripts are provided in the supplementary material." does not constitute a full code release. |
| Open Datasets | Yes | To evaluate the effectiveness of our framework, we utilized datasets from two distinct processes. The first dataset is derived from the 2013 ICCAD contest (Banerjee, Li, and Nassif 2013)... The second dataset is sourced from the NVIDIA Deep Learning Accelerator (NVDLA) (NVIDIA 2024). |
| Dataset Splits | Yes | From the full-chip layout of the NVDLA, fabricated using NanGate 45nm standard cells (Stine et al. 2007), we extracted nearly one million clips. We then randomly selected 800 clips for the training set and 200 clips for the test set. |
| Hardware Specification | No | The paper does not explicitly describe any specific hardware (e.g., GPU models, CPU models, or cloud computing instance specifications) used for running its experiments or training its models. |
| Software Dependencies | Yes | In this study, we utilize GPT-4o (OpenAI 2024), an optimized version of GPT-4 with multi-modal capabilities that processes both text and images, enhancing performance and versatility. The OPC model is built on top of the open-source OPC engine (Zheng et al. 2023), which is widely used in the OPC community. |
| Experiment Setup | Yes | The hyperparameters of the OPC loss are set to α = 1, β = 100, and γ = 1. In the OPC context, the state s_t includes the current positions of the measurement points, the fragment points, and the rasterized image features. The action a_t consists of permissible adjustments to these points within a specified range of 40nm. The policy update is constrained by a proximity term to ensure stability: L^CLIP(θ) = E_t[min(r(a_t\|s_t) Â_t, clip(r(a_t\|s_t), 1 − ϵ, 1 + ϵ) Â_t)], where Â_t is the advantage estimate and ϵ is a clipping parameter. The overall training objective combines the clipped surrogate objective for policy optimization with the value function loss, along with an entropy bonus S[π_θ](s_t) to encourage exploration: L(θ, ϕ) = E_t[L^CLIP(θ) − c1 L^V(ϕ) + c2 S[π_θ](s_t)], where c1 and c2 are coefficients that balance the importance of the value loss and the entropy bonus, respectively. |
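The clipped PPO objective quoted in the Experiment Setup row can be sketched as follows. This is a minimal, NumPy-based illustration, not the paper's implementation: the function name, the batch-array interface, and the default values for `eps`, `c1`, and `c2` are assumptions (the paper states ϵ, c1, c2 exist but does not report their values).

```python
import numpy as np

def ppo_objective(ratio, advantage, value_pred, value_target, entropy,
                  eps=0.2, c1=1.0, c2=0.01):
    """Clipped PPO objective L(θ, φ) = E_t[L^CLIP − c1·L^V + c2·S].

    ratio        : per-sample probability ratio r(a_t|s_t)
    advantage    : per-sample advantage estimates Â_t
    value_pred   : critic value predictions V_φ(s_t)
    value_target : empirical return targets for the critic
    entropy      : per-sample policy entropy S[π_θ](s_t)
    eps, c1, c2  : clipping parameter and loss coefficients (assumed values)
    """
    # Clipped surrogate: take the pessimistic (min) of the unclipped
    # and clipped ratio terms, as in the standard PPO formulation.
    clipped_ratio = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    l_clip = np.minimum(ratio * advantage, clipped_ratio * advantage)
    # Squared-error value loss L^V(φ), subtracted from the objective.
    l_value = (value_pred - value_target) ** 2
    # Entropy bonus encourages exploration.
    return np.mean(l_clip - c1 * l_value + c2 * entropy)
```

In a training loop this quantity would be maximized (or its negation minimized) with respect to the policy and value parameters; the `min` ensures the policy update stays within the proximity region defined by ϵ.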