Multi-Timescale Dynamics Model Bayesian Optimization for Plasma Stabilization in Tokamaks

Authors: Rohit Sonker, Alexandre Capone, Andrew Rothstein, Hiro Josep Farre Kaga, Egemen Kolemen, Jeff Schneider

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our approach by controlling tearing instabilities in the DIII-D nuclear fusion facility. Offline testing on historical data shows that our method significantly outperforms several baselines. Live experiments on the DIII-D tokamak, conducted under high-performance plasma scenarios prone to instabilities, show a 50% success rate, marking a 117% improvement over historical outcomes.
Researcher Affiliation | Academia | (1) Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA; (2) Princeton University, Princeton, NJ, USA; (3) Princeton Plasma Physics Laboratory, Princeton, NJ, USA. Correspondence to: Rohit Sonker <EMAIL>.
Pseudocode | No | The paper describes its methodology in natural language and presents a high-level pipeline diagram (Fig. 1), but does not include any explicitly labeled pseudocode blocks or algorithms.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository or mention code in supplementary materials.
Open Datasets | No | The paper refers to using "historical data" and a "large dataset from past tokamak experiments" from the DIII-D tokamak, specifically stating, "Our complete dataset consists of 15000 plasma trajectories from historical experiments at DIII-D Tokamak." However, no access information such as a link, DOI, or a citation to a publicly available version of this dataset is provided.
Dataset Splits | Yes | Early stopping is applied with a patience of 250 epochs based on performance on a validation set comprising 10% of the total data.
Hardware Specification | No | The paper mentions experiments conducted "at the DIII-D Tokamak," which is the system being controlled, but it does not specify any computing hardware (e.g., GPU/CPU models, memory specifications) used for running the experiments or training the models.
Software Dependencies | No | The paper mentions using "OMFIT software (Meneghini et al., 2015)" but does not provide a specific version number for it or for any other software component used in the experiments.
Experiment Setup | Yes | We use the Adam optimizer with a learning rate of 3 × 10^-4 and a weight decay of 1 × 10^-3. Early stopping is applied with a patience of 250 epochs based on performance on a validation set comprising 10% of the total data.
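The reported setup (Adam with lr = 3 × 10^-4, weight decay = 1 × 10^-3, early stopping with a patience of 250 epochs on a 10% validation split) can be sketched as below. This is a minimal illustration, not the authors' code: the `split_data` helper and `EarlyStopper` class are hypothetical names, and the actual framework and model are not specified in the paper.

```python
def split_data(trajectories, val_fraction=0.1):
    """Hold out the last val_fraction of trajectories as a validation set
    (the paper states a 10% validation split but not how it is drawn)."""
    n_val = max(1, int(len(trajectories) * val_fraction))
    return trajectories[:-n_val], trajectories[-n_val:]


class EarlyStopper:
    """Stop training once validation loss has not improved
    for `patience` consecutive epochs (paper uses patience=250)."""

    def __init__(self, patience=250):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training


# Optimizer hyperparameters as reported in the paper; in e.g. PyTorch these
# would be passed as torch.optim.Adam(model.parameters(), **OPTIM_KWARGS).
OPTIM_KWARGS = dict(lr=3e-4, weight_decay=1e-3)
```

In a training loop, `stopper.step(val_loss)` would be called once per epoch, breaking out of the loop when it returns `True`.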