Optimizing Cycle Life Prediction of Lithium-ion Batteries via a Physics-Informed Model
Authors: Nathan Sun, Daniel Nicolae, Sara Sameer, Karena Yan
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our model exhibits performance comparable to existing models while predicting more information: the entire capacity-loss curve rather than a single cycle-life value. This provides more robustness and interpretability: our model does not need to be retrained for a different notion of end-of-life and is backed by physical intuition. The results of our model evaluation are presented in Figure 6, which compares the performance of the elastic net baseline with that of the self-attention model in terms of the train RMSE, primary test RMSE, and secondary test RMSE. |
| Researcher Affiliation | Academia | Constantin-Daniel Nicolae (EMAIL), University of Cambridge; Sara Sameer (EMAIL), National University of Computer and Emerging Sciences; Nathan Sun (EMAIL), Harvard University; Karena Yan (EMAIL), Harvard University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. Methods are described using mathematical equations and explanatory text. |
| Open Source Code | Yes | Code Availability The Github repository can be found at https://github.com/nathan99sun/Hybrid Pred. |
| Open Datasets | Yes | Data Availability The datasets used in this paper are available at https://data.matr.io/1. We utilize the public dataset provided by Severson et al. (2019). |
| Dataset Splits | Yes | In our work, we preserve the train/test/secondary test data split in Severson et al. (2019), allowing for a direct comparison between our results and theirs. The primary test set was obtained using the same batch of cells as the train set and similar charge policies; therefore we use it to evaluate the model's ability to interpolate in the input space. On the other hand, the secondary test set was obtained from a different batch of cells and using significantly different scheduling, and we use it to examine the model's ability to extrapolate. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running its experiments, such as exact GPU/CPU models, processor types, or memory amounts. |
| Software Dependencies | No | The paper mentions sklearn's Elastic Net model but does not specify its version number. No other software dependencies are listed with specific version numbers. |
| Experiment Setup | Yes | Training took place with 800 epochs and a learning rate of 10⁻³ in step 1, and with 3000 epochs and a learning rate of 5×10⁻⁵ in step 2. For hyperparameter tuning, we vary alpha and l1_ratio on a log scale over [10⁰, 10¹] and [10⁻⁵, 10²], respectively. |
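The elastic net tuning described in the Experiment Setup row can be sketched with scikit-learn's `GridSearchCV`. This is a minimal illustration on synthetic data, not the authors' code: the feature matrix is random, and since sklearn constrains `l1_ratio` to [0, 1], the sketch samples that interval rather than the wider log range quoted in the paper.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the battery features; the paper uses the
# Severson et al. (2019) dataset from https://data.matr.io/1.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=120)

# alpha is swept on a log scale; l1_ratio must lie in [0, 1] for
# sklearn's ElasticNet, so the grid samples that interval instead of
# the full [1e-5, 1e2] range reported in the paper.
param_grid = {
    "alpha": np.logspace(-5, 2, 8),
    "l1_ratio": np.linspace(0.05, 1.0, 8),
}
search = GridSearchCV(
    ElasticNet(max_iter=50_000),
    param_grid,
    cv=5,
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)
```

Cross-validated RMSE drives the grid selection here; the paper instead evaluates on the fixed primary and secondary test splits of Severson et al. (2019).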