ELoRA: Low-Rank Adaptation for Equivariant GNNs

Authors: Chen Wang, Siyu Hu, Guangming Tan, Weile Jia

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We prove that ELoRA maintains equivariance and demonstrate its effectiveness through comprehensive experiments. On the rMD17 organic dataset, ELoRA achieves a 25.5% improvement in energy prediction accuracy and a 23.7% improvement in force prediction accuracy compared to full-parameter fine-tuning. Similarly, across 10 inorganic datasets, ELoRA achieves average improvements of 12.3% and 14.4% in energy and force predictions, respectively.
Researcher Affiliation | Academia | 1. State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; 2. University of Chinese Academy of Sciences. Correspondence to: Siyu Hu <EMAIL>, Guangming Tan <EMAIL>, Weile Jia <EMAIL>.
Pseudocode | No | The paper includes multiple propositions and lemmas (e.g., Proposition 4.1, Lemma D.1), which are mathematical statements and proofs, but it does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | Code will be made publicly available at https://github.com/hyjwpk/ELoRA.
Open Datasets | Yes | For organic downstream tasks, we use the revised MD17 (rMD17) (Christensen & Von Lilienfeld, 2020), 3BPA (Kovács et al., 2021), and AcAc (Batatia et al., 2022a) datasets, which are representative benchmarks for organic systems. For inorganic downstream tasks, we employ 10 datasets that reflect a variety of real-world scenarios. ... Details of these datasets are given in Appendix E.2. ... Table 5 (sizes and links of different datasets), excerpt: rMD17 — size 1,000,000 — https://dx.doi.org/10.6084/m9.figshare.12672038
Dataset Splits | Yes | Revised MD17: To verify whether fine-tuning pre-trained models offers an advantage on downstream tasks with limited data, we follow the previous setting (Batatia et al., 2022b), using only 50 configurations for each organic molecule in the rMD17 dataset for training. ... 3BPA: The models are trained using datasets collected at 300 K, and the temperatures of the test set range from 300 K to 1200 K. ... AcAc: The training set is sampled at 300 K, and the test set is sampled independently at 300 K and 600 K.
Hardware Specification | No | The AI-driven experiments, simulations and model training were performed on the robotic AI-Scientist platform of Chinese Academy of Science. This statement is too general and does not specify any particular hardware components, such as CPU or GPU models or memory capacity.
Software Dependencies | Yes | The MACE model code is modified from the main branch of the open-source GitHub repository at https://github.com/ACEsuit/mace (commit hash: 346a829f).
Experiment Setup | Yes | To facilitate reproducibility, Table 6 summarizes the training hyperparameters for different MACE models. The settings for MACE-From-scratch (Batatia et al., 2022b), MACE-MP (Batatia et al., 2023), and MACE-OFF (Kovács et al., 2023) are derived from previously published studies, while those for MACE-MP-Fine-tune and MACE-OFF-Fine-tune reflect the fine-tuning hyperparameters used in this work. Table 6 includes specific hyperparameters such as r_max, num_radial_basis, num_channels, max_L, loss, forces_weight, energy_weight, lr, weight_decay, scheduler_patience, ema_decay, and clip_grad.
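The report does not reproduce ELoRA's equivariance-preserving construction, but the underlying low-rank-adaptation idea — freezing a pre-trained weight W and training only a low-rank update BA — can be sketched as follows. This is a plain NumPy illustration with arbitrary dimensions, not the paper's equivariant variant, which constrains the factors further:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 32, 4  # rank r is much smaller than min(d_out, d_in)
alpha = 8.0                 # LoRA scaling factor (illustrative value)

W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    """Apply the effective weight W + (alpha / r) * B @ A to input x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer matches the frozen base exactly.
assert np.allclose(lora_forward(x), W @ x)
```

The zero initialization of B means fine-tuning starts from the pre-trained model's behaviour, while the trainable parameter count (A plus B) stays far below that of W.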
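The hyperparameter names listed from Table 6 can be collected into a single config mapping for a fine-tuning run. All values below are placeholders to show the shape of such a config — the actual published settings must be read from Table 6 of the paper:

```python
# Hyperparameter names taken from Table 6 of the paper.
# Every value here is a PLACEHOLDER, not a published setting.
finetune_config = {
    "r_max": 5.0,              # radial cutoff, placeholder
    "num_radial_basis": 8,     # placeholder
    "num_channels": 128,       # placeholder
    "max_L": 2,                # placeholder
    "loss": "weighted",        # placeholder
    "forces_weight": 100.0,    # placeholder
    "energy_weight": 1.0,      # placeholder
    "lr": 1e-2,                # placeholder
    "weight_decay": 5e-7,      # placeholder
    "scheduler_patience": 50,  # placeholder
    "ema_decay": 0.99,         # placeholder
    "clip_grad": 10.0,         # placeholder
}

for name, value in finetune_config.items():
    print(f"{name}: {value}")
```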