Quadruple Attention in Many-body Systems for Accurate Molecular Property Predictions

Authors: Jiahua Rao, Dahao Xu, Wentao Wei, Yicong Chen, Mingjun Yang, Yuedong Yang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | MABNet achieves state-of-the-art performance on benchmarks such as MD22 and SPICE. These improvements underscore its capability to accurately capture intricate many-body interactions in large molecules. By unifying rigorous many-body physics with computational efficiency, MABNet advances molecular simulations for applications in drug design and materials discovery, while its extensible framework paves the way for modeling higher-order quantum effects.
Researcher Affiliation | Collaboration | (1) School of Computer Science and Engineering, Sun Yat-sen University, China; (2) Shenzhen Jingtai Technology Co., Ltd. (XtalPi), Shenzhen, China
Pseudocode | No | The paper describes its methodology using text and mathematical equations, but it does not contain a dedicated pseudocode block or algorithm section.
Open Source Code | Yes | Our code is publicly available at https://github.com/biomed-AI/MABNet.
Open Datasets | Yes | We consider two challenging benchmarks, MD22 and SPICE, following (Wang et al., 2024) and (Eastman et al., 2023). MD22 was introduced by (Chmiela et al., 2023)... SPICE, collected by (Eastman et al., 2023)...
Dataset Splits | Yes | The SPICE dataset follows a consistent 8:1:1 split for training, validation, and testing.
Hardware Specification | Yes | All experiments are conducted on NVIDIA A800 Tensor Core GPUs.
Software Dependencies | No | All models are implemented in PyTorch (Paszke et al., 2019) and trained using the Adam optimizer (Kingma & Ba) with Mean Squared Error (MSE) loss, unless otherwise specified. While PyTorch and Adam are mentioned, specific version numbers for these software dependencies are not provided in the main text.
Experiment Setup | Yes | Table 9. Hyperparameters used for MD22, SPICE and MD17.

Parameter            | MD22 | SPICE | MD17
initial LR           | 1e-4 | 1e-4  | 3e-4
min LR               | 1e-7 | 1e-7  | 1e-7
LR warm-up steps     | 1000 | 1000  | 1000
LR decay factor      | 0.8  | 0.8   | 0.8
LR patience (epochs) | 30   | 30    | 30
optimizer            | Adam | Adam  | Adam
energy loss weight   | 0.05 | 1.0   | 0.05
forces loss weight   | 0.95 | 10.0  | 0.95
embedding dimension  | 256  | 256   | 256
attention heads      | 8    | 8     | 8
batch size           | 2,4  | 2,4   | 4
number of layers     | 9    | 9     | 9
number of RBFs       | 32   | 32    | 32
cutoff (Å)           | 5.0  | 5.0   | 5.0
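The 8:1:1 train/validation/test split reported for SPICE can be sketched as a simple shuffled index split. This is an illustrative reconstruction only: the paper does not state the split procedure, seed, or helper names used here.

```python
import random

def split_811(indices, seed=0):
    """Shuffle dataset indices and split them 80% / 10% / 10%.

    Hypothetical helper: the authors' actual split code and seed
    are not given in the paper.
    """
    idx = list(indices)
    random.Random(seed).shuffle(idx)
    n_train = int(0.8 * len(idx))
    n_val = int(0.1 * len(idx))
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

train, val, test = split_811(range(1000))
print(len(train), len(val), len(test))  # 800 100 100
```

Because the remainder after the 80% and 10% cuts goes to the test set, every index lands in exactly one partition regardless of dataset size.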
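The Table 9 values for the MD22 setting can be collected into a plain config dict, together with the weighted energy/forces objective implied by the two loss weights. The dict keys and the `combined_loss` helper are my own naming, not the authors' code; only the numeric values come from the paper.

```python
# MD22 hyperparameters from Table 9, as a plain config dict
# (key names are illustrative, not from the paper's codebase).
MD22_CONFIG = {
    "initial_lr": 1e-4,
    "min_lr": 1e-7,
    "lr_warmup_steps": 1000,
    "lr_decay_factor": 0.8,
    "lr_patience_epochs": 30,
    "optimizer": "Adam",
    "energy_loss_weight": 0.05,
    "forces_loss_weight": 0.95,
    "embedding_dim": 256,
    "attention_heads": 8,
    "num_layers": 9,
    "num_rbfs": 32,
    "cutoff_angstrom": 5.0,
}

def combined_loss(energy_mse, forces_mse, cfg):
    """Weighted MSE objective implied by the energy/forces loss weights."""
    return (cfg["energy_loss_weight"] * energy_mse
            + cfg["forces_loss_weight"] * forces_mse)

# 0.05 * 2.0 + 0.95 * 1.0 = 1.05
print(combined_loss(2.0, 1.0, MD22_CONFIG))
```

Note that SPICE uses much larger weights (1.0 for energy, 10.0 for forces), so the same helper applies with a different config dict.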