Relational Invariant Learning for Robust Solvation Free Energy Prediction

Authors: Yeyun Chen

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that RILOOD significantly outperforms state-of-the-art methods across various distribution shifts, highlighting its effectiveness in improving solvation free energy prediction under diverse conditions. ... In this section, we conduct extensive experiments to answer the research questions: ... We use six datasets to evaluate our method. ... Table 1. Performance comparison with baselines on 3 out-of-distribution real-world datasets...
Researcher Affiliation | Academia | 1 Institute of Artificial Intelligence, Xiamen University, China; 2 Shanghai Innovation Institute, China. Correspondence to: Yeyun Chen <EMAIL>.
Pseudocode | No | The paper describes the methodology in Section 4 (Methodology), but it does not include any clearly labeled pseudocode or algorithm blocks; the steps are described in paragraph form.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository.
Open Datasets | Yes | We use six datasets to evaluate our method. Specifically, the Minnesota Solvation Database (MNSolv) (Marenich et al., 2012), QM9Solv (Ward et al., 2021), CompSolv (Moine et al., 2017), CombiSolv (Vermeire & Green, 2021), MolMerger (Ramani & Karmakar, 2024), and Abraham (Grubbs et al., 2010).
Dataset Splits | Yes | B.1. Data splitting. To evaluate the OOD generalization performance of molecule relational learning models, we employed both random scaffold splitting and random solvent splitting strategies. ... These scaffold groups were then randomly shuffled and split into training, validation, and test sets according to a predefined ratio (e.g., 8:1:1). ... These solvent groups are then randomly shuffled and divided into training, validation, and test sets in a fixed ratio.
Hardware Specification | Yes | The proposed method is implemented on a single NVIDIA 3090 GPU with PyTorch.
Software Dependencies | No | The proposed method is implemented on a single NVIDIA 3090 GPU with PyTorch. While PyTorch is mentioned, no specific version number is provided, nor are other software dependencies with their versions.
Experiment Setup | Yes | We select 168 for the dimension (d_z) of latent variables. The learning rate was decreased on plateau by a factor of 10, from 10^-3 to 10^-5. ... we systematically vary α within {10^-7, 10^-6, 10^-5, 10^-4, 10^-3} and β within {10^-8, 10^-7, 10^-6, 10^-5, 10^-4}.
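The group-based splitting protocol quoted in the Dataset Splits row (collect samples into scaffold or solvent groups, shuffle the groups, then split 8:1:1) can be sketched as below. This is a minimal illustration, not the authors' code: `group_shuffle_split` and `group_of` are hypothetical names, and in a real scaffold split the group key for each solute would come from a scaffold routine such as RDKit's Murcko-scaffold utility.

```python
import random
from collections import defaultdict

def group_shuffle_split(samples, group_of, ratios=(0.8, 0.1, 0.1), seed=0):
    """Group-level split: every sample in a group (scaffold or solvent)
    lands in the same partition, so test-set groups are unseen during
    training. Ratios follow the 8:1:1 protocol from the paper."""
    groups = defaultdict(list)
    for s in samples:
        groups[group_of(s)].append(s)
    keys = sorted(groups)              # deterministic base order
    random.Random(seed).shuffle(keys)  # groups are "randomly shuffled"
    n_train = int(ratios[0] * len(keys))
    n_val = int(ratios[1] * len(keys))
    parts = (keys[:n_train],
             keys[n_train:n_train + n_val],
             keys[n_train + n_val:])
    return tuple([s for k in part for s in groups[k]] for part in parts)

# Solvent split: group (solute, solvent) pairs by the solvent identity.
pairs = [("solute%d" % i, "solvent%d" % (i % 10)) for i in range(100)]
train, val, test = group_shuffle_split(pairs, group_of=lambda p: p[1])
```

Because whole groups are assigned to partitions, no solvent (or scaffold) ever appears in more than one of train/val/test, which is what makes the resulting evaluation out-of-distribution.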
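The setup quoted above (learning rate reduced on plateau from 10^-3 to 10^-5, and a grid sweep over the α and β weights) can be sketched as follows. This is a toy stand-in, not the authors' implementation: `PlateauLR` mimics the behavior of a reduce-on-plateau scheduler (e.g. PyTorch's `ReduceLROnPlateau`), and `sweep`, `validate`, and the patience/factor values are hypothetical.

```python
from itertools import product

class PlateauLR:
    """Cut the learning rate by `factor` after `patience` consecutive
    epochs without validation improvement, flooring it at `min_lr`
    (1e-3 down to 1e-5, as reported in the paper)."""
    def __init__(self, lr=1e-3, factor=0.1, patience=5, min_lr=1e-5):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best, self.bad = float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad = val_loss, 0
        else:
            self.bad += 1
            if self.bad > self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.bad = 0
        return self.lr

# The hyperparameter grids reported in the paper.
ALPHAS = [10.0 ** -e for e in range(7, 2, -1)]  # 1e-7 ... 1e-3
BETAS = [10.0 ** -e for e in range(8, 3, -1)]   # 1e-8 ... 1e-4

def sweep(validate):
    """Return the (alpha, beta) pair with the lowest validation score."""
    return min(product(ALPHAS, BETAS), key=lambda ab: validate(*ab))
```

A sweep would call `sweep` with a function that trains a model with the given (α, β) and returns its validation error; the plateau scheduler is stepped once per epoch with the current validation loss.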