MIPT: Multilevel Informed Prompt Tuning for Robust Molecular Property Prediction
Authors: Yeyun Chen, Jiangming Shi
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that MIPT surpasses all baselines, aligning graph space and task space while achieving significant improvements on molecule-related tasks and demonstrating its scalability and versatility. |
| Researcher Affiliation | Academia | 1Institute of Artificial Intelligence, Xiamen University, Xiamen Fujian, China 2Shanghai Innovation Institute, Shanghai, China. Correspondence to: Jiangming Shi <EMAIL>. |
| Pseudocode | Yes | Pseudocode is presented in Algorithm 1. |
| Open Source Code | No | The paper does not provide concrete access to source code. It does not contain an explicit code release statement, a specific repository link, or mention of code in supplementary materials. |
| Open Datasets | Yes | We employ eight common datasets from MoleculeNet (Wu et al., 2018) as our benchmark datasets: BBBP, Tox21, ToxCast, SIDER, ClinTox, MUV, HIV and BACE. |
| Dataset Splits | No | The paper states only that "random splits and scaffold splits for these datasets are adopted"; it does not specify split ratios, random seeds, or provide split files. |
| Hardware Specification | Yes | All experiments were conducted on a high-performance computing server equipped with an NVIDIA 3090 GPU (24 GB memory). |
| Software Dependencies | Yes | The implementation was based on Python 3.9, PyTorch 1.12, and the torch_geometric library. |
| Experiment Setup | Yes | For the GNN architecture, we utilized a Graph Isomorphism Network (GIN), configured with a hidden dimension of 300, 3 graph convolutional layers, ReLU activation, and batch normalization. The optimizer was Adam with a learning rate of 0.001, a dropout rate of 0.5, and a mask probability of 0.2. ... The experiments were conducted on MoleculeNet datasets, running for 100 epochs with a batch size of 32. |
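The reported experiment setup can be collected into a single configuration mapping, which is useful as a reproduction checklist. This is a sketch: the key names and the config-dict structure are illustrative assumptions, not something defined in the paper; only the values come from the quoted setup.

```python
# Hyperparameters quoted in the paper's experiment setup.
# Key names are illustrative; the paper does not define a config schema.
MIPT_CONFIG = {
    "backbone": "GIN",          # Graph Isomorphism Network
    "hidden_dim": 300,
    "num_gnn_layers": 3,
    "activation": "ReLU",
    "batch_norm": True,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "dropout": 0.5,
    "mask_probability": 0.2,
    "epochs": 100,
    "batch_size": 32,
    # MoleculeNet benchmark datasets used for evaluation.
    "datasets": [
        "BBBP", "Tox21", "ToxCast", "SIDER",
        "ClinTox", "MUV", "HIV", "BACE",
    ],
}
```

Note that the split strategy (random vs. scaffold) is deliberately left out of this sketch, since the paper names both without specifying ratios or seeds.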