Enhancing Ligand Validity and Affinity in Structure-Based Drug Design with Multi-Reward Optimization

Authors: Seungbeom Lee, Munsun Jo, Jungseul Ok, Dongwoo Kim

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our method generates more realistic ligands than baseline models while achieving higher binding affinity, expanding the Pareto front empirically observed in previous studies.
Researcher Affiliation | Collaboration | ¹Graduate School of Artificial Intelligence, POSTECH, South Korea; ²KT Corporation, South Korea; ³Department of Computer Science & Engineering, POSTECH, South Korea. Correspondence to: Dongwoo Kim <EMAIL>.
Pseudocode | No | The paper describes its methods through mathematical formulations and textual explanations (e.g., Section 3.2, Bayesian Flow Networks; Section 4, Multi-Reward Optimization for BFNs) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for its own methodology. It mentions using 'the official source code provided by the author' for a baseline model (AliDiff) but makes no such statement for its own implementation.
Open Datasets | Yes | We fine-tune a pretrained model on the CrossDocked dataset (Francoeur et al., 2020).
Dataset Splits | Yes | The CrossDocked dataset comprises 100K training pairs and 100 target proteins for testing.
Hardware Specification | Yes | Training converges within two epochs on a single RTX 4090 GPU, requiring approximately six hours for completion.
Software Dependencies | No | The paper mentions software components such as ReLU activation functions, Layer Normalization (Ba et al., 2016), and the Adam optimizer, but does not provide version numbers for any programming languages or libraries used in the implementation.
Experiment Setup | Yes | For the SE(3)-equivariant network, we construct k-nearest-neighbor (kNN) graphs using a 32-nearest-neighbor search. The model consists of nine layers, each with a hidden dimension of 128 and a 16-head attention mechanism. For the noise schedules, we set β1 = 1.5 for atom types and σ1 = 0.03 for atom coordinates. The model is trained with a discrete-time loss over 1000 training steps. For fine-tuning, we use the Adam optimizer with a learning rate of 0.005 and a batch size of 32. An exponential moving average (EMA) of model parameters is maintained with a decay factor of 0.999, and we set γ = 0.4 in Equation (11).
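The reported hyperparameters can be gathered into a minimal configuration sketch, together with the standard EMA update rule the paper describes. This is an illustrative reconstruction, not the authors' code: the names `FinetuneConfig` and `ema_update` (and all field names) are hypothetical; only the numeric values come from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FinetuneConfig:
    # SE(3)-equivariant network (field names are hypothetical)
    knn: int = 32             # 32-nearest-neighbor graph construction
    num_layers: int = 9
    hidden_dim: int = 128
    num_heads: int = 16
    # Noise schedules
    beta1_types: float = 1.5      # β1 for discrete atom types
    sigma1_coords: float = 0.03   # σ1 for atom coordinates
    train_steps: int = 1000       # discrete-time loss steps
    # Fine-tuning optimization
    lr: float = 0.005             # Adam learning rate
    batch_size: int = 32
    ema_decay: float = 0.999
    gamma: float = 0.4            # γ in Equation (11)

def ema_update(ema_params, params, decay=0.999):
    """Standard exponential moving average of model parameters:
    ema ← decay * ema + (1 - decay) * param."""
    return [decay * e + (1.0 - decay) * p for e, p in zip(ema_params, params)]

cfg = FinetuneConfig()
```

With a decay of 0.999, the EMA weights lag the raw weights by roughly a 1000-step averaging window, which matches the scale of the 1000 training steps reported above.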