Motif-Oriented Representation Learning with Topology Refinement for Drug-Drug Interaction Prediction

Authors: Ran Zhang, Xuezhi Wang, Guannan Liu, Pengyang Wang, Yuanchun Zhou, Pengfei Wang

AAAI 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experimental results demonstrate that MOTOR exhibits superior performance with interpretable insights in DDI prediction tasks across three real-world datasets, thereby opening up new avenues in AI-driven DDI prediction. We evaluate MOTOR on three real-world DDI datasets: ZhangDDI (Zhang et al. 2017), ChCh-Miner (Zitnik and Leskovec 2017), and DeepDDI (Ryu, Kim, and Lee 2018). Following (Wang et al. 2021b), we remove unidentifiable SMILES during preprocessing. We then perform a stratified splitting to divide all drug pairs into a training set, a validation set, and a test set at a ratio of 6:2:2. To verify the performance of DDI prediction, four widely used metrics are selected: Area Under the Receiver Operating Characteristic curve (AUROC), Average Precision (AP), F1-score (F1), and Accuracy (ACC). We conduct ablation studies to explore the contributions of different components and modules within MOTOR.
Researcher Affiliation Academia 1Computer Network Information Center, Chinese Academy of Sciences 2University of Chinese Academy of Sciences 3Beihang University 4University of Macau EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode No The paper describes its methodology using mathematical equations and textual explanations, but it does not include a clearly labeled pseudocode block or algorithm section.
Open Source Code No The paper does not contain any explicit statement about providing source code, nor does it include links to a code repository or mention code in supplementary materials.
Open Datasets Yes We evaluate MOTOR on three real-world DDI datasets: ZhangDDI (Zhang et al. 2017), ChCh-Miner (Zitnik and Leskovec 2017), and DeepDDI (Ryu, Kim, and Lee 2018).
Dataset Splits Yes Following (Wang et al. 2021b), we remove unidentifiable SMILES during preprocessing. We then perform a stratified splitting to divide all drug pairs into a training set, a validation set, and a test set at a ratio of 6:2:2.
Hardware Specification Yes All experiments are conducted with EPYC 7742 CPU, and TESLA A100 GPU.
Software Dependencies No The paper mentions using "Xavier initialization" and "Adam optimizer" but does not specify version numbers for any key software components or libraries (e.g., Python, PyTorch, TensorFlow).
Experiment Setup Yes We set L1 = L2 = L3 = 3, P = 3, C = 8, λ1 = 0.8, λ2 = 0.6, T = 15, and fix the number of epochs to 100, the learning rate to 0.001. We initialize MOTOR by Xavier initialization and use Adam optimizer to update the parameters.
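The 6:2:2 stratified split described in the Dataset Splits row can be sketched in plain Python. This is a minimal illustration of per-label splitting, not the authors' preprocessing code; the function name and interface are hypothetical.

```python
import random
from collections import defaultdict

def stratified_split(items, labels, ratios=(0.6, 0.2, 0.2), seed=42):
    """Split items into train/val/test while preserving label proportions.

    A minimal sketch of the 6:2:2 stratified split reported in the paper;
    this helper is illustrative, not the authors' released code.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for item, label in zip(items, labels):
        by_label[label].append(item)
    train, val, test = [], [], []
    for group in by_label.values():
        rng.shuffle(group)
        n_train = int(len(group) * ratios[0])
        n_val = int(len(group) * ratios[1])
        train.extend(group[:n_train])
        val.extend(group[n_train:n_train + n_val])
        test.extend(group[n_train + n_val:])
    return train, val, test

# Toy drug pairs with balanced binary interaction labels (illustrative only).
pairs = list(range(1000))
labels = [i % 2 for i in pairs]
train, val, test = stratified_split(pairs, labels)
print(len(train), len(val), len(test))  # 600 200 200
```

Because the split is done within each label group, the train/validation/test sets retain the original class balance, which matters for imbalanced DDI datasets.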
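Two of the four metrics listed above (F1 and Accuracy) follow directly from the confusion-matrix counts. The sketch below shows those definitions for binary predictions; the helper name is illustrative and AUROC/AP would in practice come from a library such as scikit-learn.

```python
def f1_and_accuracy(y_true, y_pred):
    """Compute F1-score and Accuracy for binary predictions (minimal sketch)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    acc = (tp + tn) / len(y_true)
    return f1, acc

# Toy example: precision = 1.0, recall = 2/3, so F1 = 0.8; accuracy = 0.75.
f1, acc = f1_and_accuracy([1, 0, 1, 1], [1, 0, 0, 1])
print(round(f1, 2), acc)  # 0.8 0.75
```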
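The hyperparameters reported in the Experiment Setup row can be gathered into a single configuration sketch. The key names below are hypothetical (the paper does not release code), but the values mirror the reported settings.

```python
# Illustrative config mirroring the paper's reported setup; key names are
# assumptions, since the authors' code and naming are not available.
motor_config = {
    "L1": 3, "L2": 3, "L3": 3,   # layer counts, all set to 3
    "P": 3,                      # as reported (role defined in the paper)
    "C": 8,
    "lambda1": 0.8,              # loss-weighting coefficients
    "lambda2": 0.6,
    "T": 15,
    "epochs": 100,               # fixed number of training epochs
    "learning_rate": 0.001,
    "init": "xavier",            # Xavier initialization
    "optimizer": "adam",         # Adam optimizer
}
print(motor_config["epochs"], motor_config["learning_rate"])  # 100 0.001
```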