Rewarding Explainability in Drug Repurposing with Knowledge Graphs

Authors: Susana Nunes, Samy Badreddine, Catia Pesquita

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our approach in drug repurposing using three popular knowledge graph benchmarks. The results clearly demonstrate its ability to generate explanations that validate predictive insights against biomedical knowledge and that outperform the state-of-the-art approaches in predictive performance, establishing REx as a relevant contribution to advance AI-driven scientific discovery. ... (Section 6.1, Predictive Performance Evaluation) We evaluated the predictive performance of REx's explanatory paths against several baseline methods, including MINERVA, an RL-based method that answers queries through multi-hop reasoning [Das et al., 2017], and PoLo, which extends it with logical constraints to improve interpretability [Liu et al., 2021] (details in Supp. Material)."
Researcher Affiliation | Collaboration | Susana Nunes (1,2), Samy Badreddine (2,3,4), Catia Pesquita (1). Affiliations: 1: LASIGE, Faculty of Sciences, University of Lisbon, Lisbon, Portugal; 2: Sony AI, Barcelona, Spain; 3: University of Trento, Trento, Italy; 4: Bruno Kessler Institute, Trento, Italy.
Pseudocode | No | The paper describes the policy network and its mechanisms in narrative text (Section 4.3) but does not present any structured pseudocode or algorithm blocks labeled as "Pseudocode" or "Algorithm".
Open Source Code | Yes | "Code and Supplementary Material available at https://github.com/liseda-lab/REx."
Open Datasets | Yes | "As benchmarks for our experiments, we used well-known biomedical KGs that describe drugs, diseases and other relevant entities for drug repurposing: Hetionet [Himmelstein et al., 2017], PrimeKG [Chandak et al., 2023], and OREGANO [Boudin et al., 2023]. (Data repository links in Supplementary Material.)"
Dataset Splits | No | The paper mentions training on paths to target nodes and generalizing to unseen targets during inference, but it does not provide specifics (percentages, absolute counts, or methodology) for how the datasets (Hetionet, PrimeKG, OREGANO) were split into training, validation, or test sets.
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU models, CPU types, or memory amounts, used for running the experiments.
Software Dependencies | No | The paper mentions using OWL2Vec* for generating embeddings and K-means for clustering, but it does not specify version numbers for these or for any other key software components, libraries, or programming languages used in the implementation.
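For context on the clustering step this row refers to: the paper reportedly clusters OWL2Vec* entity embeddings with K-means, but pins no library versions. Below is a minimal, dependency-free K-means sketch over placeholder embedding vectors; the toy 2-D vectors, the iteration count, and the function name are illustrative assumptions, not details taken from the paper (the actual pipeline would use high-dimensional OWL2Vec* embeddings and a library implementation such as scikit-learn's).

```python
import math
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Toy Lloyd's-algorithm K-means: cluster embedding vectors into k groups.

    vectors: list of equal-length tuples of floats (stand-ins for embeddings).
    Returns (assignments, centroids).
    """
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)          # initialize from the data points
    assignments = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: each vector goes to its nearest centroid (Euclidean).
        for i, v in enumerate(vectors):
            assignments[i] = min(range(k), key=lambda c: math.dist(v, centroids[c]))
        # Update step: move each centroid to the mean of its members
        # (an empty cluster keeps its previous centroid).
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assignments[i] == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return assignments, centroids

# Toy usage: two well-separated groups of 2-D "embeddings".
vecs = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
labels, _ = kmeans(vecs, k=2)
```

For the well-separated toy data above, the two nearby points end up in one cluster and the two distant points in the other, regardless of which data points are sampled as initial centroids.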
Experiment Setup | Yes | "Training. We extend training to 30 rollouts, following [Liu et al., 2021]."