ARMR: Adaptively Responsive Network for Medication Recommendation

Authors: Feiyue Wu, Tianxing Wu, Shenqi Jing

IJCAI 2025

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental. Experiments on the MIMIC-III and MIMIC-IV datasets indicate that ARMR performs better than the state-of-the-art baselines across evaluation metrics, which contributes to more personalized and accurate medication recommendations. We conduct comprehensive experiments on two public medical datasets. ARMR outperforms the state-of-the-art baselines by 2.16% in Jaccard similarity and 2.55% in PRAUC, respectively.
Researcher Affiliation: Academia. (1) School of Computer Science and Engineering, Southeast University, Nanjing, China; (2) Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China; (3) The First Affiliated Hospital with Nanjing Medical University (Jiangsu Province Hospital), Nanjing, China. EMAIL, EMAIL
Pseudocode: No. The paper describes the proposed method using textual descriptions, mathematical equations, and figures (e.g., Figure 3 for the overall architecture), but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. The source code is publicly available at: https://github.com/seucoin/armr2.
Open Datasets: Yes. Experiments on the MIMIC-III and MIMIC-IV datasets indicate that ARMR performs better than the state-of-the-art baselines across evaluation metrics, which contributes to more personalized and accurate medication recommendations. Datasets: We conducted experiments using the MIMIC-III [Johnson et al., 2016] and MIMIC-IV [Johnson et al., 2018] datasets.
Dataset Splits: No. The paper mentions using the MIMIC-III and MIMIC-IV datasets and following the preprocessing procedures outlined in [Chen et al., 2023], but it does not explicitly state the specific training, validation, and test splits (e.g., percentages or counts) used for these datasets in the main text.
Hardware Specification: No. The paper does not provide any specific hardware details, such as GPU models, CPU types, or memory, used for running the experiments.
Software Dependencies: No. The paper describes the use of a Mamba block for processing distant health changes [Gu and Dao, 2023] but does not provide specific ancillary software details with version numbers (e.g., programming-language version, or library versions such as PyTorch or TensorFlow).
Experiment Setup: Yes. Our training strategy employs two complementary loss functions... The overall loss is computed as a weighted combination [Dosovitskiy and Djolonga, 2019] of the above two losses as follows: L = αL_bce + (1 − α)L_multi, where α ∈ [0, 1] controls the relative importance of each loss term, and we empirically set α = 0.7. ... For inference, we follow a process similar to the training phase. The final drug recommendations are determined by applying a threshold δ to the output probabilities ô(t). Specifically, we recommend drugs corresponding to the entries where ô(t) > δ, and δ is empirically set to 0.5.
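The weighted loss and thresholded inference quoted in the Experiment Setup row can be sketched in plain Python. The function names are illustrative, not taken from the paper's code, and the multi-label term here follows the common PyTorch-style margin formulation, which is an assumption about the paper's L_multi:

```python
import math

def bce_loss(probs, targets):
    # Binary cross-entropy averaged over all drug slots.
    eps = 1e-12
    terms = [-(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))
             for p, t in zip(probs, targets)]
    return sum(terms) / len(terms)

def multilabel_margin_loss(probs, targets):
    # Margin loss: penalize positive-class scores that do not exceed
    # negative-class scores by a margin of 1 (PyTorch-style formulation).
    pos = [p for p, t in zip(probs, targets) if t == 1]
    neg = [p for p, t in zip(probs, targets) if t == 0]
    total = sum(max(0.0, 1.0 - (pp - pn)) for pp in pos for pn in neg)
    return total / len(probs)

def combined_loss(probs, targets, alpha=0.7):
    # L = alpha * L_bce + (1 - alpha) * L_multi, with alpha = 0.7 as in the paper.
    return alpha * bce_loss(probs, targets) + (1 - alpha) * multilabel_margin_loss(probs, targets)

def recommend(probs, delta=0.5):
    # Recommend the drugs whose predicted probability exceeds threshold delta.
    return [i for i, p in enumerate(probs) if p > delta]
```

With perfect predictions the combined loss goes to zero, and `recommend([0.9, 0.2, 0.6])` returns the indices of the two drugs above the 0.5 threshold.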
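The Jaccard similarity cited in the Research Type evidence measures set overlap between the recommended and ground-truth drug sets, averaged over visits. A minimal sketch (helper names are hypothetical, not from the paper's code):

```python
def jaccard(pred, truth):
    # Jaccard similarity between recommended and ground-truth drug sets:
    # |intersection| / |union|, defined as 1.0 when both sets are empty.
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

def mean_jaccard(preds, truths):
    # Average Jaccard over all patient visits, as typically reported.
    scores = [jaccard(p, t) for p, t in zip(preds, truths)]
    return sum(scores) / len(scores)
```

For example, `jaccard([1, 2, 3], [2, 3, 4])` is 2/4 = 0.5: two shared drugs out of four distinct drugs overall.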