MedualTime: A Dual-Adapter Language Model for Medical Time Series-Text Multimodal Learning
Authors: Jiexia Ye, Weiqi Zhang, Ziyue Li, Jia Li, Meng Zhao, Fugee Tsung
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, MedualTime demonstrates superior performance on medical data, achieving notable improvements of 8% in accuracy and 12% in F1 score in supervised settings. Furthermore, MedualTime's transferability is validated by few-shot transfer experiments from coarse-grained to fine-grained medical data. |
| Researcher Affiliation | Academia | Jiexia Ye¹, Weiqi Zhang², Ziyue Li³, Jia Li², Meng Zhao⁴, Fugee Tsung² — ¹The Hong Kong University of Science and Technology (Guangzhou), ²The Hong Kong University of Science and Technology, ³University of Cologne, ⁴Columbia University |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical formulas in Section 3 'Methodology', but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/start2020/MedualTime. |
| Open Datasets | Yes | Datasets: Since our approach focuses on medical tasks involving coupled time-series and text modalities, where each modality provides valuable information for decision-making, we conduct experiments using publicly available datasets that meet the necessary criteria above: (1) PTB-XL [Wagner et al., 2020]: This electrocardiogram (ECG) corpus... (2) TUSZ v1.5.2 [Shah et al., 2018]: The Temple University Seizure Corpus (TUSZ) is a large-scale corpus of EEG (electroencephalogram) signals... |
| Dataset Splits | No | The paper states that "more details about the datasets, including the label sets, data splits, and preprocessing steps, are provided in Appendix 1.1," but that appendix is not included in the provided text. Without it, specific split percentages, sample counts, or splitting methodology are not available. |
| Hardware Specification | Yes | All experiments are implemented with the PyTorch framework on an NVIDIA A6000 (48G) GPU. |
| Software Dependencies | No | All experiments are implemented with the PyTorch framework on an NVIDIA A6000 (48G) GPU. Adam is adopted as the optimizer [Kingma, 2014]. The paper mentions the PyTorch framework but does not specify a version number for PyTorch or any other software library. |
| Experiment Setup | Yes | All hidden dimensions are set to 768 to match the backbone (i.e., GPT-2). The time series patch size and stride are both set to 25. Adam is adopted as the optimizer [Kingma, 2014]. |
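The reported hyperparameters above (patch size 25, stride 25, hidden dimension 768 to match the GPT-2 backbone) imply how a raw time series is segmented before embedding. A minimal sketch of that patch arithmetic, assuming a hypothetical sequence length of 1000 samples for illustration (not a value stated in the paper):

```python
def num_patches(seq_len: int, patch_size: int = 25, stride: int = 25) -> int:
    """Number of patches a 1-D series of length seq_len is split into,
    using the paper's reported patch size and stride of 25 (non-overlapping)."""
    if seq_len < patch_size:
        return 0
    return (seq_len - patch_size) // stride + 1

# Assumed example length: 1000 samples (e.g. a 10 s signal at 100 Hz).
seq_len = 1000
patches = num_patches(seq_len)
print(patches)  # 40 patches, each later projected to the 768-dim hidden space
```

With stride equal to patch size, the patches tile the series without overlap, so the patch count is simply `seq_len // 25` for lengths divisible by 25.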