Multimodal Lego: Model Merging and Fine-Tuning Across Topologies and Modalities in Biomedicine

Authors: Konstantin Hemker, Nikola Simidjievski, Mateja Jamnik

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate MM-Lego (Lego Merge and Lego Fuse) and its components (Lego Block) on seven multimodal medical datasets from three separate studies: The Cancer Genome Atlas (TCGA) (Institute, 2006), Medical Information Mart for Intensive Care (MIMIC) (Johnson et al., 2016), and the International Skin Imaging Collaboration (ISIC) (Collaboration, 2020).
Researcher Affiliation | Academia | Konstantin Hemker (Department of Computer Science & Technology), Nikola Simidjievski (PBCI, Department of Oncology; Department of Computer Science & Technology) & Mateja Jamnik (Department of Computer Science & Technology), University of Cambridge, Cambridge, UK
Pseudocode | No | The paper describes methods with equations and figures but does not contain a distinct pseudocode or algorithm block.
Open Source Code | Yes | The code implementation for MM-Lego is available at https://github.com/konst-int-i/mm-lego.
Open Datasets | Yes | We evaluate MM-Lego (Lego Merge and Lego Fuse) and its components (Lego Block) on seven multimodal medical datasets from three separate studies: The Cancer Genome Atlas (TCGA) (Institute, 2006), Medical Information Mart for Intensive Care (MIMIC) (Johnson et al., 2016), and the International Skin Imaging Collaboration (ISIC) (Collaboration, 2020).
Dataset Splits | Yes | For each experiment and dataset, we perform a 5-fold repeated random sub-sampling with a 70-15-15 train-test-validation split. (A code sketch of this splitting protocol follows the table.)
Hardware Specification | Yes | The experiments were run on a single Nvidia A100 80GB GPU on an Ubuntu 22.04 virtual machine.
Software Dependencies | No | The experiments were run on a single Nvidia A100 80GB GPU on an Ubuntu 22.04 virtual machine. This mentions the operating system but does not specify software libraries or their version numbers.
Experiment Setup | Yes | Reported hyperparameters: Learning Rate = 0.003; Epochs = 40; Early Stopping Patience = 7; L1 Regularization = 0.0002; Batch Size = 512; Optimizer = Adam; LR Scheduler = ReduceLROnPlateau. (A training-configuration sketch follows the table.)
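
To make the reported splitting protocol concrete, here is a minimal sketch of 5-fold repeated random sub-sampling with a 70-15-15 train-test-validation split. It uses NumPy index arrays; the function name, seed handling, and rounding of partition sizes are illustrative assumptions, not details taken from the MM-Lego codebase.

```python
import numpy as np

def repeated_random_splits(n_samples, n_repeats=5, train=0.70, test=0.15, seed=0):
    """Yield (train_idx, test_idx, val_idx) index arrays.

    Repeated random sub-sampling: each repeat draws a fresh random
    permutation and cuts it into 70% train, 15% test, and the remaining
    15% validation, matching the split reported in the paper. The seed
    and the rounding of partition sizes are assumptions.
    """
    rng = np.random.default_rng(seed)
    n_train = int(round(train * n_samples))
    n_test = int(round(test * n_samples))
    for _ in range(n_repeats):
        perm = rng.permutation(n_samples)
        yield (perm[:n_train],
               perm[n_train:n_train + n_test],
               perm[n_train + n_test:])

# Example: five independent 70-15-15 splits over a 1000-sample dataset.
for train_idx, test_idx, val_idx in repeated_random_splits(1000):
    assert len(train_idx) + len(test_idx) + len(val_idx) == 1000
```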
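
The experiment-setup row also translates directly into a training loop. Below is a hedged PyTorch sketch: the hyperparameter values (learning rate, epochs, patience, L1 coefficient, batch size, optimizer, scheduler) come from the table above, while the loop structure, `task_loss_fn`, and the data loaders are illustrative assumptions rather than the authors' implementation.

```python
import torch

# Hyperparameter values as reported in the paper's experiment-setup table.
LR = 0.003
EPOCHS = 40
PATIENCE = 7
L1_WEIGHT = 0.0002
BATCH_SIZE = 512  # assumed to be used when building the data loaders

def train(model, train_loader, val_loader, task_loss_fn):
    """Training loop matching the reported configuration.

    Only the hyperparameter values are from the paper; the loss
    function, loaders, and loop structure are assumptions.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=LR)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)

    best_val, epochs_without_improvement = float("inf"), 0
    for epoch in range(EPOCHS):
        model.train()
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = task_loss_fn(model(x), y)
            # L1 regularization over all parameters (coefficient from the table).
            loss = loss + L1_WEIGHT * sum(p.abs().sum() for p in model.parameters())
            loss.backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(task_loss_fn(model(x), y).item() for x, y in val_loader)
        scheduler.step(val_loss)  # reduce LR when validation loss plateaus

        # Early stopping on validation loss with patience 7.
        if val_loss < best_val:
            best_val, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= PATIENCE:
                break
```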