Position: Medical Large Language Model Benchmarks Should Prioritize Construct Validity
Authors: Ahmed Alaa, Thomas Hartvigsen, Niloufar Golchini, Shiladitya Dutta, Frances Dean, Inioluwa Deborah Raji, Travis Zack
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To put these ideas into practice, we use real-world clinical data in proof-of-concept experiments to evaluate popular medical LLM benchmarks and report significant gaps in their construct validity. |
| Researcher Affiliation | Academia | ¹UC Berkeley, ²UCSF, ³University of Virginia. Correspondence to: Ahmed Alaa <EMAIL>. |
| Pseudocode | No | The paper describes methodologies for data processing and evaluation but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information for source code, such as a repository link, an explicit code release statement, or code in supplementary materials. |
| Open Datasets | Yes | As a proof of concept, we evaluate the MedQA benchmark (Jin et al., 2021) using real-world EHR data from the University of California, San Francisco medical center. |
| Dataset Splits | Yes | For MedQA and MMLU, we draw from the test split, and for MedMCQA we draw from the dev split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions evaluating specific LLMs (GPT-4, Llama 3.0, etc.) and using a system (cTAKES), but it does not provide specific version numbers for any software libraries or frameworks needed to replicate the experiment. |
| Experiment Setup | No | The paper describes its experimental methodology, such as matching MedQA questions to EHR data and the evaluation metrics used, but because it evaluates pre-trained models it does not specify concrete hyperparameters, model initialization, training schedules, or system-level training settings. |
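The evaluation that several rows above reference, scoring an LLM on MedQA-style multiple-choice questions, reduces to exact-match accuracy over predicted option letters. A minimal sketch with hypothetical predictions and gold labels (none of this data comes from the actual benchmark):

```python
# Minimal sketch of multiple-choice QA scoring, as used for benchmarks
# like MedQA: exact-match accuracy over answer-option letters.

def mcq_accuracy(predictions, gold_answers):
    """Fraction of questions where the predicted option matches gold."""
    if not gold_answers:
        raise ValueError("empty benchmark")
    correct = sum(p == g for p, g in zip(predictions, gold_answers))
    return correct / len(gold_answers)

# Hypothetical model outputs vs. gold labels for four questions.
preds = ["A", "C", "B", "D"]
gold = ["A", "C", "C", "D"]
print(mcq_accuracy(preds, gold))  # 0.75
```

Note that accuracy of this form measures only agreement with the answer key; the paper's construct-validity argument is precisely that such a score may not track real clinical competence.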