Understanding Overadaptation in Supervised Fine-Tuning: The Role of Ensemble Methods

Authors: Yifan Hao, Xingyuan Pan, Hanning Zhang, Chenlu Ye, Rui Pan, Tong Zhang

ICML 2025

Reproducibility assessment (each entry gives the variable, the assessed result, and the supporting LLM response):
Research Type: Experimental. "We demonstrate that the same holds for language models, and, more strikingly, we observe an overadaptation phenomenon: the ensemble model not only retains general knowledge from the foundation model but also outperforms the fine-tuned model even on the fine-tuning domain itself... supported by empirical experiments consistent with our analysis. Specifically, we start with presenting empirical evidence in Section 3 that highlights the harmful effects of overadaptation and demonstrates the efficiency benefits of ensembling in both improving fine-tuning performance and mitigating forgetting."
Researcher Affiliation: Academia. "University of Illinois Urbana-Champaign, Illinois."
Pseudocode: No. The paper describes mathematical frameworks and theoretical analysis in Sections 4, 5, and 6, but does not contain structured pseudocode or algorithm blocks.
Open Source Code: Yes. "We make our implementation publicly available" (https://github.com/xypan0/LLMForgetting).
Open Datasets: Yes. "Our experiments utilize the Dolly dataset (Conover et al., 2023), a popular instruction-following dataset... The LLMs' instruction-following ability is evaluated on MT-Bench (Zheng et al., 2023)... We also assess LLMs' general ability on MMLU (Hendrycks et al., 2021) and CommonsenseQA (Talmor et al., 2019)."
Dataset Splits: Yes. "We have a carefully curated instruction-following dataset. The validation dataset consists of multi-turn conversations... our validation dataset contains 600 samples, evenly distributed across the 8 categories in MT-Bench."
Hardware Specification: Yes. "Our training and evaluation are conducted on 8 NVIDIA H100 GPUs."
Software Dependencies: No. "We implemented our fine-tuning code based on the Huggingface Transformers and Accelerate libraries, where Fully Sharded Data Parallel (Zhao et al., 2023) is utilized for model parallel training and acceleration." Specific version numbers for these libraries are not provided in the text.
Experiment Setup: Yes. "We fine-tune the models with a global batch size of 16, for 1 epoch, using the Adam optimizer on 8 GPUs. To select a suitable learning rate and penalty, we search the learning rate over {5×10⁻⁶, 2×10⁻⁶, 10⁻⁶}, and the penalty coefficient λ over {10⁻², 5×10⁻³, 2×10⁻³, 10⁻³}. We also search the ensemble weight τ uniformly over {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}."
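The ensemble weight τ above blends the foundation model with the fine-tuned model. A minimal sketch of this kind of parameter-space ensembling, assuming a simple linear interpolation rule θ = (1 − τ)·θ_base + τ·θ_ft — the paper's exact ensembling rule lives in the linked repository, and the function name and plain-dict parameter representation here are illustrative:

```python
# Illustrative sketch only: plain dicts of floats stand in for real model
# state_dicts. Assumes a linear interpolation rule
#   theta = (1 - tau) * theta_base + tau * theta_finetuned,
# which may differ in detail from the authors' implementation.

def ensemble_state_dicts(base_sd, finetuned_sd, tau):
    """Blend two parameter dicts; larger tau leans toward the fine-tuned model."""
    return {
        name: (1.0 - tau) * base_sd[name] + tau * finetuned_sd[name]
        for name in base_sd
    }

# Searching tau on the uniform grid quoted in the setup above:
taus = [round(0.1 * k, 1) for k in range(1, 10)]  # 0.1, 0.2, ..., 0.9
base = {"w": 0.0, "b": 2.0}
finetuned = {"w": 1.0, "b": 4.0}
candidates = {tau: ensemble_state_dicts(base, finetuned, tau) for tau in taus}
```

In practice each candidate would be scored on the held-out validation set and the best τ kept; at τ = 0 the ensemble reduces to the foundation model, at τ = 1 to the fine-tuned model.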