Param$\Delta$ for Direct Mixing: Post-Train Large Language Model At Zero Cost
Authors: Sheng Cao, Mingrui Wu, Karthik Prasad, Yuandong Tian, Zechun Liu
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results indicate that the Param$\Delta$ model effectively replicates traditional post-training. For example, the Param$\Delta$ model obtained from the 70B Llama3-inst, Llama3-base, and Llama3.1-base models attains approximately 95% of the Llama3.1-inst model's performance on average. Param$\Delta$ brings a new perspective on how to fully leverage models in the open-weight community, where checkpoints for base and instruct models are readily available and frequently updated, by providing a cost-free framework to accelerate the iterative cycle of model development. |
| Researcher Affiliation | Industry | Sheng Cao (Meta Platforms, Inc.), Mingrui Wu (Meta Platforms, Inc.), Karthik Prasad (Meta Platforms, Inc.), Yuandong Tian (Meta FAIR), Zechun Liu (Meta Reality Labs) |
| Pseudocode | No | The paper describes the methodology using mathematical equations and textual descriptions, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing code, nor does it include links to a code repository. |
| Open Datasets | Yes | We utilize open-weight checkpoints of Llama3 and Llama3.1 (Dubey et al., 2024). The evaluation datasets employed are sourced from open-source collections as reported in the Llama3 paper, which include MMLU (Hendrycks et al., 2021a), IFEval (Zhou et al., 2023), HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021b), ARC Challenge (Clark et al., 2018), GPQA (Rein et al., 2023), BFCL (Yan et al., 2024), API-Bank (Li et al., 2023), and MGSM (Shi et al., 2022). |
| Dataset Splits | No | The paper mentions that evaluation datasets are sourced from open-source collections as reported in the Llama3 paper, but it does not explicitly provide the specific training/test/validation splits used for its own experiments. |
| Hardware Specification | Yes | We use lr=1e-5, batch size=1, seq len=512, steps=125, 8 H100 GPUs to continually pretrain the 8B model; and lr=1e-5, batch size=1, seq len=512, steps=60, 16 H100 GPUs to continually pretrain the 70B model. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | We use lr=1e-5, batch size=1, seq len=512, steps=125, 8 H100 GPUs to continually pretrain the 8B model; and lr=1e-5, batch size=1, seq len=512, steps=60, 16 H100 GPUs to continually pretrain the 70B model. |
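The direct weight mixing the report summarizes, transferring the post-training delta from an instruct/base checkpoint pair onto an updated base model, can be sketched as below. The merge rule (new_base + (instruct − base), applied parameter-wise) and the function name `param_delta` are assumptions drawn from the report's description; the paper releases no code, so this is an illustrative sketch rather than the authors' implementation.

```python
def param_delta(base, instruct, new_base):
    """Transfer the post-training delta onto a new base checkpoint.

    All three arguments are mappings from parameter name to weight
    (here plain floats for illustration; in practice these would be
    tensors loaded from the Llama3-base, Llama3-inst, and
    Llama3.1-base checkpoints).
    """
    # For each parameter: new_inst = new_base + (instruct - base).
    return {
        name: new_base[name] + (instruct[name] - base[name])
        for name in new_base
    }


# Toy example with two scalar "parameters":
base = {"w": 1.0, "b": 0.5}
instruct = {"w": 3.0, "b": 0.0}      # post-training moved w by +2.0, b by -0.5
new_base = {"w": 2.0, "b": 1.0}      # updated base checkpoint
merged = param_delta(base, instruct, new_base)
# merged == {"w": 4.0, "b": 0.5}
```

The same arithmetic extends directly to checkpoint tensors; the zero-cost claim in the paper follows from the merge requiring only elementwise addition and subtraction, with no training.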