Reward-Guided Speculative Decoding for Efficient LLM Reasoning
Authors: Baohao Liao, Yuhui Xu, Hanze Dong, Junnan Li, Christof Monz, Silvio Savarese, Doyen Sahoo, Caiming Xiong
ICML 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluations on challenging reasoning benchmarks, including Olympiad-level tasks, show that RSD delivers significant efficiency gains against decoding with the target model only (up to 4.4× fewer FLOPs), while achieving significantly better accuracy than parallel decoding methods on average (up to +3.5). |
| Researcher Affiliation | Collaboration | 1Language Technology Lab, University of Amsterdam 2Salesforce AI Research. Correspondence to: Yuhui Xu <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 RSD: Reward-Guided Speculative Decoding Algorithm 2 Acceptance Criterion Aω |
| Open Source Code | Yes | The code is available at https://github.com/BaohaoLiao/RSD. |
| Open Datasets | Yes | We evaluate our method on a diverse set of reasoning tasks, including GSM8K (Cobbe et al., 2021b), MATH500 (Hendrycks et al., 2021), MMLU STEM (Hendrycks et al., 2020), OlympiadBench (He et al., 2024), GaoKao-2023-En (Liao et al., 2024), GPQA (Rein et al., 2023), and Minerva Math (Lewkowycz et al., 2022). |
| Dataset Splits | No | The paper mentions using specific datasets for evaluation (e.g., GSM8K, MATH500) and reports accuracy, but it does not explicitly provide details about the training/test/validation splits used for these datasets within the main text. |
| Hardware Specification | Yes | All experiments were conducted on NVIDIA A100 GPUs, using vLLM (Kwon et al., 2023) as the backend. |
| Software Dependencies | No | The paper mentions "vLLM (Kwon et al., 2023) as the backend" and "MergeKit (Goddard et al., 2024)" but does not provide specific version numbers for any software components. |
| Experiment Setup | Yes | We use temperature = 0.7 and top-p = 0.8 for majority voting, (process) Best-of-N and beam search, while setting temperature = 0 and top-p = 1 for the remaining methods. For process Best-of-N, beam search and RSD, we define a generation ended with a step delimiter as a reasoning step, and then apply a PRM to rate this step. We employ the binary step function (the second option in Table 1) as the weighting function and set δ = 0.7. |
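The setup row above (binary step weighting, δ = 0.7) can be illustrated with a minimal sketch of the reward-guided acceptance loop. This is a hypothetical reconstruction, not the authors' implementation: `draft_model`, `target_model`, and `prm_score` are illustrative stand-ins for the small draft model, the large target model, and the process reward model from the paper.

```python
def rsd_generate_step(prompt, draft_model, target_model, prm_score, delta=0.7):
    """One RSD-style reasoning step, sketched under assumed interfaces.

    The draft model cheaply proposes a reasoning step; a process reward
    model (PRM) rates it. With a binary step weighting function, the draft
    step is accepted iff its reward clears the threshold delta; otherwise
    the expensive target model regenerates the step.
    """
    step = draft_model(prompt)           # cheap speculative proposal
    reward = prm_score(prompt, step)     # PRM rates the whole reasoning step
    if reward >= delta:                  # binary weighting: accept or reject
        return step, "draft"
    return target_model(prompt), "target"
```

Because most draft steps clear the threshold in practice, the target model runs only on the low-reward minority of steps, which is where the reported FLOP savings over target-only decoding would come from.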