Reflection-Window Decoding: Text Generation with Selective Refinement

Authors: Zeyu Tang, Zhenhao Chen, Xiangchen Song, Loka Li, Yunlong Deng, Yifan Shen, Guangyi Chen, Peter Spirtes, Kun Zhang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our extensive experimental results demonstrate the effectiveness of our approach. Through extensive empirical evaluations, our approach demonstrates significant improvement over existing decoding approaches, and maintains performance comparable or superior to beam search while being more efficient."
Researcher Affiliation | Academia | "1Carnegie Mellon University 2Mohamed bin Zayed University of Artificial Intelligence. Correspondence to: Zeyu Tang <EMAIL>, Zhenhao Chen <EMAIL>."
Pseudocode | Yes | "We present the pseudocode of our reflection-window decoding approach in Algorithm 1."
Open Source Code | No | The paper does not provide concrete access to source code; there are no explicit statements about a code release or links to repositories for the described methodology.
Open Datasets | Yes | "Our experiments are conducted on MMLU (Hendrycks et al., 2020) and MT-Bench (Zheng et al., 2023)."
Dataset Splits | No | The paper evaluates on MMLU (Hendrycks et al., 2020) and MT-Bench (Zheng et al., 2023) but does not specify custom training/validation/test splits, instead relying on the existing structure of these evaluation benchmarks.
Hardware Specification | No | The paper does not report hardware details such as GPU models, CPU types, or cloud computing resources used to run the experiments.
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., library or solver names with versions) needed to replicate the experiments.
Experiment Setup | Yes | "We use an entropy threshold of σ = 0.5 and a window size of d = 4 in reflection-window decoding. In these experiments, we set k = 10, p = 0.9, and temperature as 1.0 for both our approach and the baseline Top-k/Top-p sampling."
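The reported hyperparameters (σ = 0.5, d = 4, k = 10, p = 0.9, temperature 1.0) can be illustrated with a minimal sketch of a selective-refinement decoding loop. This is not the authors' implementation (the paper's Algorithm 1 defines the actual procedure): the toy `next_logits` stand-in for a language model, the mean-entropy trigger over the window, and the one-shot re-sampling refinement are all illustrative assumptions.

```python
import math
import random

SIGMA = 0.5        # entropy threshold σ reported in the paper
D = 4              # reflection-window size d
TOP_K = 10         # top-k cutoff
TOP_P = 0.9        # nucleus (top-p) cutoff
TEMPERATURE = 1.0  # sampling temperature

def softmax(logits, temperature=TEMPERATURE):
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    """Shannon entropy of a probability distribution, in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def top_k_top_p_filter(probs, k=TOP_K, p=TOP_P):
    """Keep the top-k tokens, then truncate to the smallest nucleus with mass >= p."""
    ranked = sorted(enumerate(probs), key=lambda t: t[1], reverse=True)[:k]
    kept, mass = [], 0.0
    for idx, pr in ranked:
        kept.append((idx, pr))
        mass += pr
        if mass >= p:
            break
    total = sum(pr for _, pr in kept)
    return [(idx, pr / total) for idx, pr in kept]

def sample(filtered, rng):
    r, acc = rng.random(), 0.0
    for idx, pr in filtered:
        acc += pr
        if r <= acc:
            return idx
    return filtered[-1][0]

def reflection_window_decode(next_logits, n_tokens, rng):
    """Sketch: sample tokens with top-k/top-p; whenever the mean entropy over
    the last D steps exceeds SIGMA, discard that window and re-sample it once
    (an assumed stand-in for the paper's refinement step)."""
    tokens, entropies = [], []
    while len(tokens) < n_tokens:
        filtered = top_k_top_p_filter(softmax(next_logits(tokens)))
        tokens.append(sample(filtered, rng))
        entropies.append(entropy([pr for _, pr in filtered]))
        if len(tokens) >= D and sum(entropies[-D:]) / D > SIGMA:
            del tokens[-D:], entropies[-D:]  # drop the low-confidence window
            for _ in range(D):               # re-generate it once
                filtered = top_k_top_p_filter(softmax(next_logits(tokens)))
                tokens.append(sample(filtered, rng))
                entropies.append(0.0)        # accept the refined window as-is
    return tokens

# Usage with a toy "model" that emits random logits over a 32-token vocabulary.
rng = random.Random(0)
toy = lambda prefix: [rng.gauss(0.0, 1.0) for _ in range(32)]
out = reflection_window_decode(toy, 12, rng)
```

The refinement here re-samples the window a single time and then accepts it, which guarantees termination; the paper's actual criterion and refinement mechanism may differ.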