Execution-guided within-prompt search for programming-by-example

Authors: Gust Verbruggen, Ashish Tiwari, Mukul Singh, Vu Le, Sumit Gulwani

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. LLM Response: "We evaluate within-prompt search on straight-line Python code generation using five benchmarks across different domains (strings, lists, and arbitrary Python programming problems). We show that the model uses the execution results to guide the search and that within-prompt search performs well at low token budgets."
Researcher Affiliation: Industry. LLM Response: "Gust Verbruggen, Microsoft, Keerbergen, Belgium (EMAIL); Ashish Tiwari, Mukul Singh, Vu Le & Sumit Gulwani, Microsoft, Redmond, USA (EMAIL)."
Pseudocode: No. LLM Response: "The paper does not contain structured pseudocode or algorithm blocks. Figure 1 provides a visual overview of the approach but is not formatted as pseudocode."
Open Source Code: No. LLM Response: "The paper does not explicitly state that source code for the methodology is released, nor does it provide a link to a code repository."
Open Datasets: Yes. LLM Response: "We use five popular datasets that span different domains. PROSE (Microsoft) is a set of 354 benchmarks originally used to evaluate FlashFill (Gulwani, 2011)... SyGuS (Alur et al., 2019)... Playgol (Cropper, 2019)... Lists (Rule et al., 2024)... MBPP (Austin et al., 2021)..."
Dataset Splits: No. LLM Response: "To save on the cost of prompting for experiments, we filter trivial benchmarks by using a simple prompt conditioned on the first example (x1, y1) ∈ E to sample five programs Pi at t = 0.6, and we exclude a benchmark if Pi ⊨ E for all of them. This leaves 238 benchmarks from PROSE, 94 benchmarks for SyGuS, 170 benchmarks for Playgol, 211 for Lists, and 268 for MBPP." The paper describes filtering benchmarks for its experiments but does not provide training/validation/test splits for the datasets.
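The triviality filter quoted above (sample five programs from the first example; drop the benchmark if every sample already satisfies all examples) can be sketched as follows. This is a hedged reconstruction, not the authors' code: `sample_program`, the `benchmark` dictionary layout, and the callable-program representation are all assumptions standing in for the actual LLM call and data format.

```python
def is_trivial(benchmark, sample_program, n=5, temperature=0.6):
    """Sketch of the trivial-benchmark filter: sample n candidate
    programs conditioned on the first example (x1, y1), then report
    whether every sample already satisfies all examples E.
    `sample_program(x1, y1, temperature)` is a hypothetical stand-in
    for the prompted LLM call; it returns a callable program."""
    x1, y1 = benchmark["examples"][0]
    candidates = [sample_program(x1, y1, temperature) for _ in range(n)]
    # The benchmark is trivial (and filtered out) iff all n samples
    # reproduce the expected output on every example.
    return all(
        all(p(x) == y for x, y in benchmark["examples"])
        for p in candidates
    )
```

A benchmark solved by all five zero-shot samples is removed; any benchmark where at least one sample fails is kept for the search experiments.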
Hardware Specification: No. LLM Response: "The model is gpt-4o." The paper names the specific LLM used for experiments but gives no details about the hardware (e.g., GPU, CPU, memory) on which the experiments were run.
Software Dependencies: No. LLM Response: "We use Python as the programming language in which we synthesize programs." The paper mentions Python as the target language and gpt-4o as the model, but does not specify a Python version or any library/solver names with version numbers.
Experiment Setup: Yes. LLM Response: "Unless specified otherwise, we sample k = 4 operations at each line at a temperature of 0.6, which is a common value that achieves a nice balance between exploration and exploitation. The model is gpt-4o. We set the maximal number of iterations to 8."
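The reported setup (k = 4 sampled operations per line, temperature 0.6, at most 8 iterations) suggests an execution-guided loop of roughly the following shape. This is a speculative sketch under stated assumptions: `propose_lines` and `run_line` are hypothetical stand-ins for the LLM proposal step and the Python executor, and greedy selection of the candidate matching the most examples is an assumed scoring rule, not necessarily the paper's exact criterion.

```python
def within_prompt_search(propose_lines, run_line, examples,
                         k=4, temperature=0.6, max_iters=8):
    """Sketch of execution-guided, line-by-line program search:
    each iteration samples k candidate next lines, executes each
    partial program on the examples, and keeps the candidate that
    matches the most expected outputs, stopping early once all
    examples are satisfied."""
    program = []
    for _ in range(max_iters):
        candidates = propose_lines(program, examples, k, temperature)
        best_score, best_line = -1, None
        for line in candidates:
            outputs = [run_line(program + [line], x) for x, _ in examples]
            score = sum(out == y for out, (_, y) in zip(outputs, examples))
            if score > best_score:
                best_score, best_line = score, line
        program.append(best_line)
        if best_score == len(examples):  # all examples satisfied
            break
    return program
```

The execution feedback (here, the per-candidate score) is what lets the search prefer lines whose intermediate results move the program toward the expected outputs within a small token budget.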