Contextual Optimization Under Model Misspecification: A Tractable and Generalizable Approach

Authors: Omar Bennouna, Jiawei Zhang, Saurabh Amin, Asuman E. Ozdaglar

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide rigorous theoretical analysis and experimental validation, demonstrating superior performance compared to state-of-the-art methods. Our work offers a principled solution to the practically relevant challenge of model misspecification in contextual optimization.
Researcher Affiliation | Academia | 1) Department of EECS, Massachusetts Institute of Technology; 2) Department of Computer Sciences, University of Wisconsin-Madison. Correspondence to: Omar Bennouna <EMAIL>, Jiawei Zhang <EMAIL>.
Pseudocode | No | No pseudocode or algorithm block is explicitly provided in this paper. The paper refers to "Algorithm 3.1 in (Nocedal and Wright, 1999)" but does not include it within the text.
Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | No | We have validated our method on synthetic data and plan further experiments on real-world datasets for comparison with existing methods. In every experiment, we sample x ~ N(0, I) with all of its coordinates conditioned to lie between 0 and 10, draw the coefficients of A from a standard normal Gaussian distribution, and set b equal to A|w|... The paper uses synthetically generated data and does not provide access information for a publicly available or open dataset.
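The synthetic-data generation quoted in this row can be sketched as follows. This is a minimal reading, not the authors' code: the dimensions (d, j) = (20, 5) come from the experiment-setup row, the per-coordinate conditioning of x to [0, 10] (implemented by per-coordinate rejection) and the standard-normal reference point w_ref used to form b = A|w_ref| are assumptions, and the sample size n is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, j, n = 20, 5, 1000  # d, j from the experiment-setup row; n is illustrative

def sample_context():
    # Sample each coordinate from N(0, 1) conditioned to lie in [0, 10],
    # via per-coordinate rejection (acceptance probability is roughly 0.5).
    x = np.empty(d)
    for i in range(d):
        v = rng.standard_normal()
        while not (0.0 <= v <= 10.0):
            v = rng.standard_normal()
        x[i] = v
    return x

X = np.stack([sample_context() for _ in range(n)])

# Constraint data: A has i.i.d. standard normal entries; b = A|w_ref| for a
# reference point w_ref (assumed standard normal here), which guarantees the
# affine constraint set {w : Aw = b} is nonempty.
A = rng.standard_normal((j, d))
w_ref = rng.standard_normal(d)
b = A @ np.abs(w_ref)
```

Per-coordinate rejection is used rather than rejecting whole vectors, since for d = 20 the probability that every coordinate of a joint N(0, I) draw lands in [0, 10] is about 2^-20, which would make whole-vector rejection impractical.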
Dataset Splits | No | The paper describes how synthetic data is generated for experiments but does not provide specific training/test/validation split information (percentages, counts, or methodology) for the generated data.
Hardware Specification | No | All computational experiments were run on the MIT SuperCloud (Reuther et al., 2018). This names a computing resource but does not provide specific hardware details such as GPU/CPU models or memory amounts.
Software Dependencies | No | The paper mentions running gradient descent for optimization but does not provide specific software dependencies or their version numbers, such as programming languages, libraries, or solvers.
Experiment Setup | Yes | We set (d, j) = (20, 5) and W to be a polyhedron written as W = {w ∈ R^d : Aw = b, −10 ≤ w ≤ 0}, where A ∈ R^(j×d) (j ≤ d) and b ∈ R^j. To optimize ℓ_{β,P_n}, we ran gradient descent on its surrogate loss r_{β,P_n}... We chose β by line search. We used β_min,P = E_{(x,c)~P_n}[c^T w*(c)] as a lower bound for β, and β_SPO+ = E_{(x,c)~P_n}[c^T w(ĉ_{θ*_SPO+}(x))], where θ*_SPO+ is the solution obtained by optimizing the SPO+ loss. For every value of s, we tested 96 evenly spaced values of β in the interval [β_min,P, β_SPO+] and picked the β yielding the solution with the best decision performance.
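The β line search described in this row can be sketched as a grid search over 96 evenly spaced values. The training and evaluation routines below are placeholders, not the paper's method: train_with_beta stands in for gradient descent on the surrogate loss r_{β,P_n}, and decision_cost stands in for evaluating decision performance on held-out data; here it is a toy quadratic with a known minimizer, purely so the sketch runs end to end.

```python
import numpy as np

def train_with_beta(beta):
    # Placeholder for running gradient descent on the surrogate loss
    # r_{beta, P_n}; returns a stand-in "model" carrying its beta.
    return {"beta": beta}

def decision_cost(model):
    # Placeholder decision-performance metric, minimized at beta = 2.0
    # for illustration only (lower is better).
    return (model["beta"] - 2.0) ** 2

def select_beta(beta_min, beta_spo_plus, num_values=96):
    # Line search as described: 96 evenly spaced beta values in
    # [beta_min, beta_SPO+], keeping the beta with the best (lowest)
    # decision cost.
    grid = np.linspace(beta_min, beta_spo_plus, num_values)
    costs = [decision_cost(train_with_beta(b)) for b in grid]
    return grid[int(np.argmin(costs))]

best_beta = select_beta(beta_min=0.0, beta_spo_plus=5.0)
```

In the paper's setting the interval endpoints would be the quantities β_min,P and β_SPO+ defined in this row; the values 0.0 and 5.0 above are arbitrary stand-ins.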