End-to-End Conformal Calibration for Optimization Under Uncertainty

Authors: Christopher Yeh, Nicolas Christianson, Alan Wu, Adam Wierman, Yisong Yue

TMLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental In this section, we present experimental results for our E2E method against several ETO baselines. Code to reproduce our results is available on GitHub. ... Figure 3: Task loss performance (mean ± 1 stddev across 10 runs) for the battery storage problem with no distribution shift (top) and with distribution shift (bottom). Lower values are better.
Researcher Affiliation Academia Christopher Yeh (EMAIL), Nicolas Christianson (EMAIL), Alan Wu (EMAIL), Adam Wierman (EMAIL), Yisong Yue (EMAIL), Department of Computing and Mathematical Sciences, California Institute of Technology
Pseudocode Yes Algorithm 1 End-to-end conformal calibration for robust decisions under uncertainty
Open Source Code Yes In this section, we present experimental results for our E2E method against several ETO baselines. Code to reproduce our results is available on GitHub: https://github.com/chrisyeh96/e2e-conformal
Open Datasets Yes This problem comes from Donti et al. (2017), where a grid-scale battery operator predicts electricity prices y^RT over a T-step horizon... We adopt the portfolio optimization setting and synthetic dataset from Chenreddy & Delage (2024)
Dataset Splits Yes For the setting without distribution shift, we take a random 20% subset of the dataset as the test set... For each seed, we further use an 80/20 random split of the remaining data for training and calibration. ... For each random seed, we generate 2000 samples and use a (train, calibration, test) split of (600, 400, 1000).
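The two split procedures quoted above can be sketched as follows. This is a minimal illustration using numpy; the function and variable names are ours, not the authors', and the actual repository may implement the splits differently.

```python
import numpy as np

def split_no_shift(n, seed):
    """No-distribution-shift setting: random 20% test set, then an
    80/20 random split of the remainder into train and calibration."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_test = int(0.2 * n)
    test, rest = idx[:n_test], idx[n_test:]
    n_train = int(0.8 * len(rest))
    return rest[:n_train], rest[n_train:], test  # train, calibration, test

def split_synthetic(seed):
    """Synthetic portfolio setting: 2000 samples per seed with a fixed
    (train, calibration, test) split of (600, 400, 1000)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(2000)
    return idx[:600], idx[600:1000], idx[1000:]

train, cal, test = split_no_shift(2000, seed=0)
```

With n = 2000, the first procedure yields 1280/320/400 samples; the second always yields 600/400/1000.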
Hardware Specification Yes These times were measured on a machine with 2 AMD EPYC 7513 32-Core Processors, 1 TiB RAM, and 4 NVIDIA A100 GPUs (although only 1 of the GPUs was used in these experiments).
Software Dependencies No In all experiments, we use a batch size of 256 and the Adam optimizer (Kingma & Ba, 2015). ... In our implementation, we use the default cvxpy solver (Clarabel) for the optimization step in ETO, whereas we use the default cvxpylayers solver (SCS) during E2E training. The paper mentions software components (Adam optimizer, cvxpy, Clarabel, SCS) but does not provide specific version numbers for any of them.
Experiment Setup Yes In all experiments, we use a batch size of 256 and the Adam optimizer (Kingma & Ba, 2015). Models were trained for up to 100 epochs with early stopping if there was no improvement in validation loss for 10 consecutive epochs. For box and ellipsoid ETO baseline models, we performed a hyperparameter grid search over learning rates (10^-4.5, 10^-4, 10^-3.5, 10^-3, 10^-2.5, 10^-2, 10^-1.5) and L2 weight decay values (0, 10^-4, 10^-3, 10^-2).
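The reported grid search and early-stopping rule can be sketched as below. This is only an outline of the stated protocol: `val_loss_fn` is a hypothetical stand-in for the paper's actual train-one-epoch-and-evaluate routine, which is not shown here.

```python
from itertools import product

# Grid reported in the paper: 7 learning rates x 4 weight decays = 28 configs.
learning_rates = [10 ** e for e in (-4.5, -4.0, -3.5, -3.0, -2.5, -2.0, -1.5)]
weight_decays = [0.0, 1e-4, 1e-3, 1e-2]
grid = list(product(learning_rates, weight_decays))

def train_with_early_stopping(config, val_loss_fn, max_epochs=100, patience=10):
    """Train for up to 100 epochs, stopping if validation loss has not
    improved for 10 consecutive epochs. Returns the best validation loss.

    `val_loss_fn(config, epoch)` is a placeholder for running one epoch of
    training at the given (learning rate, weight decay) and returning the
    resulting validation loss."""
    best, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        loss = val_loss_fn(config, epoch)
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs
    return best
```

One would then select the configuration in `grid` with the lowest returned validation loss.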