The Relationship Between No-Regret Learning and Online Conformal Prediction

Authors: Ramya Ramalingam, Shayan Kiyani, Aaron Roth

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we compare the performance of Algorithm 2 with that of the MVP (multivalid predictor) algorithm (Bastani et al., 2022), which, to our knowledge, is the only other method for obtaining non-trivial group-conditional coverage guarantees in sequential adversarial settings. We run experiments on the same collection of datasets used to evaluate MVP in (Bastani et al., 2022). We compare rates of convergence to the desired coverage over all groups. Since the guarantees for our algorithm are more fine-grained and are proven in terms of ||θt||∞, we also plot the L∞ norm of the parameters θt maintained by Algorithm 2 over time.
Researcher Affiliation | Academia | Ramya Ramalingam¹, Shayan Kiyani¹, Aaron Roth¹; ¹Department of Computer Science, University of Pennsylvania. Correspondence to: Ramya Ramalingam <EMAIL>.
Pseudocode | Yes | Algorithm 1: Follow The Regularized Leader (pinball loss); Algorithm 2: Group Conditional ACI (GCACI)
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | We run both algorithms on stock market data from the WSJ daily price data... We run both MVP and GCACI on the airfoil dataset from the UCI Machine Learning Repository (Dua & Graff, 2017)... Finally, we compare performance on a covariate-shift problem using 2018 Census data from the Folktables repository (Ding et al., 2021).
Dataset Splits | Yes | We use 25% of the data to train a linear regression model g : X → R, which defines the scoring function f(x, y) = |g(x) − y|. Another 25% of the data is used as-is, and the final 50% of the data is sampled (with replacement) using exponential tilting... We use census data from two different states (California & Pennsylvania) and sample 0.2 of both states to get a test set with N = 52,794 data points. The data is sequenced with all CA data points first, giving us an unknown distribution shift from a natural source. A quantile regression model h : X → R is trained on 50% of the remaining California data, defining the fixed scoring function f(x, y) = |h(x) − y|.
Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers.
Experiment Setup | Yes | To achieve our derived O(√Tk/Ti) bounds we set the learning rate η = 1 for these experiments. We also empirically investigate the relationship between the rate of convergence to the target coverage rate and the learning rate, by measuring the time-step at which the empirical group-conditional coverage for the rest of the sequence falls within ϵ of the desired coverage rate, as a function of η. We set ϵ = 0.01 for all tests.
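The coverage comparison described in the Research Type row (empirical coverage per group, plus the L∞ norm of the parameter vector θt over time) could be computed as in the following sketch; the function names and array layout are illustrative assumptions, not from the paper:

```python
import numpy as np

def group_coverage(covered, group_mask):
    """Empirical coverage restricted to rounds belonging to one group.

    covered:    boolean array, covered[t] = True if y_t fell in the interval.
    group_mask: boolean array, group_mask[t] = True if round t is in the group.
    """
    hits = covered[group_mask]
    return hits.mean() if hits.size else float("nan")

def sup_norm_trajectory(thetas):
    """L-infinity norm of theta_t at each round, for a (T, k) trajectory."""
    return np.abs(np.asarray(thetas)).max(axis=1)
```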
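A minimal sketch of the group-conditional threshold updates named in the Pseudocode row, assuming a per-group parameter vector θ and a plain online subgradient step on the pinball loss. The paper's Algorithm 1 is stated as FTRL; this simpler step is equivalent in spirit but is not the paper's exact algorithm:

```python
import numpy as np

def gcaci_sketch(scores, group_masks, alpha=0.1, eta=1.0):
    """Online subgradient sketch of group-conditional threshold updates.

    scores:      nonconformity scores s_t, shape (T,).
    group_masks: binary array of shape (T, k); group_masks[t, g] = 1 if
                 round t belongs to group g.
    Returns the per-round thresholds and the trajectory of theta.
    """
    T, k = group_masks.shape
    theta = np.zeros(k)
    thresholds, thetas = [], []
    for t in range(T):
        m = group_masks[t]
        q = theta @ m                  # threshold for the active groups
        err = float(scores[t] > q)     # 1 on miscoverage
        # pinball subgradient step at level 1 - alpha: raise the thresholds
        # of the active groups after a miss, lower them slightly otherwise
        theta = theta + eta * (err - alpha) * m
        thresholds.append(q)
        thetas.append(theta.copy())
    return np.array(thresholds), np.array(thetas)
```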
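The exponential-tilting resampling mentioned in the Dataset Splits row might look like the sketch below: rows are drawn with replacement with probability proportional to exp(β · feature). Which covariate is tilted and the value of β are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def exponential_tilt_sample(X, tilt_feature, beta, n, rng=None):
    """Sample n rows of X with replacement under exponential tilting.

    Row i is drawn with probability proportional to exp(beta * tilt_feature[i]),
    inducing a covariate shift relative to the original distribution.
    """
    rng = rng or np.random.default_rng(0)
    logits = beta * np.asarray(tilt_feature, dtype=float)
    logits -= logits.max()             # numerical stability before exp
    p = np.exp(logits)
    p /= p.sum()
    idx = rng.choice(len(X), size=n, replace=True, p=p)
    return X[idx]
```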
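The convergence-time measurement in the Experiment Setup row could be computed as in this sketch, reading it as the first round after which the empirical coverage of every remaining suffix stays within ϵ of the target; that suffix-based reading is an assumption about the paper's exact measurement:

```python
import numpy as np

def convergence_time(covered, target, eps=0.01):
    """First round t such that mean(covered[t':]) is within eps of target
    for every t' >= t. Returns len(covered) if no such round exists.
    """
    covered = np.asarray(covered, dtype=float)
    T = len(covered)
    # suffix[t] = empirical coverage over rounds t..T-1
    suffix = np.cumsum(covered[::-1])[::-1] / np.arange(T, 0, -1)
    ok = np.abs(suffix - target) <= eps
    t = T
    for i in range(T - 1, -1, -1):     # walk back through the trailing run
        if ok[i]:
            t = i
        else:
            break
    return t
```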