Error-quantified Conformal Inference for Time Series
Authors: Junxi Wu, Dongjian Hu, Yajie Bao, Shu-Tao Xia, Changliang Zou
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results demonstrate that ECI and its variants provide superior performance in time series, including data in finance, energy, and climate domains. We show that ECI maintains coverage at the target level and obtains tighter prediction sets than other state-of-the-art methods. |
| Researcher Affiliation | Academia | 1 School of Statistics and Data Science, LPMC, KLMDASR and LEBPS, Nankai University; 2 Tsinghua Shenzhen International Graduate School, Tsinghua University |
| Pseudocode | Yes | Algorithm 1 Adaptive Conformal Inference (ACI) Algorithm 2 Online Gradient Descent (OGD) Algorithm 3 Scale-Free Online Gradient Descent (SF-OGD) Algorithm 4 Online conformal prediction with decaying step sizes (decay-OGD) Algorithm 5 Conformal PID Algorithm 6 Sequential Predictive Conformal Inference (SPCI) |
| Open Source Code | Yes | Our code is available at https://github.com/creator-xi/Error-quantified-Conformal-Inference. |
| Open Datasets | Yes | We evaluate four real-world datasets: Amazon stock, Google stock (Nguyen, 2018), electricity demand (Harries et al., 1999) and temperature in Delhi (Vrao., 2017). |
| Dataset Splits | No | The paper focuses on online conformal inference for time series, where data arrives sequentially and models are trained on previously observed data {(X_i, Y_i)}_{i<t} to predict the unseen label Y_t. This implies continuous, chronological processing of the data rather than a fixed, predefined train/test/validation split. While the synthetic dataset describes changepoints over time (t = 1, …, 500; t = 501, …, 1500; t = 1501, …, 2000), these are not conventional dataset splits for reproduction. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models, memory specifications, or cloud computing instance types. |
| Software Dependencies | No | The paper mentions several models and functions (Prophet, AR, Theta, Transformer, Sigmoid function) but does not provide specific version numbers for any software libraries, frameworks, or programming languages used in the implementation. |
| Experiment Setup | Yes | We choose target coverage 1 − α = 90% and construct asymmetric prediction sets using two-sided quantile scores under α/2 respectively. For the EQ term, we set f(x) = 1 / (1 + exp(−c·x)) (the Sigmoid function) with c = 1. For ECI-cutoff, we set h = 1 and h_t = h · (max{s_{t−w+1}, …, s_t} − min{s_{t−w+1}, …, s_t}). For ECI-integral, we set weights w_i = 0.95^(t−i) / Σ_{j=1}^{t} 0.95^(t−j) for 1 ≤ i ≤ t. Specifically, PID, ECI, and its variants use adaptive learning rates η_t = η · (max{s_{t−w+1}, …, s_t} − min{s_{t−w+1}, …, s_t}), where w is the window length. For the Transformer, we set the input length to 12, the output length to 1, the number of encoder layers and decoder layers to 3 each, and the number of features in the encoder/decoder inputs to 64. Appendix G.2 provides detailed learning rates for each method. |
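The pseudocode row lists Adaptive Conformal Inference (ACI) as a baseline, and the setup row quotes an adaptive learning rate η_t driven by the range of recent scores plus a sigmoid error-quantification term f(x). The sketch below is a minimal illustration of those three pieces, not the paper's implementation: the function names, the default values of `gamma`, `eta`, and `w`, and the warm-up handling at t = 0 are my own assumptions.

```python
import numpy as np


def sigmoid(x, c=1.0):
    # EQ term quoted in the setup: f(x) = 1 / (1 + exp(-c * x)), with c = 1.
    return 1.0 / (1.0 + np.exp(-c * x))


def adaptive_lr(scores, t, eta=0.05, w=20):
    # eta_t = eta * (max{s_{t-w+1}, ..., s_t} - min{s_{t-w+1}, ..., s_t});
    # eta and w here are illustrative defaults, not the paper's values.
    window = scores[max(0, t - w + 1): t + 1]
    return eta * (np.max(window) - np.min(window))


def aci(scores, alpha=0.1, gamma=0.005):
    """Sketch of Adaptive Conformal Inference (Algorithm 1 in the table).

    scores: nonconformity scores s_t observed one at a time.
    Returns the trajectory of quantile levels alpha_t, updated by
    alpha_{t+1} = alpha_t + gamma * (alpha - err_t).
    """
    T = len(scores)
    alphas = np.empty(T + 1)
    alphas[0] = alpha
    for t in range(T):
        if t == 0:
            q = np.inf  # no calibration data yet: cover trivially
        else:
            # threshold: empirical (1 - alpha_t) quantile of past scores
            level = min(max(1.0 - alphas[t], 0.0), 1.0)
            q = np.quantile(scores[:t], level)
        err = float(scores[t] > q)  # miscoverage indicator for step t
        alphas[t + 1] = alphas[t] + gamma * (alpha - err)
    return alphas
```

The ECI methods in the paper replace the binary miscoverage indicator with a smooth sigmoid-weighted error signal and use the range-based η_t above in place of the fixed step size `gamma`; this sketch keeps the plain ACI update for brevity.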