Conformal Thresholded Intervals for Efficient Regression

Authors: Rui Luo, Zhixin Zhou

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results demonstrate that CTI achieves superior performance compared to state-of-the-art conformal regression methods across various datasets, consistently producing smaller prediction sets while maintaining the desired coverage level."
Researcher Affiliation | Collaboration | Rui Luo (Department of Systems Engineering, City University of Hong Kong, Hong Kong SAR, China) and Zhixin Zhou (Alpha Benito Research, Los Angeles, USA)
Pseudocode | Yes | Algorithm 1: Conformalized Thresholded Intervals
Open Source Code | Yes | https://github.com/luo-lorry/CTI
Open Datasets | Yes | "Extensive experimental results demonstrate that CTI achieves superior performance compared to state-of-the-art conformal regression methods across various datasets, consistently producing smaller prediction sets while maintaining the desired coverage level."
Dataset Splits | Yes | "We randomly allocate 20% of the samples for testing, and from the remaining data, we utilize 70% for training the quantile regression model and 30% for calibration."
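The quoted split protocol can be sketched as follows. The function name, seed, and use of index shuffling are illustrative assumptions, not the authors' implementation:

```python
import random

def split_indices(n, test_frac=0.2, train_frac=0.7, seed=0):
    """Split n sample indices into train, calibration, and test sets.

    Mirrors the paper's stated protocol: 20% held out for testing;
    of the remaining 80%, 70% trains the quantile regression model
    and 30% is reserved for conformal calibration.
    The fixed seed is an illustrative assumption.
    """
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    n_test = int(n * test_frac)
    test, rest = idx[:n_test], idx[n_test:]
    n_train = int(len(rest) * train_frac)
    train, calib = rest[:n_train], rest[n_train:]
    return train, calib, test

train, calib, test = split_indices(1000)
print(len(train), len(calib), len(test))  # 560 240 200
```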
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, memory amounts, or other machine specifications) used for its experiments. It mentions using random forest (RF) and neural network (NN) models but not the hardware they ran on.
Software Dependencies | No | The paper mentions Quantile Regression Forests (RF) and Quantile Regression Neural Networks (NN) but does not give version numbers for these or for any other software libraries or frameworks. A code link is provided, but specific software dependencies with versions are not stated in the text.
Experiment Setup | Yes | "To balance expressiveness and computational efficiency, we fix the number of quantiles at K = 100 for all datasets. This choice ensures affordable runtime for CTI while maintaining flexibility for various tasks. We repeat all experiments 10 times, starting from the initial data splitting and the training procedure of quantile regression using both random forest (RF) and neural network (NN) models."
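The quoted setup (K = 100 quantile levels, 10 repetitions each starting from a fresh split) can be outlined as a skeleton. The quantile grid spacing and the `fit_and_evaluate` interface are assumptions for illustration; the paper's released code defines the actual details:

```python
import random

K = 100           # number of quantile levels, as stated in the paper
N_REPEATS = 10    # independent repetitions, as stated in the paper

# Evenly spaced interior quantile levels; the exact grid is our assumption.
quantile_levels = [(k + 1) / (K + 1) for k in range(K)]

def run_once(seed, fit_and_evaluate):
    """One repetition: reseed, then re-split, retrain, and score.

    `fit_and_evaluate` is a placeholder for training a quantile
    regressor (RF or NN) on a fresh split and measuring coverage
    and prediction-set size; it is not the authors' code.
    """
    rng = random.Random(seed)
    return fit_and_evaluate(rng, quantile_levels)

# Toy stand-in: report the median quantile level instead of a real model.
results = [run_once(s, lambda rng, qs: qs[len(qs) // 2]) for s in range(N_REPEATS)]
print(len(results), min(quantile_levels), max(quantile_levels))
```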