Model Uncertainty Quantification by Conformal Prediction in Continual Learning

Authors: Rui Gao, Weiwei Liu

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, extensive experiments on simulated and real data empirically verify the validity of our proposed method."
Researcher Affiliation | Academia | "School of Computer Science, National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence and Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan, China. Correspondence to: Weiwei Liu <EMAIL>."
Pseudocode | Yes | "Details of CPCL are shown in Algorithm 1."
Open Source Code | No | The paper does not provide an explicit statement about releasing source code, nor does it include a link to a code repository.
Open Datasets | Yes | "We conduct experiments using Tiny ImageNet, a subset of 200 classes from ImageNet (Deng et al., 2009), rescaled to an image size of 64 × 64."
Dataset Splits | Yes | "We draw 5000 samples from the above distribution to construct the train dataset for each task. ... We draw 1000 samples from the distribution of task τ1 and another 1000 samples from the distribution of task τ2 to construct the test dataset. We set Ncal = 1000, which is the number of samples for each task used to construct the calibration set. For each task, we have 500 samples belonging to one class, subdivided into training (80%) and calibration (20%) sets, along with 50 samples for testing."
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, or memory) used to run the experiments.
Software Dependencies | No | "In practice, we implement the above fitting and prediction process by Python according to (Meinshausen, 2006)." The Python version is not specified, and no library versions are given.
Experiment Setup | No | The paper reports significance levels (α) and the number of experimental runs/random seeds, but it does not provide training hyperparameters (e.g., learning rate, batch size, number of epochs, or optimizer settings) for the continual learning methods used (SI, EWC, MAS, DGR, Finetuning) or for the modified ResNet-18.
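The split reported above (80% training / 20% calibration per task, plus a held-out test set) is the standard setup for split conformal prediction. A minimal sketch of that calibration step is below; the toy regression data, the identity "model", and α = 0.1 are illustrative assumptions, not the paper's CPCL algorithm:

```python
# Illustrative split conformal prediction sketch (assumed toy setup, not CPCL):
# calibrate a prediction-set radius on held-out data so that test coverage
# is at least 1 - alpha.
import numpy as np

rng = np.random.default_rng(0)

def split_conformal_threshold(cal_scores, alpha):
    """Conformal quantile of the calibration nonconformity scores."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n  # finite-sample correction
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

# Toy task: y = x + Gaussian noise; the "model" predicts y_hat = x.
x = rng.uniform(0, 1, size=2500)
y = x + rng.normal(0, 0.1, size=2500)

# Mirror the paper's reported split: most data trains the model, a
# held-out calibration set (here 500 points) sets the threshold.
cal_x, cal_y = x[2000:], y[2000:]
test_x = rng.uniform(0, 1, size=1000)
test_y = test_x + rng.normal(0, 0.1, size=1000)

alpha = 0.1
cal_scores = np.abs(cal_y - cal_x)  # |y - y_hat| nonconformity scores
qhat = split_conformal_threshold(cal_scores, alpha)

# Prediction interval [y_hat - qhat, y_hat + qhat]; empirical coverage
# on the test set should land near 1 - alpha = 0.9.
covered = np.abs(test_y - test_x) <= qhat
print(round(covered.mean(), 3))
```

The finite-sample correction `(n + 1)(1 - alpha) / n` with the `"higher"` quantile interpolation is what gives the marginal coverage guarantee; plain `np.quantile(..., 1 - alpha)` would slightly undercover for small calibration sets.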