One-step Label Shift Adaptation via Robust Weight Estimation
Authors: Ruidong Fan, Xiao Ouyang, Tingjin Luo, Lijun Zhang, Chenping Hou
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results substantiate the efficacy of our proposal. ... Comprehensive experimental results are presented to validate the efficacy of OLSA. ... We undertake a comprehensive evaluation of the performance and effectiveness of the proposed OLSA approach, focusing on two key aspects. For the initial component, we conduct a comparative analysis of OLSA with traditional label shift methods across diverse shift scenarios and evaluation metrics. |
| Researcher Affiliation | Academia | 1 National University of Defense Technology, Changsha, 410073, China 2 Nanjing University, Nanjing, 210023, China |
| Pseudocode | Yes | Algorithm 1 Procedure of OLSA approach |
| Open Source Code | No | The paper does not provide an explicit statement of code availability or a link to a code repository for the methodology described. |
| Open Datasets | Yes | In our study, we assess the performance and efficacy of OLSA on the MNIST [LeCun et al., 1998], Fashion-MNIST [Xiao et al., 2017], CIFAR10 [Krizhevsky et al., 2009] and CIFAR100 [Krizhevsky et al., 2009] datasets |
| Dataset Splits | Yes | For the MNIST and Fashion-MNIST datasets, we allocate 2000 samples each for the training and validation sets, and 10,000 samples for the test set. Analogously, for the CIFAR10 dataset, we assign 4000 samples each for training and validation, and 10,000 samples for testing. For the CIFAR100 dataset, the distribution is 10,000 samples for training, 5000 for validation, and 20,000 for testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running the experiments. |
| Software Dependencies | Yes | all methods run on a framework with Python 3.7 and PyTorch based on the same pre-trained classifier. |
| Experiment Setup | Yes | In addition, we fix the trade-off parameter β = 0.1 empirically, while the calibration parameter γ is selected from the discrete set [0.8, 0.9, 1, 2] and the regularization parameter λ is chosen from the discrete set [0, 0.1, 1, 10] through the validation set results. |
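For reproduction, the reported hyperparameter selection amounts to a small grid search over γ and λ on the validation split, with β held at 0.1. A minimal sketch follows; `validate` is a hypothetical callable standing in for running OLSA on the validation set and returning a score (the paper does not publish code, so the selection loop below is an assumption, only the grids and the fixed β come from the paper).

```python
from itertools import product

# Values reported in the paper: beta is fixed empirically;
# gamma and lambda are tuned on the validation set.
BETA = 0.1
GAMMA_GRID = [0.8, 0.9, 1, 2]
LAMBDA_GRID = [0, 0.1, 1, 10]

def select_hyperparameters(validate):
    """Pick (gamma, lambda) maximizing a validation score.

    `validate(gamma, lam)` is a hypothetical stand-in for training/
    evaluating OLSA with the given hyperparameters; higher is better.
    """
    best_gamma, best_lam = max(
        product(GAMMA_GRID, LAMBDA_GRID),
        key=lambda pair: validate(*pair),
    )
    return {"beta": BETA, "gamma": best_gamma, "lambda": best_lam}

# Toy scoring function for illustration only: peaks at gamma=1, lambda=0.1.
def toy_validate(gamma, lam):
    return -abs(gamma - 1) - abs(lam - 0.1)

print(select_hyperparameters(toy_validate))
# -> {'beta': 0.1, 'gamma': 1, 'lambda': 0.1}
```

Any real reproduction would replace `toy_validate` with a run of the OLSA pipeline on the validation split described above.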