Uncertainty Herding: One Active Learning Method for All Label Budgets

Authors: Wonho Bae, Danica Sutherland, Gabriel Oliveira

ICLR 2025

Reproducibility assessment (Variable: Result — LLM Response):
Research Type: Experimental — "In experimental validation across a variety of active learning tasks, our proposal matches or beats state-of-the-art performance in essentially all cases; it is the only method of which we are aware that reliably works well in both low- and high-budget settings."
Researcher Affiliation: Collaboration — Wonho Bae (University of British Columbia & Borealis AI, EMAIL); Danica J. Sutherland (University of British Columbia & Amii, EMAIL); Gabriel L. Oliveira (Borealis AI, EMAIL).
Pseudocode: Yes — Algorithm 1: Uncertainty herding with parameter adaptation.
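The paper's Algorithm 1 is not reproduced in this report. As a point of reference only, a generic uncertainty-weighted herding selection might look like the sketch below; the entropy/coverage objective, the greedy loop, and the mixing parameter `alpha` (interpolating between uncertainty sampling and herding) are all illustrative assumptions, not the paper's exact formulation or its parameter-adaptation rule.

```python
import numpy as np

def entropy(probs):
    # Predictive entropy per sample; probs has shape (n, num_classes).
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def uncertainty_herding(features, probs, budget, alpha=0.5):
    """Greedy selection mixing uncertainty and herding-style coverage.

    alpha=1 recovers pure uncertainty sampling; alpha=0 is pure
    herding toward the feature mean. This is an assumed objective
    for illustration, not the paper's Algorithm 1.
    """
    unc = entropy(probs)
    unc = (unc - unc.min()) / (unc.max() - unc.min() + 1e-12)
    mean = features.mean(axis=0)       # herding target
    running = np.zeros_like(mean)      # sum of selected features
    selected = []
    for t in range(budget):
        # Coverage gain: how close the running average of selected
        # features would be to the dataset mean after adding each candidate.
        cand = (running[None, :] + features) / (t + 1)
        coverage = -np.linalg.norm(cand - mean[None, :], axis=1)
        cov = (coverage - coverage.min()) / (coverage.max() - coverage.min() + 1e-12)
        score = alpha * unc + (1 - alpha) * cov
        score[selected] = -np.inf      # never re-select a point
        i = int(np.argmax(score))
        selected.append(i)
        running += features[i]
    return selected
```

A budget-dependent schedule for `alpha` (more coverage-driven at low budgets, more uncertainty-driven at high budgets) would be the natural place for the "parameter adaptation" the algorithm's title refers to.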
Open Source Code: No — The authors have not publicly released code, though they indicated in private communication that they plan to do so.
Open Datasets: Yes — Experiments run "across several benchmark datasets: CIFAR10 (Krizhevsky, 2009), CIFAR100 (Krizhevsky et al.), Tiny ImageNet (mnmoustafa, 2017), ImageNet (Deng et al., 2009), and DomainNet (Peng et al., 2019)."
Dataset Splits: Yes — "We train a randomly-initialized ResNet18 (He et al., 2016) on CIFAR10 using 5 random seeds, gradually increasing the size of the labeled set, as shown in Figure 3. We use DeiT (Touvron et al., 2021) pre-trained on ImageNet (Deng et al., 2009), following Parvaneh et al. (2022); Xie et al. (2023). We fine-tune the entire model, using DeiT-Small for CIFAR100 and DeiT-Base for DomainNet."
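The five-seed protocol quoted above could be reproduced with a harness like the following minimal sketch; `train_fn` is a hypothetical stand-in for the actual training routine, which the paper does not release.

```python
import random
import numpy as np

def run_with_seed(seed, train_fn):
    # Fix the relevant RNGs before each run so results are repeatable.
    # With PyTorch one would also call torch.manual_seed(seed).
    random.seed(seed)
    np.random.seed(seed)
    return train_fn(seed)

# Five independent runs, as in the paper's CIFAR10 protocol;
# the dummy train_fn just draws a "score" for illustration.
results = [run_with_seed(s, lambda s: float(np.random.rand())) for s in range(5)]
mean, std = float(np.mean(results)), float(np.std(results))
```

Reporting the mean and standard deviation over the five seeded runs matches the convention implied by Figure 3.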
Hardware Specification: No — The paper does not describe the hardware used for its experiments (e.g., specific GPU or CPU models); it only names model architectures such as ResNet18 and DeiT.
Software Dependencies: No — The paper does not give version numbers for any software dependencies or libraries used in the experiments; it only mentions model architectures such as ResNet18 and DeiT.
Experiment Setup: No — The paper mentions general experimental settings ('randomly-initialized ResNet18', 'cold-start', 'fine-tuning the entire model', '5 random seeds') but does not provide concrete hyperparameter values such as learning rate, batch size, number of epochs, or optimizer settings.