Generalized Prediction Set with Bandit Feedback
Authors: Zhou Wang, Xingye Qiao
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The empirical results further show that Bandit GPS effectively controls the recalls with promising performances on OOD detection and informative prediction. ... In Section 5, we present empirical evidence demonstrating the effectiveness of Bandit GPS. |
| Researcher Affiliation | Academia | Zhou Wang EMAIL Department of Mathematics and Statistics Binghamton University, the State University of New York; Xingye Qiao EMAIL Department of Mathematics and Statistics Binghamton University, the State University of New York |
| Pseudocode | Yes | Algorithm 1 Bandit GPS |
| Open Source Code | No | The paper does not contain an explicit statement about releasing code or a link to a code repository. |
| Open Datasets | Yes | We compare the methods by evaluating them on CIFAR10, CIFAR100, and SVHN datasets. |
| Dataset Splits | No | The paper describes how classes were split into 'normal' and 'OOD' for each dataset (e.g., 'For CIFAR10, we set {Bird, Cat, Deer, Dog, Frog, Horse} as normal classes while all the remaining {Airplane, Car, Ship, Truck} as the OOD class'). It also mentions evaluating on a 'fixed holdout labeled dataset' in Table 1. However, it does not provide specific train/validation/test split percentages, sample counts, or a detailed methodology for these standard dataset partitions. |
| Hardware Specification | Yes | All experiments are conducted on an NVIDIA P100 GPU with CUDA 11.3. |
| Software Dependencies | Yes | All experiments are conducted on an NVIDIA P100 GPU with CUDA 11.3. (CUDA 11.3 is the only software version reported; no other library or framework versions are specified.) |
| Experiment Setup | Yes | We let all the experiments have the same desired recall 1 − γ = 0.95 across datasets, utilizing ResNet (He et al., 2016) as the backbone architecture, Adam for optimization, learning rate η1 = 10^{-4} for network updates, and η2 = η2(t) = t^{-1/2} for optimizing λ_{t,k}. To improve the computational efficiency, model updates employ batch data with a size of 256 in each iteration, with about 6000 total iterations. |
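The reported experiment setup can be collected into a small configuration sketch. This is a hypothetical illustration, not the authors' code: the `Config` class and `eta2` helper are invented names, and only the hyperparameter values (desired recall 1 − γ = 0.95, learning rate η1 = 10^{-4}, step size η2(t) = t^{-1/2}, batch size 256, ~6000 iterations) come from the paper.

```python
# Hypothetical sketch of the experimental configuration reported in the paper.
# Class and function names are illustrative, not from the authors' code.
from dataclasses import dataclass


@dataclass
class Config:
    desired_recall: float = 0.95   # 1 - gamma, fixed across all datasets
    backbone: str = "ResNet"       # He et al., 2016
    optimizer: str = "Adam"
    eta1: float = 1e-4             # learning rate for network updates
    batch_size: int = 256
    total_iterations: int = 6000   # approximate count reported


def eta2(t: int) -> float:
    """Decaying step size for optimizing lambda_{t,k}: eta2(t) = t^(-1/2)."""
    return t ** -0.5


cfg = Config()
print(cfg.backbone, cfg.eta1, eta2(4))  # eta2(4) = 4^(-1/2) = 0.5
```

Note that η2 decays with the iteration index t, a standard choice for stochastic approximation steps, while η1 is held constant for the network updates.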