BOOD: Boundary-based Out-Of-Distribution Data Generation
Authors: Qilin Liao, Shuo Yang, Bo Zhao, Ping Luo, Hengshuang Zhao
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on common benchmarks demonstrate that BOOD surpasses the state-of-the-art method significantly, achieving a 29.64% decrease in average FPR95 (40.31% vs. 10.67%) and a 7.27% improvement in average AUROC (90.15% vs. 97.42%) on the CIFAR-100 dataset. |
| Researcher Affiliation | Academia | 1The University of Hong Kong, Hong Kong, China 2Department of Computer Science, Harbin Institute of Technology (Shenzhen), Shenzhen, China 3School of AI, Shanghai Jiao Tong University, Shanghai, China. Correspondence to: Shuo Yang <EMAIL>, Hengshuang Zhao <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 BOOD: Boundary-based Out-Of-Distribution data generation |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-sourcing of their code. It only mentions model selection for reproducibility. |
| Open Datasets | Yes | Following Dream OOD (Du et al., 2023), we select CIFAR-100 and IMAGENET-100 (Deng et al., 2009) as ID image datasets. As the OOD datasets should not overlap with ID datasets, we choose SVHN (Netzer et al., 2011), PLACES365 (Zhou et al., 2018), TEXTURES (Cimpoi et al., 2014), LSUN (Yu et al., 2015), ISUN (Xu et al., 2015) as OOD testing image datasets for CIFAR-100. For IMAGENET-100, we choose INATURALIST (Horn et al., 2018), SUN (Xiao et al., 2010), PLACES (Zhou et al., 2018) and TEXTURES (Cimpoi et al., 2014), following MOS (Huang & Li, 2021). |
| Dataset Splits | Yes | Following Dream OOD (Du et al., 2023), we select CIFAR-100 and IMAGENET-100 (Deng et al., 2009) as ID image datasets. As the OOD datasets should not overlap with ID datasets, we choose SVHN (Netzer et al., 2011), PLACES365 (Zhou et al., 2018), TEXTURES (Cimpoi et al., 2014), LSUN (Yu et al., 2015), ISUN (Xu et al., 2015) as OOD testing image datasets for CIFAR-100. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as GPU or CPU models. It mentions computational cost comparison but not the underlying hardware. |
| Software Dependencies | Yes | A total of 1000 images per class were generated using Stable Diffusion v1.4, yielding a comprehensive set of 100,000 OOD images. |
| Experiment Setup | Yes | The initial learning rate was set to 0.1, with a cosine learning rate decay schedule implemented. A batch size of 160 was utilized. In the construction of the latent space, the temperature parameter t was assigned a value of 1. In the boundary feature selection process, the initial pruning rate r was established at 5, with an initial total step K of 100. The step size α was configured to 0.015. The hyperparameters for the OOD feature synthesis step were kept consistent with those of the boundary feature identification process. A total of 1000 images per class were generated using Stable Diffusion v1.4, yielding a comprehensive set of 100,000 OOD images. For the regularization of the OOD detection model, the β parameter was set to 1.0 for IMAGENET-100 and 2.5 for CIFAR-100. |
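The reported setup can be collected into a single configuration sketch. This is an illustrative reconstruction only: the dictionary keys and the `cosine_lr` helper are assumptions for readability, not names from the authors' (unreleased) code; the values are those quoted in the table above.

```python
import math

# Hyperparameters as reported in the paper's experiment setup.
# Key names are illustrative, not taken from the authors' code.
BOOD_CONFIG = {
    "initial_lr": 0.1,         # initial learning rate, cosine decay schedule
    "batch_size": 160,
    "temperature_t": 1.0,      # latent-space construction temperature
    "pruning_rate_r": 5,       # boundary feature selection, initial pruning rate
    "total_steps_K": 100,      # initial total steps for boundary feature search
    "step_size_alpha": 0.015,
    "images_per_class": 1000,  # generated with Stable Diffusion v1.4
    "beta": {"IMAGENET-100": 1.0, "CIFAR-100": 2.5},  # OOD regularization weight
}

def cosine_lr(step: int, total_steps: int, initial_lr: float) -> float:
    """Standard cosine decay from initial_lr down to 0 over total_steps."""
    return 0.5 * initial_lr * (1.0 + math.cos(math.pi * step / total_steps))
```

For example, `cosine_lr(0, 100, 0.1)` returns the full initial rate of 0.1, and the rate decays smoothly toward 0 by the final step.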