Splitting & Integrating: Out-of-Distribution Detection via Adversarial Gradient Attribution
Authors: Jiayu Zhang, Xinyi Wang, Zhibo Jin, Zhiyu Zhu, Jianlong Zhou, Fang Chen, Huaming Chen
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that our S&I algorithm achieves state-of-the-art results, with average FPR95 of 29.05% (ResNet34)/38.61% (WRN40) on the CIFAR100 benchmark and 37.31% (BiT-S) on the ImageNet benchmark. Our code is available at: https://github.com/LMBTough/S-I |
| Researcher Affiliation | Academia | (1) Suzhou University of Technology, Suzhou, China; (2) Faculty of Computer Science & Information Technology, University of Malaya, Kuala Lumpur, Malaysia; (3) Data Science Institute, University of Technology Sydney, Sydney, Australia; (4) School of Electrical & Computer Engineering, University of Sydney, Sydney, Australia. |
| Pseudocode | Yes | Appendix A (Pseudocode), Algorithm 1 "S&I". Input: input sample x, model f with parameters θ, number of layers l, number of iterations T, number of channels K, image height R, image width S, loss function L, learning rate η. Output: OOD score τ. |
| Open Source Code | Yes | Our code is available at: https://github.com/LMBTough/S-I |
| Open Datasets | Yes | Specifically, on the CIFAR100 benchmark, we use CIFAR10 as the ID dataset (Krizhevsky et al., 2009). We select SVHN (Netzer et al., 2011), TinyImageNet (Liang et al., 2017), LSUN (Yu et al., 2015), Places (Zhou et al., 2017) and Textures (Cimpoi et al., 2014) as OOD datasets. The corresponding backbone models are ResNet34 (He et al., 2016) and WRN40 (Zagoruyko, 2016). On the ImageNet benchmark, we use ImageNet as our ID dataset (Deng et al., 2009) and select iNaturalist (Van Horn et al., 2018), SUN (Xiao et al., 2010), Places (Zhou et al., 2017) and Textures (Cimpoi et al., 2014) as OOD datasets. |
| Dataset Splits | No | The paper lists several well-known datasets (e.g., CIFAR10, CIFAR100, ImageNet) which often have standard splits. However, it does not explicitly state the specific dataset split percentages or sample counts, nor does it refer to a particular predefined split methodology within the text of this paper. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory specifications used for running the experiments. It mentions backbone models (e.g., ResNet34, WRN40, BiT-S), but these are neural network architectures, not hardware. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or operating systems) that would be needed to replicate the experiments. |
| Experiment Setup | Yes | where η denotes the learning rate, i = 1, 2, …, T, x_0 = x, and x_adv = x_T. ... In this part, we first conduct an ablation study on the hyperparameter learning rate η. We vary the learning rate over the range {0.0005, 0.001, 0.0015, 0.002}. |
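
The pseudocode row above lists only the algorithm's interface (sample x, model f, iterations T, learning rate η, output score τ), and the experiment-setup row implies an iterative update x_i ending at x_adv = x_T. The following is a minimal sketch of that kind of loop under stated assumptions: the linear logits `W @ x`, the energy-style (logsumexp) score, and the function names are illustrative stand-ins, not the authors' actual S&I procedure.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def logsumexp(z):
    """Numerically stable log-sum-exp (energy-style score)."""
    m = z.max()
    return float(m + np.log(np.exp(z - m).sum()))

def sni_score_sketch(W, x, T=10, eta=0.001):
    """Hypothetical iterative gradient-attribution score.

    Assumed stand-in for the paper's interface: model f is a linear map
    logits = W @ x; the loop takes T gradient steps of size eta on the
    input (cf. x_0 = x, ..., x_adv = x_T) and returns a scalar score tau.
    """
    x_adv = np.asarray(x, dtype=float).copy()
    for _ in range(T):
        logits = W @ x_adv
        # Analytic gradient of logsumexp(W @ x) w.r.t. x.
        grad = W.T @ softmax(logits)
        # One gradient step with learning rate eta.
        x_adv = x_adv + eta * grad
    return logsumexp(W @ x_adv)  # OOD score tau for this sample
```

With a small enough step size, each iteration increases the (convex) logsumexp score, so the returned τ for an in-distribution-like input should exceed its initial score.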
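The result row reports FPR95, the false-positive rate on OOD samples at the threshold where 95% of ID samples are correctly retained. A minimal sketch of how this metric is commonly computed, assuming the convention that higher scores mean "more in-distribution" (the function name and score convention are illustrative, not taken from the paper):

```python
import numpy as np

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR95: fraction of OOD samples scored at or above the threshold
    that retains 95% of ID samples. Higher score = more ID-like."""
    threshold = np.percentile(id_scores, 5)  # 95% of ID scores lie above this
    return float(np.mean(np.asarray(ood_scores) >= threshold))
```

Perfectly separated score distributions yield an FPR95 of 0.0, while OOD scores that all fall above the ID threshold yield 1.0.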