Adaptive Multi-prompt Contrastive Network for Few-shot Out-of-distribution Detection

Authors: Xiang Fang, Arvind Easwaran, Blaise Genest

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that AMCN outperforms other state-of-the-art works.
Researcher Affiliation | Academia | 1 Energy Research Institute @ NTU, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore; 2 College of Computing and Data Science, Nanyang Technological University, Singapore; 3 CNRS and CNRS@CREATE, IPAL IRL 2955, France and Singapore. Correspondence to: Xiang Fang <EMAIL>.
Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations, but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Code is available on GitHub.
Open Datasets | Yes | For fair comparison, we follow MOS (Huang & Li, 2021) and MCM (Ming et al., 2022) to utilize ImageNet-1k (Deng et al., 2009) as the ID set and a subset of iNaturalist (Horn et al., 2018), PLACES (Zhou et al., 2018), TEXTURE (Cimpoi et al., 2014) and SUN (Xiao et al., 2010) as the OOD set.
Dataset Splits | Yes | For the K-shot OOD detection task, it aims to use only K labeled ID images from each class for model training, and to test on the complete test set for OOD detection. ... For the few-shot setting (Ye et al., 2020), following previous works (Miyai et al., 2023; Ye et al., 2020), we try different shots (1, 2, 4, 8, 16).
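The K-shot split described above (K labeled ID images per class for training, full test set for evaluation) can be sketched as follows. This is a minimal illustration, not the paper's code; the function name and structure are assumptions.

```python
import random


def k_shot_split(labels, k, seed=0):
    """Sample K example indices per class to form a few-shot ID training set.

    `labels` holds the class label of every image in the full training set.
    Illustrative sketch only; the paper does not publish this routine.
    """
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    support = []
    for y in sorted(by_class):
        # If a class has fewer than K images, take all of them.
        support.extend(rng.sample(by_class[y], min(k, len(by_class[y]))))
    return support


# Example: 3 classes with 10 images each, 4-shot setting.
labels = [c for c in range(3) for _ in range(10)]
support = k_shot_split(labels, k=4)
assert len(support) == 12  # 3 classes x 4 shots
```

Varying `k` over 1, 2, 4, 8, 16 reproduces the shot settings listed in the row above.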
Hardware Specification | No | The computational work for this article was (fully/partially) performed on resources of the National Supercomputing Centre, Singapore (https://www.nscc.sg). This statement indicates the use of a supercomputing centre but does not provide specific hardware details such as GPU/CPU models or memory.
Software Dependencies | No | The paper mentions using 'CLIP-ViT-B/16 (Radford et al., 2021) as the pretrained model' and 'AdamW (Loshchilov & Hutter, 2019) as the optimizer', but does not specify software versions for libraries, frameworks, or programming languages (e.g., Python, PyTorch versions).
Experiment Setup | Yes | For the parameters, we set α1 = 0.4, α2 = 0.2, α3 = 0.8, θ = 0.8, τ = 1.0, γ = 0.7, P = 1, S = 50, Z = 50. We use AdamW (Loshchilov & Hutter, 2019) as the optimizer, and set the learning rate as 0.003, the batch size as 64, the token length as 16, and the number of training epochs as 100.
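The reported hyperparameters can be consolidated into a single configuration record, which makes a reproduction attempt easier to audit. This is a hypothetical consolidation; the key names are illustrative, not taken from the paper's code.

```python
# Hypothetical configuration mirroring the reported experiment setup.
# Key names are illustrative; values are as stated in the paper.
CONFIG = {
    "alpha1": 0.4, "alpha2": 0.2, "alpha3": 0.8,
    "theta": 0.8, "tau": 1.0, "gamma": 0.7,
    "P": 1, "S": 50, "Z": 50,
    "optimizer": "AdamW",
    "learning_rate": 0.003,
    "batch_size": 64,
    "token_length": 16,
    "epochs": 100,
}

# Basic sanity checks on the recorded values.
assert CONFIG["optimizer"] == "AdamW"
assert 0.0 < CONFIG["learning_rate"] < 1.0
```

Missing from this record, and from the paper, are the software versions flagged in the Software Dependencies row, which a full reproduction would also need to pin.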