Promptable Anomaly Segmentation with SAM Through Self-Perception Tuning

Authors: Hui-Yue Yang, Hui Chen, Ao Wang, Kai Chen, Zijia Lin, Yongliang Tang, Pengcheng Gao, Yuming Quan, Jungong Han, Guiguang Ding

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4 Experiments. 4.1 Experiment Setups. Datasets. To ensure generalization across various industrial products and anomaly types with different prompts, we collect approximately 15,000 industrial anomaly images from real-world factories as training dataset. For the evaluation, we use six standard benchmark datasets, including MVTec (Bergmann et al. 2019), VisA (Zou et al. 2022), MTD (Huang, Qiu, and Yuan 2020), KSDD2 (Božič, Tabernik, and Skočaj 2021), BTAD (Mishra et al. 2021), and MPDD (Jezek et al. 2021). The training dataset includes a variety of imaging conditions, product types, and anomaly classes that are distinct from those in the test datasets, ensuring a fair evaluation of generalization. More details are provided in the extended version. ... Table 1: Performance comparison under different evaluation modes (%). ... Table 2: Ablation study of each component in the proposed SPT (%).
Researcher Affiliation | Collaboration | 1 School of Software, Tsinghua University; 2 BNRist, Tsinghua University; 3 LUSTER Light Tech Co., Ltd.; 4 Department of Automation, Tsinghua University. EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology using textual explanations and mathematical formulas (Eq. 1-12) but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/THU-MIG/SAM-SPT
Open Datasets | Yes | For the evaluation, we use six standard benchmark datasets, including MVTec (Bergmann et al. 2019), VisA (Zou et al. 2022), MTD (Huang, Qiu, and Yuan 2020), KSDD2 (Božič, Tabernik, and Skočaj 2021), BTAD (Mishra et al. 2021), and MPDD (Jezek et al. 2021).
Dataset Splits | No | To ensure generalization across various industrial products and anomaly types with different prompts, we collect approximately 15,000 industrial anomaly images from real-world factories as training dataset. For the evaluation, we use six standard benchmark datasets, including MVTec (Bergmann et al. 2019), VisA (Zou et al. 2022), MTD (Huang, Qiu, and Yuan 2020), KSDD2 (Božič, Tabernik, and Skočaj 2021), BTAD (Mishra et al. 2021), and MPDD (Jezek et al. 2021). The training dataset includes a variety of imaging conditions, product types, and anomaly classes that are distinct from those in the test datasets, ensuring a fair evaluation of generalization. More details are provided in the extended version. The paper mentions a training dataset and evaluation datasets but does not provide specific split percentages, sample counts for each split, or an explicit description of how the benchmark datasets were split in these experiments.
Hardware Specification | Yes | All models are trained for 16 epochs using 8 NVIDIA 3090 GPUs with a batch of 8 images.
Software Dependencies | No | The paper mentions the use of the SAM model and PEFT methods but does not specify any software libraries (e.g., PyTorch, TensorFlow) or their version numbers.
Experiment Setup | Yes | During training, for all models, the learning rate is set to 1×10⁻³ and is reduced after 10 epochs. All models are trained for 16 epochs using 8 NVIDIA 3090 GPUs with a batch of 8 images. The α in VRA-Adapter remains robust within the range of 0 to 0.5, depending on the specific PEFT method and model size. The rank of adapter is set to 8 for all models by default.
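The reported schedule (base learning rate 1×10⁻³, reduced after epoch 10 of 16) can be sketched as a simple step schedule. This is an illustrative sketch, not the authors' code: the decay factor `gamma=0.1` is an assumption, as the paper does not state by how much the learning rate is reduced.

```python
# Hypothetical sketch of the training schedule described above:
# lr = 1e-3 for the first 10 epochs, then reduced for the remaining 6.
# The 0.1 decay factor is assumed; the paper only says "reduced after 10 epochs".
def lr_at_epoch(epoch, base_lr=1e-3, drop_epoch=10, gamma=0.1):
    """Step schedule: base_lr before drop_epoch, base_lr * gamma afterwards."""
    return base_lr if epoch < drop_epoch else base_lr * gamma

# Learning rate for each of the 16 training epochs
schedule = [lr_at_epoch(e) for e in range(16)]
```

In a PyTorch training loop this would correspond to `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[10]`.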