SLIP: Spoof-Aware One-Class Face Anti-Spoofing with Language Image Pretraining

Authors: Pei-Kai Huang, Jun-Xiong Chong, Cheng-Hsuan Chiang, Tzu-Hsien Chen, Tyng-Luh Liu, Chiou-Ting Hsu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments and ablation studies support that SLIP consistently outperforms previous one-class FAS methods. We conduct extensive experiments on seven public face anti-spoofing databases.
Researcher Affiliation | Academia | National Tsing Hua University, Taiwan; Academia Sinica, Taiwan
Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations, but does not include a distinct section or figure explicitly labeled "Pseudocode" or "Algorithm".
Open Source Code | Yes | Code: https://github.com/Pei-KaiHuang/AAAI25-SLIP
Open Datasets | Yes | We conduct extensive experiments on the following face anti-spoofing databases: (a) OULU-NPU (Boulkenafet et al. 2017) (denoted by O), (b) CASIA-MFSD (Zhang et al. 2012) (denoted by C), (c) MSU-MFSD (Wen, Han, and Jain 2015) (denoted by M), (d) Idiap Replay-Attack (Chingovska, Anjos, and Marcel 2012) (denoted by I), (e) 3DMAD (Erdogmus and Marcel 2014) (denoted by D), (f) HKBU-MARs (Liu et al. 2016b) (denoted by H), (g) CASIA-SURF (Yu et al. 2020a) (denoted by U), and (h) PADISI-Face (Rostami et al. 2021) (denoted by P).
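The letter codes in this list are reused throughout the cross-domain protocols; a minimal mapping (an illustrative sketch, not code from the paper or its repository) makes them explicit:

```python
# Letter codes for the face anti-spoofing datasets, as quoted above.
# Illustrative only; the paper's repository may organize this differently.
FAS_DATASETS = {
    "O": "OULU-NPU",
    "C": "CASIA-MFSD",
    "M": "MSU-MFSD",
    "I": "Idiap Replay-Attack",
    "D": "3DMAD",
    "H": "HKBU-MARs",
    "U": "CASIA-SURF",
    "P": "PADISI-Face",
}
```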
Dataset Splits | Yes | We conduct intra-domain testing on OULU-NPU... to design four challenging protocols for evaluating the effectiveness of the anti-spoofing models. ... In Table 3, we conduct leave-one-dataset-out testing on the most commonly used benchmarks... In Table 4, we adopt the protocols proposed in (Huang et al. 2024a) to conduct cross-domain testing... In particular, the authors in (Huang et al. 2024a) proposed adopting the leave-one-attack-out strategy to consider 3D mask, print, and replay as the unseen attack type within six protocols.
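Leave-one-dataset-out testing trains on all but one dataset and evaluates on the held-out one. Below is a minimal sketch of how such protocol pairs can be enumerated; the function name and dataset keys are illustrative, not taken from the paper:

```python
# Minimal leave-one-dataset-out (LOO) protocol sketch: each dataset is
# held out for testing in turn while the rest form the training pool.
def leave_one_dataset_out(datasets):
    """Yield (train_datasets, test_dataset) pairs for cross-domain testing."""
    for held_out in datasets:
        train = [d for d in datasets if d != held_out]
        yield train, held_out

# The commonly used O/C/M/I benchmark yields four protocols, e.g. O,C,M -> I.
for train, test in leave_one_dataset_out(["O", "C", "M", "I"]):
    print(f"train on {train}, test on {test}")
```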
Hardware Specification | No | The paper mentions using a "pretrained contrastive language-image pretraining model (CLIP)" and discusses model size and inference speed, but does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments.
Software Dependencies | No | The paper mentions using the "pretrained contrastive language-image pretraining model (CLIP)" but does not specify version numbers for CLIP or for any programming languages, libraries, or solvers.
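For reproduction, one reasonable assumption is that the pretrained CLIP weights are loaded through OpenAI's open-source `clip` package; the snippet below is a guess at the tooling, since the paper pins no versions or backbone:

```python
# Assumed tooling: pip install git+https://github.com/openai/CLIP.git
# The paper does not state which CLIP backbone or package version it uses;
# "ViT-B/16" here is a placeholder choice.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)
```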
Experiment Setup | Yes | To train SLIP, we set a constant learning rate of 1e-5 with the Adam optimizer for up to 50 epochs.
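Only the optimizer, learning rate, and epoch count are given. A minimal PyTorch sketch under those settings follows; the model and data loader are stand-ins for SLIP and its training data, which the quoted text does not specify:

```python
import torch
import torch.nn as nn

LEARNING_RATE = 1e-5  # constant learning rate reported in the paper
NUM_EPOCHS = 50       # training length reported in the paper

# Placeholders: the real model is SLIP built on pretrained CLIP, and the
# real loader serves face images; neither is detailed in the quoted text.
model = nn.Linear(512, 2)
train_loader = [(torch.randn(8, 512), torch.zeros(8, dtype=torch.long))]

optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()

for epoch in range(NUM_EPOCHS):
    for inputs, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
```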