Unlocking the Potential of Lightweight Quantized Models for Deepfake Detection

Authors: Renshuai Tao, Ziheng Qin, Yifu Ding, Chuangchuang Tan, Jiakai Wang, Wei Wang

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Comprehensive experiments demonstrate that this new framework significantly reduces computational costs by 10.8x and storage requirements by 12.4x while maintaining high detection performance, even surpassing SOTA methods with less than 5% of the FLOPs, paving the way for efficient deepfake detection in resource-limited scenarios.
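The reported 10.8x compute and 12.4x storage reductions can be sanity-checked against the ideal ratios implied by the bit-widths the paper uses (2/3-bit, down from 32-bit floats). The sketch below is back-of-the-envelope arithmetic only; the function name is illustrative, not taken from the authors' code.

```python
# Back-of-the-envelope check on the reported reductions: quantizing
# 32-bit weights down to b bits can shrink storage by at most 32/b.
# Measured ratios can differ from this ideal, e.g. when some tensors
# are kept in higher precision.

def ideal_storage_ratio(full_bits, quant_bits):
    """Upper bound on storage reduction if every weight is quantized."""
    return full_bits / quant_bits

print(ideal_storage_ratio(32, 3))  # ~10.67x for 3-bit weights
print(ideal_storage_ratio(32, 2))  # 16x for 2-bit weights
```

The reported 12.4x storage figure sits between these two ideal bounds, consistent with a mix of 2- and 3-bit tensors.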
Researcher Affiliation Academia 1Institute of Information Science, Beijing Jiaotong University 2Visual Intelligence +X International Cooperation Joint Laboratory of MOE 3School of Computer Science and Engineering, Beihang University EMAIL, EMAIL, EMAIL
Pseudocode No No explicit pseudocode or algorithm blocks were found in the paper. The methodology is described using text and mathematical equations.
Open Source Code Yes Code is available at https://github.com/rstao-bjtu/QMDD.
Open Datasets Yes In this section, we show the accuracy performance of our proposed lightweight designs compared to existing deepfake detection models and algorithms in Table 2 for Cross-GAN-Sources Evaluation on ForenSynths [Wang and others, 2020] and Table 3 for Cross-Diffusion-Sources Evaluation on UniversalFakeDetect [Ojha et al., 2023].
Dataset Splits No Following common practices [Jeong and others, 2022a; Jeong and others, 2022c; Ojha et al., 2023], we use average precision (AP) and accuracy (Acc) as the evaluation metrics. On the seen data domain, i.e., images generated by ProGAN, the detection accuracy of the quantized model with the proposed methods is comparable to that of the models without quantization. ... And for the unseen data domain, our method shows its effectiveness across different GAN and Diffusion sources. While the paper distinguishes between 'seen' and 'unseen' data domains and names its evaluation metrics, it does not provide specific split ratios (e.g., training/validation/test percentages) or the explicit methodology used to partition the datasets for the experiments.
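For reference, the two metrics the paper reports can be computed without any dependencies. This is a minimal sketch assuming binary labels (1 = fake, 0 = real) and scores where higher means "more likely fake"; the function names are illustrative, not taken from the authors' code.

```python
# Dependency-free sketch of the two reported metrics.
# Assumes binary labels (1 = fake, 0 = real).

def accuracy(labels, preds):
    """Fraction of hard predictions that match the labels."""
    return sum(int(l == p) for l, p in zip(labels, preds)) / len(labels)

def average_precision(labels, scores):
    """AP: mean of precision@k taken at each positive, ranked by score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    for k, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            ap += tp / k        # precision at this positive's rank
    return ap / sum(labels)

print(accuracy([1, 0, 1, 0], [1, 0, 1, 1]))                   # 0.75
print(average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1]))  # ~0.833
```

This matches the standard ranked-retrieval definition of AP when there are no score ties.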
Hardware Specification No The paper discusses computational costs (FLOPs) and storage requirements but does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to conduct the experiments.
Software Dependencies No The paper mentions several methods and frameworks (e.g., LSQ [Esser et al., 2019], ReActNet [Liu et al., 2020], N2UQ [Liu et al., 2022], and QIL [Jung et al., 2019]) that it uses or compares against, but it does not specify the version numbers for any software libraries, programming languages, or development environments used for implementation.
Experiment Setup No The paper describes the proposed quantization framework, including the use of 2/3-bit weights and activations and different backbones (ResNet-23/34/50, MobileNet), but it does not provide specific hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings used during training or experimentation.
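The described setup (2/3-bit weights and activations, with LSQ among the compared quantizers) can be illustrated with a generic uniform fake-quantization step. The sketch below assumes common conventions (signed range for weights, unsigned for post-ReLU activations) and is not the authors' implementation.

```python
# Generic uniform "fake quantization" in the spirit of LSQ-style
# quantizers: scale by the step size, round to the integer grid,
# clamp to the b-bit range, then de-quantize. Illustrative only.

def fake_quantize(x, step, bits, signed=True):
    if signed:
        qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    else:
        qmin, qmax = 0, 2 ** bits - 1
    q = round(x / step)            # project onto the integer grid
    q = max(qmin, min(qmax, q))    # clamp to the representable range
    return q * step                # map back to the real line

print(fake_quantize(0.37, 0.25, 3))   # -> 0.25 (nearest 3-bit level)
print(fake_quantize(10.0, 0.25, 3))   # -> 0.75 (clamped at qmax = 3)
```

In LSQ proper, `step` is a learnable parameter trained jointly with the weights via a straight-through gradient estimator; the hyperparameters governing that training (learning rate, batch size, epochs) are exactly what the paper leaves unspecified.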