Critical Forgetting-Based Multi-Scale Disentanglement for Deepfake Detection
Authors: Kai Li, Wenqi Ren, Jianshu Li, Wei Wang, Xiaochun Cao
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results validate the efficacy of the proposed method. ... Extensive experiments on forgery datasets demonstrate that our proposed method outperforms the state-of-the-art methods in terms of generalization and robustness. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Sun Yat-sen University 2School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University 3National University of Singapore |
| Pseudocode | No | The paper describes the proposed method in prose and mathematical formulations (Equations 1-11) but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We use FaceForensics++ (FF++) (Rossler et al. 2019) as the training dataset... Celeb-DF V2 (CDF) (Li et al. 2020b), DeepFake Detection Challenge (DFDC) (Dolhansky et al. 2020) and WildDeepfake (WDF) (Zi et al. 2020). ... EfficientNet (Tan and Le 2019) pre-trained on ImageNet (Deng et al. 2009). |
| Dataset Splits | Yes | The split setting for training and validation is the same as the initial dataset setting. |
| Hardware Specification | No | The paper discusses the software and training parameters (e.g., optimizer, batch size, learning rate) but does not provide specific details about the hardware used, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions several methods and models used (RetinaFace, EfficientNet, AdamW optimizer) but does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | For training, we use a batch size of 256 and employ the AdamW optimizer with betas 0.9 and 0.999. The learning rate is set as 2e-4. The hyper-parameters λ1, λ2, λ3 and λ4 are set as 0.1, 0.15, 0.1, and 0.1, respectively. |
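The Experiment Setup row above can be collected into a minimal configuration sketch. This is an illustrative reconstruction from the reported hyperparameters only; the key names (`batch_size`, `loss_weights`, etc.) are assumptions, not identifiers from the authors' (unreleased) code.

```python
# Illustrative training configuration assembled from the paper's
# reported hyperparameters; all dictionary key names are hypothetical.
train_config = {
    "batch_size": 256,
    "optimizer": "AdamW",          # AdamW with betas (0.9, 0.999)
    "betas": (0.9, 0.999),
    "learning_rate": 2e-4,
    # Weights for the four loss terms (lambda1..lambda4 in the paper)
    "loss_weights": {
        "lambda1": 0.10,
        "lambda2": 0.15,
        "lambda3": 0.10,
        "lambda4": 0.10,
    },
}

print(train_config["optimizer"], train_config["learning_rate"])
```

A config dict like this makes it easy to spot what the paper does *not* pin down: no epoch count, no learning-rate schedule, and no hardware or library versions, consistent with the "No" entries above.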