DcDsDiff: Dual-Conditional and Dual-Stream Diffusion Model for Generative Image Tampering Localization

Authors: Qixian Hao, Shaozhang Niu, Jiwei Zhang, Kai Wang

IJCAI 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that DcDsDiff has significant advantages in terms of localization performance, generalization, extensibility, and robustness."
Researcher Affiliation | Academia | 1 Beijing Key Lab of Intelligent Telecommunication Software and Multimedia, Beijing University of Posts and Telecommunications, China; 2 Southeast Digital Economy Development Institute, China; 3 Key Laboratory of Interactive Technology and Experience System, Ministry of Culture and Tourism (BUPT), China. EMAIL, EMAIL
Pseudocode | No | The paper describes the architecture and methodology in detail across several sections (3.1 Background, 3.2 Overview, 3.3 HFVG, 3.4 MM-MSFF, 3.5 DSDN), but it does not include a clearly labeled pseudocode block or algorithm section with structured steps.
Open Source Code | Yes | "Code page: https://github.com/QixianHao/DcDsDiffand-GIT10K."
Open Datasets | Yes | "In terms of datasets, firstly, we constructed a GIT10K dataset containing 10,000 images using four common diffusion-based local inpainting methods: BrushNet (BN) [Ju et al., 2024], Paint by Example (PE) [Yang et al., 2023], Inpaint Anything (IA) [Yu et al., 2023], and PowerPaint (PP) [Zhuang et al., 2025], with each method contributing 2,500 images. Secondly, to comprehensively evaluate DcDsDiff, we tested its performance on datasets containing other tampering types, including RLS [Hao et al., 2024c], IMD [Novozamsky et al., 2020], NIST16 [Guan et al., 2016], DEFACTO Splicing (DEF) [Mahfoudi et al., 2019], and AutoSplice (AUTO) [Jia et al., 2023]."
Dataset Splits | Yes | "Finally, we divided the train and test sets of the aforementioned datasets in a 9:1 ratio."
Hardware Specification | Yes | "All experiments were conducted on a single NVIDIA GeForce RTX 4090 GPU."
Software Dependencies | No | The paper states that DcDsDiff was implemented using the PyTorch framework, but no specific version number is provided for PyTorch or any other software component.
Experiment Setup | Yes | "During the training phase, we used the AdamW optimizer with a learning rate of 0.001 and a batch size of 6. ς was set to 0.5. We employed a Signal-to-Noise Ratio (SNR)-based variance schedule [Hoogeboom et al., 2023] to adjust the SNR of the diffusion process. The model was trained for 100 epochs. During the inference phase, the model undergoes ten iterative steps."
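The 9:1 train/test division reported above can be sketched in plain Python. The function name, the shuffle seed, and the index-based approach are illustrative assumptions for reproduction, not details taken from the paper:

```python
import random

def split_indices(n_images, train_frac=0.9, seed=0):
    """Shuffle image indices and split them into train/test partitions.

    A fixed seed keeps the partition reproducible across runs
    (the paper does not specify its seed; seed=0 is an assumption).
    """
    indices = list(range(n_images))
    random.Random(seed).shuffle(indices)
    n_train = int(n_images * train_frac)
    return indices[:n_train], indices[n_train:]

# GIT10K contains 10,000 images; a 9:1 split yields 9,000 train / 1,000 test.
train_idx, test_idx = split_indices(10_000)
```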
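For reference, the AdamW update used in the reported training setup can be written out for a single scalar parameter. Only the learning rate (0.001) comes from the paper; the remaining hyperparameters are the standard AdamW defaults, assumed here:

```python
import math

def adamw_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One AdamW update for a scalar parameter w at step t (1-indexed).

    Unlike Adam with L2 regularization, the weight-decay term is decoupled:
    it is applied directly to the weight rather than folded into the
    gradient moments.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * w)
    return w, m, v
```

On the first step with a unit gradient and no weight decay, the bias-corrected moments cancel and the parameter moves by almost exactly the learning rate, which is a quick sanity check for any AdamW reimplementation.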