IMDPrompter: Adapting SAM to Image Manipulation Detection by Cross-View Automated Prompt Learning
Authors: Quan Zhang, Yuxin Qi, Xi Tang, Jinwei Fang, Xi Lin, Ke Zhang, Chun Yuan
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results from five datasets (CASIA, Columbia, Coverage, IMD2020, and NIST16) validate the effectiveness of our proposed method. |
| Researcher Affiliation | Academia | Tsinghua University; Shanghai Jiao Tong University; University of Science and Technology of China |
| Pseudocode | No | The paper describes the method in prose and equations (e.g., equations 1-19) but does not include a dedicated pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide a direct link to a code repository or an explicit statement about the release of source code for the methodology. |
| Open Datasets | Yes | Our method is trained only on the CASIAv2 dataset Dong et al. (2013). For in-distribution (IND) evaluation, we use the CASIAv1 dataset Dong et al. (2013). For out-of-distribution (OOD) evaluation, we use three datasets: Columbia Hsu & Chang (2006), Coverage Wen et al. (2016) and IMD2020 Novozamsky et al. (2020). [...] In order to directly compare with state-of-the-art technologies, we trained on CASIAv2 Dong et al. (2013) and conducted extensive testing on COVER Wen et al. (2016), Columbia Hsu & Chang (2006), NIST16 Hsu & Chang (2006), CASIAv1 Dong et al. (2013), and the recent IMD Novozamsky et al. (2020). |
| Dataset Splits | Yes | Our method is trained only on the CASIAv2 dataset Dong et al. (2013). For in-distribution (IND) evaluation, we use the CASIAv1 dataset Dong et al. (2013). For out-of-distribution (OOD) evaluation, we use three datasets: Columbia Hsu & Chang (2006), Coverage Wen et al. (2016) and IMD2020 Novozamsky et al. (2020). [...] Table 7: Details of the training set and five test sets used in our experiments. [...] Our model was trained on the CASIAv2 dataset and evaluated across all test sets. |
| Hardware Specification | Yes | All experiments are run on NVIDIA A6000 GPUs. |
| Software Dependencies | No | The paper describes the algorithms and optimizers used (e.g., AdamW optimizer, Focal Loss) but does not provide specific version numbers for software dependencies such as the programming language or libraries. |
| Experiment Setup | Yes | For the optimization process, we train our model using the AdamW optimizer with an initial learning rate of 1e-4. We use a batch size of 4 and train for 100 epochs. We implement a linear warm-up strategy with a cosine annealing scheduler Loshchilov & Hutter (2016) to decay the learning rate. [...] As shown in Figure 7, we conducted hyperparameter analysis on λ1, λ2, and λ3, ultimately selecting the optimal parameter configuration: λ1 = 1.0, λ2 = 0.1, λ3 = 1.0. |
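The learning-rate schedule quoted above (linear warm-up into cosine annealing over 100 epochs from a base rate of 1e-4) can be sketched in plain Python. Note the warm-up length (`warmup_epochs=5`) and the floor (`min_lr=0.0`) are hypothetical values not stated in the paper:

```python
import math

def lr_at_epoch(epoch, total_epochs=100, base_lr=1e-4,
                warmup_epochs=5, min_lr=0.0):
    """Linear warm-up followed by cosine annealing, as described in
    the Experiment Setup row. warmup_epochs and min_lr are assumed
    values; the paper does not specify them."""
    if epoch < warmup_epochs:
        # Linear warm-up: ramp from base_lr/warmup_epochs up to base_lr.
        return base_lr * (epoch + 1) / warmup_epochs
    # Cosine annealing over the remaining epochs.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

The schedule reaches the stated 1e-4 at the end of warm-up and decays smoothly toward the floor by the final epoch; in practice this would be wired into the AdamW optimizer's per-step learning rate.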