Bridging Knowledge Gap Between Image Inpainting and Large-Area Visible Watermark Removal
Authors: Yicheng Leng, Chaowei Fang, Junye Chen, Yixiang Fang, Sheng Li, Guanbin Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both a large-scale synthesized dataset and a real-world dataset demonstrate that our approach significantly outperforms existing state-of-the-art methods. |
| Researcher Affiliation | Collaboration | 1 School of Artificial Intelligence, Xidian University, Xi'an, China; 2 School of Data Science, The Chinese University of Hong Kong, Shenzhen, China; 3 School of Computer Science and Engineering, Research Institute of Sun Yat-sen University in Shenzhen, Sun Yat-sen University, Guangzhou, China; 4 Afirstsoft, Shenzhen, China |
| Pseudocode | No | The paper describes the methodology using text and mathematical formulations (e.g., equations 1-5), but it does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks. |
| Open Source Code | Yes | The source code is available in the supplementary materials. |
| Open Datasets | Yes | Background images are sourced from the Places365 Challenge dataset (Zhou et al. 2017) |
| Dataset Splits | Yes | The training set includes 60,000 images of size 256×256 with 1,087 different watermarks, while the validation set contains 10,000 images of size 512×512 with 160 distinct watermarks, different from those in the training set. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The codes are implemented by PyTorch (Paszke et al. 2019), PyTorch-Lightning (Falcon 2019) and Hydra (Yadan 2019). While these software frameworks are mentioned, specific version numbers for PyTorch, PyTorch-Lightning, or Hydra are not provided. |
| Experiment Setup | Yes | We employ Adam optimizer (Kingma and Ba 2014) with a learning rate of 0.0001 to train both generator and discriminator. The model is trained for 100 epochs with a batch size of 16. For the weights for individual sub-losses, we experiment with variation on weight factors, and observe subtle performance fluctuation. Finally, we set: ω1 = 10, ω2 = 30, ω3 = 1, ω4 = 100, ω5 = 0.001. |
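The reported experiment setup can be sketched as a weighted sum of five sub-losses. This is a minimal illustration assuming the paper's weight values (ω1–ω5); the sub-loss keys and dummy values below are hypothetical, since the report does not state which loss term each weight pairs with.

```python
# Sketch of the reported training configuration (hedged reconstruction).
# Hyperparameters as stated in the "Experiment Setup" row:
#   Adam optimizer, learning rate 0.0001, 100 epochs, batch size 16.
HYPERPARAMS = {"optimizer": "Adam", "lr": 1e-4, "epochs": 100, "batch_size": 16}

# Weight factors for the individual sub-losses as stated in the paper.
# The keys "w1".."w5" are placeholder labels, not the paper's notation.
OMEGAS = {"w1": 10, "w2": 30, "w3": 1, "w4": 100, "w5": 0.001}

def total_loss(sub_losses):
    """Combine sub-losses with the reported weights: L = sum_i omega_i * l_i."""
    return sum(OMEGAS[k] * sub_losses[k] for k in OMEGAS)

# Dummy per-loss values purely for demonstration.
dummy = {"w1": 0.5, "w2": 0.1, "w3": 2.0, "w4": 0.01, "w5": 4.0}
print(total_loss(dummy))  # 10*0.5 + 30*0.1 + 1*2.0 + 100*0.01 + 0.001*4.0 = 11.004
```

In practice the same weighted sum would be back-propagated through both generator and discriminator with the Adam settings above; only the scalar combination is shown here.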