ScreenMark: Watermarking Arbitrary Visual Content on Screen

Authors: Xiujian Liang, Gaozhi Liu, Yichao Si, Xiaoxiao Hu, Zhenxing Qian

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To validate the effectiveness of ScreenMark, we compiled a dataset comprising 100,000 screenshots from various devices and resolutions. Extensive experiments on different datasets confirm the superior robustness, imperceptibility, and practical applicability of the method. Experiments. Experimental Settings. Benchmarks: Ours is the first learning-based watermarking method specialized for VSC protection, so there is no directly relevant baseline model to compare against. To measure robustness, we nevertheless compared our method with four state-of-the-art (SOTA) single-modal watermarking methods, i.e., StegaStamp (Tancik, Mildenhall, and Ng 2020), PIMoG (Fang et al. 2022), MBRS (Jia, Fang, and Zhang 2021), and DWSF (Guo et al. 2023). Datasets: Given the absence of a suitable screenshot dataset for VSC protection, we created a dataset called ScreenImage, comprising 100,000 screenshots from various devices and resolutions ranging from SD (720x480) to 4K (3840x2160). We randomly selected 50,000 images as our training dataset.
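The quoted split (50,000 of the 100,000 ScreenImage screenshots randomly selected for training, evaluation images drawn from the remainder) can be sketched as a simple index partition. Function and parameter names below are illustrative, not from the paper:

```python
import random

def split_screen_image(num_total=100_000, num_train=50_000, seed=0):
    """Randomly partition dataset indices into a training set and a
    held-out remainder, mirroring the 50k/50k split quoted above.
    (Hypothetical helper; the paper does not publish its split code.)"""
    rng = random.Random(seed)
    indices = list(range(num_total))
    rng.shuffle(indices)
    train = sorted(indices[:num_train])
    held_out = sorted(indices[num_train:])
    return train, held_out

train_idx, held_out_idx = split_screen_image()
```

Evaluation samples (e.g., the 1,000 ScreenImage images "excluding training") would then be drawn from `held_out_idx` only.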
Researcher Affiliation | Academia | Xiujian Liang, Gaozhi Liu, Yichao Si, Xiaoxiao Hu, Zhenxing Qian*, Fudan University, School of Computer Science, Shanghai 200438, China. liangxj23, gzliu22, ycsi22, EMAIL, EMAIL
Pseudocode | No | The paper describes the proposed method in sections such as "Proposed Method", "Stage-1: Pairwise Initialization Workflow", "Stage-1: Pairwise Initialization Architecture", "Stage-2: Adaptive Pre-Training Workflow", and "Stage-2: Adaptive Pre-Training Architecture". These sections explain the steps and components but do not provide structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not explicitly state that source code is provided for the methodology described, nor does it include any links to repositories or mention code in supplementary materials.
Open Datasets | Yes | To validate the effectiveness of ScreenMark, we compiled a dataset comprising 100,000 screenshots from various devices and resolutions... To evaluate ScreenMark, we randomly sample 1,000 images each from ImageNet (Deng et al. 2009) and ScreenImage (excluding training), respectively.
Dataset Splits | No | We randomly selected 50,000 images as our training dataset. To evaluate ScreenMark, we randomly sample 1,000 images each from ImageNet (Deng et al. 2009) and ScreenImage (excluding training), respectively.
Hardware Specification | Yes | Our method is implemented using PyTorch (2019) and executed on an NVIDIA GeForce RTX 4090 GPU.
Software Dependencies | Yes | Our method is implemented using PyTorch (2019) and executed on an NVIDIA GeForce RTX 4090 GPU. ... We use the Adam optimizer (2014) with a learning rate of 1e-5, and set the training epochs to 100, while the compared methods adopt their default settings.
Experiment Setup | Yes | In terms of experimental parameters, the information length L is 100. The height H and width W of the watermark pattern P_W are 512... The batch size N0, the number of diffusion blocks N1, and the number of reversal blocks N2 are 16, 5, and 2, respectively. In Stage-1, the loss-function weight factors β and γ are 0.1 and 1, respectively. For L_Pattern, λ0, λ1, λ2, and λ3 are set to 1.0, 0.5, 0.1, and 0.01, respectively... The α of the Alpha-Fusion Rendering Module Rα is set to 5... We use the Adam optimizer (2014) with a learning rate of 1e-5, and set the training epochs to 100...
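The hyperparameters quoted above can be gathered into a single configuration sketch. The key names below are illustrative (the paper only names the symbols); the values are exactly those reported in the experimental settings:

```python
# Sketch of ScreenMark's reported experimental configuration.
# Key names are hypothetical; values are quoted from the paper.
config = {
    "message_length_L": 100,          # watermark information length (bits)
    "pattern_height_H": 512,          # watermark pattern P_W height
    "pattern_width_W": 512,           # watermark pattern P_W width
    "batch_size_N0": 16,
    "diffusion_blocks_N1": 5,
    "reversal_blocks_N2": 2,
    "stage1_beta": 0.1,               # Stage-1 loss weight β
    "stage1_gamma": 1.0,              # Stage-1 loss weight γ
    "pattern_loss_lambdas": [1.0, 0.5, 0.1, 0.01],  # λ0..λ3 for L_Pattern
    "alpha_fusion_alpha": 5,          # α of the Alpha-Fusion Rendering Rα
    "optimizer": "Adam",
    "learning_rate": 1e-5,
    "epochs": 100,
}
```

A reproduction attempt could feed these values into, e.g., `torch.optim.Adam(model.parameters(), lr=config["learning_rate"])`; the model architecture itself is not released, so this remains a settings summary rather than a runnable training script.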