Embedding Robust Watermarking into Pattern to Protect the Copyright of Ceramic Artifacts

Authors: Lei Tan, Yuliang Xue, Guobiao Li, Zhenxing Qian, Sheng Li, Chunlei Bao

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Various experiments have been conducted to demonstrate the advantage of our proposed method for protecting the copyright of the ceramic artworks, which provides reliable watermark extraction accuracy without the need for a conspicuous stamp." "Comprehensive experiments indicate the advantage of our method for protecting the copyright of ceramic artworks. It provides exceptional robustness, with approximately a 10% improvement in bit accuracy compared to the state-of-the-art (SOTA) physical image watermarking methods."
Researcher Affiliation | Academia | "Lei Tan1, Yuliang Xue1, Guobiao Li1, Zhenxing Qian1*, Sheng Li1, Chunlei Bao2; 1School of Computer Science, Fudan University; 2The Arts and Technology Education Centre, Fudan University; EMAIL, EMAIL"
Pseudocode | No | No explicit pseudocode or algorithm blocks are present in the paper; the methodology is described through text and diagrams (e.g., Fig. 4 for the framework, Fig. 5 for template embedding).
Open Source Code | No | The paper contains no statement about releasing source code and provides no links to code repositories.
Open Datasets | Yes | "To train it, we randomly select 40,000 cover images from the COCO dataset (Lin et al. 2014). Before embedding watermark, we resize them to the resolution of 400x400." "natural images derive from the mirflickr25k (Huiskes and Lew 2008) are adopted as the patterns"
Dataset Splits | No | The paper states that 40,000 randomly selected COCO images are used for training and that "supplementary experiments were conducted on the same test data", but it does not specify the splits (e.g., percentages or exact counts for training, validation, and test sets) needed for reproduction beyond the initial selection of 40,000 training images.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, or memory) used to run the experiments.
Software Dependencies | No | "In real implementation, we use 2000 images to train a U-net (Ronneberger, Fischer, and Brox 2015) to simulate and replace this transformation." This names a neural network architecture (U-Net) but gives no software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow, CUDA).
Experiment Setup | Yes | "The decoder for watermarking extraction takes the ResNet18 architecture (He et al. 2016). To train it, we randomly select 40,000 cover images from the COCO dataset (Lin et al. 2014). Before embedding watermark, we resize them to the resolution of 400x400. The length of watermark message is set to 28; each bit in the message is independently sampled from a Bernoulli distribution with probability 1/2. The size of bit template Lp is set to 25 and the hyperparameter η is set to 2. Unless stated otherwise, α and β are set to 0.25 and 0.4, respectively."