Lightweight-Mark: Rethinking Deep Learning-Based Watermarking

Authors: Yupeng Qiu, Han Fang, Ee-Chien Chang

Venue: ICML 2025

Reproducibility Assessment (Variable / Result / LLM Response)
Research Type: Experimental
  "In this section, we demonstrate the significant reductions in model size and computational complexity achieved by our proposed lightweight model compared to previous works. Additionally, we validate the effectiveness of our proposed PH and DO methods in improving the model's invisibility and robustness. In practice, during training, we employ a Combined Noise technique, where the model is exposed to a random noise layer in each mini-batch. This enables the model to learn robustness to multiple types of distortions at the same time."
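The Combined Noise idea quoted above (sample one random noise layer per mini-batch so the model sees many distortions over training) can be sketched as follows. The concrete distortion pool and parameters here are illustrative assumptions, not the paper's exact noise layers:

```python
import random
import torch

# Hypothetical distortion pool. The paper's actual noise layers are not
# listed here; these three are common stand-ins for illustration only.
def identity(x):
    return x

def gaussian_noise(x, sigma=0.05):
    # Additive Gaussian noise with assumed strength sigma.
    return x + sigma * torch.randn_like(x)

def dropout_noise(x, p=0.3):
    # Randomly zero out a fraction p of pixels.
    mask = (torch.rand_like(x) > p).float()
    return x * mask

NOISE_POOL = [identity, gaussian_noise, dropout_noise]

def combined_noise(watermarked_batch):
    """Apply one randomly selected distortion to the whole mini-batch,
    mirroring the per-mini-batch random noise layer described above."""
    noise_layer = random.choice(NOISE_POOL)
    return noise_layer(watermarked_batch)
```

Because a different layer is drawn each mini-batch, the decoder is trained against all distortions in the pool simultaneously rather than one at a time.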
Researcher Affiliation: Academia
  "1National University of Singapore. Correspondence to: Han Fang <EMAIL>."
Pseudocode: No
  The paper describes methods and model structures in prose and figures (e.g., Fig. 1, Fig. 5) but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor structured code-like procedures.
Open Source Code: No
  The paper does not provide an explicit statement about releasing its own source code or a link to a code repository for the methodology described. It mentions using the "authors' public code" for other models but not for its own proposed work.
Open Datasets: Yes
  "All networks are trained on the COCO dataset (Lin et al., 2014) and tested on the classical USC-SIPI image dataset (Viterbi, 1977)."
Dataset Splits: No
  The paper states that "All networks are trained on the COCO dataset (Lin et al., 2014) and tested on the classical USC-SIPI image dataset (Viterbi, 1977)". However, it does not specify the exact training, validation, and test splits within these datasets (e.g., percentages, sample counts, or references to predefined splits with citations for partitioning the COCO or USC-SIPI datasets).
Hardware Specification: Yes
  "All experimental models are implemented through PyTorch (Collobert et al., 2011) and run on NVIDIA RTX 3090 (24GB)."
Software Dependencies: No
  The paper mentions that "All experimental models are implemented through PyTorch (Collobert et al., 2011)" and that it uses the "Adam optimizer (Kingma & Ba, 2015)". While PyTorch and Adam are key software components, specific version numbers for PyTorch or any other libraries are not provided.
Experiment Setup: Yes
  "The length L of the secret message is set to 64. The safe distance ϵ in L_DO is set to 0.1, and both λ1^{PH(DO)} and λ2^{PH(DO)} are initially set to 1. ... As for the optimizer, we used the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 1e-3 and default hyperparameters."
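The reported setup (message length L = 64, ϵ = 0.1, both λ weights initialized to 1, Adam with learning rate 1e-3 and default hyperparameters) can be wired up as a minimal PyTorch sketch. The `model` below is a placeholder stand-in, not the paper's lightweight encoder/decoder architecture:

```python
import torch

# Hyperparameters as reported in the paper's experiment setup.
L = 64          # secret message length
EPSILON = 0.1   # safe distance in the DO loss
LAMBDA_1 = 1.0  # initial loss weight (λ1)
LAMBDA_2 = 1.0  # initial loss weight (λ2)

# Placeholder network; the paper's actual lightweight model differs.
model = torch.nn.Linear(L, L)

# Adam with lr=1e-3 and default betas/eps, as stated in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a random 64-bit message batch,
# using a hypothetical bit-recovery loss for demonstration.
message = torch.randint(0, 2, (8, L)).float()
pred = torch.sigmoid(model(message))
loss = torch.nn.functional.binary_cross_entropy(pred, message)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

This only demonstrates how the stated hyperparameters plug into a training loop; the paper's actual objective combines its PH and DO loss terms weighted by the two λ values.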