Mesh Watermark Removal Attack and Mitigation: A Novel Perspective of Function Space

Authors: Xingyu Zhu, Guanhui Ye, Chengdong Dong, Xiapu Luo, Shiyao Zhang, Xuetao Wei

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type | Experimental | Extensive experiments demonstrate that FUNCEVADE achieves a 100% evasion rate against all previous watermarking methods while achieving only a 0.3% evasion rate on FUNCMARK.
Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, Southern University of Science and Technology, China; (2) Department of Computing, Hong Kong Polytechnic University, Hong Kong.
Pseudocode | No | The paper describes methods through mathematical formulations and descriptive text, such as equations (1) to (7) and detailed explanations of FUNCEVADE and FUNCMARK, but does not include a distinct pseudocode block or algorithm section.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository. It mentions using a third-party package 'mesh-to-sdf' with its link, but this is not the authors' own implementation code.
Open Datasets | Yes | We normalize meshes in ShapeNet (Chang et al. 2015) and Stanford Repo (Laboratory 2023) to [-1, 1]^3.
Dataset Splits | No | The paper mentions using ShapeNet and Stanford Repo datasets but does not explicitly provide training, validation, or test dataset splits, or reference any predefined splits with specific details like percentages, sample counts, or citations for such splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run experiments, such as GPU or CPU models, processor types, or memory specifications.
Software Dependencies | No | The paper mentions 'mesh-to-sdf', 'SIREN', and 'marching cubes' as software components, but it does not specify version numbers for these or any other software dependencies, which would be necessary for a reproducible setup.
Experiment Setup | Yes | We use the Adam optimizer with an initial learning rate of 10^-3 for 1000 epochs, and we decrease the learning rate by half every 200 epochs. We set Ns = 32 (i.e., the spherical system is divided into 32 × 32 partitions). We set the message length n = 48 and the detection threshold τ = 31. ... and the default watermarking strength δ = 0.001.
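The Open Datasets row reports that meshes are normalized into [-1, 1]^3. The paper does not specify the exact procedure, so the sketch below is one standard interpretation: center the vertices on the bounding-box midpoint and apply a uniform scale (the function name and the choice of uniform scaling are assumptions, not the authors' code).

```python
import numpy as np

def normalize_to_unit_cube(vertices: np.ndarray) -> np.ndarray:
    """Center an (N, 3) vertex array and scale it uniformly into [-1, 1]^3.

    A single scale factor (half the largest bounding-box extent) is used
    so the mesh's aspect ratio is preserved -- an assumption, since the
    paper only states the target range, not the scaling scheme.
    """
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    center = (lo + hi) / 2.0
    half_extent = (hi - lo).max() / 2.0
    return (vertices - center) / half_extent

# Example: two corners of a box spanning [0, 4] x [0, 2] x [0, 1]
verts = np.array([[0.0, 0.0, 0.0], [4.0, 2.0, 1.0]])
print(normalize_to_unit_cube(verts))
# -> [[-1.   -0.5  -0.25]
#     [ 1.    0.5   0.25]]
```

Only the longest axis reaches the full [-1, 1] range; the other axes stay inside it, which keeps the watermark-relevant geometry undistorted.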
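The training schedule quoted in the Experiment Setup row (Adam, initial learning rate 10^-3 for 1000 epochs, halved every 200 epochs) is a plain step-decay schedule. The sketch below reproduces only the learning-rate values, not the authors' training loop; the function name and defaults are illustrative.

```python
def learning_rate(epoch: int, base_lr: float = 1e-3,
                  drop_every: int = 200, factor: float = 0.5) -> float:
    """Step-decay schedule: multiply the rate by `factor`
    every `drop_every` epochs (here: halve every 200 epochs)."""
    return base_lr * factor ** (epoch // drop_every)

for epoch in (0, 200, 400, 999):
    print(epoch, learning_rate(epoch))
# -> 0 0.001
#    200 0.0005
#    400 0.00025
#    999 6.25e-05
```

In a PyTorch setup the same schedule would typically be expressed with `torch.optim.lr_scheduler.StepLR(optimizer, step_size=200, gamma=0.5)` wrapped around an Adam optimizer.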