Physical Marker: Revealing Invisible Hyperlinks Hidden in Printed Trademarks

Authors: Yuliang Xue, Lei Tan, Guobiao Li, Zhenxing Qian, Sheng Li, Xinpeng Zhang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Various experiments have been conducted to demonstrate the advantage of our proposed method for embedding links in printed brand logos, which provides reliable extraction accuracy under both simulated and real scenarios.
Researcher Affiliation | Academia | Yuliang Xue, Lei Tan, Guobiao Li, Zhenxing Qian*, Sheng Li, Xinpeng Zhang — School of Computer Science, Fudan University, Shanghai, China
Pseudocode | No | The paper describes the method and architecture using figures and descriptive text, but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to a code repository.
Open Datasets | Yes | We began by standardizing the bitmap images from the Large Logo Dataset (Sage et al. 2017). ... 417 images from the METU Trademark Dataset (Tursun and Sinan 2015) were dedicated to testing.
Dataset Splits | Yes | For experiments in the digital environment, 98,418 images generated from the Large Logo Dataset were used for training. To further assess the robustness and generalization capabilities of the method, 417 images from the METU Trademark Dataset (Tursun and Sinan 2015) were dedicated to testing. Moreover, we randomly select 20 images from the test set for real PCR tests.
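The split described above can be sketched in a few lines. This is a hypothetical illustration only: the file names, the random seed, and the use of `random.sample` are assumptions, not details from the paper.

```python
import random

# Hypothetical sketch of the reported splits: 98,418 Large Logo Dataset
# images for training, the 417 METU Trademark Dataset images held out for
# testing, and 20 test images drawn at random for the real-world tests.
random.seed(0)  # assumed seed; the paper does not state one

train_set = [f"lld_{i:06d}.png" for i in range(98_418)]   # placeholder names
test_set = [f"metu_{i:04d}.png" for i in range(417)]      # placeholder names
real_test = random.sample(test_set, 20)                   # 20 for real tests
```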
Hardware Specification | Yes | The entire framework is implemented in PyTorch and executed on an NVIDIA GeForce RTX 2080 Ti.
Software Dependencies | No | The entire framework is implemented in PyTorch and executed on an NVIDIA GeForce RTX 2080 Ti.
Experiment Setup | Yes | We employ the Adam optimizer for our model. During training, the size of the input bitmap is 400 × 400 and the message is randomly generated with a length of 100 bits. The subspace dimensionality K is set to 16. The parameter settings in the distortion layer are as follows: blur kernel size: [3, 7]; Gaussian noise: σ ∼ U[0, 0.2]; brightness adjustment: [−0.3, 0.3]; contrast adjustment: [0.5, 1.5]; JPEG compression quality factor: [50, 100]. For the loss function in Eq. (9), we choose λ1 = 2, λ2 = 1.5, λ3 = 0.5, λ4 = 1.5, λ5 = 2.5. The batch size in training is set to 16 and the model is trained for 30 epochs with an initial learning rate of 0.0001.
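The reported distortion-layer parameters and training hyperparameters can be collected into a small sketch. Only the numeric ranges and hyperparameter values come from the setup above; the NumPy implementation of each distortion (noise, brightness, contrast) is an assumption about how such a layer is typically realized, and the blur and JPEG steps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)  # assumed seed for reproducibility

def distortion_layer(x, rng=rng):
    """Hypothetical sketch of the simulated distortions, using the parameter
    ranges quoted in the setup. x: image array in [0, 1], shape (H, W, C).
    Blur and JPEG compression are omitted for brevity."""
    # Gaussian noise: sigma ~ U[0, 0.2]
    sigma = rng.uniform(0.0, 0.2)
    x = x + rng.normal(0.0, sigma, size=x.shape)
    # Brightness adjustment: additive offset in [-0.3, 0.3]
    x = x + rng.uniform(-0.3, 0.3)
    # Contrast adjustment: factor in [0.5, 1.5] about the image mean
    c = rng.uniform(0.5, 1.5)
    x = (x - x.mean()) * c + x.mean()
    return np.clip(x, 0.0, 1.0)

# Training hyperparameters quoted from the setup
CONFIG = {
    "optimizer": "Adam",
    "input_size": (400, 400),
    "message_bits": 100,
    "subspace_dim_K": 16,
    "loss_weights": [2.0, 1.5, 0.5, 1.5, 2.5],  # lambda_1 .. lambda_5
    "batch_size": 16,
    "epochs": 30,
    "learning_rate": 1e-4,
}
```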