Efficient and Separate Authentication Image Steganography Network

Authors: Junchao Zhou, Yao Lu, Jie Wen, Guangming Lu

ICML 2025

Reproducibility checklist — Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate that the proposed method achieves more secure, effective, and efficient image steganography. Code is available at https://github.com/Revive624/Authentication-Image-Steganography. We compare our method with three baselines: ISN (Lu et al., 2021), DeepMIH (Guan et al., 2022) and IIS (Zhou et al., 2024) in terms of PSNR, SSIM and LPIPS, on the DIV2K and ImageNet datasets. The training settings and details are provided in Appendix C. Quantitative Results. Table 1 presents quantitative comparisons. The results for the revealed secret images are reported as averages."
Researcher Affiliation | Academia | "Junchao Zhou 1, Yao Lu 1, Jie Wen 1, Guangming Lu 1. Yao Lu and Jie Wen are corresponding authors. 1 Department of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Guangdong, China. Correspondence to: Yao Lu <EMAIL>, Jie Wen <EMAIL>."
Pseudocode | No | The paper describes the methodology using mathematical formulations (Equations 4–13) and textual descriptions of the networks (IAN, IHN) and modules, but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Code is available at https://github.com/Revive624/Authentication-Image-Steganography."
Open Datasets | Yes | "We compare our method with three baselines: ISN (Lu et al., 2021), DeepMIH (Guan et al., 2022) and IIS (Zhou et al., 2024) in terms of PSNR, SSIM and LPIPS, on the DIV2K and ImageNet datasets. The training settings and details are provided in Appendix C. The proposed Efficient and Separate Authentication Image Steganography Network is trained and tested on the DIV2K and ImageNet datasets."
Dataset Splits | Yes | "The DIV2K dataset consists of 800 training images cropped to 144×144 and 100 test images cropped to 1024×1024. For the ImageNet dataset, we randomly select 20,000 training images cropped to 144×144 for finetuning and 5,000 test images cropped to 256×256."
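The cropping step in the split description above can be sketched as follows. This is a minimal, hypothetical helper (the paper does not specify whether crops are random or centered, nor the exact source resolutions); it only computes a valid random crop box of the stated patch size:

```python
import random

def random_crop_box(width, height, size):
    """Return a (left, top, right, bottom) box for a random size x size crop.

    width, height: dimensions of the source image.
    size: side length of the square crop (e.g., 144 for training patches).
    """
    if width < size or height < size:
        raise ValueError("image smaller than crop size")
    left = random.randint(0, width - size)
    top = random.randint(0, height - size)
    return (left, top, left + size, top + size)

# Example with an assumed 2K-resolution DIV2K image; the actual
# per-image resolutions vary across the dataset.
box = random_crop_box(2040, 1356, 144)
```

With an image library such as Pillow, the returned box could be passed directly to `Image.crop`.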
Hardware Specification | Yes | "All experiments are conducted on an Nvidia RTX 4090 GPU."
Software Dependencies | No | The paper describes the use of the Adam optimizer and a Cosine Annealing LR scheduler, but does not provide specific version numbers for these or other software libraries (e.g., Python, PyTorch, CUDA) required for replication.
Experiment Setup | Yes | "Training is performed for 100K iterations using the Adam optimizer with β1 = 0.9 and β2 = 0.999. The initial learning rates are set to 2×10⁻⁴ for IAN and 1×10⁻⁴ for IHN and the Dynamic Generation Module, with a Cosine Annealing LR scheduler for dynamic adjustment. The hyperparameters λ1, λ2 and λ3 are set to 2, 4, and 3, respectively."
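The learning-rate schedule quoted above can be reproduced from the standard cosine-annealing formula. This is a dependency-free sketch under the assumption that the schedule anneals from the stated initial rate toward zero over the 100K iterations (the paper does not state the minimum rate or restart behavior):

```python
import math

def cosine_annealing_lr(step, total_steps, base_lr, min_lr=0.0):
    """Standard cosine annealing: decay base_lr to min_lr over total_steps.

    lr(step) = min_lr + (base_lr - min_lr) * (1 + cos(pi * step / total_steps)) / 2
    """
    cos_factor = 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
    return min_lr + (base_lr - min_lr) * cos_factor

# Assumed settings from the report: 100K iterations, 2e-4 for IAN,
# 1e-4 for IHN and the Dynamic Generation Module.
lr_ian_start = cosine_annealing_lr(0, 100_000, 2e-4)       # 2e-4 at step 0
lr_ian_mid = cosine_annealing_lr(50_000, 100_000, 2e-4)    # 1e-4 at the midpoint
lr_ian_end = cosine_annealing_lr(100_000, 100_000, 2e-4)   # 0.0 at the end
```

In a PyTorch setup this would correspond to `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max` equal to the iteration budget.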