Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Attention Based Data Hiding with Generative Adversarial Networks
Authors: Chong Yu
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through the qualitative, quantitative experiments and analysis, this novel framework shows compelling performance and advantages over the current state-of-the-art methods in data hiding applications. |
| Researcher Affiliation | Industry | NVIDIA Corporation, No.5709 Shenjiang Road, No.26 Qiuyue Road, Shanghai, China 201210, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1 ABDH Algorithm |
| Open Source Code | No | The paper does not provide a direct link or explicit statement about the availability of its own source code. It only references a third-party repository for a pre-trained model: 'ResNet50 structure and pre-trained model are from the repository: https://github.com/pytorch/vision.' |
| Open Datasets | Yes | To train ABDH, we apply the COCO dataset (Lin et al. 2014). ... The testing dataset is generated by combining Set5 (Bevilacqua et al. 2012) and Set14 (Zeyde, Elad, and Protter 2010) datasets. |
| Dataset Splits | Yes | We randomly divide COCO with the 8:2 ratio to generate the separate training and validation datasets. |
| Hardware Specification | No | The paper does not explicitly state the specific hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions using 'PyTorch as the framework' and 'Adam optimizer' but does not specify version numbers for PyTorch or any other software dependencies, such as Python version or specific library versions. |
| Experiment Setup | Yes | To train ABDH...We crop original images to 512 × 512. ...To improve the robustness against attacks, we also generate an attacked training dataset... We use PyTorch as the framework and train ABDH with 150 epochs. The loss adjustment parameter λ is set as 0.6. The hyper-parameters of Adam optimizer are: β1=0.5, β2=0.999. The base learning rate is 0.0002. |
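The Dataset Splits and Experiment Setup rows above can be sketched in code. This is a minimal, framework-free illustration of the reported configuration: the 8:2 random train/validation split of COCO and the stated hyperparameters (150 epochs, λ=0.6, Adam with β1=0.5 and β2=0.999, base learning rate 0.0002). The function and variable names are hypothetical; the paper does not publish code, so this is a sketch of the described setup, not the authors' implementation.

```python
import random

# Hyperparameters reported in the paper's Experiment Setup (names are ours).
ABDH_CONFIG = {
    "crop_size": (512, 512),     # original images cropped to 512 x 512
    "epochs": 150,
    "loss_lambda": 0.6,          # loss adjustment parameter λ
    "adam_betas": (0.5, 0.999),  # Adam β1, β2
    "base_lr": 2e-4,             # base learning rate 0.0002
}

def split_train_val(samples, train_ratio=0.8, seed=0):
    """Randomly divide samples with an 8:2 train/validation ratio,
    as the paper describes for the COCO dataset."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

With PyTorch (version unspecified in the paper), the corresponding optimizer would be constructed as `torch.optim.Adam(model.parameters(), lr=ABDH_CONFIG["base_lr"], betas=ABDH_CONFIG["adam_betas"])`.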