Promising Multi-Granularity Linguistic Steganography by Jointing Syntactic and Lexical Manipulations
Authors: Chengfu Ou, Lingyun Xiang, Yangfan Liu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that MMLS significantly outperforms existing methods in terms of semantic coherence, embedding capacity, and security. The paper includes sections such as "Experiments and Analysis", "Datasets and Implementation Details", and "Results and Analysis" with comparative tables. |
| Researcher Affiliation | Academia | Chengfu Ou, Lingyun Xiang*, Yangfan Liu; School of Computer and Communication Engineering, Changsha University of Science and Technology; EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods in narrative text and figures but does not include a distinct section or figure explicitly labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Code https://github.com/hahally/MMLS/ |
| Open Datasets | Yes | In the experiments, we select the QQP-Pos dataset (Yang et al. 2022) consisting of 140000 training samples, 3000 validation samples, and 3000 test samples to train the syntax-controlled paraphrase generator. |
| Dataset Splits | Yes | In the experiments, we select the QQP-Pos dataset (Yang et al. 2022) consisting of 140000 training samples, 3000 validation samples, and 3000 test samples to train the syntax-controlled paraphrase generator. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used, such as GPU/CPU models or specific cloud computing instance types. It only mentions the use of a pre-trained BERT model. |
| Software Dependencies | No | The paper mentions 'Sklearn package', 'BERT is initialized with pretrained bert-base-uncased from Hugging Face', and 'Stanford Core NLP toolkit' but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We set the hidden state size to 256, the filter size to 1024, and the head number to 4. The number of layers of the semantic encoder, sentence decoder, and syntactic encoder are set to 4, 4, and 3, respectively. We use Adam optimizer (Kingma and Ba 2015) with a learning rate of 1e-4, and the number of training epochs is 50, with a batch size of 32. |
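The reported hyperparameters can be gathered into one configuration object for a re-implementation attempt. The sketch below (plain Python; the class name `MMLSConfig` and field names are hypothetical, only the numeric values come from the paper) records the setup quoted above and checks a basic multi-head-attention constraint.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class MMLSConfig:
    """Hyperparameters reported in the paper's experiment setup."""
    # Transformer dimensions
    hidden_size: int = 256
    filter_size: int = 1024        # feed-forward (filter) inner dimension
    num_heads: int = 4
    # Layer counts for the three sub-networks
    semantic_encoder_layers: int = 4
    sentence_decoder_layers: int = 4
    syntactic_encoder_layers: int = 3
    # Optimization (Adam, Kingma and Ba 2015)
    learning_rate: float = 1e-4
    epochs: int = 50
    batch_size: int = 32


cfg = MMLSConfig()
# Multi-head attention requires the hidden size to split evenly across heads
assert cfg.hidden_size % cfg.num_heads == 0
per_head_dim = cfg.hidden_size // cfg.num_heads  # 64
print(per_head_dim)
```

Such a frozen dataclass makes the reported settings explicit and immutable, which is useful when a paper (as here) omits hardware and dependency versions and the configuration is the main reproducibility anchor.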