Autonomous LLM-Enhanced Adversarial Attack for Text-to-Motion
Authors: Honglei Miao, Fan Ma, Ruijie Quan, Kun Zhan, Yi Yang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluations across popular T2M models demonstrate ALERT-Motion's superiority over previous methods, achieving higher attack success rates with stealthier adversarial prompts. Sections such as "4 Experiment", "4.1 Experimental Settings", "4.2 Evaluation Metrics", "4.3 Evaluation Results", and "Table 1: The results of the adversarial attacks against MDM and MLD on T2M evaluation model" confirm empirical studies with data analysis. |
| Researcher Affiliation | Academia | 1 School of Information Science and Engineering, Lanzhou University; 2 College of Computer Science and Technology, Zhejiang University; 3 College of Computing and Data Science, Nanyang Technological University |
| Pseudocode | Yes | Algorithm 1: ALERT-Motion |
| Open Source Code | No | The paper does not provide access to the source code for its own methodology. It mentions using "pretrained models from the official GitHub repositories" for the third-party models it attacks (MLD and MDM), but no repository is given for ALERT-Motion itself. |
| Open Datasets | Yes | We select target prompt texts and target motions from the HumanML3D (H3D) dataset (Guo et al. 2022a). |
| Dataset Splits | No | The paper mentions selecting examples for attack from the 'top 20 of the Dissimilar subset in the evaluation setup of (Petrovich, Black, and Varol 2023)' and using a 'batch size of 20, including 19 negative examples'. However, it does not provide specific training/test/validation dataset splits or detailed methodology for partitioning the data for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using specific models and APIs like 'gpt-3.5-turbo-instruct API', 'Universal Sentence Encoder (Cer et al. 2018)', and 'GPT-2 (Radford et al. 2019)' but does not provide specific version numbers for software libraries or dependencies (e.g., PyTorch version, Python version) needed for replication. |
| Experiment Setup | Yes | We set the number of iterations to 50 and the size of the prompt set to 20, with the similarity threshold η set to 0.4. Our approach uses a batch size of 20, including 19 negative examples. |
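The reported hyperparameters can be collected into a minimal configuration sketch. This is an illustrative reconstruction, not the authors' code: the class and field names are assumptions, and only the numeric values come from the paper.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AlertMotionConfig:
    """Hyperparameters reported in the paper (field names are illustrative)."""
    num_iterations: int = 50            # number of attack iterations
    prompt_set_size: int = 20           # size of the adversarial prompt set
    similarity_threshold: float = 0.4   # semantic-similarity threshold η
    batch_size: int = 20                # per-step batch size
    num_negatives: int = 19             # negative examples per batch


cfg = AlertMotionConfig()
# One positive (target) example per batch alongside the 19 negatives.
assert cfg.batch_size - cfg.num_negatives == 1
```

A frozen dataclass is used here only to keep the reported values in one immutable, self-documenting place; the paper does not specify how its implementation stores these settings.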