Cross-Modal Stealth: A Coarse-to-Fine Attack Framework for RGB-T Tracker

Authors: Xinyu Xiang, Qinglong Yan, Hao Zhang, Jianfeng Ding, Han Xu, Zhongyuan Wang, Jiayi Ma

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. Evidence: "Extensive experiments demonstrate the superiority of our method. ... Experiments. Experimental Settings. Datasets and Evaluation Metrics: We perform experiments on RGBT234 (Li et al. 2019) and LasHeR (Li et al. 2022a) datasets, and evaluate our attack effectiveness using the precision rate (PR) and success rate (SR), which are the classical metrics on tracking tasks. ... Quantitative Evaluation: The quantitative comparisons on the RGBT234 dataset are illustrated in Fig. 6(a)-(d). ... Qualitative Evaluation: As shown in Fig. 7, we visualize two groups of tracking results. ... Generalization Evaluation: Moreover, we conduct generalization experiments on the LasHeR dataset. ... Application on Physical Domain: We extend our experiments into the physical domain. ... Ablation Studies: We perform ablation studies to verify the validity of parameter setting and our specific designs, conducted on RGBT234 dataset against ViPT, with results shown in Fig. 10."
Researcher Affiliation: Academia. Evidence: "1Electronic Information School, Wuhan University, Wuhan 430072, China; 2School of Computer Science, Wuhan University, Wuhan 430072, China; 3School of Automation, Southeast University, Nanjing 210096, China"
Pseudocode: No. The paper describes methods using natural language and mathematical equations but does not present any clearly labeled pseudocode or algorithm blocks.
Open Source Code: Yes. Code: https://github.com/Xinyu-Xiang/CMS
Open Datasets: Yes. Evidence: "We perform experiments on RGBT234 (Li et al. 2019) and LasHeR (Li et al. 2022a) datasets"
Dataset Splits: No. The paper mentions using the RGBT234 and LasHeR datasets but does not provide specific details on how they were split into training, validation, or test sets, nor does it refer to a standard split by citation.
Hardware Specification: Yes. Evidence: "All experiments are conducted on the NVIDIA TITAN RTX GPU with PyTorch."
Software Dependencies: No. The paper mentions PyTorch as a software dependency but does not specify a version number.
Experiment Setup: Yes. Evidence: "For our two-stage framework, we train Stage I for 80 epochs, and then train Stage II for 30 epochs. The hyper-parameters for balancing each sub-loss are empirically set as α1 = 1.0, α2 = 0.1, α3 = 50.0, β1 = 0.1, β2 = 0.1, β3 = 50.0, and β4 = 1.0."
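The two-stage schedule and loss weights reported above could be recorded in a configuration fragment like the one below. This is a hypothetical structure for illustration: the field names are assumptions, and only the epoch counts and numeric weight values come from the paper.

```python
# Hypothetical training configuration mirroring the reported settings.
# Field names are illustrative; only the numeric values are from the paper.
TRAIN_CONFIG = {
    "stage1": {"epochs": 80},   # Stage I training length
    "stage2": {"epochs": 30},   # Stage II training length
    "loss_weights": {
        # Stage I sub-loss balancing weights (alpha_1..alpha_3)
        "alpha1": 1.0, "alpha2": 0.1, "alpha3": 50.0,
        # Stage II sub-loss balancing weights (beta_1..beta_4)
        "beta1": 0.1, "beta2": 0.1, "beta3": 50.0, "beta4": 1.0,
    },
}
```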
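As context for the PR and SR metrics cited in the table: in tracking evaluation, precision rate is conventionally the fraction of frames whose predicted-vs-ground-truth center distance falls below a pixel threshold (commonly 20 px), and success rate is the fraction of frames whose bounding-box IoU exceeds an overlap threshold. A minimal sketch of these conventional definitions follows; it is not the paper's evaluation code, and the default thresholds are common conventions rather than values taken from the paper.

```python
import numpy as np

def precision_rate(pred_centers, gt_centers, threshold=20.0):
    """Fraction of frames with center-location error <= `threshold` pixels."""
    errors = np.linalg.norm(np.asarray(pred_centers, dtype=float)
                            - np.asarray(gt_centers, dtype=float), axis=1)
    return float(np.mean(errors <= threshold))

def success_rate(pred_boxes, gt_boxes, threshold=0.5):
    """Fraction of frames whose IoU with ground truth exceeds `threshold`.

    Boxes are (x, y, w, h) arrays, one row per frame.
    """
    pred = np.asarray(pred_boxes, dtype=float)
    gt = np.asarray(gt_boxes, dtype=float)
    # Intersection rectangle per frame
    x1 = np.maximum(pred[:, 0], gt[:, 0])
    y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    iou = inter / np.maximum(union, 1e-9)
    return float(np.mean(iou > threshold))
```

Sweeping the thresholds and averaging (or taking the area under the resulting curve for SR) yields the precision and success plots commonly reported on RGBT234 and LasHeR.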