Dual Information Purification for Lightweight SAR Object Detection

Authors: Xi Yang, Jiachen Sun, Songsong Duan, De Cheng

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that DIPKD significantly outperforms existing distillation techniques in SAR object detection, achieving 60.2% and 51.4% mAP on the SSDD and HRSID datasets, respectively.
Researcher Affiliation | Academia | Xi Yang, Jiachen Sun, Songsong Duan, De Cheng*; State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an 710071, China. EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology using text and mathematical equations, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | "Our proposed method, DIPKD, is implemented under the MMDetection (Chen et al. 2019) framework in Python." The paper does not explicitly state that its own code will be released, nor does it link a code repository for the described methodology.
Open Datasets | Yes | "We evaluate the proposed method on the SSDD (Zhang et al. 2021) and HRSID (Wei et al. 2020) SAR datasets."
Dataset Splits | No | The paper uses the SSDD and HRSID datasets but does not explicitly state training, validation, or test splits (e.g., percentages or sample counts). Its reference to standard COCO-style evaluation implies predefined splits, but the specific split information is not given.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU, memory) used to conduct the experiments.
Software Dependencies | No | "Our proposed method, DIPKD, is implemented under the MMDetection (Chen et al. 2019) framework in Python." The paper names the MMDetection framework and Python but does not specify version numbers for either.
Experiment Setup | Yes | DIPKD uses α and β to balance the target and non-target losses in Eq. (14), and γ to weight the reverse information loss in Eq. (19). τ = 0.5 adjusts the attention distribution, and the mask rate is 25% for all experiments. The paper adopts α = 5×10⁻⁵, β = 3.5×10⁻⁵ and γ = 4.5×10⁻⁷ throughout.
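The weighted combination of loss terms described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: only the weight values (α, β, γ), τ, and the 25% mask rate come from the paper; the loss-term names and the function signature are assumptions made for clarity.

```python
# Hyper-parameters reported in the paper's experiment setup.
ALPHA = 5e-5      # weight for the target loss term (Eq. 14)
BETA = 3.5e-5     # weight for the non-target loss term (Eq. 14)
GAMMA = 4.5e-7    # weight for the reverse information loss (Eq. 19)
TAU = 0.5         # temperature adjusting the attention distribution
MASK_RATE = 0.25  # fraction of features masked in all experiments


def total_distillation_loss(l_target, l_nontarget, l_reverse):
    """Combine the three distillation loss terms with the paper's weights.

    Each argument is a scalar loss value; the names are illustrative,
    not taken from the paper's code (which is not released).
    """
    return ALPHA * l_target + BETA * l_nontarget + GAMMA * l_reverse
```

Note the very small magnitudes of the weights: in an MMDetection-style pipeline such distillation terms are typically added to the detector's standard losses, so small coefficients keep the distillation signal from dominating training.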