SDDiff: Boosting Radar Perception via Spatial-Doppler Diffusion

Authors: Shengpeng Wang, Xin Luo, Yulong Xie, Wei Wang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluations show that SDDiff significantly outperforms state-of-the-art baselines, achieving 59% higher EVE accuracy and 4× greater valid generation density while boosting PCE effectiveness and reliability.
Researcher Affiliation | Academia | ¹Huazhong University of Science and Technology, ²Wuhan University, EMAIL, EMAIL
Pseudocode | No | The paper describes methods in prose and equations, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The code and dataset will be available on https://github.com/StellarEsti/SDDiff.
Open Datasets | Yes | Additionally, we will make our self-collected dataset publicly available to the research community. We evaluate the proposed method using both the publicly available ColoRadar dataset and a self-collected dataset across different indoor and outdoor scenarios. ColoRadar Dataset: we evaluate our method on the ColoRadar dataset [Kramer et al., 2022].
Dataset Splits | Yes | For a fair comparison with other learning-based baselines, we select the same 36 sequences as the training set and use the others for testing. The self-collected dataset comprises 10,371 frames, with 10% used for fine-tuning and 90% for testing. (See the split sketch after this table.)
Hardware Specification | Yes | Training takes about 5 days on a machine with three NVIDIA GeForce RTX 4090 GPUs and an Intel Xeon Gold 6226R CPU.
Software Dependencies | Yes | We implement SDDiff using PyTorch 1.11.0 with CUDA 12.4. (See the environment check after this table.)
Experiment Setup | Yes | The parameter ω of the weighted spatial and Doppler loss is set to 0.01. We train SDDNet for 100 epochs on the ColoRadar dataset with the AdamW optimizer and a learning rate of 10⁻⁴. (See the training-configuration sketch after this table.)
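
The dataset-splits row translates into a mechanical partition. Below is a minimal sketch; the sequence names and total sequence count are placeholders (the paper reuses the 36 training sequences chosen by prior learning-based baselines, whose IDs are not listed in the quote), and the random seed is an assumption. Only the 36-sequence training set and the 10%/90% fine-tune/test split of the 10,371 frames come from the paper.

```python
import random

# Placeholder sequence names; the actual 36 training sequence IDs follow
# the baselines the paper compares against and are not quoted above.
all_sequences = [f"seq_{i:02d}" for i in range(52)]   # assumed total count
train_sequences = all_sequences[:36]                  # 36 training sequences
test_sequences = all_sequences[36:]                   # remainder for testing

# Self-collected dataset: 10,371 frames, 10% fine-tuning / 90% testing.
num_frames = 10_371
frame_ids = list(range(num_frames))
random.seed(0)                                        # assumed; seed not reported
random.shuffle(frame_ids)
split = int(0.1 * num_frames)                         # 1,037 fine-tuning frames
finetune_ids, test_ids = frame_ids[:split], frame_ids[split:]
```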
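The software-dependencies row can be checked at runtime with standard PyTorch introspection calls; nothing in this snippet is specific to SDDiff.

```python
import torch

# Report the installed PyTorch build and the CUDA version it was compiled
# against; the paper reports PyTorch 1.11.0 with CUDA 12.4.
print("PyTorch:", torch.__version__)
print("CUDA (compiled):", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPUs:", [torch.cuda.get_device_name(i)
                    for i in range(torch.cuda.device_count())])
```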
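The experiment-setup row maps directly onto optimizer and loss settings. Below is a minimal sketch, assuming a placeholder network in place of SDDNet and a hypothetical form for the weighted spatial-Doppler loss; only ω = 0.01, AdamW, the 1e-4 learning rate, and the 100-epoch budget come from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder network standing in for SDDNet (architecture not reproduced here).
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))

# AdamW with learning rate 1e-4, as reported in the paper.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

omega = 0.01  # reported weight of the spatial-Doppler loss

def spatial_doppler_loss(pred, spatial_gt, doppler_gt):
    # Hypothetical combination: the first half of the output is treated as the
    # spatial estimate and the second half as the Doppler estimate. How omega
    # weights the two terms is an assumption; the paper only states omega = 0.01.
    spatial_pred, doppler_pred = pred.chunk(2, dim=-1)
    return F.mse_loss(spatial_pred, spatial_gt) + omega * F.mse_loss(doppler_pred, doppler_gt)

for epoch in range(100):  # 100 training epochs, as reported
    x = torch.randn(32, 128)         # dummy batch standing in for radar frames
    spatial_gt = torch.randn(32, 64)
    doppler_gt = torch.randn(32, 64)
    optimizer.zero_grad()
    loss = spatial_doppler_loss(model(x), spatial_gt, doppler_gt)
    loss.backward()
    optimizer.step()
```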