L4DR: LiDAR-4DRadar Fusion for Weather-Robust 3D Object Detection
Authors: Xun Huang, Ziyu Xu, Hai Wu, Jinlong Wang, Qiming Xia, Yan Xia, Jonathan Li, Kyle Gao, Chenglu Wen, Cheng Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental evaluation on the VoD dataset with simulated fog proves that L4DR is more adaptable to changing weather conditions. It delivers a significant performance increase under different fog levels, improving the 3D mAP by up to 20.0% over the traditional LiDAR-only approach. Moreover, the results on the K-Radar dataset validate the consistent performance improvement of L4DR in real-world adverse weather conditions. |
| Researcher Affiliation | Academia | 1. Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University, China; 2. Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, China; 3. Zhongguancun Academy, China; 4. Technische Universität München, Germany; 5. University of Waterloo, Canada |
| Pseudocode | No | The paper describes the methodology using textual descriptions and mathematical formulations (e.g., equations 1-6) and architectural diagrams (Figures 4, 5, 6), but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Code https://github.com/ylwhxht/L4DR |
| Open Datasets | Yes | Experimental evaluation on the VoD dataset with simulated fog proves that L4DR is more adaptable to changing weather conditions... Moreover, the results on the K-Radar dataset validate the consistent performance improvement of L4DR in real-world adverse weather conditions. |
| Dataset Splits | Yes | According to the official K-Radar split, we used 17,458 frames for training and 17,536 frames for testing. |
| Hardware Specification | Yes | We conduct all experiments with a batch size of 16 on 2 RTX 3090 GPUs. |
| Software Dependencies | No | We implement L4DR with PointPillars (Lang et al. 2019), the most commonly used base architecture in radar-based, LiDAR and 4D radar fusion-based 3D object detection. This can effectively verify the effectiveness of our L4DR and avoid unfair comparisons caused by inherent improvements in the base architecture. We set τ in section 3.2 as 0.3 while training and 0.2 while inferring. We conduct all experiments with a batch size of 16 on 2 RTX 3090 GPUs. Other parameter settings refer to the default official configuration in the OpenPCDet (Team et al. 2020) tool. The paper mentions software tools like PointPillars and OpenPCDet but does not provide specific version numbers for these or any other libraries or programming languages used. |
| Experiment Setup | Yes | We trained our L4DR with the following losses: L_all = β_cls L_cls + β_loc L_loc + β_fad L_fad, where β_cls = 1, β_loc = 2, β_fad = 0.5... We use the Adam optimizer with lr = 1e-3, β1 = 0.9, β2 = 0.999. We set τ in section 3.2 as 0.3 while training and 0.2 while inferring. We conduct all experiments with a batch size of 16. |
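The reported training configuration can be sketched as a minimal Python snippet. This is only an illustration of the stated hyperparameters, not the authors' implementation: the component loss values passed in are hypothetical stand-ins, while the weights (β_cls = 1, β_loc = 2, β_fad = 0.5) and the Adam settings (lr = 1e-3, β1 = 0.9, β2 = 0.999) come from the table above.

```python
# Hedged sketch of the L4DR loss combination reported in the paper.
# Only the weights and Adam hyperparameters are from the source;
# the example loss values below are made up for illustration.

BETA_CLS, BETA_LOC, BETA_FAD = 1.0, 2.0, 0.5

def total_loss(cls_loss: float, loc_loss: float, fad_loss: float) -> float:
    """L_all = beta_cls * L_cls + beta_loc * L_loc + beta_fad * L_fad."""
    return BETA_CLS * cls_loss + BETA_LOC * loc_loss + BETA_FAD * fad_loss

# Adam hyperparameters as reported: lr = 1e-3, betas = (0.9, 0.999).
ADAM_CONFIG = {"lr": 1e-3, "betas": (0.9, 0.999)}

# Example: cls = 0.4, loc = 0.3, fad = 0.2
print(total_loss(0.4, 0.3, 0.2))  # 0.4*1 + 0.3*2 + 0.2*0.5 = 1.1
```

In OpenPCDet-style pipelines such a weighted sum is typically assembled in the detection head's `get_loss`; the batch size (16) and per-phase τ thresholds (0.3 train / 0.2 inference) would live in the YAML config rather than in code.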