OLiDM: Object-aware LiDAR Diffusion Models for Autonomous Driving

Authors: Tianyi Yan, Junbo Yin, Xianpeng Lang, Ruigang Yang, Cheng-Zhong Xu, Jianbing Shen

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments demonstrate that OLiDM generates high-quality LiDAR point clouds, achieving the best FPD and JSD on KITTI-360. Moreover, we are the first to evaluate the quality of foreground LiDAR objects generated by various methods, with OLiDM showing superior performance across all metrics. Additionally, OLiDM excels in conditional generation tasks such as sparse-to-dense LiDAR completion. Finally, our validation on the nuScenes dataset confirms that OLiDM effectively enhances the performance of downstream 3D perception tasks, e.g., improving the mAP of mainstream 3D detectors by 2.4%.
Researcher Affiliation Collaboration 1SKL-IOTSC, Computer and Information Science, University of Macau; 2Li Auto Inc.; 3CEMSE Division, King Abdullah University of Science and Technology; 4Shanghai Jiao Tong University
Pseudocode No The paper describes the methodology using textual explanations, mathematical formulations, and figures, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Code https://yanty123.github.io/OLiDM/
Open Datasets Yes nuScenes (Caesar et al. 2020) and KITTI-360 (Liao, Xie, and Geiger 2022) are popular datasets widely used in autonomous driving research, featuring detailed annotations for evaluating different tasks. For the generation task, we employ the KITTI-360 dataset... we use nuScenes (Caesar et al. 2020) as the benchmark.
Dataset Splits Yes For the generation task, we employ the KITTI-360 dataset to demonstrate the effectiveness, while for 3D object detection and other tasks (Han et al. 2024; Tao et al. 2023), we use nuScenes (Caesar et al. 2020) as the benchmark. Evaluation Metrics. For scene-level LiDAR generation, we use MMD and JSD on the BEV plane and FPD as the metrics, and we generate 10k samples using sequences 0000 and 0002 as the validation split, following (Zyrianov, Zhu, and Wang 2022; Xiong et al. 2023).
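The JSD-on-BEV metric quoted above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the bin count, BEV range, and the `bev_histogram`/`bev_jsd` helper names are assumptions, and the paper does not specify its exact binning.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def bev_histogram(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), bins=100):
    """Flattened, normalized 2D occupancy histogram of a point cloud
    projected onto the bird's-eye-view (x-y) plane.
    `points` is an (N, 3+) array; only x and y columns are used."""
    hist, _, _ = np.histogram2d(
        points[:, 0], points[:, 1],
        bins=bins, range=[x_range, y_range],
    )
    hist = hist.flatten()
    return hist / max(hist.sum(), 1.0)

def bev_jsd(real_points, gen_points):
    """Jensen-Shannon divergence between the BEV occupancy distributions
    of two point clouds. SciPy's jensenshannon returns the JS *distance*
    (the square root of the divergence), so we square it."""
    p = bev_histogram(real_points)
    q = bev_histogram(gen_points)
    return jensenshannon(p, q, base=2) ** 2
```

With base 2 the divergence is bounded in [0, 1], and identical point sets give 0; in practice the metric would be averaged over histograms aggregated from many real and generated scans.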
Hardware Specification No The paper does not provide specific hardware details such as GPU models, CPU types, or memory configurations used for running the experiments.
Software Dependencies No The paper does not mention specific software dependencies or library versions (e.g., Python version, PyTorch version, CUDA version) required to replicate the experiments.
Experiment Setup No The paper describes the overall methodology and experimental results but does not provide concrete details on hyperparameter values (e.g., learning rate, batch size, number of epochs) or other system-level training configurations in the main text.