A New Adversarial Perspective for LiDAR-based 3D Object Detection

Authors: Shijun Zheng, Weiquan Liu, Yu Guo, Yu Zang, Siqi Shen, Cheng Wang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type: Experimental. Extensive experiments demonstrate that adversarial perturbations based on random objects effectively deceive vehicle detection and reduce the recognition rate of 3D object detection models. Our method effectively attacks state-of-the-art 3D detectors on KITTI and nuScenes, with attack success rates greater than 80% for most models.
Researcher Affiliation: Academia. 1) Fujian Key Laboratory of Sensing and Computing for Smart Cities, School of Informatics, Xiamen University, China; 2) Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, School of Informatics, Xiamen University, China; 3) College of Computer Engineering, Jimei University, China.
Pseudocode: No. The paper describes the PCS-GAN framework with a diagram (Figure 4) and mathematical objective functions (Equations 3 and 4), but it does not present any structured pseudocode or algorithm blocks.
Open Source Code: No. The text states "The dataset will be released for public research." but does not contain any explicit statements or links about the release of source code for the methodology described in the paper.
Open Datasets: Yes. We construct a LiDAR point cloud dataset (ROLiD) of random objects, including water mist and smoke data. The dataset will be released for public research. We evaluated the performance of adversarial attacks on the KITTI (Geiger, Lenz, and Urtasun 2012) and nuScenes (Caesar et al. 2020) datasets.
Dataset Splits: Yes. Since they do not provide labels for the test set, we use the training and validation sets for adversarial attack performance evaluation. For each dataset, we selected 3000 point cloud frames for the attack. Water mist perturbations were applied to each frame to create an adversarial point cloud sequence, and smoke perturbations were similarly added to generate another adversarial sequence. In the experiments, the lengths of both the water mist and smoke sequences were set to 3. ... We used 3000 frames of original point clouds and 3 frames of water mist or smoke sequences to generate 9000 mixed frames, which were then used for evaluating the models.
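The frame counts above compose as 3000 originals x 3 perturbation frames = 9000 mixed evaluation frames per perturbation type. A minimal sketch of that composition, assuming each frame is an (N, 4) NumPy array of (x, y, z, intensity) points and that mixing is a simple point-set union (the paper's actual mixing procedure via PCS-GAN may differ):

```python
import numpy as np

def mix_frames(originals, perturbation_seq):
    """Pair every original frame with every perturbation frame by
    concatenating their point sets into one mixed frame."""
    mixed = []
    for frame in originals:
        for pert in perturbation_seq:
            mixed.append(np.concatenate([frame, pert], axis=0))
    return mixed

# 3000 original frames and a 3-frame water-mist sequence (dummy data)
originals = [np.random.rand(100, 4) for _ in range(3000)]
water_mist_seq = [np.random.rand(50, 4) for _ in range(3)]
adversarial = mix_frames(originals, water_mist_seq)
print(len(adversarial))  # 9000 mixed frames
```

Repeating the same pairing with a 3-frame smoke sequence yields the second 9000-frame adversarial set described in the quote.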
Hardware Specification: No. The paper mentions "We use 32-line LiDAR to collect data" for data acquisition but does not provide specific hardware details (e.g., GPU, CPU models, or memory) used for running the experiments or training the models.
Software Dependencies: No. The paper mentions using "PointNet" as a discriminator but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions) needed to replicate the experiment.
Experiment Setup: Yes. In the experiments, PCS-GAN was trained for 2000 epochs with a batch size of 4, and the initial sampling length N of the point cloud sequence was set to 16. ... For water mist perturbation, we set dv = 0.004, dh = 0.5 on KITTI, and set dv = 0.08, dh = 0.08 on nuScenes. For smoke perturbation, we set dv = 0.002, dh = 0.25 on KITTI, and set dv = 0.05, dh = 0.05 on nuScenes.
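The reported settings can be collected into a single configuration sketch. The dictionary layout and key names below are illustrative assumptions, not the authors' code; only the numeric values come from the quoted setup:

```python
# PCS-GAN training and per-dataset perturbation settings as reported
# in the paper (dv/dh are the water-mist and smoke density parameters).
PCS_GAN_CONFIG = {
    "train": {"epochs": 2000, "batch_size": 4, "sequence_length_N": 16},
    "water_mist": {
        "KITTI":    {"dv": 0.004, "dh": 0.5},
        "nuScenes": {"dv": 0.08,  "dh": 0.08},
    },
    "smoke": {
        "KITTI":    {"dv": 0.002, "dh": 0.25},
        "nuScenes": {"dv": 0.05,  "dh": 0.05},
    },
}

print(PCS_GAN_CONFIG["water_mist"]["KITTI"])  # {'dv': 0.004, 'dh': 0.5}
```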