Efficient 3D Recognition with Event-driven Spike Sparse Convolution

Authors: Xuerui Qiu, Man Yao, Jieyuan Zhang, Yuhong Chou, Ning Qiao, Shibo Zhou, Bo Xu, Guoqi Li

AAAI 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility
Variable | Result | LLM Response
Research Type | Experimental | Experiments on ModelNet40, KITTI, and SemanticKITTI datasets demonstrate that E-3DSNN achieves state-of-the-art (SOTA) results with remarkable efficiency.
Researcher Affiliation | Collaboration | Xuerui Qiu1,2, Man Yao1, Jieyuan Zhang4, Yuhong Chou1,5, Ning Qiao6, Shibo Zhou7, Bo Xu1, Guoqi Li1,3,8. 1Institute of Automation, Chinese Academy of Sciences; 2School of Future Technology, University of Chinese Academy of Sciences; 3Peng Cheng Laboratory; 4University of Electronic Science and Technology of China; 5The Hong Kong Polytechnic University; 6SynSense AG Corporation; 7Huinao Zhixin; 8Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences
Pseudocode | No | The paper describes its methods (Spike Voxel Coding, Spike Sparse Convolution, the E-3DSNN architecture) in prose, mathematical equations (Eqs. 1-15), and diagrams (Figs. 1, 2, 3) rather than in structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/bollossom/E-3DSNN/
Open Datasets | Yes | Experiments on ModelNet40, KITTI, and SemanticKITTI datasets demonstrate that E-3DSNN achieves state-of-the-art (SOTA) results with remarkable efficiency.
Dataset Splits | Yes | The ModelNet40 (Wu et al. 2015) dataset contains 12,311 CAD models spanning 40 object categories, split into 9,843 models for training and 2,468 for testing. ... The large KITTI dataset (Geiger et al. 2012b) consists of 7,481 training samples, divided into a training set of 3,717 samples and a validation set of 3,769 samples.
Hardware Specification | Yes | In this work, we train our E-3DSNN with 4 A100 GPUs.
Software Dependencies | No | The paper mentions using the 'OpenPCDet (Team 2020)' and 'Pointcept (Contributors 2023)' codebases but does not specify version numbers for these or any other software libraries or frameworks used for implementation.
Experiment Setup | Yes | In this section, we give the specific hyperparameters of our training settings in all experiments, as depicted in Tab. 2. In this work, we train our E-3DSNN with 4 A100 GPUs.

Table 2: Hyper-parameter training settings of 3DSNN.
Parameter       | ModelNet40 | KITTI | SemanticKITTI
Learning Rate   | 1e-1       | 1e-2  | 2e-3
Weight Decay    | 1e-4       | 1e-2  | 5e-3
Batch Size      | 16         | 64    | 96
Training Epochs | 200        | 80    | 100
Optimizer       | SGD        | Adam  | AdamW
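To make the Table 2 settings easy to reuse, the per-dataset hyperparameters can be collected in a small config map. This is an illustrative sketch only: the names `E3DSNN_CONFIGS` and `make_optimizer_spec` are our own and do not come from the paper or its repository; only the numeric values and optimizer choices are taken from Table 2.

```python
# Per-benchmark training hyperparameters transcribed from Table 2 of the paper.
# The dict/function names below are hypothetical, not from the E-3DSNN codebase.
E3DSNN_CONFIGS = {
    "ModelNet40":    {"lr": 1e-1, "weight_decay": 1e-4, "batch_size": 16,
                      "epochs": 200, "optimizer": "SGD"},
    "KITTI":         {"lr": 1e-2, "weight_decay": 1e-2, "batch_size": 64,
                      "epochs": 80,  "optimizer": "Adam"},
    "SemanticKITTI": {"lr": 2e-3, "weight_decay": 5e-3, "batch_size": 96,
                      "epochs": 100, "optimizer": "AdamW"},
}

def make_optimizer_spec(dataset: str):
    """Return (optimizer name, constructor kwargs) for a given benchmark."""
    cfg = E3DSNN_CONFIGS[dataset]
    return cfg["optimizer"], {"lr": cfg["lr"], "weight_decay": cfg["weight_decay"]}
```

The returned spec can then be mapped onto a framework's optimizer classes (e.g. `torch.optim.SGD`/`Adam`/`AdamW`) when building the training loop.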