EventPillars: Pillar-based Efficient Representations for Event Data

Authors: Rui Fan, Weidong Hao, Juntao Guan, Lai Rui, Lin Gu, Tong Wu, Fanhong Zeng, Zhangming Zhu

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments show that our EventPillars records a new state-of-the-art precision on object recognition and detection datasets with surprisingly 9.2× and 4.5× lower computation and storage consumption. This brings a new insight into dense event representations and is promising to boost the edge deployment of event-based vision."
Researcher Affiliation | Academia | ¹Key Laboratory of Analog Integrated Circuits and Systems (Ministry of Education); ²School of Integrated Circuits, Xidian University, Xi'an 710071, China; ³Hangzhou Institute of Technology, Xidian University, Hangzhou, China; ⁴RIKEN AIP, Tokyo 103-0027, Japan; ⁵The University of Tokyo, Japan
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks; methods are described in prose.
Open Source Code | Yes | Code: https://github.com/Fineshawray/EventPillars.git
Open Datasets | Yes | "For object recognition, we employ three widely-used event-based datasets: N-Cars (Sironi et al. 2018), N-Caltech101 (Orchard et al. 2015), and N-ImageNet (Kim et al. 2021). ... For object detection, we select Gen1 (de Tournemire et al. 2020) and 1 Mpx (Perot et al. 2020) automotive detection datasets to facilitate comparison with relevant baselines."
Dataset Splits | No | The paper mentions several datasets and specific exclusion criteria or input resolutions (e.g., "all inputs are resized to 224×224 resolution", "we exclude targets with boundary below 10 pixels and diagonal below 30 pixels"), but it does not provide the specific train/validation/test splits (percentages, sample counts, or references to predefined splits) used for its own experiments on these datasets.
Hardware Specification | Yes | "This evaluation was conducted on a CPU (AMD EPYC, 64bits, 2.9GHz of RAM)."
Software Dependencies | No | The paper mentions using a "ResNet-34", "ADAM optimizer", "YOLOv6 head", "Swin V2 transformer backbone", and "torch.scatter", but does not provide specific version numbers for any of these software components or libraries.
Experiment Setup | Yes | "We train the network using an ADAM optimizer (Kingma and Ba 2017) with a learning rate of 2e-4 and a batch size of 32, maintaining other experimental setups consistent with prior works (Gehrig et al. 2019; Kim et al. 2021). ... We employ a batch size of 16 and an ADAM optimizer with a learning rate of 1e-4 for both Gen1 and 1Mpx datasets, maintaining other experimental details consistent with ERGO-12."
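The table above notes that the paper builds a pillar-based dense representation and mentions torch.scatter without pinning a version. The core operation behind such representations is a scatter-add of per-event values into a spatial grid. The sketch below is a hypothetical NumPy analogue (using `np.add.at` as a stand-in for torch's scatter-add), not the authors' implementation; the function name `events_to_pillars`, the event field layout, and the polarity-sum feature are all assumptions for illustration.

```python
import numpy as np

def events_to_pillars(xs, ys, ps, width, height):
    """Accumulate per-event values into a dense 2-D pillar grid.

    xs, ys : integer pixel coordinates of events
    ps     : per-event values (e.g. polarities +1 / -1)
    Returns a (height, width) array holding the sum of values
    for all events that fall into each pillar (pixel cell).
    """
    grid = np.zeros((height, width), dtype=np.float32)
    # Scatter-add: repeated indices accumulate, which is the same
    # behavior torch's scatter_add_ provides on GPU tensors.
    np.add.at(grid, (ys, xs), ps)
    return grid

# Toy example on a 2x3 grid: three events, two landing in the
# same pillar (0, 1) with opposite polarities that cancel out.
xs = np.array([0, 0, 2])
ys = np.array([1, 1, 0])
ps = np.array([1.0, -1.0, 1.0])
pillars = events_to_pillars(xs, ys, ps, width=3, height=2)
```

A plain Python loop over events would compute the same grid, but the scatter-add form is what maps efficiently onto tensor libraries and is presumably why the paper reaches for torch.scatter.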