SMamba: Sparse Mamba for Event-based Object Detection

Authors: Nan Yang, Yang Wang, Zhanwen Liu, Meng Li, Yisheng An, Xiangmo Zhao

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "Results on three datasets (Gen1, 1Mpx, and eTram) demonstrate that our model outperforms other methods in both performance and efficiency." The paper includes sections such as "Experiments", "Quantitative Results", "Sparsification Visualizations", and "Ablation Studies", which involve data analysis and performance metrics.
Researcher Affiliation: Academia. "1School of Information Engineering, Chang'an University, China; 2School of Civil Engineering, Tsinghua University, China." All authors are affiliated with academic institutions (Chang'an University and Tsinghua University).
Pseudocode: No. The paper describes the methodology through text and diagrams (Figure 2, Figure 3, Figure 4) but does not contain any explicit pseudocode or algorithm blocks.
Open Source Code: No. The paper makes no concrete statement about releasing source code for the described method (SMamba) and includes no link to a code repository. It mentions using existing tools such as YOLOX but does not offer its own implementation code.
Open Datasets: Yes. "We conduct experiments on two autonomous driving datasets, Gen1 (De Tournemire et al. 2020) and 1Mpx (Perot et al. 2020), and one traffic monitoring dataset, eTram (Verma et al. 2024)." These datasets are cited with author names and years, indicating they are established, publicly referenced academic datasets.
Dataset Splits: No. "To guarantee comparison fairness, we follow the dataset preprocessing methods, augmentation techniques, mixed batching strategy, event representation method and evaluation protocols established in RVT (Gehrig and Scaramuzza 2023)." The paper defers to another work for its evaluation protocols and does not explicitly state the dataset splits (e.g., percentages or sample counts) in its text.
Hardware Specification: No. The paper provides no specific hardware details (e.g., GPU models, CPU types, memory specifications) for running the experiments; it only compares FLOPs, parameter counts, and runtime.
Software Dependencies: No. The paper mentions following protocols established in RVT (Gehrig and Scaramuzza 2023) and using YOLOX (Ge et al. 2021) as the detection head, but it does not specify version numbers for these or any other software dependencies.
Experiment Setup: No. "Implementation Details. To guarantee comparison fairness, we follow the dataset preprocessing methods, augmentation techniques, mixed batching strategy, event representation method and evaluation protocols established in RVT (Gehrig and Scaramuzza 2023)." The paper refers to another work (RVT) for its experimental protocols and does not explicitly provide concrete hyperparameter values, training configurations, or system-level settings in its main text.