Know Where You Are From: Event-Based Segmentation via Spatio-Temporal Propagation
Authors: Ke Li, Gengyu Lyu, Hao Chen, Bochen Xie, Zhen Yang, Youfu Li, Yongjian Deng
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A large number of experiments have demonstrated the effectiveness of our proposed framework in terms of both quantity and quality. We conduct experiments on two commonly used ESS datasets, DDD17 (Binas et al. 2017) and DSEC-Semantic (DSEC) (Gehrig et al. 2021), and compare our method with state-of-the-art (SOTA) supervised ESS methods as well as the baseline model SegFormer (Xie et al. 2021). Evaluation on the DDD17 Dataset, Evaluation on the DSEC Dataset, Ablation study results showing the impact of SME and ER2SM. |
| Researcher Affiliation | Academia | 1 College of Computer Science, Beijing University of Technology; 2 School of Computer Science and Engineering, Southeast University; 3 Department of Mechanical Engineering, City University of Hong Kong. {tokeli@emails., lyugengyu@, yangzhen@, yjdeng@}bjut.edu.cn, EMAIL {boxie4-c@my., meyfli@}cityu.edu.hk |
| Pseudocode | No | The paper describes the methods using text, mathematical equations (Eq. 1-6), and diagrams (Figure 2, 3, 7). There are no sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Code: https://github.com/SchuckLee/KWYAF |
| Open Datasets | Yes | We conduct experiments on two commonly used ESS datasets, DDD17 (Binas et al. 2017) and DSEC-Semantic (DSEC) (Gehrig et al. 2021) |
| Dataset Splits | Yes | This paper splits the training and testing sets following (Sun et al. 2022c). |
| Hardware Specification | Yes | All experiments are implemented using Pytorch on two RTX 3090s. |
| Software Dependencies | No | The paper mentions 'Pytorch' but does not specify a version number or other key software dependencies with versions, which would be necessary for reproducibility. |
| Experiment Setup | Yes | During training, we employ data augmentation techniques such as random resizing and flipping, and train for 60 epochs with a batch size of 32. We utilize AdamW and a poly learning rate schedule, with an initial learning rate of 1e-3. The training process on DSEC is similar to DDD17, yet with an initial learning rate of 6e-5. |
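The poly learning-rate schedule quoted in the Experiment Setup row can be sketched in a few lines of plain Python. This is a hedged illustration, not the authors' code: the decay power of 0.9 is a common default for poly schedules and is not stated in the paper, and the epoch counts and initial rates are taken from the quoted setup (60 epochs, lr 1e-3 for DDD17; lr 6e-5 for DSEC).

```python
def poly_lr(base_lr, step, max_steps, power=0.9):
    """Polynomial ("poly") decay: lr = base_lr * (1 - step/max_steps) ** power.

    `power=0.9` is an assumed default; the paper does not specify it.
    """
    return base_lr * (1.0 - step / max_steps) ** power

# DDD17 setup quoted in the table: 60 epochs, initial learning rate 1e-3.
ddd17_schedule = [poly_lr(1e-3, e, 60) for e in range(60)]

# DSEC setup uses the same schedule with an initial learning rate of 6e-5.
dsec_schedule = [poly_lr(6e-5, e, 60) for e in range(60)]
```

The schedule starts at the initial learning rate and decays monotonically toward zero by the final epoch, which matches the usual behavior wrapped by optimizer-side schedulers such as PyTorch's `PolynomialLR`.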