DriveGazen: Event-Based Driving Status Recognition Using Conventional Camera
Authors: Xiaoyin Yang, Xin Yang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We specifically collected the Driving Status (DriveGaze) dataset to demonstrate the effectiveness of our approach. Additionally, we validate the superiority of DriveGazen on the Single-eye Event-based Emotion (SEE) dataset. To the best of our knowledge, our method is the first to utilize guide attention spiking neural networks and eye-based event frames generated from conventional cameras for driving status recognition. |
| Researcher Affiliation | Academia | Xiaoyin Yang, Xin Yang Dalian University of Technology EMAIL, EMAIL |
| Pseudocode | No | The paper describes the model architecture and processes using mathematical equations and descriptive text, but it does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper states: "Please refer to our project page and supplementary materials for more details." However, it does not provide a direct link to a code repository or an explicit statement confirming the immediate release of the source code for the described methodology. |
| Open Datasets | Yes | We specifically collected the Driving Status (DriveGaze) dataset to demonstrate the effectiveness of our approach. Additionally, we validate the superiority of DriveGazen on the Single-eye Event-based Emotion (SEE) dataset. ... The first publicly available eye-based event-driven driving state dataset generated from conventional cameras, containing intensity frames and corresponding events, capturing data from different ages, races, genders, etc. |
| Dataset Splits | Yes | In total, DriveGaze includes 1645 sequences/245365 frames of original events, with a total duration of 68.1 minutes (Figure 4), divided into 1316 for training and 329 for testing. |
| Hardware Specification | Yes | We trained ADSN for 150 epochs using a batch size of 128 on an NVIDIA TITAN V GPU. |
| Software Dependencies | No | ADSN is implemented in PyTorch (Paszke et al. 2019). While PyTorch is mentioned, a specific version number is not provided, nor are versions for any other key software dependencies. |
| Experiment Setup | Yes | We trained ADSN for 150 epochs using a batch size of 128 on an NVIDIA TITAN V GPU. For the SNN settings, we use a spiking threshold of 0.3 and a decay factor of 0.2 for all SNN neurons. |
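
The reported SNN hyperparameters (spiking threshold 0.3, decay factor 0.2) can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron. This is a generic LIF sketch using those two values; the update rule, function name, and hard-reset behavior are illustrative assumptions, not the authors' ADSN implementation.

```python
def lif_forward(inputs, threshold=0.3, decay=0.2):
    """Simulate one leaky integrate-and-fire neuron over a sequence of inputs.

    `threshold` and `decay` match the values reported in the paper;
    the update rule itself is a generic LIF sketch (hypothetical).
    """
    membrane = 0.0
    spikes = []
    for x in inputs:
        # Leaky integration: decay the membrane potential, then add the input.
        membrane = decay * membrane + x
        if membrane >= threshold:
            spikes.append(1)
            membrane = 0.0  # hard reset after firing (one common convention)
        else:
            spikes.append(0)
    return spikes

# A sub-threshold input only fires once leaked potential accumulates enough.
print(lif_forward([0.25, 0.25, 0.25]))  # → [0, 1, 0]
```

With a constant input of 0.25, the first step stays below the 0.3 threshold, the second step reaches 0.2 × 0.25 + 0.25 = 0.3 and fires, and the reset returns the neuron below threshold.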