RGB-Event ISP: The Dataset and Benchmark

Authors: Yunfan LU, Yanlin Qian, Ziyang Rao, Junren Xiao, Liming Chen, Hui Xiong

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | First, we present a new event-RAW paired dataset, collected with a novel but still confidential sensor that records pixel-level aligned events and RAW images... Second, we propose a conventional ISP pipeline to generate good RGB frames as reference... Third, we classify the existing learnable ISP methods into 3 classes, and select multiple methods to train and evaluate on our new dataset. Lastly, since there is no prior work for reference, we propose a simple event-guided ISP method and test it on our dataset.
Researcher Affiliation | Collaboration | Yunfan Lu, Yanlin Qian, Ziyang Rao, Junren Xiao, and Hui Xiong (AI Thrust, Hong Kong University of Science and Technology (Guangzhou)); Liming Chen (Alpsen Tek). Author email addresses are redacted.
Pseudocode | No | The paper describes the controllable ISP pipeline and proposed methods in prose and figures, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | In summary, to the best of our knowledge, this is the very first research focusing on event-guided ISP, and we hope it will inspire the community. The code and dataset are available at: https://github.com/yunfanLu/RGB-Event-ISP.
Open Datasets | Yes | In summary, to the best of our knowledge, this is the very first research focusing on event-guided ISP, and we hope it will inspire the community. The code and dataset are available at: https://github.com/yunfanLu/RGB-Event-ISP.
Dataset Splits | Yes | We divided the dataset into training and test sets, with 3/4 of the data used for training and 1/4 for testing. The testing set includes 3 indoor scenes and 3 outdoor scenes to ensure sufficient diversity.
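The quoted split is a scene-level hold-out. A minimal sketch of such a split is below; the scene IDs and the `split_scenes` helper are made up for illustration, since the excerpt only states the 3/4 vs. 1/4 ratio and that 3 indoor plus 3 outdoor scenes form the test set:

```python
def split_scenes(scenes, test_scenes):
    """Partition a scene list into train/test sets by held-out scene IDs."""
    held = set(test_scenes)
    train = [s for s in scenes if s not in held]
    test = [s for s in scenes if s in held]
    return train, test

# Hypothetical scene IDs: 12 indoor + 12 outdoor recordings.
all_scenes = [f"indoor_{i}" for i in range(12)] + [f"outdoor_{i}" for i in range(12)]
# Hold out 3 indoor and 3 outdoor scenes, i.e. 6/24 = 1/4 of the data.
held_out = ["indoor_0", "indoor_1", "indoor_2",
            "outdoor_0", "outdoor_1", "outdoor_2"]

train, test = split_scenes(all_scenes, held_out)
```

Splitting by whole scenes rather than by individual frames avoids near-duplicate frames leaking between the train and test sets.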
Hardware Specification | Yes | Implementation Details: All our models were trained and tested on the same machine with a single A40 GPU with 48GB of GPU memory.
Software Dependencies | No | We used PyTorch (Paszke et al., 2017) for all experiments, applying random cropping and rotation for data augmentation.
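The random-crop-and-rotation augmentation can be sketched framework-free as below. Both helpers are hypothetical: the paper does not specify crop sampling or rotation angles, so this assumes uniform crop placement and 90-degree rotation steps (common for event/RAW data, since arbitrary angles would require interpolation):

```python
import random

def random_crop_coords(height, width, patch):
    """Sample a top-left corner for a patch x patch crop, uniformly at random.

    The same coordinates would be applied to the RAW frame and the aligned
    event representation so the pair stays pixel-aligned.
    """
    top = random.randint(0, height - patch)
    left = random.randint(0, width - patch)
    return top, left

def random_rotation_k():
    """Sample a rotation as k * 90 degrees (an assumed discretization)."""
    return random.choice([0, 1, 2, 3])
```

In a PyTorch pipeline these would typically live inside the dataset's `__getitem__`, applied identically to the RAW image and its event tensor.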
Experiment Setup | Yes | The training batch size was 1, with each patch sized at 1024 × 1024. The learning rate was 0.0001, and all models were trained for 50 epochs.