Physical-aware Neural Radiance Fields for Efficient Exposure Correction

Authors: Kai Xu, Mingwen Shao, Yuanjian Qiao, Yan Wang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments show that our PHY-NeRF achieves state-of-the-art results in addressing adverse lighting problems while ensuring high rendering efficiency. ... Extensive experiments have shown that our PHY-NeRF is superior to existing methods in terms of effectiveness and efficiency in dealing with abnormal lighting scenes.
Researcher Affiliation | Academia | Qingdao Institute of Software, College of Computer Science and Technology, China University of Petroleum (East China); EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes its methods using mathematical equations and textual explanations, but no explicit pseudocode or algorithm blocks are provided.
Open Source Code | No | The paper does not explicitly state that source code is provided, nor does it provide any links to a code repository. It mentions "More visualization results are detailed in the supplementary material", which typically refers to additional figures or data, not source code.
Open Datasets | Yes | We use the previously collected dataset of Aleth-NeRF (Cui et al. 2024), including low-light, normal-light, and over-exposure multi-view images. ... We also use the normal-light images from the LOw-Light paired dataset (LOL) (Wei et al. 2018) as the reference images, containing 500 low-light and normal-light image pairs, each with a 400×600 resolution.
Dataset Splits | Yes | In each scene, we select 3-5 images as the test set, 1 image as the validation set, and the other images as the training set.
Hardware Specification | Yes | We train different scenes for 4-6 epochs on a single RTX3090 GPU, taking about 2 hours per scene.
Software Dependencies | No | We implement PHY-NeRF using PyTorch and train our network using the Adam optimizer. The paper does not specify version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | The training batch size is 1024, with 12,500 iterations per epoch. We train different scenes for 4-6 epochs on a single RTX3090 GPU, taking about 2 hours per scene. ... λ_g, λ_bc, λ_ss, λ_cc are the hyperparameters for balancing the total loss weights, which are set to 1e-3, 1e-3, 1e-3, 1e-8, respectively. ... λ_ms is a hyperparameter set to 1e-3.
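The per-scene split quoted in the Dataset Splits row can be sketched as below. This is a minimal illustration only: the paper does not say how the 3-5 test images and 1 validation image are chosen, so a seeded random selection is assumed, and the file names are placeholders.

```python
import random

def split_scene(images, n_test=4, n_val=1, seed=0):
    """Split one scene's multi-view images into train/val/test sets,
    mirroring the reported protocol: a few images for testing, one
    for validation, and the remainder for training."""
    rng = random.Random(seed)
    idx = list(range(len(images)))
    rng.shuffle(idx)  # assumption: the authors' selection rule is unspecified
    test = [images[i] for i in idx[:n_test]]
    val = [images[i] for i in idx[n_test:n_test + n_val]]
    train = [images[i] for i in idx[n_test + n_val:]]
    return train, val, test

# Hypothetical scene with 30 views -> 25 train, 1 val, 4 test
views = [f"view_{i:03d}.png" for i in range(30)]
train, val, test = split_scene(views)
```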
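The hyperparameters quoted in the Experiment Setup row can be read as the weights of a composite training loss. The sketch below shows only the weighted sum with the reported λ values; the individual loss terms and the unweighted base reconstruction term are assumptions for illustration, since the report does not define them.

```python
# Weight values are taken from the report; the term names and the
# base reconstruction term are assumptions.
LAMBDA_G = LAMBDA_BC = LAMBDA_SS = LAMBDA_MS = 1e-3
LAMBDA_CC = 1e-8

def total_loss(l_recon, l_g, l_bc, l_ss, l_cc, l_ms):
    """Combine the per-term losses into one balanced total loss."""
    return (l_recon
            + LAMBDA_G * l_g
            + LAMBDA_BC * l_bc
            + LAMBDA_SS * l_ss
            + LAMBDA_CC * l_cc
            + LAMBDA_MS * l_ms)
```

The very small λ_cc weight (1e-8) means that term contributes only weakly relative to the others under this reading.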