High Dynamic Range Novel View Synthesis with Single Exposure

Authors: Kaixuan Zhang, Hu Wang, Minxian Li, Mingwu Ren, Mao Ye, Xiatian Zhu

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4. Experiments. Datasets. Following HDR-GS and HDR-NeRF, we use the multi-view image dataset with 8 synthetic scenes created by the software Blender... Evaluation metrics. We employ the PSNR and SSIM as quantitative metrics... Implementation details. Both models are trained with the Adam optimizer... Quantitative evaluation. Tab. 1 presents the quantitative results on the synthetic datasets... 4.3. Ablation studies. We conduct an ablation study with the most efficient model, Mono-HDR-GS, on the synthetic datasets.
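The evaluation described above relies on PSNR and SSIM. As a point of reference, here is a minimal NumPy sketch of PSNR; the function name and the assumed peak value of 1.0 (images normalized to [0, 1]) are illustrative choices, not taken from the paper's code:

```python
import numpy as np

def psnr(pred, target, peak=1.0):
    """Peak signal-to-noise ratio between two images with values in [0, peak]."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A noisy copy of an image yields a finite, positive PSNR in dB.
img = np.random.rand(8, 8, 3)
noisy = np.clip(img + 0.01, 0.0, 1.0)
```

SSIM is structurally more involved (local means, variances, and covariances over sliding windows); in practice a library implementation such as scikit-image's `structural_similarity` is typically used rather than a hand-rolled one.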
Researcher Affiliation | Academia | 1 Nanjing University of Science and Technology. 2 University of Electronic Science and Technology of China. 3 State Key Laboratory of Intelligent Manufacturing of Advanced Construction Machinery. 4 University of Surrey. Correspondence to: Minxian Li <EMAIL>.
Pseudocode | No | The paper describes the proposed method, Mono-HDR-3D, through textual descriptions, mathematical formulas (e.g., equations 3, 5, 6), and architectural diagrams (Figures 2, 3, 4). However, it does not include any explicitly labeled pseudocode or algorithm blocks with structured steps.
Open Source Code | Yes | Source code is released at https://github.com/prinasi/Mono-HDR-3D.
Open Datasets | No | The paper states: 'Following HDR-GS and HDR-NeRF, we use the multi-view image dataset with 8 synthetic scenes created by the software Blender (Blender Foundation, 2025) and 4 real scenes captured by a camera'. However, it does not provide a direct link, DOI, or specific repository name for this dataset, nor does it explicitly state that the dataset is publicly available or provide a formal citation for its public release.
Dataset Splits | Yes | We use the same training and test data, where images at 18 views with the exposure time randomly selected from {t1, t3, t5} are used for training, while the other 17 views at the same exposure time and HDR images are used for testing.
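The split described above (18 training views with randomly chosen exposures, 17 held-out test views) can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the view indexing, exposure labels, and seed handling are all assumptions:

```python
import random

def make_split(num_views=35, num_train=18,
               exposures=("t1", "t3", "t5"), seed=0):
    """Assign the first `num_train` views to training, each paired with a
    randomly selected exposure time; the remaining views form the test set."""
    rng = random.Random(seed)
    views = list(range(num_views))
    train = [(v, rng.choice(exposures)) for v in views[:num_train]]
    test = [(v, rng.choice(exposures)) for v in views[num_train:]]
    return train, test

train_split, test_split = make_split()
```

With the stated counts this yields 18 training and 17 test view/exposure pairs, matching the 35-view setup quoted from the paper.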
Hardware Specification | No | The paper mentions training details and discusses model inference speed (fps) in Table 1 but does not specify the particular GPU or CPU models, memory, or other hardware components used for running the experiments. For example, it does not state 'NVIDIA A100' or 'Intel Xeon'.
Software Dependencies | No | The paper mentions training with the Adam optimizer and using Blender for synthetic scene creation, as well as Photomatix Pro for tone-mapping. However, it does not provide specific version numbers for any key software libraries, frameworks, or programming languages used for the implementation (e.g., 'Python 3.8', 'PyTorch 1.9').
Experiment Setup | Yes | Both models are trained with the Adam optimizer with the same parameters as HDR-NeRF and HDR-GS. For Eq. (9), we set β to 0.01/0.05, while α = 0.6, for Mono-HDR-NeRF/Mono-HDR-GS. We set the learning rate of L2H-CC/H2L-CC to 5×10⁻⁴/1×10⁻³, and the decays to 5×10⁻⁵/5×10⁻⁴ by cross-validation.
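The quoted setup gives initial and final learning rates (5×10⁻⁴ decaying to 5×10⁻⁵ for L2H-CC, 1×10⁻³ to 5×10⁻⁴ for H2L-CC) but not the schedule shape. Assuming the exponential interpolation commonly used in NeRF-style training, a sketch of such a schedule is:

```python
def exp_decay_lr(step, total_steps, lr_init, lr_final):
    """Exponentially interpolate from lr_init at step 0 to lr_final at total_steps."""
    t = min(max(step / total_steps, 0.0), 1.0)  # clamp progress to [0, 1]
    return lr_init * (lr_final / lr_init) ** t

# Stated hyper-parameters: L2H-CC 5e-4 -> 5e-5, H2L-CC 1e-3 -> 5e-4.
lr_start = exp_decay_lr(0, 1000, 5e-4, 5e-5)
lr_end = exp_decay_lr(1000, 1000, 5e-4, 5e-5)
```

The schedule shape, the step count, and the function name here are all assumptions for illustration; only the endpoint learning rates come from the paper.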