Learning Gain Map for Inverse Tone Mapping
Authors: Yinuo Liao, Yuanshen Guan, Ruikang Xu, Jiacheng Li, Shida Sun, Zhiwei Xiong
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on these datasets demonstrate the superiority of our proposed GMNet over existing HDR-related methods both quantitatively and qualitatively. |
| Researcher Affiliation | Academia | Yinuo Liao, Yuanshen Guan, Ruikang Xu, Jiacheng Li, Shida Sun, Zhiwei Xiong, University of Science and Technology of China, Hefei, China |
| Pseudocode | No | The paper describes the network architecture and processes in detail, but it does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | The codes and datasets are available at https://github.com/qtlark/GMNet. |
| Open Datasets | Yes | Furthermore, we build synthetic and real-world datasets to facilitate future research on the GM-ITM task. Specifically, the synthetic dataset consists of SDR-GM pairs derived from HDRTV-standard videos, while the real-world dataset comprises high-resolution SDR-GM pairs captured by mobile devices. The codes and datasets are available at https://github.com/qtlark/GMNet. |
| Dataset Splits | Yes | After filtering, we obtain a training set of 900 pairs and a test set of 100 pairs. ... After filtering out low-quality pairs, we select 900 pairs for training and 100 pairs for testing. |
| Hardware Specification | Yes | All experiments are conducted on a workstation equipped with an RTX 3090 under Ubuntu 20.04 LTS. ... The runtime is evaluated on an NVIDIA A100 as the average of 100 trials at a resolution of 4096×3072. |
| Software Dependencies | No | The paper mentions using the Adam optimizer (Kingma, 2014) and ReLU activation function, and the operating system Ubuntu 20.04 LTS, but it does not provide specific version numbers for key software libraries or frameworks (e.g., PyTorch, TensorFlow, Python version). |
| Experiment Setup | Yes | We use the Adam optimizer (Kingma, 2014) with β1 = 0.9 and β2 = 0.99 to train our network. The batch size is set to 32, and the initial learning rate is 2×10⁻⁴, halving every 2×10⁴ iterations, with a total of 1×10⁵ iterations. The weights of GMNet are randomly initialized. Each Res Block group contains 5 blocks, and the number of hidden layers C is set to 64. |
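The step-decay schedule quoted above (initial learning rate 2×10⁻⁴, halved every 2×10⁴ iterations, 1×10⁵ iterations total) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the function name `lr_at_iter` is our own.

```python
def lr_at_iter(it: int, base_lr: float = 2e-4, half_every: int = 20_000) -> float:
    """Step-decay learning rate as described in the paper's setup:
    start at base_lr and halve it once per `half_every` iterations."""
    return base_lr * (0.5 ** (it // half_every))

# Over the stated 1e5 total iterations, the LR passes through 5 plateaus:
# 2e-4 for iterations [0, 20000), 1e-4 for [20000, 40000), and so on,
# ending at 2e-4 / 2**4 = 1.25e-5 for the final plateau.
```

The same behavior could be obtained with a framework scheduler (e.g. a step-LR scheduler with `gamma=0.5` and `step_size=20_000`), but the paper does not state which framework was used.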