TextMEF: Text-guided Prompt Learning for Multi-exposure Image Fusion

Authors: Jinyuan Liu, Qianjun Huang, Guanyao Wu, Di Wang, Zhiying Jiang, Long Ma, Risheng Liu, Xin Fan

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results on three publicly available benchmarks demonstrate that our Text MEF significantly outperforms state-of-the-art approaches in both visual inspection and objective analysis.
Researcher Affiliation | Academia | 1School of Mechanical Engineering, Dalian University of Technology, China 2School of Software Technology, Dalian University of Technology, China 3College of Information Science and Technology, Dalian Maritime University, China EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the proposed method in detail within Section 3 ("The Proposed Method"), including Prompt Learning, Loss Function, and Network Architecture, but it does not present these procedures in structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions that 'All methods are tested with their official codes' when referring to comparison methods, but it does not explicitly state that the code for the proposed Text MEF method is open-source or available, nor does it provide a link.
Open Datasets | Yes | Extensive experimental results on three publicly available benchmarks demonstrate that our Text MEF significantly outperforms state-of-the-art approaches in both visual inspection and objective analysis. We conduct experiments on the SICE [Cai et al., 2018] dataset. ... To evaluate the performance, 100/26/30 image sequences from SICE [Cai et al., 2018], Mobile [Jiang et al., 2023], and MEF [Ma et al., 2017] datasets are adopted for evaluation.
Dataset Splits | Yes | In the first stage, we employ 315 pairs of over/under-exposure images and 383 well-exposed images. In the second stage, 367 over/under-exposed image sequences with reference images are collected. To evaluate the performance, 100/26/30 image sequences from SICE [Cai et al., 2018], Mobile [Jiang et al., 2023], and MEF [Ma et al., 2017] datasets are adopted for evaluation.
Hardware Specification | Yes | The overall framework is implemented on PyTorch with an NVIDIA Tesla V100 GPU.
Software Dependencies | No | The paper states that 'The overall framework is implemented on Pytorch', but it does not specify a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | The batch size and epochs for the prompt learning and image fusion are set to 16/8 and 160/200, respectively. We employ the Adam optimizer to guide parameter optimization. The learning rate for the prompt learning is set to 5e-5, while the initial learning rate for the image fusion is set to 2e-4, with a learning rate decay of 0.1 at epochs 100 and 135.
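The fusion-stage schedule quoted above (initial learning rate 2e-4, decayed by a factor of 0.1 at epochs 100 and 135) can be sketched as a small helper. This is an illustrative reconstruction of the stated step-decay schedule, not code from the paper; the function name and defaults are assumptions.

```python
def fusion_lr(epoch, base_lr=2e-4, milestones=(100, 135), gamma=0.1):
    """Learning rate at a given epoch under the step-decay schedule
    described in the paper: start at 2e-4 and multiply by 0.1 at
    each milestone epoch (100 and 135)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Schedule at a few representative epochs:
print(fusion_lr(0))    # 2e-4 before any decay
print(fusion_lr(120))  # ~2e-5 after the first decay at epoch 100
print(fusion_lr(150))  # ~2e-6 after the second decay at epoch 135
```

In PyTorch this would typically be expressed with `torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 135], gamma=0.1)` wrapped around an Adam optimizer.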