DreamUHD: Frequency Enhanced Variational Autoencoder for Ultra-High-Definition Image Restoration

Authors: Yidi Liu, Dong Li, Jie Xiao, Yuanfei Bao, Senyan Xu, Xueyang Fu

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on various UHD image restoration tasks show that our method surpasses state-of-the-art methods both qualitatively and quantitatively. Experiments We evaluate FEVAE-UHD on benchmarks for 4 UHD image restoration tasks: (a) low-light image enhancement, (b) image dehazing, (c) image deblurring, and (d) image demoiréing. Ablation Study We use the UHD-Blur dataset to conduct the ablation study on the main designs of FEVAE-UHD.
Researcher Affiliation Academia School of Information Science and Technology and MoE Key Laboratory of Brain-inspired Intelligent Perception and Cognition, University of Science and Technology of China, Hefei, 230026, China EMAIL, EMAIL
Pseudocode No The paper describes methods using text and diagrams (Figure 2, Figure 3) but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Code https://github.com/lyd-2022/dreamUHD
Open Datasets Yes Image Dehazing Results Tab. 1 presents the quantitative dehazing results on UHD-Haze. Image Deblurring Results We evaluate UHD image deblurring on the UHD-Blur dataset. Low-Light Image Enhancement Results We evaluate low-light enhancement on the UHD-LL dataset. Image Demoiréing Results We conduct image demoiréing experiments on the UHDM dataset, as shown in Tab. 4.
Dataset Splits No The paper mentions using specific datasets (UHD-Haze, UHD-Blur, UHD-LL, UHDM) for experiments and ablation studies, but it does not provide details on how these datasets were split into training, validation, or test sets (e.g., percentages, sample counts, or specific splitting methodology).
Hardware Specification No The paper mentions "full-resolution inference on consumer-grade GPUs" in the introduction as a general statement about the challenges, but it does not specify any particular GPU models, CPU models, or other hardware used for running their experiments.
Software Dependencies No The paper does not provide any specific software dependencies or their version numbers (e.g., Python, PyTorch, CUDA versions) used for implementation or experimentation.
Experiment Setup No The paper describes the two-stage training process, the loss functions (reconstruction loss, KL divergence loss, FFT loss), and the architecture of the latent space restoration network (IRNet using Cubic-Mixer blocks). However, it does not specify concrete hyperparameters such as learning rates, batch sizes, number of epochs, or the optimizer used for training, which are critical for reproducing the experiments.
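To make the missing-hyperparameter finding concrete, the stage-1 objective the paper does describe (reconstruction loss + KL divergence loss + FFT loss) can be sketched as follows. This is a minimal stdlib-only illustration, not the authors' implementation: the exact reconstruction-loss form, the loss weights `kl_w` and `fft_w`, and the use of a naive 1-D DFT in place of a 2-D image FFT are all assumptions made here for clarity.

```python
import cmath
import math

def dft(signal):
    """Naive 1-D discrete Fourier transform (stdlib only, O(n^2)).
    Stands in for the 2-D image FFT used in practice."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def reconstruction_loss(pred, target):
    """Mean absolute error in the pixel domain (assumed L1 form)."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def fft_loss(pred, target):
    """Mean absolute error between frequency-domain coefficients."""
    fp, ft = dft(pred), dft(target)
    return sum(abs(p - t) for p, t in zip(fp, ft)) / len(fp)

def kl_loss(mu, logvar):
    """KL divergence of a diagonal Gaussian posterior from N(0, I)."""
    return -0.5 * sum(1 + lv - m * m - math.exp(lv)
                      for m, lv in zip(mu, logvar))

def stage1_vae_loss(pred, target, mu, logvar, kl_w=1e-4, fft_w=0.1):
    # kl_w and fft_w are hypothetical placeholder weights;
    # the paper does not report the values it used.
    return (reconstruction_loss(pred, target)
            + kl_w * kl_loss(mu, logvar)
            + fft_w * fft_loss(pred, target))
```

A perfect reconstruction with a standard-normal posterior (`mu = 0`, `logvar = 0`) drives all three terms to zero, which is a quick sanity check on the sketch; reproducing the paper's training would additionally require the unreported optimizer, learning rate, batch size, and epoch count.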