UniDemoiré: Towards Universal Image Demoiréing with Data Generation and Synthesis

Authors: Zemin Yang, Yujing Sun, Xidong Peng, Siu Ming Yiu, Yuexin Ma

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments demonstrate the cutting-edge performance and broad potential of our approach for generalized image demoiréing. ... Quantitative comparisons can be found in Table 2. Visual comparisons on demoiréing real data in UHDM are illustrated in Figure 6. ... Quantitative results are shown in Table 3. ... Ablation Study: We individually ablate submodules in our proposed method to analyze their contribution. All these experiments are trained with the UHDM dataset and validated on the FHDMi dataset. Experimental results in Table 4 verify that all components in our UniDemoiré solution are crucial for achieving the desired demoiréing performance.
Researcher Affiliation | Academia | 1 ShanghaiTech University, 2 The University of Hong Kong (EMAIL, EMAIL)
Pseudocode | No | The paper describes its methods through text and figures (e.g., Figure 1: the workflow of the proposed UniDemoiré; Figure 4: overview of the Moiré Image Synthesis stage), but does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code: https://github.com/4DVLab/UniDemoire
Open Datasets | Yes | 2) Real Moiré Image Dataset: TIP (Sun, Yu, and Wang 2018), FHDMi (He et al. 2020), and UHDM (Yu et al. 2022) are used to demonstrate our ability in restoring real moiré images.
Dataset Splits | No | The paper mentions training on one dataset and testing/validating on another (e.g., "trained with the UHDM dataset and validated on the FHDMi dataset", or "Source / Target" in Table 3), but does not give specific percentages or sample counts for splits within any single dataset (e.g., train/validation/test splits of UHDM itself).
Hardware Specification | No | The paper does not state any specific hardware details, such as GPU models, CPU models, or memory specifications, used for running the experiments. It mentions using a "mobile phone" for capturing data, but not for computation.
Software Dependencies | No | The paper mentions general software components, such as "diffusion models", a "U-shaped transformer backbone", and the "VGG16 network", but does not provide specific versions for any software dependency (e.g., Python 3.8, PyTorch 1.9).
Experiment Setup | No | The paper states, "Thorough implementation details are in the appendix," but the main text does not include specific experimental setup details such as hyperparameters (e.g., learning rate, batch size, number of epochs, optimizer settings).