PHR-DIFF: Portrait Highlights Removal via Patch-aware Diffusion Model
Authors: Hongsheng Zheng, Zhongyun Bao, Gang Fu, Xuze Jiao, Chunxia Xiao
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on multiple public datasets demonstrate that PHR-DIFF removes highlights more cleanly and avoids artifacts. ... We utilize the PFSD dataset (Zheng et al. 2024) for the model training and evaluate our method on several public datasets, including Spec-Face (Muhammad et al. 2020), FFHQ (Karras, Laine, and Aila 2019), and CelebA (Ziwei Liu and Tang 2015)... Quantitative Comparison. ... Visual Comparison. ... User Study. ... Ablation Study. |
| Researcher Affiliation | Academia | 1School of Computer Science, Wuhan University, Wuhan, China 2School of Computer and Information, Anhui Polytechnic University, Wuhu, China 3Department of Computing, The Hong Kong Polytechnic University, Hong Kong SAR, China EMAIL, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: Patch-Residual Training for PHR-DIFF ... Algorithm 2: Patch-Aware Sampling for PHR-DIFF |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository. The text only mentions "More comparison results can be seen in the supplementary." which typically refers to additional figures or data, not source code. |
| Open Datasets | Yes | We utilize the PFSD dataset (Zheng et al. 2024) for the model training and evaluate our method on several public datasets, including Spec-Face (Muhammad et al. 2020), FFHQ (Karras, Laine, and Aila 2019), and CelebA (Ziwei Liu and Tang 2015) |
| Dataset Splits | No | The paper states: "We utilize the PFSD dataset (Zheng et al. 2024) for the model training and evaluate our method on several public datasets, including Spec-Face (Muhammad et al. 2020), FFHQ (Karras, Laine, and Aila 2019), and CelebA (Ziwei Liu and Tang 2015)... For the FFHQ and CelebA, we manually select 1,150 and 480 portraits with significant specular highlights, respectively." While it indicates PFSD is for training and the others for evaluation/testing, it does not specify the train/validation splits for PFSD or explicit percentages/counts for any dataset splits to ensure reproducibility. |
| Hardware Specification | Yes | We implement our PHR-DIFF using PyTorch and train it on 6 NVIDIA GeForce RTX 3090 GPUs. |
| Software Dependencies | No | The paper mentions "PyTorch" but does not specify its version number or any other software dependencies with version numbers. |
| Experiment Setup | Yes | The Adam optimizer is employed with parameters (0.9, 0.999). During training, we set the diffusion steps T to 1,000, and the noise schedule βt increases linearly from 0.0001 to 0.02. The model is trained for 1,000 epochs. For the sampling, we use a U-Net architecture similar to (Saharia et al. 2022) as the denoiser fθ, with 25 sampling steps. The patch size is set to 64×64, resulting in a total of I = 4×4 patches. |
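The diffusion hyperparameters quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 256×256 input resolution is an assumption inferred from the 64×64 patch size and the 4×4 patch grid, and is not stated in the excerpt above.

```python
# Linear noise schedule reported in the paper: T = 1,000 diffusion steps,
# beta_t rising linearly from 0.0001 to 0.02.
T = 1000
beta_start, beta_end = 1e-4, 0.02
betas = [beta_start + (beta_end - beta_start) * t / (T - 1) for t in range(T)]

# Cumulative signal retention alpha_bar_t = prod_{s<=t} (1 - beta_s);
# it decreases monotonically, so later steps carry more noise.
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

# Patch grid: assuming a 256x256 portrait (our assumption) and the stated
# 64x64 patch size, the image splits into a 4x4 grid, i.e. I = 16 patches.
image_size, patch_size = 256, 64
patches_per_side = image_size // patch_size
num_patches = patches_per_side ** 2
```

Note that the paper samples with only 25 denoising steps, far fewer than the 1,000 training steps, which is a common accelerated-sampling choice for diffusion models.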