RetouchGPT: LLM-based Interactive High-Fidelity Face Retouching via Imperfection Prompting
Authors: Wen Xue, Chun Ding, Ruotao Xu, Si Wu, Yong Xu, Hau-San Wong
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments have been performed to verify the effectiveness of our design elements and demonstrate that RetouchGPT is a useful tool for interactive face retouching and achieves superior performance over state-of-the-arts." The Flickr-Face-HQ-Retouching dataset (FFHQR) (Shafaei, Little, and Schmidt 2021), containing 56k/7k/7k train/evaluate/test images, is used for comparison. |
| Researcher Affiliation | Collaboration | ¹School of Computer Science and Engineering, South China University of Technology; ²Institute of Super Robotics (Huangpu); ³Department of Computer Science, City University of Hong Kong |
| Pseudocode | No | The paper includes a workflow diagram (Figure 2) and mathematical formulations, but no explicit section or block labeled as 'Pseudocode' or 'Algorithm' with structured steps formatted like code. |
| Open Source Code | No | The paper states: "All competing methods are implemented using open-source codes," but it does not provide a statement or link for the source code of RetouchGPT itself. |
| Open Datasets | Yes | "We utilized Flickr-Face-HQ-Retouching dataset (FFHQR) (Shafaei, Little, and Schmidt 2021) (contains 56k/7k/7k train/evaluate/test images) for comparison." |
| Dataset Splits | Yes | The FFHQR dataset is used with a 56k/7k/7k train/evaluate/test split (Shafaei, Little, and Schmidt 2021). |
| Hardware Specification | Yes | "We implement RetouchGPT by using PyTorch and train it on a single GPU with 80G graphics memory." |
| Software Dependencies | No | The paper states it implements RetouchGPT in PyTorch, uses the pre-trained T5 model (Raffel et al. 2020) as the text encoder, and Llama (Touvron et al. 2023b) as the LLM. Specific version numbers for PyTorch or the models/libraries are not provided. |
| Experiment Setup | Yes | In the training process, the parameters of RetouchGPT are updated by the Adam optimizer (Kingma and Ba 2015) with a learning rate of 2×10⁻⁴. The hyper-parameters κ, ζ, and η are set to 10, 0.1, and 0.9, respectively. There are a total of 400k training iterations, and the batch size is set to 1. |
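For reference, the reported optimizer choice (Adam with a learning rate of 2×10⁻⁴) can be sketched as a single Adam update step in plain Python. The β₁, β₂, and ε values below are PyTorch's defaults, which the paper does not specify, so they are assumptions:

```python
# Minimal sketch of one Adam update step (Kingma and Ba 2015), using the
# paper's learning rate of 2e-4. beta1/beta2/eps are PyTorch defaults and
# are NOT stated in the paper -- they are assumptions for illustration.
def adam_step(theta, grad, m, v, t, lr=2e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """Return updated (theta, m, v) after one Adam step at iteration t (1-based)."""
    m = beta1 * m + (1 - beta1) * grad          # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v

# On the first step with unit gradient, the bias corrections cancel and the
# parameter moves by almost exactly the learning rate:
theta, m, v = adam_step(theta=0.0, grad=1.0, m=0.0, v=0.0, t=1)
```

This matches the update that `torch.optim.Adam(params, lr=2e-4)` would apply per parameter; the full training loop (400k iterations, batch size 1) is not reproduced here.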