Energy-Guided Optimization for Personalized Image Editing with Pretrained Text-to-Image Diffusion Models
Authors: Rui Jiang, Xinghe Fu, Guangcong Zheng, Teng Li, Taiping Yao, Xi Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our method excels in object replacement even with a large domain gap, highlighting its potential for high-quality, personalized image editing. We evaluated our method using established benchmarks. For object swapping tasks, we utilized DreamEditBench (Li et al. 2023), which features 22 themes aligned with the DreamBooth framework (Ruiz et al. 2023). We implemented our proposed method using the Stable Diffusion 1.5 model as the pre-trained text-to-image diffusion model. |
| Researcher Affiliation | Collaboration | Rui Jiang¹*, Xinghe Fu¹*, Guangcong Zheng¹, Teng Li¹, Taiping Yao², Xi Li¹; ¹College of Computer Science and Technology, Zhejiang University; ²Youtu Lab, Tencent |
| Pseudocode | No | The paper describes the methodology in detail using mathematical formulations and descriptive text, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository. The only link found is to a dataset, 'Graphit', which is not the code for their method. |
| Open Datasets | Yes | We evaluated our method using established benchmarks. For object swapping tasks, we utilized DreamEditBench (Li et al. 2023), which features 22 themes aligned with the DreamBooth framework (Ruiz et al. 2023). Additionally, we selected 50 images from PIEBench (Ju et al. 2024), representing distinct conceptual categories, and paired them randomly for object swapping. |
| Dataset Splits | No | The paper mentions selecting images for benchmarks and performing 'two-by-two exchanges' for object swapping, but it does not specify standard training, validation, or test dataset splits in terms of percentages, counts, or explicit partitioning methodologies. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory specifications. It only mentions using the Stable Diffusion 1.5 model. |
| Software Dependencies | No | The paper mentions using the 'Stable Diffusion 1.5 model' but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | The number of optimization steps is set to 50 for all experiments. More details can be found in Appendix A. |