CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization
Authors: Feize Wu, Yun Pang, Junyi Zhang, Lianyu Pang, Jian Yin, Baoquan Zhao, Qing Li, Xudong Mao
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments demonstrate that our method outperforms several baseline methods in both identity preservation and text alignment. We demonstrate the effectiveness of our method by comparing it with four state-of-the-art personalization methods through both qualitative and quantitative evaluations. |
| Researcher Affiliation | Academia | 1Sun Yat-sen University 2The Hong Kong Polytechnic University |
| Pseudocode | No | The paper describes methods using prose and mathematical equations but does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/pangy9/CoRe |
| Open Datasets | Yes | For a comprehensive evaluation, we collect 24 concepts from previous studies (Gal et al. 2022; Ruiz et al. 2023; Kumari et al. 2023). ... To ensure a fair and unbiased evaluation, these concepts were selected from multiple datasets (Gal et al. 2022; Ruiz et al. 2023; Kumari et al. 2023), covering 8 animal toys/animals, 8 figurines, and 8 inanimate objects. |
| Dataset Splits | No | The paper describes using 24 concepts and 20 text prompts for quantitative evaluation and mentions a "regularization prompt set" used during training. However, it does not specify explicit training/validation/test splits for the images or concepts used to train the personalized models, nor does it provide details for reproducing any data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or cloud computing resources used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiments. |
| Experiment Setup | No | The paper mentions hyperparameters such as λemb and λattn in the optimization objective, and "10 optimization steps" for test-time optimization, but does not give their specific values for the main training phase. It also states that "The implementation details of our method and the baselines are provided in the Appendix," indicating these details are absent from the main text. |