CA-Edit: Causality-Aware Condition Adapter for High-Fidelity Local Facial Attribute Editing

Authors: Xiaole Xian, Xilin He, Zenghao Niu, Junliang Zhang, Weicheng Xie, Siyang Song, Zitong Yu, Linlin Shen

AAAI 2025

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive quantitative and qualitative experiments demonstrate the effectiveness of our method in boosting both fidelity and editability for localized attribute editing. |
| Researcher Affiliation | Academia | 1. Computer Vision Institute, School of Computer Science & Software Engineering, Shenzhen University; 2. National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University; 3. Guangdong Provincial Key Laboratory of Intelligent Information Processing; 4. University of Exeter; 5. Great Bay University |
| Pseudocode | No | The paper describes the method using mathematical formulations and textual explanations, but it does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | Our codes will be made publicly available. |
| Open Datasets | Yes | We collect a high-quality facial image dataset comprising 200,000 high-quality images by combining filtered images from FaceCaption-15M with selections from FFHQ and CelebAMask-HQ datasets. |
| Dataset Splits | No | FFLEBench includes 15,000 samples from FFHQ, along with local masks and corresponding textual captions. Note that the samples used to construct FFLEBench are independent of those used for training. |
| Hardware Specification | No | The paper does not explicitly mention any specific hardware (e.g., GPU models, CPU types, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions various models and frameworks (e.g., StyleGAN, CLIP, BiSeNet, Stable Diffusion Inpainting, ControlNet Inpainting, IP-Adapter) but does not provide specific version numbers for any software libraries, programming languages, or solvers used in its implementation. |
| Experiment Setup | No | The paper discusses the method and evaluation metrics but does not provide specific details of the experimental setup, such as learning rates, batch sizes, number of epochs, or optimizer settings for training. |