AccCtr: Accelerating Training-Free Conditional Control For Diffusion Models

Authors: Longquan Dai, He Wang, Yiming Zhang, Shaomeng Wang, Jinhui Tang

IJCAI 2025

Reproducibility (Variable / Result / LLM Response)
Research Type Experimental Extensive testing has demonstrated that AccCtr offers superior sample quality and faster generation times. In this section, we conduct thorough experiments and comparisons to showcase the efficacy and strengths of our AccCtr sampling approach, including a quantitative comparison of sampling quality. In total, six methods are compared: three training-free CMDs (FreeDoM [Yu et al., 2023], DSG [Yang et al., 2024b], UGD [Bansal et al., 2024]) and three training-required CMDs (ControlNet [Zhang et al., 2023], T2I-Adapter [Mou et al., 2024], ControlNet++ [Li et al., 2024]). The test is conducted on the COCO2017 validation set with timesteps set to 20. For text alignment, we evaluated CLIP Scores [Radford et al., 2021]. For conditional consistency, we measured MSE [Sara et al., 2019] for depth maps, SSIM [Wang et al., 2004] for edge maps, and mIoU [Rezatofighi et al., 2019] for segmentation maps.
Researcher Affiliation Academia Longquan Dai, He Wang, Yiming Zhang, Shaomeng Wang and Jinhui Tang, Nanjing University of Science and Technology, EMAIL
Pseudocode Yes Algorithm 1 Alternative Maximization Sampling
Open Source Code No The paper does not provide concrete access to source code. It lacks explicit statements about code release or links to a repository.
Open Datasets Yes the model was subjected to 200,000 training steps on the COCO2017 dataset [Lin et al., 2014]
Dataset Splits Yes the model was subjected to 200,000 training steps on the COCO2017 dataset [Lin et al., 2014]
Hardware Specification No The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies No The paper mentions "SD-V1.5 model" and "Adam optimizer" but does not provide specific software library names with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup Yes We employed the SD-V1.5 model as the backbone. To facilitate the training process, we selected the Adam optimizer and set its learning rate to 1e-5. With a batch size of 1, the model was subjected to 200,000 training steps on the COCO2017 dataset [Lin et al., 2014], lasting roughly 60 hours.
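To make the reported optimizer setting concrete, here is a minimal pure-Python sketch of a single Adam update using the learning rate stated in the setup (1e-5) and Adam's standard defaults. The parameter and gradient values are hypothetical; the paper's actual training loop (SD-V1.5 backbone, batch size 1) is not reproduced here.

```python
# Illustrative single Adam step with lr = 1e-5 (the paper's reported
# learning rate). Toy scalar parameters and gradients; standard
# defaults beta1=0.9, beta2=0.999, eps=1e-8 are assumed.

def adam_step(theta, grad, m, v, t, lr=1e-5,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update over lists of scalar parameters."""
    new_theta, new_m, new_v = [], [], []
    for th, g, mi, vi in zip(theta, grad, m, v):
        mi = beta1 * mi + (1 - beta1) * g          # 1st-moment estimate
        vi = beta2 * vi + (1 - beta2) * g * g      # 2nd-moment estimate
        m_hat = mi / (1 - beta1 ** t)              # bias correction
        v_hat = vi / (1 - beta2 ** t)
        new_theta.append(th - lr * m_hat / (v_hat ** 0.5 + eps))
        new_m.append(mi)
        new_v.append(vi)
    return new_theta, new_m, new_v

theta, m, v = [0.5, -0.3], [0.0, 0.0], [0.0, 0.0]
grad = [0.1, -0.2]
theta, m, v = adam_step(theta, grad, m, v, t=1)
print(theta)  # each parameter moves ~1e-5 against its gradient sign
```

With such a small learning rate, each step nudges parameters by at most about 1e-5, which is consistent with the long reported schedule of 200,000 steps.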