Constant Acceleration Flow

Authors: Dogyun Park, Sojin Lee, Sihyeon Kim, Taehoon Lee, Youngjoon Hong, Hyunwoo J. Kim

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comprehensive studies on toy datasets, CIFAR-10, and ImageNet 64×64 demonstrate that CAF outperforms state-of-the-art baselines for one-step generation. We also show that CAF dramatically improves few-step coupling preservation and inversion over Rectified Flow.
Researcher Affiliation | Academia | Dogyun Park (Korea University), Sojin Lee (Korea University), Sihyeon Kim (Korea University), Taehoon Lee (Korea University), Youngjoon Hong (KAIST), Hyunwoo J. Kim (Korea University)
Pseudocode | Yes | Algorithm 1: Training process of Constant Acceleration Flow
Open Source Code | Yes | Code is available at https://github.com/mlvlab/CAF.
Open Datasets | Yes | Our comprehensive studies on toy datasets, CIFAR-10, and ImageNet 64×64 demonstrate that CAF outperforms state-of-the-art baselines for one-step generation.
Dataset Splits | No | To further validate the effectiveness of our approach, we train CAF on real-world image datasets, specifically CIFAR-10 at 32×32 resolution and ImageNet at 64×64 resolution.
Hardware Specification | Yes | The total training takes about 21 days with 8 NVIDIA A100 GPUs for ImageNet, and about 10 days with 8 NVIDIA RTX 3090 GPUs for CIFAR-10.
Software Dependencies | No | For all experiments, we use the AdamW [53] optimizer with a learning rate of 0.0001 and apply an Exponential Moving Average (EMA) with a 0.999 decay rate.
Experiment Setup | Yes | For all experiments, we use the AdamW [53] optimizer with a learning rate of 0.0001 and apply an Exponential Moving Average (EMA) with a 0.999 decay rate. For adversarial training, we employ an adversarial loss Lgan using real data x1,real from [24]: ... We set h = 1.5 and d as the LPIPS-Huber loss [43] for all real-data experiments.
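The EMA of model weights mentioned in the setup rows can be sketched in plain Python. The 0.999 decay rate comes from the paper's reported configuration; the function name, parameter lists, and toy values below are illustrative, not taken from the CAF codebase:

```python
def ema_update(ema_params, model_params, decay=0.999):
    """One EMA step over a flat list of parameter values.

    ema <- decay * ema + (1 - decay) * current
    (0.999 is the decay rate reported in the paper's setup.)
    """
    return [decay * e + (1.0 - decay) * p
            for e, p in zip(ema_params, model_params)]

# Toy usage: track an EMA of a single "parameter" over three training steps,
# where each step produces the same current value 1.0.
ema = [0.0]
for step_value in [1.0, 1.0, 1.0]:
    ema = ema_update(ema, [step_value])
```

In practice this update is applied to every tensor in the model after each optimizer step, and the EMA copy (not the raw weights) is used for evaluation and sampling.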