Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Bellman Optimal Stepsize Straightening of Flow-Matching Models

Authors: Bao Nguyen, Binh Nguyen, Viet Anh Nguyen

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental evaluations across image generation tasks demonstrate the efficacy of BOSS in terms of both resource utilization and image quality. Our results reveal that BOSS achieves substantial gains in efficiency while maintaining competitive sample quality, effectively bridging the gap between low-resource constraints and the demanding requirements of flow-matching generative models."
Researcher Affiliation | Academia | "Bao Nguyen, VinUniversity, EMAIL; Binh Nguyen, National University of Singapore, EMAIL; Viet Anh Nguyen, Chinese University of Hong Kong, EMAIL"
Pseudocode | Yes | "This section presents the pseudocode in Algorithm 1 for the practical implementation of the dynamic programming algorithm designed to determine the Bellman optimal stepsizes."
Open Source Code | Yes | "Our code can be found at https://github.com/nguyenngocbaocmt02/BOSS."
Open Datasets | Yes | "In particular, we use the CIFAR-10 (Krizhevsky et al., 2009) and three high-resolution (256x256) datasets CelebA-HQ (Karras et al., 2018), LSUN-Church, LSUN-Bedroom (Yu et al., 2015), and AFHQ-Cat."
Dataset Splits | No | The paper mentions using specific datasets and finetuning models, but it does not explicitly detail the exact training, validation, and test splits (e.g., percentages or counts) used for reproducibility across all experiments. While some datasets have standard splits, the paper does not specify how these splits were applied or whether custom splits were used for validation.
Hardware Specification | Yes | "using NVIDIA RTX A5000."
Software Dependencies | No | The paper mentions "SciPy (Virtanen et al., 2020)" but does not provide version numbers for other key software dependencies such as the programming language (e.g., Python), deep learning framework (e.g., PyTorch, TensorFlow), or CUDA version, which are necessary for full reproducibility.
Experiment Setup | Yes | "The pretrained models are finetuned in 12,000 iterations. One iteration is the passing and backpropagation process for a batch including 15 samples. (...) The value of N in Equation (5) and Kmax are fixed at 100, and 100 in all experiments if not mentioned."
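For readers unfamiliar with the general idea behind the paper's Algorithm 1, the sketch below shows the generic shape of a dynamic program that selects K stepsizes over a discrete time grid so as to minimize a summed per-interval cost. This is a hedged illustration only: the function `optimal_steps` and the cost oracle `interval_cost` are hypothetical stand-ins, not the paper's actual objective or pseudocode (see the repository linked above for the real implementation).

```python
def optimal_steps(n, k, interval_cost):
    """Pick k intervals over grid points 0..n minimizing total cost.

    interval_cost(i, j) -> cost of jumping from grid point i to j (i < j);
    a stand-in oracle, NOT the paper's actual discretization-error objective.
    Returns (min_total_cost, breakpoints including 0 and n).
    """
    INF = float("inf")
    # dp[s][j] = min cost to reach grid point j using exactly s steps
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    prev = [[-1] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for s in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(j):
                if dp[s - 1][i] == INF:
                    continue
                c = dp[s - 1][i] + interval_cost(i, j)
                if c < dp[s][j]:
                    dp[s][j] = c
                    prev[s][j] = i  # remember predecessor for backtracking
    # Backtrack the optimal breakpoints from n to 0.
    path, j = [n], n
    for s in range(k, 0, -1):
        j = prev[s][j]
        path.append(j)
    return dp[k][n], path[::-1]
```

With a quadratic toy cost `interval_cost(i, j) = (j - i) ** 2`, `optimal_steps(4, 2, ...)` returns the even split `[0, 2, 4]`, since uneven splits incur a higher summed cost.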