Not-So-Optimal Transport Flows for 3D Point Cloud Generation

Authors: Ka-Hei Hui, Chao Liu, Xiaohui Zeng, Chi-Wing Fu, Arash Vahdat

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In an extensive empirical study, we show that our proposed model outperforms prior diffusion- and flow-based approaches on a wide range of unconditional generation and shape completion on the ShapeNet benchmark.
Researcher Affiliation | Collaboration | ¹The Chinese University of Hong Kong, ²NVIDIA
Pseudocode | No | The paper describes procedures but does not contain a distinct 'Pseudocode' or 'Algorithm' section or figure, nor structured steps formatted like code.
Open Source Code | No | The paper provides the URL 'https://research.nvidia.com/labs/genair/not-so-ot-flow', which is a project overview page rather than a direct link to a source-code repository.
Open Datasets | Yes | Following Yang et al. (2019); Klokov et al. (2020); Cai et al. (2020); Zhou et al. (2021), we employ the ShapeNet dataset (Chang et al., 2015) for training and evaluating our approach. ... We use the GenRe dataset (Zhang et al., 2018) for depth renderings of ShapeNet shapes.
Dataset Splits | Yes | Specifically, we train separate generative models for the Chair, Airplane, and Car categories with the provided train-test splits.
Hardware Specification | Yes | We implement our networks using PyTorch (Paszke et al., 2019) and run all experiments on a GPU cluster with four A100 GPUs.
Software Dependencies | No | The paper mentions PyTorch (Paszke et al., 2019) and the Adam optimizer (Kingma, 2014) but does not provide specific version numbers for any software libraries or dependencies.
Experiment Setup | Yes | We employ the Adam optimizer (Kingma, 2014) to train our model with a learning rate of 2e-4 and an exponential decay of 0.998 every 1,000 iterations. Following LION (Zeng et al., 2022), we use an exponential moving average (EMA) of our model with a decay of 0.9999. Specifically, we train our unconditional generative model for approximately 600,000 iterations with a batch size of 256, taking about four days to complete.
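The quoted training configuration can be sketched in PyTorch. This is a minimal, hedged reconstruction of only the hyperparameters stated above (Adam at lr 2e-4, 0.998 decay every 1,000 iterations, EMA decay 0.9999, batch size 256); the model and loss here are hypothetical placeholders, not the paper's point-cloud network.

```python
import torch
from torch import nn, optim

# Placeholder network and data: the paper's actual architecture is not specified here.
model = nn.Linear(3, 3)
ema_model = nn.Linear(3, 3)
ema_model.load_state_dict(model.state_dict())

# Adam with lr 2e-4; lr is multiplied by 0.998 every 1,000 iterations.
opt = optim.Adam(model.parameters(), lr=2e-4)
sched = optim.lr_scheduler.ExponentialLR(opt, gamma=0.998)

EMA_DECAY = 0.9999  # EMA of model weights, as in LION (Zeng et al., 2022)

for it in range(1, 2001):  # the paper trains for ~600,000 iterations
    x = torch.randn(256, 3)  # batch size 256, dummy data
    loss = (model(x) - x).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Update the exponential moving average of the parameters.
    with torch.no_grad():
        for p_ema, p in zip(ema_model.parameters(), model.parameters()):
            p_ema.mul_(EMA_DECAY).add_(p, alpha=1 - EMA_DECAY)

    if it % 1000 == 0:  # decay the learning rate every 1,000 iterations
        sched.step()
```

After 2,000 iterations the learning rate has been decayed twice, i.e. 2e-4 × 0.998².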