FlowMamba: Learning Point Cloud Scene Flow with Global Motion Propagation

Authors: Min Lin, Gangwei Xu, Yun Wang, Xianqi Wang, Xin Yang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the effectiveness of FlowMamba, with 21.9% and 20.5% EPE3D reduction from the best published results on FlyingThings3D and KITTI datasets.
Researcher Affiliation | Academia | Min Lin1, Gangwei Xu2, Yun Wang2, Xianqi Wang1, Xin Yang2,3* 1School of Artificial Intelligence and Automation, Huazhong University of Science & Technology 2School of EIC, Huazhong University of Science & Technology 3Hubei Key Laboratory of Smart Internet Technology, Huazhong University of Science & Technology
Pseudocode | No | The paper describes the methods using text, mathematical formulas, and block diagrams (Figures 2 and 3), but it does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it include links to a code repository.
Open Datasets | Yes | For a fair comparison, we trained FlowMamba following the previous methods (Wei et al. 2021; Fu et al. 2023; Cheng and Ko 2023) on FlyingThings3D (Mayer et al. 2016) and tested on both FlyingThings3D and KITTI (Geiger et al. 2013). FlyingThings3D is a large synthetic dataset, including 19,640 pairs of labeled training samples and 3,824 samples in the test set. We directly evaluated the model on KITTI (Geiger et al. 2013) without any fine-tuning to validate the generalization ability on the real-world KITTI dataset, which contains 200 pairs of test data.
Dataset Splits | Yes | FlyingThings3D is a large synthetic dataset, including 19,640 pairs of labeled training samples and 3,824 samples in the test set. We directly evaluated the model on KITTI (Geiger et al. 2013) without any fine-tuning to validate the generalization ability on the real-world KITTI dataset, which contains 200 pairs of test data.
Hardware Specification | Yes | All evaluations were conducted on a single RTX 3090 GPU.
Software Dependencies | No | All experiments were conducted using PyTorch (Paszke et al. 2019). The paper mentions PyTorch but does not specify a version number.
Experiment Setup | Yes | We used the AdamW (Kingma and Ba 2014; Loshchilov and Hutter 2017) optimizer with parameters β1 = 0.9 and β2 = 0.999. The learning rate was adjusted using the cosine LR strategy, starting at 1e-3. The model was trained for a total of 300 epochs, and the same evaluation metrics as in recent studies (Wu et al. 2020; Cheng and Ko 2023; Liu et al. 2024a) were employed.
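The reported optimizer and schedule settings can be sketched in PyTorch as follows. This is a minimal reconstruction, not the authors' code: the model is a placeholder (FlowMamba's architecture is not released), the weight decay is left at PyTorch's default because the paper does not state it, and the training loop body is elided.

```python
import torch

# Placeholder model; FlowMamba's actual architecture is not publicly available.
model = torch.nn.Linear(3, 3)

# AdamW with beta1 = 0.9, beta2 = 0.999 and an initial LR of 1e-3, as reported.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

# Cosine LR schedule over the reported 300 epochs.
epochs = 300
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

for epoch in range(epochs):
    # ... one training pass over FlyingThings3D would go here ...
    optimizer.step()
    scheduler.step()
```

With `CosineAnnealingLR(T_max=300)`, the learning rate decays from 1e-3 to (near) zero over the 300 epochs, matching the "cosine LR strategy, starting at 1e-3" described in the paper.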