Neural Eulerian Scene Flow Fields
Authors: Kyle Vedder, Neehar Peri, Ishan Khatri, Siyi Li, Eric Eaton, Mehmet Kocamaz, Yue Wang, Zhiding Yu, Deva Ramanan, Joachim Pehserl
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On the Argoverse 2 2024 Scene Flow Challenge, EulerFlow outperforms all prior art, surpassing the next-best unsupervised method by more than 2.5×, and even exceeding the next-best supervised method by over 10%. In order to validate EulerFlow's construction and better understand the impact of its design choices, we perform extensive experiments on the Argoverse 2 (Wilson et al., 2021) and Waymo Open (Sun et al., 2020) autonomous vehicle datasets. |
| Researcher Affiliation | Collaboration | Kyle Vedder1,2 Neehar Peri2,3 Ishan Khatri3 Siyi Li1 Eric Eaton1 Mehmet Kocamaz2 Yue Wang2 Zhiding Yu2 Deva Ramanan3 Joachim Pehserl2 1University of Pennsylvania 2NVIDIA 3Carnegie Mellon University |
| Pseudocode | No | The paper describes the methodology using mathematical equations and prose, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper mentions 'See vedder.io/eulerflow for interactive visuals' and references 'https://github.com/kylevedder/SceneFlowZoo' for variants of ZeroFlow (Vedder et al., 2024). However, there is no explicit statement or direct link providing open-source code for the EulerFlow method described in this paper. |
| Open Datasets | Yes | We perform extensive experiments on the Argoverse 2 (Wilson et al., 2021) and Waymo Open (Sun et al., 2020) autonomous vehicle datasets. (Footnote: https://www.argoverse.org/sceneflow) |
| Dataset Splits | Yes | To evaluate this, we perform a sweep of EulerFlow's network depth on the Argoverse 2 validation split (Figure 12). While EulerFlow with NSFP's default of depth 8 performs well on our Argoverse 2 evaluations (0.1% worse than the supervised state-of-the-art Flow4D), we see that performance improves as the neural prior's depth increases until depth 18 (indicating underfitting), where we start to see degradation (indicating overfitting to noise). Based on these results our Argoverse 2 2024 Scene Flow Challenge leaderboard submission uses a depth 18 neural prior (Figure 5). |
| Hardware Specification | Yes | With our implementation, optimizing EulerFlow for a single Argoverse 2 sequence takes 24 hours on one NVIDIA V100 16GB GPU. |
| Software Dependencies | No | We used PyTorch3D (Ravi et al., 2020), which has custom CUDA operations with CUDA templated support for single-neighbor differentiable KNN. (No specific version number is provided for PyTorch3D or any other key software component.) |
| Experiment Setup | Yes | In practice, we set W to 3 and α to 0.01. To showcase the flexibility of EulerFlow without hyperparameter tuning, for all quantitative experiments we run with a neural prior of depth 8 (NSFP's default depth), except for our submission to the Argoverse 2 2024 Scene Flow Challenge (Section 5.1) where, based on our depth ablation study on the val split (Section 5.2.3), we set the depth of the neural prior to 18. For efficiency, we use Euler integration with Δt set as the time between observations for our ODE solver, enabling support for arbitrary sensor frame rates, and set the cycle consistency balancing term α = 0.01 and optimization window W = 3 for all experiments. |
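The Experiment Setup row describes solving an ODE over a learned velocity field via forward Euler steps, with the step size Δt matched to the sensor's inter-frame interval. The sketch below illustrates that integration scheme in NumPy with a toy constant flow standing in for the learned neural prior; `euler_integrate` and `toy_flow` are hypothetical helpers for illustration, not the authors' code.

```python
import numpy as np

def euler_integrate(flow_field, points, t0, t1, dt):
    """Advance 3D points through a time-varying flow field f(x, t) -> velocity
    using forward Euler: x_{k+1} = x_k + dt * f(x_k, t_k).

    Hypothetical helper, not the paper's implementation. Setting dt to the
    time between observations (e.g. 0.1 s at 10 Hz) makes one Euler step
    span exactly one sensor frame, regardless of the sensor's frame rate.
    """
    t = t0
    x = points.astype(float).copy()
    while t < t1 - 1e-9:
        step = min(dt, t1 - t)          # clamp the final step to land on t1
        x = x + step * flow_field(x, t)  # one forward Euler update
        t += step
    return x

# Toy stand-in for the learned neural prior: constant 1 m/s drift along +x.
def toy_flow(x, t):
    v = np.zeros_like(x)
    v[:, 0] = 1.0
    return v

pts = np.zeros((4, 3))                               # four points at the origin
out = euler_integrate(toy_flow, pts, t0=0.0, t1=0.1, dt=0.1)  # one 10 Hz frame
```

With Δt equal to one inter-frame interval, a single step carries each point 0.1 m along +x, matching the per-frame displacement a scene flow estimate would report; a learned field would simply replace `toy_flow` with a network evaluated at (x, t).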