ReMatching Dynamic Reconstruction Flow

Authors: Sara Oblak, Despoina Paschalidou, Sanja Fidler, Matan Atzmon

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our evaluations on popular benchmarks involving both synthetic and real-world dynamic scenes demonstrate that augmenting current state-of-the-art methods with our approach leads to a clear improvement in reconstruction accuracy."
Researcher Affiliation | Collaboration | Sara Oblak (NVIDIA), Despoina Paschalidou (NVIDIA), Sanja Fidler (NVIDIA, University of Toronto, Vector Institute), Matan Atzmon (NVIDIA)
Pseudocode | Yes | Algorithm 1 (ReMatching loss). Require: solver for Eq. (5), times {t_l}. L_RM ← 0; for t ∈ {t_l} do: u_t(·) ← solve(ρ, ψ_t(·)); L_RM ← L_RM + ρ(u_t(·), ψ); end for; return L_RM.
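The loop in Algorithm 1 can be sketched in plain Python. Note this is a hypothetical illustration of the accumulation structure only: `solve` and `rho` are placeholder callables standing in for the paper's Eq. (5) solver and matching measure, not the authors' implementation.

```python
def rematching_loss(times, psi, solve, rho):
    """Accumulate the ReMatching loss over sampled times {t_l}.

    solve(psi, t) -- placeholder for solving Eq. (5) at time t
    rho(u_t, psi) -- placeholder matching penalty between the
                     solved field u_t and the deformation psi
    """
    loss = 0.0
    for t in times:
        u_t = solve(psi, t)      # u_t(.) <- solve(rho, psi_t(.))
        loss += rho(u_t, psi)    # L_RM <- L_RM + rho(u_t(.), psi)
    return loss
```

With trivial stand-ins for `solve` and `rho`, the function simply sums one penalty term per sampled time, mirroring the algorithm's `for t in {t_l}` loop.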
Open Source Code | No | The paper does not provide any explicit statement about releasing its source code, nor does it provide a link to a code repository.
Open Datasets | Yes | "Our evaluations on popular benchmarks involving both synthetic and real-world dynamic scenes..." "We evaluate the ReMatching framework on benchmarks involving synthetic and real-world video captures of deforming scenes." ... D-NeRF dataset (Pumarola et al., 2021) ... HyperNeRF real-world: the HyperNeRF dataset (Park et al., 2021b) ... Dynamic Scenes dataset (Yoon et al., 2020)
Dataset Splits | Yes | "The D-NeRF dataset (Pumarola et al., 2021) comprises 8 scenes, each consisting of 100 to 200 frames, hence providing dense multi-view coverage of the scene. We follow D-NeRF's evaluation protocol and use the same train/validation/test split at 800×800 image resolution with a black background." ... "We follow the evaluation protocol provided with the dataset, and use the same train/test split." ... approximately 80–180 frames for training and an additional 20 frames reserved for testing.
Hardware Specification | Yes | "The runtime analysis was conducted on a single NVIDIA RTX A6000."
Software Dependencies | No | The paper mentions software components like the Adam optimizer, a multilayer perceptron (MLP), a Gaussian Splatting image model, and automatic differentiation, but does not provide specific version numbers for any libraries or frameworks such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | "Training is done for 40K iterations, where for the first 3K iterations, only {µ_i, Σ_i, c_i, α_i}_{i=1}^{n} are optimized." ... "We initialize the model using n = 100K 3D Gaussians." ... "We set d_mean-emb = 63, d_time-emb = 13 and d_τ = 30." ... "For optimization we use an Adam optimizer with different learning rates for the network components..." "We set the ReMatching loss weight λ = 0.001. When supplementing the ReMatching loss with an additional entropy loss, we use 0.0001 as the entropy loss weight."
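The reported loss weighting can be sketched as follows. Only the two weights (λ = 0.001 for the ReMatching term, 0.0001 for the entropy term) come from the paper; the composition of the total loss with an image reconstruction term is an assumption for illustration.

```python
# Weights quoted from the paper's experiment setup.
LAMBDA_RM = 0.001        # ReMatching loss weight (lambda)
LAMBDA_ENTROPY = 0.0001  # entropy loss weight

def total_loss(l_image, l_rm, l_entropy=0.0):
    """Assumed composition: image loss plus weighted regularizers.

    l_image   -- reconstruction loss of the Gaussian Splatting image model
    l_rm      -- ReMatching loss (Algorithm 1)
    l_entropy -- optional supplementary entropy loss
    """
    return l_image + LAMBDA_RM * l_rm + LAMBDA_ENTROPY * l_entropy
```

For example, with an image loss of 1.0, a ReMatching loss of 10.0, and an entropy loss of 100.0, the weighted total is 1.0 + 0.01 + 0.01 = 1.02.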