VRVVC: Variable-Rate NeRF-Based Volumetric Video Compression

Authors: Qiang Hu, Houqiang Zhong, Zihan Zheng, Xiaoyun Zhang, Zhengxue Cheng, Li Song, Guangtao Zhai, Yanfeng Wang

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments demonstrate that VRVVC achieves a wide range of variable bitrates within a single model and surpasses the RD performance of existing methods across various datasets. We demonstrate the effectiveness of VRVVC through qualitative and quantitative comparisons with state-of-the-art methods: K-Planes (Fridovich-Keil et al. 2023), ReRF (Wang et al. 2023), TeTriRF (Wu et al. 2024), and JointRF (Zheng et al. 2024b). We perform three ablation studies on residual dynamic modeling, progressive training, and joint optimization by disabling each component individually.
Researcher Affiliation Academia Qiang Hu1*, Houqiang Zhong2*, Zihan Zheng1, Xiaoyun Zhang1, Zhengxue Cheng2, Li Song2, Guangtao Zhai2, Yanfeng Wang3 — 1Cooperative Medianet Innovation Center, Shanghai Jiao Tong University; 2School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University; 3School of Artificial Intelligence, Shanghai Jiao Tong University
Pseudocode No The paper describes methods using text and mathematical equations, but it does not contain any explicitly labeled pseudocode blocks or algorithms.
Open Source Code No The paper does not contain an explicit statement about the release of source code, nor does it provide a link to a code repository or mention code in supplementary materials.
Open Datasets Yes We validate our method on two datasets: ReRF (Wang et al. 2023) and DNA-Rendering (Cheng et al. 2023)
Dataset Splits Yes We validate our method on two datasets: ReRF (Wang et al. 2023) and DNA-Rendering (Cheng et al. 2023), using 2 views for testing and the rest for training.
Hardware Specification Yes Our experimental setup includes an Intel E5-2699 v4 and a V100 GPU.
Software Dependencies No The paper describes the proposed framework and methodology but does not specify any particular software libraries, frameworks, or their version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup Yes Our experimental setup includes an Intel E5-2699 v4 and a V100 GPU. We train for 40,000 iterations, with each GoF lasting 30 frames. The Lagrange multipliers Λ are initialized as {0.0018, 0.0035, 0.0067, 0.0130, 0.025, 0.0483, 0.0932, 0.18}, and the quantization parameters A are set to {1.0000, 1.3944, 1.9293, 2.6874, 3.7268, 5.1801, 7.1957, 10.0}. The weights γ1 and γ2 are set to 0.0001 and 0.001, respectively.
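The setup above pairs each Lagrange multiplier in Λ with a quantization parameter in A to train one variable-rate model. A minimal sketch of sampling a rate level and forming the rate-distortion objective is below; only the Λ, A, γ1, and γ2 values are quoted from the paper, while the loss form λ·D + R (a common learned-compression convention) and all function and variable names are assumptions, not the authors' implementation.

```python
import random

# Values quoted from the paper's experiment setup.
LAMBDAS = [0.0018, 0.0035, 0.0067, 0.0130, 0.025, 0.0483, 0.0932, 0.18]
QUANT_PARAMS = [1.0000, 1.3944, 1.9293, 2.6874, 3.7268, 5.1801, 7.1957, 10.0]
GAMMA1, GAMMA2 = 0.0001, 0.001  # regularization weights gamma1, gamma2

def sample_level(rng=random):
    """Pick one of the 8 (lambda, quantization-parameter) pairs per iteration."""
    return rng.randrange(len(LAMBDAS))

def rd_loss(distortion, rate, reg1, reg2, level):
    """Rate-distortion objective for one sampled rate level.

    Assumes the form lam * D + R plus two weighted regularizers;
    the paper's exact loss formulation may differ.
    """
    lam = LAMBDAS[level]
    return lam * distortion + rate + GAMMA1 * reg1 + GAMMA2 * reg2

# Example: level 0 trains with lambda = 0.0018 and quant param 1.0.
loss = rd_loss(distortion=0.5, rate=2.0, reg1=0.1, reg2=0.05, level=0)
```

Sampling a different level at each iteration is one standard way a single model learns to cover the full bitrate range spanned by Λ.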