MaintaAvatar: A Maintainable Avatar Based on Neural Radiance Fields by Continual Learning

Authors: Shengbo Gu, Yu-Kun Qiu, Yu-Ming Tang, Ancong Wu, Wei-Shi Zheng

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results validate the effectiveness of our MaintaAvatar model. Experimental results on two datasets demonstrate the effectiveness of our model, achieving state-of-the-art performance.
Researcher Affiliation | Academia | Shengbo Gu1,3, Yu-Kun Qiu1,3, Yu-Ming Tang1,3, Ancong Wu1,3*, Wei-Shi Zheng1,2,3*; 1School of Computer Science and Engineering, Sun Yat-sen University, China; 2Peng Cheng Laboratory, Shenzhen, China; 3Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China; EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes its methods and processes in prose but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper contains no explicit statement about releasing source code and provides no link to a code repository.
Open Datasets | Yes | Our model is evaluated on the ZJU-MoCap (Peng et al. 2021) and THuman2.0 (Yu et al. 2021) datasets.
Dataset Splits | Yes | For ZJU-MoCap (Peng et al. 2021), ... This dataset includes one camera assigned for training and the other 22 cameras for evaluation. For each task, we choose only five images with different viewpoints ... and different poses for training. ... For THuman2.0 (Yu et al. 2021), ... we render images from four viewpoints (0, 90, 180, 270 degrees) for each pose as the training set and render images from nine viewpoints (0, 40, 80, ..., 280, 320 degrees) for evaluation.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions 'PyTorch3D' and 'Adam' but does not specify version numbers for these or any other software components.
Experiment Setup | Yes | The random seed is set to 42. The networks MLP_o and MLP_p have 8 and 4 layers, respectively. ... We set the learning rates for MLP_o and both ℓ_c and ℓ_g to 5 * 10^-4, and the rest to 5 * 10^-5. Adam is adopted as the optimizer. For the current task, we sample 6 patches of 32 * 32 size, whereas for past tasks, we sample one patch of 64 * 64 size. 128 points are sampled from each ray. In the ZJU-MoCap dataset ..., each task is trained for 12,000 iterations, with the pose distillation loss ... inactive for the first 10,000 iterations and activated for the final 2,000. In contrast, in the THuman2.0 dataset ..., tasks undergo 80,000 iterations, divided into two phases: the pose distillation loss remains inactive for the initial 70,000 iterations and becomes active for the last 10,000. ... λ1 = 0.2 and λ_p is defined by the following formula: ... For λ_β, if t < t_max - t_0, we set it to 0; otherwise, we set it to 800.
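The viewpoint split and two-phase loss schedule reported above can be made concrete in a few lines. The sketch below is an illustration only, not the authors' (unreleased) implementation: the constant and function names are assumptions, while the viewpoint angles, iteration counts, and phase boundaries are taken directly from the table.

```python
# Hedged sketch of the reported THuman2.0 split and the two-phase training
# schedule; all names here are illustrative, only the numbers come from the
# paper as quoted above.

# THuman2.0: 4 training viewpoints and 9 evaluation viewpoints per pose.
TRAIN_VIEWS = [0, 90, 180, 270]          # degrees
EVAL_VIEWS = list(range(0, 321, 40))     # 0, 40, 80, ..., 280, 320 degrees

# Per-task iteration budget and the iteration at which the pose
# distillation loss switches on (inactive before, active after).
SCHEDULES = {
    "ZJU-MoCap": {"total_iters": 12_000, "distill_start": 10_000},
    "THuman2.0": {"total_iters": 80_000, "distill_start": 70_000},
}

def pose_distillation_active(iteration: int, dataset: str) -> bool:
    """Return True once training enters the final distillation phase."""
    return iteration >= SCHEDULES[dataset]["distill_start"]
```

Under this reading, the distillation loss is active for the last 2,000 of 12,000 iterations on ZJU-MoCap and the last 10,000 of 80,000 on THuman2.0, matching the quoted setup.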