Large Displacement Motion Transfer with Unsupervised Anytime Interpolation

Authors: Guixiang Wang, Jianjun Li

ICML 2025

Reproducibility Variable: Research Type
Result: Experimental
LLM Response: "Finally, experiments show that, compared with the large displacement motion between the source and driving images, the small displacement motion between the interpolated and driving images makes motion transfer easier to realize. Compared with existing state-of-the-art methods, our method significantly improves motion-related metrics."

Reproducibility Variable: Researcher Affiliation
Result: Academia
LLM Response: "(1) School of Computer Science and Engineering, Hangzhou Dianzi University, Zhejiang, China; (2) School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China."

Reproducibility Variable: Pseudocode
Result: No
LLM Response: The paper describes the methodology and model architecture through text and diagrams (Figures 1 and 2) but does not include any explicitly labeled pseudocode or algorithm blocks.

Reproducibility Variable: Open Source Code
Result: No
LLM Response: The paper contains no explicit statement about releasing the code as open source, nor does it provide a link to a code repository.

Reproducibility Variable: Open Datasets
Result: Yes
LLM Response: "Datasets: We trained on multiple types of datasets, including faces and human bodies. The datasets are as follows: UvA-NEMO (Dibeklioglu et al., 2012; 2015), Fashion (Zablotskaia et al., 2019), TaiChiHD (Siarohin et al., 2019b), and TED Talks (Siarohin et al., 2021)."

Reproducibility Variable: Dataset Splits
Result: No
LLM Response: The paper states: "In the video reconstruction task with the same identity, the first frame D1 of the video is used as a source image to reconstruct {D_t}, t = 1, ..., n," but it does not specify explicit training, validation, or test splits for the datasets used.

Reproducibility Variable: Hardware Specification
Result: Yes
LLM Response: "The method is implemented on PyTorch (Paszke et al., 2019); all experiments are conducted on an NVIDIA 4090 GPU with a resolution of 256×256 for all datasets and 100 epochs of training."

Reproducibility Variable: Software Dependencies
Result: No
LLM Response: While PyTorch (Paszke et al., 2019) is mentioned as the implementation framework, no specific version number is provided, nor are other key libraries listed with their versions.

Reproducibility Variable: Experiment Setup
Result: Yes
LLM Response: "We use the Adam optimizer (Kingma & Ba, 2015) to update our model and set the learning rate to 0.0001, dropped by a factor of 10 at the end of the 70th epoch and the 90th epoch. We set the training hyperparameters to λ_r = 10, λ_f = 10, λ_a = 10, and λ_s = 10."
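The optimizer settings quoted under Experiment Setup can be sketched in PyTorch. This is a minimal sketch only: the model below is a placeholder for the paper's motion-transfer network (which is not public), and the loss terms are indicated in a comment rather than implemented.

```python
import torch

# Placeholder model standing in for the paper's motion-transfer network.
model = torch.nn.Linear(16, 16)

# Adam with lr 1e-4 (Kingma & Ba, 2015), dropped by a factor of 10
# at the end of the 70th and 90th epochs, as reported in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[70, 90], gamma=0.1
)

# Loss weights reported in the paper.
lambda_r = lambda_f = lambda_a = lambda_s = 10.0

for epoch in range(100):
    # ... per-epoch training steps would go here, minimizing
    #     lambda_r*L_r + lambda_f*L_f + lambda_a*L_a + lambda_s*L_s
    scheduler.step()
```

After the full 100-epoch schedule, the learning rate has decayed twice (1e-4 → 1e-5 → 1e-6), matching the reported schedule.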