Thin-Plate Spline-based Interpolation for Animation Line Inbetweening

Authors: Tianyi Zhu, Wei Shang, Dongwei Ren

AAAI 2025

Research Type Experimental Extensive experiments are conducted on benchmark datasets, and our method is compared against state-of-the-art techniques in video interpolation (Huang et al. 2022; Zhang et al. 2023; Li et al. 2023), animation interpolation (Chen and Zwicker 2022), and line inbetweening (Siyao et al. 2023). The evaluation metrics include CD and WCD scores, as well as introducing Earth Mover s Distance (EMD) and user study into consideration. Our approach outperforms existing methods by producing high-quality interpolation results with enhanced fluidity for all three interpolation gaps, i.e., 1, 5, and 9.
Researcher Affiliation Academia Faculty of Computing, Harbin Institute of Technology; Tianjin Key Lab of Machine Learning, College of Intelligence and Computing, Tianjin University
Pseudocode No The paper describes the methodology using textual explanations and mathematical equations, but it does not include a clearly labeled pseudocode block or algorithm.
Open Source Code Yes Code https://github.com/Tian-one/tps-inbetween
Open Datasets Yes Training and testing were conducted on the MixamoLine240 dataset (Siyao et al. 2023), a line art dataset with ground-truth geometrization and vertex matching labels.
Dataset Splits No We set the frame gap N = 5 during training, and tested on the test set with gaps N = 1, 5, 9 respectively. The paper mentions a 'test set' but does not provide specific percentages or counts for a full train/validation/test split of the MixamoLine240 dataset itself.
Hardware Specification Yes The training and testing were performed on an NVIDIA RTX A6000 GPU.
Software Dependencies No We implemented our model in PyTorch (Paszke et al. 2019). We apply GlueStick (Pautrat et al. 2023) as our keypoint matching model. The paper mentions PyTorch and GlueStick but does not provide version numbers for either.
Experiment Setup Yes We set the frame gap N = 5 during training, and tested on the test set with gaps N = 1, 5, 9 respectively. Our model was trained at a resolution of 512 × 512 and tested at the original resolution of 720 × 720. We employed the Adam (Kingma and Ba 2015) optimizer with β1 = 0.9 and β2 = 0.999 at a learning rate of 1 × 10^-4 for 50 epochs. The hyperparameters λ_lpips, λ_cnt, λ_bi and η were set to 5, 5, 1 × 10^-3 and 0.9, respectively.
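The evaluation above relies on Chamfer Distance (CD) between line-art point sets. As a reference point, here is a minimal pure-Python sketch of the common symmetric mean-of-nearest-neighbor formulation; this is an assumption about the exact variant, and the paper's Weighted CD (WCD) adds a weighting term not shown here:

```python
import math

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between two 2D point sets.

    Assumed formulation (a common one; the paper may use a variant):
    CD(A, B) = mean_{x in A} min_{y in B} ||x - y||
             + mean_{y in B} min_{x in A} ||x - y||
    """
    def one_way(src, dst):
        # Average distance from each source point to its nearest target point.
        return sum(min(math.dist(x, y) for y in dst) for x in src) / len(src)
    return one_way(a, b) + one_way(b, a)
```

Identical point sets score 0, and the metric is symmetric by construction; e.g. `chamfer_distance([(0, 0)], [(3, 4)])` sums the 5.0 nearest-neighbor distance in each direction.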
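The reported optimizer settings (Adam with β1 = 0.9, β2 = 0.999, learning rate 1 × 10^-4) can be illustrated with a single scalar Adam update, following the standard Kingma and Ba (2015) rule. This is a sketch for context, not the authors' training code, which in practice would use `torch.optim.Adam` over model parameters:

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter (defaults match the paper).

    m, v are the running first/second moment estimates; t is the 1-based
    step count used for bias correction.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```

On the first step the bias correction makes the effective update roughly `lr * sign(grad)`, so with lr = 1e-4 a unit gradient moves the parameter by about 1e-4.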