Spectral Motion Alignment for Video Motion Transfer Using Diffusion Models

Authors: Geon Yeong Park, Hyeonho Jeong, Sang Wan Lee, Jong Chul Ye

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments demonstrate SMA's efficacy in improving motion transfer while maintaining computational efficiency and compatibility across various video customization frameworks."
Researcher Affiliation | Academia | Korea Advanced Institute of Science and Technology (KAIST) EMAIL
Pseudocode | Yes | Pseudo-code is provided in the appendix.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | "we curated a dataset comprising 30 text-video pairs sourced from the publicly available DAVIS (Pont-Tuset et al. 2017) and WebVid-10M (Bain et al. 2021) collections."
Dataset Splits | No | The paper describes dataset characteristics (video lengths ranging between 8 and 16 frames) but does not provide specific training/validation/test splits or their proportions.
Hardware Specification | No | The paper mentions "15GB VRAM" in the context of the efficiency of one configuration (VMC with SMA) but does not provide specific hardware details, such as GPU/CPU models or memory capacity, used for the experiments in general.
Software Dependencies | No | The paper mentions various frameworks and models, including diffusion models, VDMs, the Show-1 video model, Zeroscope, Stable Diffusion v1-5, and ControlNet-Depth, but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | No | The paper states: "The resolution for all produced videos is standardized to 512x512." This provides one specific detail, but it lacks other crucial hyperparameters, such as learning rate, batch size, optimizer settings, or number of epochs, needed for a comprehensive experimental setup.