DynASyn: Multi-Subject Personalization Enabling Dynamic Action Synthesis
Authors: Yongjin Choi, Chanhun Park, Seung Jun Baek
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that DynASyn is capable of synthesizing highly realistic images of subjects with novel contexts and dynamic interactions with the surroundings, and outperforms baseline methods in both quantitative and qualitative aspects. |
| Researcher Affiliation | Academia | Yongjin Choi, Chanhun Park, Seung Jun Baek* Department of Computer Science, Korea University, Seoul, Korea |
| Pseudocode | No | The paper describes the methodology in prose and does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the public availability of its source code. |
| Open Datasets | Yes | Dataset. We train and evaluate on a total of 15 image-text pairing datasets, including 7 individual datasets from Break-a-Scene, 2 datasets from the COCO benchmark (Lin et al. 2014), 2 datasets from Zhang et al. (Zhang et al. 2024), and 4 additionally collected proprietary datasets. |
| Dataset Splits | No | The paper mentions using various datasets but does not provide specific details on how these datasets were split into training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using GPT-4, Stable Diffusion v2.1, and SAM but does not provide specific version numbers for underlying programming languages or libraries (e.g., Python, PyTorch, TensorFlow). |
| Experiment Setup | No | Detailed settings of hyperparameters are provided in the Supplementary Materials. |