DiffGrasp: Whole-Body Grasping Synthesis Guided by Object Motion Using a Diffusion Model

Authors: Yonghao Zhang, Qiang He, Yanguang Wan, Yinda Zhang, Xiaoming Deng, Cuixia Ma, Hongan Wang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate that our approach outperforms the state-of-the-art method and generates plausible results."
Researcher Affiliation | Collaboration | Yonghao Zhang (1,2)*, Qiang He (1,2)*, Yanguang Wan (1,2), Yinda Zhang (3), Xiaoming Deng (1,2), Cuixia Ma (1,2), Hongan Wang (1,2); (1) Institute of Software, Chinese Academy of Sciences; (2) University of Chinese Academy of Sciences; (3) Google
Pseudocode | No | The paper describes the methodology in text and mathematical equations but includes no clearly labeled pseudocode block or algorithm section.
Open Source Code | Yes | Project page: https://iscas3dv.github.io/DiffGrasp/
Open Datasets | Yes | "We use GRAB (Taheri et al. 2020) and ARCTIC (Fan et al. 2023) to conduct our experiment"; both datasets provide full-body hand-object interaction mesh sequences.
Dataset Splits | Yes | The authors follow the conventional split, with 8 subjects used for training and 2 subjects for testing.
Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., library or solver names and versions) needed to replicate the experiments.
Experiment Setup | No | The paper mentions several loss terms and balancing weights (e.g., λdiff, λrecon, λinter, λwrist, λho) but provides neither their numerical values nor other concrete hyperparameters (such as learning rate, batch size, or number of epochs) in the main text.