BodyGen: Advancing Towards Efficient Embodiment Co-Design

Authors: Haofei Lu, Zhe Wu, Junliang Xing, Jianshu Li, Ruoyu Li, Zhe Li, Yuanchun Shi

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments across various tasks demonstrate BodyGen's advantages against previous methods in terms of both convergence speed and performance. BodyGen achieves an average performance improvement of 60.03% against the state-of-the-art baselines. ... 5 EXPERIMENTAL EVALUATIONS ... Environments. We conduct a comprehensive evaluation of BodyGen with baselines in ten challenging co-design environments ... 5.2 ABLATION STUDIES
Researcher Affiliation | Collaboration | 1Department of Computer Science and Technology, Tsinghua University; 2Ant Group
Pseudocode | Yes | Algorithm 1 illustrates the overall training process of BodyGen, which is based on PPO for efficient reinforcement learning.
Open Source Code | Yes | We provide codes and more results on the website: https://genesisorigin.github.io. ... Our code is available in our supplementary material for reproduction and further study. Visit our website for videos and more additional visualizations.
Open Datasets | No | The paper mentions using
Dataset Splits | No | The paper describes experiments in ten challenging co-design environments but does not mention specific training/test/validation splits for datasets. The environments are generated dynamically as part of the reinforcement learning process rather than being pre-existing datasets with defined splits.
Hardware Specification | Yes | Each model is trained using four random seeds on a system equipped with 112 Intel Xeon Platinum 8280 cores and six Nvidia RTX 3090 GPUs. Our main code framework is based on Python 3.9.18 and PyTorch 2.0.1. For all the environments used in our work, it takes approximately only 30 hours to train a model with 20 CPU cores and a single NVIDIA RTX 3090 GPU on our server.
Software Dependencies | Yes | Our main code framework is based on Python 3.9.18 and PyTorch 2.0.1.
Experiment Setup | Yes | Table 3 displays the hyperparameters BodyGen adopted across all experiments. ... For BodyGen, we ran a grid search over MoSAT layer normalization {w/o-LN, Pre-LN, Post-LN}, policy network learning rate {5e-5, 1e-4, 3e-4}, value network learning rate {1e-4, 3e-4}, and MoSAT hidden dimension {32, 64, 128, 256}. ... Table 3: Hyperparameters of BodyGen adopted in all the experiments (listing specific values for Optimizer, Learning Rates, Batch Sizes, etc.)
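The table notes that BodyGen's training (Algorithm 1 in the paper) is built on PPO. As a point of reference, a minimal sketch of the standard PPO clipped surrogate objective is shown below; this is the generic formulation from Schulman et al. (2017), not BodyGen's actual implementation, and the function name and signature are illustrative only.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate objective used by PPO.

    ratio:     pi_new(a|s) / pi_old(a|s) per sampled action
    advantage: estimated advantage per sampled action
    Returns the negated objective (a loss to minimize).
    """
    unclipped = ratio * advantage
    # Clipping removes the incentive to push the ratio outside [1-eps, 1+eps].
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.minimum(unclipped, clipped).mean()

# A ratio far outside the clip range is capped at 1 + eps = 1.2:
loss = ppo_clip_loss(np.array([2.0]), np.array([1.0]))  # → -1.2
```

Taking the element-wise minimum of the clipped and unclipped terms makes the objective a pessimistic bound, which is what keeps PPO updates conservative.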
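The grid search described above can be enumerated directly from the reported search space. The sketch below is a hypothetical reconstruction: the grid values come from the quoted passage, but the dictionary keys and helper function are assumptions, not names from the BodyGen codebase.

```python
import itertools

# Search space as quoted from the paper's experiment setup.
grid = {
    "mosat_layer_norm": ["w/o-LN", "Pre-LN", "Post-LN"],
    "policy_lr": [5e-5, 1e-4, 3e-4],
    "value_lr": [1e-4, 3e-4],
    "mosat_hidden_dim": [32, 64, 128, 256],
}

def grid_configs(grid):
    """Yield every configuration in the Cartesian product of the grids."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(grid_configs(grid))
# 3 * 3 * 2 * 4 = 72 candidate configurations in total.
```

Enumerating the product makes the search budget explicit: 72 configurations, each presumably trained with the four random seeds mentioned in the hardware row.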