Geometry-aware RL for Manipulation of Varying Shapes and Deformable Objects
Authors: Tai Hoang, Huy Le, Philipp Becker, Vien A Ngo, Gerhard Neumann
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate that the proposed Heterogeneous Equivariant Policy (HEPi) outperforms both Transformer-based and pure EMPN baselines, particularly in complex 3D manipulation tasks. HEPi's integration of equivariance and explicit heterogeneity modelling improves performance in terms of average returns, sample efficiency, and generalization to unseen objects. |
| Researcher Affiliation | Collaboration | Tai Hoang¹, Huy Le¹,², Philipp Becker¹, Ngo Anh Vien², Gerhard Neumann¹ — ¹Autonomous Learning Robots, Karlsruhe Institute of Technology; ²Bosch Center for Artificial Intelligence |
| Pseudocode | No | The paper describes the methodology using mathematical equations and prose. It does not include a distinct section labeled "Pseudocode" or "Algorithm", nor does it present structured code-like blocks. |
| Open Source Code | No | Our project page is available here. |
| Open Datasets | Yes | We then introduce a novel task, Rope-Shaping, which increases complexity by requiring the rope to form a specific shape (a W from the LASA dataset (Khansari-Zadeh & Billard, 2011)) to a desired orientation. |
| Dataset Splits | Yes | Finally, we evaluate the generalization of these models to unseen objects on two rigid tasks: rigid-sliding and rigid-insertion. Both tasks are trained on subsets of objects one (plus), two (plus, star), and three (plus, star, pentagon) and tested on the remaining objects. |
| Hardware Specification | Yes | All experiments were conducted on a machine equipped with an NVIDIA A100 or an NVIDIA H100 GPU. |
| Software Dependencies | No | We utilized the TorchRL framework (Bou et al., 2023) for the implementation of the PPO and TRPL algorithms, and PyG (PyTorch Geometric) (Fey & Lenssen, 2019) for handling the graph-based structure. The Transformer architecture was implemented using the torch.nn.TransformerEncoder and torch.nn.TransformerEncoderLayer modules from PyTorch (Paszke et al., 2017). |
| Experiment Setup | Yes | We present the hyperparameters used across all policy models (HEPi, EMPN, and Transformer) for all the tasks in Table 4. Table 5: Hyperparameters for Rigid Environments; Table 6: Hyperparameters for Deformable Environments. |
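The software-dependencies row above names the PyTorch modules used for the Transformer baseline. As a minimal sketch of how such an encoder is typically assembled with those APIs (the hyperparameters `d_model`, `nhead`, and `num_layers` here are illustrative assumptions, not the paper's reported values):

```python
import torch
import torch.nn as nn

# Illustrative hyperparameters (assumptions, not taken from the paper).
d_model, nhead, num_layers = 64, 4, 2

# Build a stack of self-attention layers via the modules the report cites:
# torch.nn.TransformerEncoderLayer and torch.nn.TransformerEncoder.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=nhead, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

# One batch of 8 tokens (e.g. per-node features), each with d_model features.
tokens = torch.randn(1, 8, d_model)
out = encoder(tokens)
print(out.shape)  # torch.Size([1, 8, 64])
```

The encoder preserves the input shape, so per-token outputs can be decoded into per-node actions downstream.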