Learning Geometric Reasoning Networks For Robot Task And Motion Planning

Authors: Smail Ait Bouhsain, Rachid Alami, Thierry Siméon

ICLR 2025

Reproducibility Variable — Result — LLM Response
Research Type — Experimental — "Through extensive experimental results, we show that our model outperforms state-of-the-art methods, while maintaining generalizability to more complex environments, diverse object shapes, multi-robot settings, and real-world robots."
Researcher Affiliation — Academia — "Smail Ait Bouhsain, Rachid Alami & Thierry Siméon, Laboratory for Analysis and Architecture of Systems (LAAS), National Center for Scientific Research (CNRS), Toulouse, France. EMAIL"
Pseudocode — Yes — "Algorithm 1 shows the pseudo-code for the GRN-based planner. Given the GRN predictions on the initial state of the environment, a feasibility threshold and a grasp obstructions threshold, it recursively builds a task plan by moving objects when it is feasible, or trying to free access to grasp types based on grasp obstruction and IK feasibility information otherwise."
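The recursion described in the excerpt can be sketched in plain Python. This is a hedged illustration, not the paper's Algorithm 1: the prediction format, function names (`plan_object`, `build_plan`), and threshold defaults are all assumptions; the real planner consumes GRN outputs for IK feasibility, action feasibility, and grasp obstructions.

```python
def plan_object(obj, predictions, visited, steps, feas_th=0.5, obst_th=0.5):
    """If `obj` is predicted infeasible to grasp, first (recursively) move the
    objects predicted to obstruct its grasps, then move `obj` itself."""
    if obj in visited:
        return
    visited.add(obj)
    info = predictions[obj]
    if info["feasibility"] < feas_th:
        # Free access to a grasp by relocating its predicted obstructors first.
        for blocker, score in info["obstructions"].items():
            if score >= obst_th:
                plan_object(blocker, predictions, visited, steps, feas_th, obst_th)
    steps.append(("move", obj))


def build_plan(predictions, feas_th=0.5, obst_th=0.5):
    """Build a task plan over all objects from one set of GRN-style predictions."""
    visited, steps = set(), []
    for obj in predictions:
        plan_object(obj, predictions, visited, steps, feas_th, obst_th)
    return steps
```

For example, a plate whose grasp is obstructed by a cup yields a plan that moves the cup before the plate.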
Open Source Code — Yes — "Our code is available at: https://github.com/Smail8/geometric_reasoning_networks.git"
Open Datasets — No — "Our model is trained and evaluated on fully synthetic data. We generate a number of datasets following the method described in Appendix B."
Dataset Splits — Yes — "The Panda-3D-4 dataset consists of a training set containing 70 000 scenes, a validation set of 10 000 scenes, and a test set of 20 000 scenes, each one generated using a different random seed to ensure that the environments are different across all three sets. The Panda-Tabletop-4 and PR2-3D-4 are each composed of 25 000 training scenes, 5 000 validation scenes and 10 000 test scenes."
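The reported split sizes can be recorded and sanity-checked in a few lines. The `SPLITS` table below holds the scene counts quoted above; the `split_fractions` helper is an illustrative addition, not part of the paper's code.

```python
# Scene counts per split, as reported in the paper.
SPLITS = {
    "Panda-3D-4":       {"train": 70_000, "val": 10_000, "test": 20_000},
    "Panda-Tabletop-4": {"train": 25_000, "val": 5_000,  "test": 10_000},
    "PR2-3D-4":         {"train": 25_000, "val": 5_000,  "test": 10_000},
}


def split_fractions(name):
    """Return each split's share of the dataset, e.g. the familiar 70/10/20."""
    sizes = SPLITS[name]
    total = sum(sizes.values())
    return {split: count / total for split, count in sizes.items()}
```

Panda-3D-4 follows a 70/10/20 proportion; the two smaller datasets follow 62.5/12.5/25.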
Hardware Specification — Yes — "The model is trained on an Intel(R) Xeon(R) W-2223 CPU @ 3.60GHz workstation, with an NVIDIA RTX A5000 GPU."
Software Dependencies — No — "The three modules are implemented in PyTorch Geometric (Fey & Lenssen, 2019) and trained using the Adam optimizer (Kingma, 2014)."
Experiment Setup — Yes — "During the pre-training stage, each module is trained for 100 epochs. We use a batch size of 8192 and a learning rate of 0.001 for the IK feasibility classifier and the GO estimator. For the AGF classifier, we set the batch size to 2048 and the learning rate to 0.0001. During the fine-tuning stage, the complete GRN model is trained for 100 epochs with a batch size of 2048 and a learning rate of 0.0001."
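The hyperparameters quoted above can be collected into one configuration table. This is a minimal sketch assuming a two-stage layout; the module keys (`ik_feasibility`, `go_estimator`, `agf_classifier`, `grn_full`) and the `get_config` helper are hypothetical names, while the epoch, batch-size, and learning-rate values come from the excerpt.

```python
# Pre-training: 100 epochs per module; IK feasibility and GO estimator share
# one setting, the AGF classifier uses a smaller batch and learning rate.
PRETRAIN = {
    "ik_feasibility": {"epochs": 100, "batch_size": 8192, "lr": 1e-3},
    "go_estimator":   {"epochs": 100, "batch_size": 8192, "lr": 1e-3},
    "agf_classifier": {"epochs": 100, "batch_size": 2048, "lr": 1e-4},
}

# Fine-tuning: the complete GRN model, trained end to end.
FINETUNE = {
    "grn_full": {"epochs": 100, "batch_size": 2048, "lr": 1e-4},
}


def get_config(stage, module):
    """Look up the training configuration for a (stage, module) pair."""
    table = PRETRAIN if stage == "pretrain" else FINETUNE
    return table[module]
```

In a real training script these values would be passed to `torch.optim.Adam` (the optimizer the paper names) and a PyTorch Geometric data loader.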