TeLoGraF: Temporal Logic Planning via Graph-encoded Flow Matching

Authors: Yue Meng, Chuchu Fan

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments in five simulation environments, ranging from simple dynamical models in 2D space to the high-dimensional 7-DoF Franka Panda robot arm and Ant quadruped navigation. Results show that our method outperforms other baselines in the STL satisfaction rate.
Researcher Affiliation | Academia | Department of Aeronautics and Astronautics, MIT, Cambridge, USA. Correspondence to: Yue Meng <EMAIL>.
Pseudocode | No | The paper describes its methodology in text and equations but does not present any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/mengyuest/TeLoGraF.
Open Datasets | Yes | All the code and the datasets will be open-sourced to promote the development of STL planning.
Dataset Splits | Yes | We use 80% for training and 20% for validation.
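The 80/20 split quoted above can be sketched with PyTorch's built-in utilities. The dataset contents, size, and fixed seed below are illustrative assumptions, not details from the paper:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder dataset standing in for the paper's trajectory data.
data = TensorDataset(torch.randn(1000, 4))

# 80% training / 20% validation, as stated in the review.
n_train = int(0.8 * len(data))
train_set, val_set = random_split(
    data,
    [n_train, len(data) - n_train],
    generator=torch.Generator().manual_seed(0),  # illustrative fixed seed
)
```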
Hardware Specification | Yes | We use Nvidia L40S GPUs for the training, where each training job takes 6-24 hours on a single GPU.
Software Dependencies | No | The learning pipeline is implemented in PyTorch Geometric (Fey & Lenssen, 2019; Paszke et al., 2019). The loss function is constructed as: L = -min(ρ(τ, 0, ϕ), 0.5) + c1 · (1/T) Σ_{t=0}^{T-1} [max(u_t^2 - 1, 0) + max(v_t^2 - 1, 0)] + c2 · (1/T) Σ_{t=0}^{T-1} (u_t^2 + v_t^2) (7). The first loss term maximizes the truncated robustness score ρ for the trajectory τ = (x_0, u_0, ..., u_{T-1}, x_T) to ensure satisfaction of the STL formula ϕ. We use the pytorch-kinematics (Zhong et al., 2024) library to leverage PyTorch and GPU devices to compute the forward kinematics in a parallelized and efficient way. While software packages like PyTorch, PyTorch Geometric, and pytorch-kinematics are mentioned, specific version numbers for these dependencies are not provided.
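A minimal PyTorch sketch of the quoted objective, assuming unit control bounds |u_t|, |v_t| ≤ 1; the weights c1, c2, the function name, and the argument shapes are illustrative assumptions (the excerpt does not give the weight values):

```python
import torch

def stl_planning_loss(rho, u, v, c1=0.1, c2=0.001):
    """Sketch of the Eq. (7) objective.

    rho: scalar STL robustness of the trajectory tau.
    u, v: (T,) control sequences. c1, c2: illustrative weights.
    """
    # Maximize the robustness, truncated at 0.5 so already-satisfied
    # trajectories stop contributing gradient.
    l_stl = -torch.clamp(rho, max=0.5)
    # Penalize controls that exceed the assumed unit bounds.
    l_bound = (torch.clamp(u**2 - 1, min=0) + torch.clamp(v**2 - 1, min=0)).mean()
    # Small regularizer on control effort.
    l_reg = (u**2 + v**2).mean()
    return l_stl + c1 * l_bound + c2 * l_reg
```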
Experiment Setup | Yes | The training is conducted for 1000 epochs with a batch size of 256. We use the commonly used Adam (Kingma, 2014) optimizer with an initial learning rate of 5×10^-4 and a cosine annealing schedule that reduces the learning rate to 5×10^-5 at the 900th epoch, then keeps it constant for the remaining 100 epochs. In the flow matching, we set the number of flow steps to Ns = 100.
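The quoted schedule maps directly onto PyTorch's built-in cosine scheduler. The model and training loop below are placeholders, and holding the rate constant after epoch 900 is implemented here by simply no longer stepping the scheduler (an assumption about the paper's implementation):

```python
import torch

model = torch.nn.Linear(8, 8)  # placeholder model
opt = torch.optim.Adam(model.parameters(), lr=5e-4)
# Cosine annealing from 5e-4 down to 5e-5 over the first 900 epochs.
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=900, eta_min=5e-5)

for epoch in range(1000):
    # ... one training epoch (batch size 256) would go here ...
    opt.step()  # placeholder step; keeps the optimizer/scheduler call order valid
    if epoch < 900:
        sched.step()
    # After epoch 900 the scheduler is no longer stepped,
    # so the learning rate stays at 5e-5 for the last 100 epochs.
```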