EqNIO: Subequivariant Neural Inertial Odometry
Authors: Royina Karegoudra Jayanth, Yinshuang Xu, Ziyun Wang, Evangelos Chatzipantazis, Kostas Daniilidis, Daniel Gehrig
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the generality of our framework by applying it to two neural inertial odometry methods, TLIO (Liu et al., 2020) and RONIN (Herath et al., 2020). Extensive qualitative and quantitative results comparing EqNIO against previous works across diverse benchmarks establish a new state-of-the-art in inertial-only odometry. EqNIO significantly enhances the accuracy, reliability, and generalization of existing methods. |
| Researcher Affiliation | Academia | Royina Karegoudra Jayanth, Yinshuang Xu, Ziyun Wang, Evangelos Chatzipantazis, Kostas Daniilidis, Daniel Gehrig; University of Pennsylvania |
| Pseudocode | No | The paper describes the methodology in text and diagrams (e.g., Figure 2 for network architecture) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The project details and code can be found at https://github.com/RoyinaJayanth/EqNIO. |
| Open Datasets | Yes | Datasets: Our TLIO variant is trained on the TLIO Dataset (Liu et al., 2020) and tested on the TLIO and Aria Everyday Activities (Aria) Datasets (Lv et al., 2024). Our RONIN variant is trained on the RONIN Dataset (Herath et al., 2020). We train on the 50% open-sourced data. We test our RONIN variant on three popular pedestrian datasets: RONIN (Herath et al., 2020), RIDI (Yan et al., 2018), and OxIOD (Chen et al., 2018b). |
| Dataset Splits | Yes | TLIO Dataset: The TLIO Dataset (Liu et al., 2020) is a headset dataset that consists of raw IMU data at 1 kHz and ground truth obtained from MSCKF at 200 Hz for 400 sequences totaling 60 hours. ... We use their data splits for training (80%), validation (10%), and testing (10%). |
| Hardware Specification | Yes | The baseline TLIO and our methods applied to TLIO were trained on NVIDIA a40 GPU occupying 7-8 GB memory per epoch... RONIN was trained on NVIDIA 2080ti for 38 epochs... We report the floating point operations (FLOPs), the inference time (in milliseconds), and Maximum GPU memory (in GB) during inference, on an NVIDIA 2080 Ti GPU for the NN averaged over multiple runs to get accurate results. |
| Software Dependencies | No | The framework is implemented in PyTorch, and all hyperparameters of the base architectures are used to train TLIO and RONIN, respectively. No specific version number for PyTorch or any other software library is provided. |
| Experiment Setup | Yes | The network is trained with a batch size of 1024 samples for 50 epochs, with 1770 iterations per epoch. The NN is trained for 10 epochs with a Mean Squared Error loss and the remaining 40 epochs with a Mean Log Likelihood Error loss (MLE)... The hidden dimension is 128 and the convolutional kernel is 16 x 1. Finally, the fully connected block of hidden dimension 128 consisting of linear, nonlinearity, layer norm, and output linear layer follows a pooling over the time dimension... The network architecture is the same as SO(2) with hidden dimension 64 and 2 convolutional blocks... |
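The experiment-setup row above describes a fully connected block (linear, nonlinearity, layer norm, output linear, after pooling over the time dimension) and a two-stage loss schedule (MSE for the first 10 epochs, then a log-likelihood loss for the remaining 40). A minimal PyTorch sketch of both follows; `FCBlock`, `loss_fn`, the ReLU nonlinearity, mean pooling, the 3-dimensional output, and the log-variance parameterization are all assumptions for illustration, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FCBlock(nn.Module):
    """Hypothetical sketch of the fully connected block described in the
    setup: pool over the time dimension, then linear -> nonlinearity ->
    layer norm -> output linear, with hidden dimension 128."""

    def __init__(self, in_dim: int, hidden_dim: int = 128, out_dim: int = 3):
        super().__init__()
        self.linear = nn.Linear(in_dim, hidden_dim)
        self.act = nn.ReLU()          # assumed nonlinearity
        self.norm = nn.LayerNorm(hidden_dim)
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); pooling over time is assumed to be a mean
        x = x.mean(dim=1)
        return self.out(self.norm(self.act(self.linear(x))))


def loss_fn(pred, target, logvar, epoch: int):
    """Two-stage schedule from the setup: MSE for the first 10 epochs,
    then a Gaussian negative log-likelihood using a predicted
    log-variance (one possible reading of the MLE loss)."""
    if epoch < 10:
        return F.mse_loss(pred, target)
    return F.gaussian_nll_loss(pred, target, torch.exp(logvar))


block = FCBlock(in_dim=128)
y = block(torch.randn(4, 200, 128))   # (batch=4, time=200, channels=128)
print(y.shape)                        # torch.Size([4, 3])
```

With a batch of 1024 samples, as quoted above, only the leading dimension would change; the block itself is agnostic to batch and sequence length.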