DiffusionIMU: Diffusion-Based Inertial Navigation with Iterative Motion Refinement

Authors: Xiaoqiang Teng, Chenyang Li, Shibiao Xu, Zhihao Hao, Deke Guo, Jingyuan Li, Haisheng Li, Weiliang Meng, Xiaopeng Zhang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that DiffusionIMU consistently outperforms existing methods, demonstrating superior generalization to unseen users while alleviating the impact of sensor noise. In this section, we conduct both qualitative and quantitative evaluations of DiffusionIMU.
Researcher Affiliation | Academia | (1) School of Computer and Artificial Intelligence, Beijing Technology and Business University, China; (2) School of Artificial Intelligence, Beijing University of Posts and Telecommunications, China; (3) School of Computer, Sun Yat-sen University, China; (4) State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, China; (5) School of Artificial Intelligence, University of Chinese Academy of Sciences, China
Pseudocode | No | The paper describes the methodology using mathematical equations and textual descriptions in Section 4 'Methodology' without presenting any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about the release of its source code or a link to a code repository.
Open Datasets | Yes | The proposed DiffusionIMU was evaluated on the RoNIN dataset [Herath et al., 2020]. ... https://ronin.cs.sfu.ca/README.txt
Dataset Splits | Yes | The dataset ... is divided into seen and unseen test sets based on user presence in training.
Hardware Specification | Yes | The model was trained on a single NVIDIA A100 GPU, leveraging its high computational efficiency for forward and backward passes.
Software Dependencies | Yes | The proposed DiffusionIMU model was implemented in PyTorch 1.7.1 [Paszke et al., 2019] and optimized using the Adam optimizer [Kingma and Ba, 2015].
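The cited Adam optimizer [Kingma and Ba, 2015] follows a well-known update rule. A minimal stdlib-only sketch of one scalar update step, using the paper's reported initial learning rate of 0.0003 (the function and variable names are illustrative, not from the paper):

```python
import math

def adam_step(theta, grad, m, v, t, lr=3e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter (Kingma & Ba, 2015).

    lr=3e-4 matches the paper's reported initial learning rate; the
    remaining values are Adam's standard defaults, an assumption here.
    """
    m = beta1 * m + (1 - beta1) * grad       # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)             # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Smoke test: minimize f(theta) = theta**2, whose gradient is 2*theta.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

Because the effective step size stays near `lr` while gradients point consistently downhill, the parameter drifts steadily toward the minimum at 0.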
Experiment Setup | Yes | The training process employed a batch size of 128 and an initial learning rate of 0.0003. A dropout rate of 0.2 was applied to mitigate overfitting. The maximum number of diffusion steps was set to 3, and the hidden dimensionality of the model was configured to 128.
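The reported hyperparameters can be collected in one place for reimplementation attempts; a sketch, where only the values come from the paper and the class and field names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiffusionIMUTrainConfig:
    """Hyperparameters as reported in the paper's experiment setup.

    Class and field names are hypothetical; the values are the ones
    stated by the authors.
    """
    batch_size: int = 128
    initial_learning_rate: float = 3e-4  # 0.0003
    dropout_rate: float = 0.2            # applied to mitigate overfitting
    max_diffusion_steps: int = 3
    hidden_dim: int = 128

cfg = DiffusionIMUTrainConfig()
```

Freezing the dataclass guards against accidental mutation of the reported values during an experiment run.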