LocoVR: Multiuser Indoor Locomotion Dataset in Virtual Reality

Authors: Kojiro Takeyama, Yimeng Liu, Misha Sra

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our evaluation shows that LocoVR significantly enhances model performance in three practical indoor tasks utilizing human trajectories, and demonstrates the prediction of socially-aware navigation patterns in home environments.
Researcher Affiliation | Collaboration | Kojiro Takeyama (1,2), Yimeng Liu (1), Misha Sra (1); 1: University of California Santa Barbara, 2: Toyota Motor North America
Pseudocode | No | The paper describes model architectures, inputs, outputs, and loss functions in detail in Section B, 'Experimental Details', but it does not contain any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The dataset and evaluation code are available at https://github.com/kt2024-hal/LocoVR.
Open Datasets | Yes | The dataset and evaluation code are available at https://github.com/kt2024-hal/LocoVR.
Dataset Splits | Yes | LocoVR: LocoVR is our main contribution, and it was collected using our VR system. The dataset includes over 7,000 trajectories in 131 indoor environments. We split it into training (85%) and validation (15%) sets.
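The reported 85%/15% split can be sketched as follows. This is a minimal illustration, not the authors' code: the paper does not specify the random seed or whether the split is stratified by environment, so `split_trajectories` and its defaults are assumptions.

```python
import random

def split_trajectories(trajectories, train_frac=0.85, seed=0):
    """Randomly split trajectories into train/validation subsets.

    Illustrative sketch of an 85%/15% split; seed and shuffling
    strategy are assumptions, not taken from the LocoVR release.
    """
    items = list(trajectories)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

# With 7,000 trajectories this yields 5,950 train and 1,050 validation items.
train, val = split_trajectories(range(7000))
print(len(train), len(val))  # 5950 1050
```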
Hardware Specification | Yes | Each model is trained for up to 100 epochs on a single NVIDIA RTX 4080 graphics card with 8 GB of memory.
Software Dependencies | No | The paper mentions using the Adam optimizer and U-Net models, but does not provide specific version numbers for these or other key software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2014) to train the U-Net models used in the experiments. The learning rate is 5e-5 and the batch size is 16. Each model is trained for up to 100 epochs on a single NVIDIA RTX 4080 graphics card with 8 GB of memory.
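The reported hyperparameters can be collected into a single configuration fragment. This is a hedged sketch: the key names and dict layout are illustrative assumptions, and the released code may organize these settings differently.

```python
# Hypothetical configuration mirroring the hyperparameters quoted above.
# Only the values (Adam, 5e-5, 16, 100, single RTX 4080) come from the
# paper; the structure and names are assumptions for illustration.
TRAIN_CONFIG = {
    "model": "U-Net",
    "optimizer": "Adam",          # Kingma & Ba, 2014
    "learning_rate": 5e-5,
    "batch_size": 16,
    "max_epochs": 100,            # trained for up to 100 epochs
    "gpu": "NVIDIA RTX 4080",     # single card, 8 GB memory
}
```

A training script would typically read such a config once at startup and pass `learning_rate` to the optimizer constructor and `batch_size` to the data loader.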