Grid Cell-Inspired Fragmentation and Recall for Efficient Map Building

Authors: Jaedong Hwang, Zhang-Wei Hong, Eric R Chen, Akhilan Boopathy, Pulkit Agrawal, Ila R Fiete

TMLR 2024

Reproducibility assessment — each entry lists the variable, the result, and the supporting LLM response:
Research Type: Experimental. "We evaluate FARMap on complex procedurally generated spatial environments and realistic simulations to demonstrate that this mapping strategy covers the environment much more rapidly (in both agent steps and wall-clock time) and is more efficient in active memory usage, without loss of performance."
Researcher Affiliation: Academia. All six authors (Jaedong Hwang, Zhang-Wei Hong, Eric Chen, Akhilan Boopathy, Pulkit Agrawal, and Ila Fiete) are affiliated with the Massachusetts Institute of Technology.
Pseudocode: Yes. "Algorithm 1 presents the overall procedure of FARMap at time t. On top of the Frontier algorithm (Yamauchi, 1997), the FARMap-specific steps are colored blue."
Open Source Code: No. The paper includes a footnote ("https://jd730.github.io/projects/FARMap") after the abstract, but it points to a project overview page rather than a specific code repository, and there is no explicit statement that source code for the described method is released.
Open Datasets: Yes. "We conducted experiments on both FARMap and Frontier integrated with the pre-trained Neural SLAM (Chaplot et al., 2020), obtained from the official repository, for the Gibson (Shen et al., 2021) exploration task with the Habitat simulator (Szot et al., 2021). We use the American scene from Section 5.4 as an example."
Dataset Splits: No. The paper describes generating "1,500 different environments" for evaluation and training an RND agent "for 1 million steps" on each environment, but it does not specify conventional training/validation/test splits for a dataset used to train and evaluate a learned model.
Hardware Specification: Yes. "Our models are implemented in PyTorch, and the experiments are conducted on an Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz for spatial-exploration experiments and on an NVIDIA Titan V for RND and Neural SLAM."
Software Dependencies: No. The paper mentions PyTorch and the Robot Operating System (ROS) but does not provide version numbers for these key software components; for example, it states "Our models are implemented on PyTorch" without giving a version.
Experiment Setup: Yes. "We run the agent on 1,500 different environments: 300 different maps with five random seeds each, where the starting position and the color of the map change with each seed. We set γ, ρ, and ϵ to 0.9, 2, and 5, respectively. The observation size (h, w) is (15, 15). ... The learning rate is 0.0001, the reward discount factor is 0.99, and the number of epochs is 4. For the remaining parameters we use the same values as PPO and RND: the GAE parameter λ is 0.95, the value-loss coefficient is 1.0, the entropy-loss coefficient is 0.001, and the clip ratio (ϵ in Eq. 7 of Schulman et al., 2017) is 0.1."
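The hyperparameters quoted in the Experiment Setup entry can be collected into a single configuration for reference. This is a minimal sketch; the key names (`FARMAP_CONFIG`, `"gae_lambda"`, etc.) are illustrative and not taken from the FARMap codebase, only the values are from the paper's reported setup.

```python
# Hypothetical configuration sketch; key names are illustrative,
# values are those reported in the paper's experiment setup.
FARMAP_CONFIG = {
    # FARMap-specific parameters (symbols as used in the paper)
    "gamma": 0.9,           # γ
    "rho": 2,               # ρ
    "epsilon": 5,           # ϵ
    "obs_size": (15, 15),   # local observation size (h, w)
    # Evaluation protocol
    "num_maps": 300,
    "seeds_per_map": 5,     # 300 maps x 5 seeds = 1,500 environments
    # PPO / RND training parameters
    "lr": 1e-4,
    "reward_discount": 0.99,
    "epochs": 4,
    "gae_lambda": 0.95,
    "value_loss_coef": 1.0,
    "entropy_coef": 0.001,
    "clip_ratio": 0.1,      # ϵ in Eq. 7 of Schulman et al. (2017)
}

# Sanity check: the evaluation protocol yields 1,500 environments.
assert FARMAP_CONFIG["num_maps"] * FARMAP_CONFIG["seeds_per_map"] == 1500
```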
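The Frontier baseline that FARMap builds on (Yamauchi, 1997) selects "frontier" cells, free cells bordering unexplored space, as exploration targets. A minimal sketch of frontier detection on a 2-D occupancy grid, assuming illustrative cell labels and a hypothetical `find_frontiers` helper (not the paper's implementation):

```python
# Illustrative cell labels for a 2-D occupancy grid.
UNKNOWN, FREE, OCCUPIED = 0, 1, 2


def find_frontiers(grid):
    """Return (row, col) of frontier cells: FREE cells with
    at least one 4-connected UNKNOWN neighbor."""
    h, w = len(grid), len(grid[0])
    frontiers = []
    for y in range(h):
        for x in range(w):
            if grid[y][x] != FREE:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == UNKNOWN:
                    frontiers.append((y, x))
                    break  # one unknown neighbor suffices
    return frontiers


# Toy 3x3 map: left column explored and free, the rest unknown.
grid = [
    [FREE, UNKNOWN, UNKNOWN],
    [FREE, UNKNOWN, UNKNOWN],
    [FREE, UNKNOWN, UNKNOWN],
]
print(find_frontiers(grid))  # [(0, 0), (1, 0), (2, 0)]
```

A frontier-based explorer repeatedly navigates to one such cell (e.g. the nearest) until no frontiers remain; FARMap's Algorithm 1 adds its fragmentation-and-recall steps on top of this loop.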