Multi-Agent Path Finding: A New Boolean Encoding

Authors: Roberto Asín Achá, Rodrigo López, Sebastián Hagedorn, Jorge A. Baier

JAIR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In our experimental evaluation, we used square grids, ranging from 20×20 to 50×50 cells, and warehouse maps, with a varying number of agents and obstacles. We compared against representative state-of-the-art solvers, including the search-based algorithm CBS, the ASP-based solver ASP-MAPF, and the branch-and-cut-and-price hybrid solver BCP."
Researcher Affiliation | Academia | Roberto Asín Achá (EMAIL), Department of Computer Science, Universidad de Concepción, Concepción, Chile; Rodrigo López (EMAIL), Department of Management Control and Information Systems, School of Economics and Business, Universidad de Chile, Santiago, Chile, and Department of Computer Science, Pontificia Universidad Católica de Chile, Santiago, Chile; Sebastián Hagedorn (EMAIL), Department of Computer Science, Pontificia Universidad Católica de Chile, Santiago, Chile; Jorge A. Baier (EMAIL), Department of Computer Science, Pontificia Universidad Católica de Chile, Santiago, Chile, and Instituto Milenio Fundamentos de los Datos, Santiago, Chile
Pseudocode | Yes | Figure 1: ASP-MAPF's encoding (Gómez et al., 2021). Figure 2: ASP-MAPF2's encoding.
Open Source Code | Yes | Source code available at https://github.com/robertoasin/MtMF
Open Datasets | No | The paper describes benchmark instances by varying parameters such as grid size, number of agents, and percentage of obstacles (e.g., "AG: Instances in this set are 20×20 grids with 40 (10%) randomly placed obstacles, with a number of agents that varies between 20 and 70, randomly distributed."). However, it does not provide access information (a link, DOI, repository, or formal data citation) for these or the newly generated benchmarks. It references Gómez et al. (2021), which is a paper, not a direct dataset link.
Dataset Splits | No | The benchmark instances are individual problem instances used to evaluate solvers, not a dataset to be split for model training, and the paper accordingly mentions no training, validation, or test splits.
Hardware Specification | Yes | "The experiments reported in this subsection were done on a computer running Linux Mint 19.3 with an Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz and 16 GB of RAM." For the new benchmarks: "we used a cluster composed of machines with the following characteristics: CPU: Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz; RAM: 64 GB; OS: Ubuntu 18.04.5 LTS."
Software Dependencies | No | The paper mentions using MaxSAT and ASP technology, refers to the ASP-MAPF2 and MtMS solvers, and names the MSU3 and LSU algorithms. However, it does not provide specific version numbers for the ASP solver (e.g., Clingo) or the MaxSAT solver used in the experiments, stating only: "For all solvers, we use versions provided by their authors."
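The LSU algorithm named in this row is the standard linear SAT–UNSAT search for MaxSAT. A minimal sketch of the idea follows; this is a generic illustration, not the authors' implementation, and the brute-force `sat_with_bound` oracle is a hypothetical stand-in for a real SAT call with a cardinality constraint on the number of falsified soft clauses.

```python
from itertools import product

def clause_holds(clause, assignment):
    """A clause (list of non-zero ints) holds if some literal is satisfied:
    positive literal i means variable i is True, negative means False."""
    return any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)

def sat_with_bound(hard, soft, n_vars, bound):
    """Hypothetical SAT oracle: find an assignment satisfying every hard
    clause while falsifying at most `bound` soft clauses. Brute force here;
    a real solver would issue a CDCL SAT call under a cardinality constraint."""
    for a in product([False, True], repeat=n_vars):
        if all(clause_holds(c, a) for c in hard):
            if sum(not clause_holds(c, a) for c in soft) <= bound:
                return a
    return None

def lsu(hard, soft, n_vars):
    """LSU (linear search SAT-UNSAT): take any first solution, then keep
    asking for one that falsifies strictly fewer soft clauses; when that
    query becomes UNSAT, the last solution found is optimal."""
    best = sat_with_bound(hard, soft, n_vars, len(soft))
    if best is None:
        return None, None  # hard part is unsatisfiable
    cost = sum(not clause_holds(c, best) for c in soft)
    while cost > 0:
        cand = sat_with_bound(hard, soft, n_vars, cost - 1)
        if cand is None:
            break  # no solution with fewer violations: cost is optimal
        best = cand
        cost = sum(not clause_holds(c, best) for c in soft)
    return best, cost
```

MSU3 works in the opposite direction, growing a lower bound from unsatisfiable cores; both strategies query the same kind of SAT oracle.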
Experiment Setup | No | The paper specifies experimental parameters such as the map types and sizes ("square grids, ranging from 20×20 to 50×50 cells, and warehouse maps"), the number of agents ("between 20 and 70", "between 4 and 30"), and obstacle percentages. It also states a "time limit set to 600 seconds per instance", later "increasing the time limit for this benchmark to 1000 seconds". However, it does not give internal solver parameters or detailed configuration settings (e.g., specific heuristics) for ASP-MAPF2, MtMS, and the other algorithms.
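The benchmark description quoted above (20×20 grids, 40 randomly placed obstacles, randomly distributed agents) can be sketched as a small instance generator. This is an illustrative reconstruction, not the authors' benchmark code, and the function name and parameters are assumptions.

```python
import random

def make_instance(width=20, height=20, n_obstacles=40, n_agents=20, seed=0):
    """Generate a random grid MAPF instance in the style of the AG benchmark:
    a 20x20 grid with 40 (10%) randomly placed obstacles, and agent
    start/goal cells drawn at random from the free cells."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(width) for y in range(height)]
    obstacles = set(rng.sample(cells, n_obstacles))
    free = [c for c in cells if c not in obstacles]
    # Distinct starts and distinct goals; a start may coincide with
    # another agent's goal, as is usual in MAPF instances.
    starts = rng.sample(free, n_agents)
    goals = rng.sample(free, n_agents)
    return obstacles, list(zip(starts, goals))
```

Varying `n_agents` from 20 to 70 with a fixed map would reproduce the shape of the AG instance family described in the row above.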