Solving Multi-agent Path Finding on Strongly Biconnected Digraphs

Authors: Adi Botea, Davide Bonusi, Pavel Surynek

JAIR 2018

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental
Evidence: "We perform a detailed empirical analysis of diBOX, showing good scalability. To our knowledge, our work is the first study of multi-agent path finding focused on directed graphs." From Section 10 (Empirical Evaluation): "In this section we evaluate the performance of diBOX on a range of problems. We start with a comparison to an optimal solver, followed by a more detailed analysis of the impact of various components of diBOX on its performance. The algorithms are implemented in C++. The experiments are performed on an Intel 1.60 GHz machine, running Ubuntu 12.04 64 bit."
Researcher Affiliation: Collaboration
Evidence: Adi Botea (EMAIL), IBM Research, Dublin, Ireland; Davide Bonusi (EMAIL), Raffmetal S.p.A., Brescia, Italy; Pavel Surynek (EMAIL), Faculty of Information Technology, Czech Technical University in Prague, Czech Republic.
Pseudocode: Yes
Evidence: "The steps of the algorithms are also presented in more detail, with additional pseudocode and discussions." The paper includes Algorithm 1 (SOLVEDERIVEDEAR), Algorithm 2 (SOLVEBASICCYCLE), Algorithm 3 (the main diBOX algorithm in pseudocode), Algorithm 4 (SOLVEEARS), and Algorithm 5 (EARDECOMPOSITION).
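The paper's Algorithm 5 (EARDECOMPOSITION) decomposes the input graph into ears. As a rough illustration of the general idea only, not of the paper's algorithm (which targets strongly biconnected digraphs), here is a minimal greedy ear-decomposition sketch for an undirected biconnected graph; all function names are ours:

```python
from collections import deque

def initial_cycle(adj):
    """Walk greedily until the walk closes on itself; in a biconnected
    graph every vertex has degree >= 2, so the walk never gets stuck."""
    start = next(iter(adj))
    path, prev = [start], None
    while True:
        u = path[-1]
        v = next(w for w in adj[u] if w != prev)
        if v in path:
            return path[path.index(v):]  # the closed portion of the walk
        path.append(v)
        prev = u

def ear_decomposition(adj):
    """Greedy ear decomposition of an undirected biconnected graph.
    adj: dict mapping each vertex to a list of neighbours.
    Returns a list of ears: the first is a cycle (vertex list, implicitly
    closed); later ears are paths whose endpoints are already covered."""
    cycle = initial_cycle(adj)
    visited = set(cycle)
    used = {frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
            for i in range(len(cycle))}
    ears = [cycle]
    while True:
        # pick any unused edge leaving the covered subgraph
        edge = next(((u, v) for u in visited for v in adj[u]
                     if frozenset((u, v)) not in used), None)
        if edge is None:
            return ears
        u, v = edge
        if v in visited:              # trivial one-edge ear
            ear = [u, v]
        else:                         # BFS through uncovered vertices back to the covered set
            prev = {v: None}
            queue, end = deque([v]), None
            while end is None:
                x = queue.popleft()
                for y in adj[x]:
                    if x == v and y == u:
                        continue      # don't walk straight back along the starting edge
                    if y in visited:
                        prev[y], end = x, y
                        break
                    if y not in prev:
                        prev[y] = x
                        queue.append(y)
            tail, node = [], end
            while node is not None:
                tail.append(node)
                node = prev[node]
            ear = [u] + tail[::-1]
        used.update(frozenset(e) for e in zip(ear, ear[1:]))
        visited.update(ear)
        ears.append(ear)
```

Because the first ear adds as many edges as vertices and every later ear adds one more edge than it adds vertices, any such decomposition of a connected graph has exactly m - n + 1 ears (3 for K4).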
Open Source Code: No
Evidence: No explicit statement about releasing the code for the described methodology is provided, nor is there a link to a repository.
Open Datasets: No
Evidence: The paper describes generating custom test instances (e.g., "Two sets of instances were used...", "For each graph size, we generate three graphs, and for each graph we generate 10 instances...") but provides no concrete access information (link, DOI, specific repository, or citation to an established benchmark) for a publicly available dataset.
Dataset Splits: No
Evidence: The paper mentions generating random instances for evaluation (e.g., "For each number of agents we generated 10 instances with the initial and the goal configurations set at random."), but it provides no training/validation/test splits. Such splits are typical of machine-learning work and do not directly apply to this algorithmic evaluation, which generates its own problem instances.
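The quoted protocol (random initial and goal configurations, 10 instances per configuration) is straightforward to reproduce in spirit. A minimal sketch, with function names of our own choosing and no claim about the paper's actual generator:

```python
import random

def random_instance(vertices, num_agents, seed=None):
    """Sample one MAPF instance: an injective initial placement and an
    injective goal placement of num_agents agents on the given vertices.
    Purely illustrative; the paper does not publish its generator."""
    rng = random.Random(seed)
    initial = rng.sample(vertices, num_agents)  # distinct start vertices
    goal = rng.sample(vertices, num_agents)     # distinct goal vertices
    return initial, goal

def instance_set(vertices, num_agents, count=10, seed=0):
    """Mimic the quoted protocol: count random instances per configuration,
    seeded for repeatability."""
    return [random_instance(vertices, num_agents, seed=seed + i)
            for i in range(count)]
```

Seeding each instance separately makes the whole set reproducible from a single base seed, which is the kind of detail the paper would need to report for exact replication.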
Hardware Specification: Yes
Evidence: "The experiments are performed on an Intel 1.60 GHz machine, running Ubuntu 12.04 64 bit."
Software Dependencies: No
Evidence: The paper states "The algorithms are implemented in C++." and mentions MDD-SAT (Surynek et al., 2016) as the optimal solver used for comparison, but it gives no version numbers for C++ libraries or for MDD-SAT itself, which would be needed to replicate the experimental software environment.
Experiment Setup: Yes
Evidence: "In terms of scalability, in both problem sets, on instances with 11 agents or more, MDD-SAT reaches a timeout of 300 seconds." "As expected, version L is significantly more costly as compared to version S. In fact, version L does not scale beyond graphs with 120 nodes, given a timeout of 30 minutes per instance."