On Hash-Based Work Distribution Methods for Parallel Best-First Search

Authors: Yuu Jinnai, Alex Fukunaga

JAIR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We analyze the search and communication overheads of state-of-the-art hash-based parallel best-first search algorithms... We show that Abstract Zobrist hashing outperforms previous methods on search domains using hand-coded, domain-specific feature projection functions... We then propose GRAZHDA*, a graph-partitioning based approach to automatically generating feature projection functions... We show that GRAZHDA* outperforms previous methods on domain-independent planning." (Abstract) "We evaluated the performance of the following HDA* variants on several standard benchmark domains with different characteristics." (Section 4.1)
Researcher Affiliation | Academia | Yuu Jinnai (EMAIL) and Alex Fukunaga (EMAIL), Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan.
Pseudocode | Yes | Algorithm 1: HDA*[Z]; Algorithm 2: Initialize HDA*[Z]; Algorithm 3: Initialize HDA*[Z, Afeature]; Algorithm 4: Greedy Abstract Feature Generation.
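For context, the hash-based work distribution that these algorithms initialize can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the feature layout, table sizes, and the toy abstraction mapping are assumptions chosen for clarity.

```python
import random

random.seed(0)  # fixed seed so every process would build identical tables

NUM_PROCESSES = 8
NUM_FEATURES = 16   # e.g. one feature per board position in the 15-puzzle
NUM_VALUES = 16     # possible values per feature (tiles 0-15)

# Zobrist hashing (HDA*[Z]): one random 64-bit value per (feature, value) pair.
zobrist_table = [[random.getrandbits(64) for _ in range(NUM_VALUES)]
                 for _ in range(NUM_FEATURES)]

def zobrist_hash(state):
    """XOR together the table entries for each feature of the state."""
    h = 0
    for feature, value in enumerate(state):
        h ^= zobrist_table[feature][value]
    return h

def owner(state):
    """HDA* sends each generated state to the process that owns its hash."""
    return zobrist_hash(state) % NUM_PROCESSES

# Abstract Zobrist hashing (HDA*[Z, A]): hash the abstract value A(v) of each
# feature instead of v itself, so states that agree on all abstract features
# land on the same process, reducing communication overhead.
abstraction = [v // 4 for v in range(NUM_VALUES)]  # toy projection: 16 -> 4 values
abstract_table = [[random.getrandbits(64) for _ in range(4)]
                  for _ in range(NUM_FEATURES)]

def abstract_owner(state):
    h = 0
    for feature, value in enumerate(state):
        h ^= abstract_table[feature][abstraction[value]]
    return h % NUM_PROCESSES

state = tuple(range(16))
s2 = (1,) + state[1:]  # change feature 0 within the same abstract class (0//4 == 1//4)
print(0 <= owner(state) < NUM_PROCESSES)               # True
print(abstract_owner(s2) == abstract_owner(state))     # True: same abstract owner
```

The last line shows the point of the abstraction: a change that stays inside one abstract class leaves the abstract hash, and hence the owning process, unchanged, whereas the plain Zobrist hash of `s2` differs from that of `state`.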
Open Source Code | No | The paper refers only to a third-party instance generator: "We obtained the instance generator from https://github.com/eaburns/pbnf/tree/master/tile-gen" (Footnote 2). This is a tool for generating benchmark instances, not the authors' own implementation of the methods described in the paper; no statement about releasing their code, and no link to their own repository, is provided.
Open Datasets | Yes | "For the 15-puzzle, we used the standard set of 100 instances in Korf (1985)" (Section 3.2). "We used 60 benchmark instances, consisting of 10 actual amino acid sequences from BAliBASE 3.0 (Thompson, Koehl, Ripp, & Poch, 2005)" (Section 4.1.3). "We selected a set of IPC benchmark instances" (Section 5.2.1).
Dataset Splits | No | The paper uses "randomly generated instances" and a "standard set of 100 instances" for the puzzle domains, 60 benchmark instances for Multiple Sequence Alignment, and IPC benchmark instances for planning, but it specifies no training/validation/test splits, proportions, or partitioning methodology for these datasets.
Hardware Specification | Yes | "We ran A* and HDA*[Z] on 100 randomly generated instances of the 15-puzzle on Intel Xeon E5410 2.33 GHz CPU with 16 GB RAM" (Section 3.1). "The experiments were run on an Intel Xeon E5-2650 v2 2.60 GHz CPU with 128 GB RAM, using up to 16 cores" (Section 4.1). "We ran experiments on a cluster of 6 machines, each with an 8-core Intel Xeon E5410 2.33 GHz CPU with 16 GB RAM, and 1000 Mbps Ethernet interconnect" (Section 5.2.1). "In addition to the 48-core cluster, we evaluated GRAZHDA*/sparsity on an Amazon EC2 cloud cluster with 128 virtual cores (vCPUs) and 480 GB aggregated RAM (a cluster of 32 m1.xlarge EC2 instances, each with 4 vCPUs, 3.75 GB RAM/core)" (Section 6.3.2).
Software Dependencies | Yes | "We implemented these HDA* variants on top of the Fast Downward classical planner using the merge&shrink heuristic (Helmert et al., 2014) (abstraction size = 1000). We parallelized Fast Downward using MPICH3." (Section 5.2.1)
Experiment Setup | Yes | "We implemented these HDA* variants on top of the Fast Downward classical planner using the merge&shrink heuristic (Helmert et al., 2014) (abstraction size = 1000)" (Section 5.2.1). "For FAZHDA*, we ignored 30% of the variables with the highest fluency (we tested 10%, 20%, 30%, 50%, and 70% and found that 30% performed the best). DAHDA* uses at most 30% of the total number of features in the problem instance (we tested 10%, 30%, 50%, and 70% and found that 30% performed the best). We packed 100 states per MPI message in order to reduce the number of messages (Romein et al., 1999)" (Section 5.2.1). "For HDA*[Hyperplane], we determined the plane thickness d using the tuning method in Kobayashi et al. (2011) where λ = 0.003, which yielded the best performance among 0.0003, 0.003, 0.03, and 0.3" (Section 4.1.3).
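The "100 states per MPI message" detail can be illustrated with a minimal buffering sketch. This is plain Python standing in for MPI sends; the class and callback names are ours, not the authors', and a real implementation would wrap something like `MPI_Send`.

```python
class OutgoingBuffer:
    """Accumulate states per destination and send them in fixed-size batches,
    trading a little latency for far fewer messages."""

    def __init__(self, send_fn, batch_size=100):
        self.send_fn = send_fn      # callback standing in for an MPI send
        self.batch_size = batch_size
        self.buffers = {}           # destination process -> list of pending states

    def push(self, dest, state):
        buf = self.buffers.setdefault(dest, [])
        buf.append(state)
        if len(buf) >= self.batch_size:
            self.flush(dest)

    def flush(self, dest):
        buf = self.buffers.get(dest)
        if buf:
            self.send_fn(dest, list(buf))  # send a copy of the batch
            buf.clear()

# Demo with a tiny batch size: 7 states pushed, 2 full batches sent,
# 1 state left buffered until the next flush.
sent = []
out = OutgoingBuffer(lambda dest, batch: sent.append((dest, batch)), batch_size=3)
for i in range(7):
    out.push(0, i)
print(len(sent))  # 2
```

With the paper's setting of 100 states per message, the same structure cuts message count by up to two orders of magnitude relative to sending each generated state individually.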