Parallel Model-Based Diagnosis on Multi-Core Computers

Authors: Dietmar Jannach, Thomas Schmitz, Kostyantyn Shchekotykhin

JAIR 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this work, we therefore propose different schemes of parallelizing the diagnostic reasoning process in order to better exploit the capabilities of modern multi-core computers. We propose and systematically evaluate parallelization schemes for Reiter's hitting set algorithm for finding all or a few leading minimal diagnoses using two different conflict detection techniques. Furthermore, we perform initial experiments for a basic depth-first search strategy to assess the potential of parallelization when searching for one single diagnosis. Finally, we test the effects of parallelizing direct encodings of the diagnosis problem in a constraint solver.
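The level-wise parallelization scheme mentioned in the quote (expand all open nodes of the current tree depth concurrently, synchronize, then continue with the next level) can be sketched as follows. This is an illustrative reconstruction, not the authors' Java implementation: the `find_conflict` oracle, the frozenset path representation, and the pruning rule are assumptions, and real speedups require the conflict check (e.g., an external constraint-solver call) to release Python's GIL.

```python
from concurrent.futures import ThreadPoolExecutor

def diagnose_level_wise(find_conflict, max_threads=4):
    """Sketch of a level-wise parallel hitting-set search.

    `find_conflict(path)` must return a conflict (a set of components)
    disjoint from `path`, or None if `path` already hits all conflicts
    and is therefore a diagnosis.
    """
    diagnoses = []
    level = [frozenset()]  # paths (edge-label sets) at the current depth
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        while level:
            # expand every node of the current level in parallel,
            # then synchronize before building the next level
            conflicts = list(pool.map(find_conflict, level))
            next_level = set()
            for path, conflict in zip(level, conflicts):
                if any(d <= path for d in diagnoses):
                    continue  # superset of a known diagnosis: prune
                if conflict is None:
                    diagnoses.append(path)  # breadth-first => minimal
                else:
                    for comp in conflict:
                        next_level.add(path | {comp})
            level = sorted(next_level, key=sorted)  # deterministic order
    return diagnoses
```

For instance, with the conflict sets {1, 2} and {2, 3} served by a simple oracle, the search returns the two minimal hitting sets {2} and {1, 3}.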
Researcher Affiliation | Academia | Dietmar Jannach (EMAIL) and Thomas Schmitz (EMAIL), TU Dortmund, Germany; Kostyantyn Shchekotykhin (EMAIL), Alpen-Adria University Klagenfurt, Austria
Pseudocode | Yes | Algorithm 1: diagnose: Main algorithm loop. Input: A diagnosis problem (SD, Comps, Obs) Result: The set of diagnoses ... Algorithm 2: generateNode: Node generation logic. ... Algorithm 3: checkAndAddPath: Adding a new path label with a redundancy check. ... Algorithm 4: diagnoseLW: Level-Wise Parallelization. ... Algorithm 5: diagnoseFP: Full Parallelization. ... Algorithm 6: generateNodeFP: Extended node generation logic. ... Algorithm 7: QuickXplain (QXP) ... Algorithm 8: MergeXplain (MXP) ... Algorithm 9: diagnosePRDFS: Parallelized random depth-first search. ... Algorithm 10: expandPRDFS: Parallel random depth-first node expansion. ... Algorithm 11: directDiag: Computation of all diagnoses using a direct encoding.
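On the conflict-detection side, the QuickXplain procedure listed as Algorithm 7 follows a well-known divide-and-conquer scheme, which a compact sketch can illustrate. This is a hedged reconstruction of the standard algorithm, not the paper's code: the `is_consistent` oracle and the list-based constraint encoding are assumptions for the sketch.

```python
def quickxplain(background, constraints, is_consistent):
    """Return a minimal conflict: a minimal subset of `constraints`
    that is inconsistent together with `background` ([] if none)."""
    if is_consistent(background + constraints):
        return []  # no conflict to isolate

    def qx(b, delta, c):
        # if the constraints added by the last split already make the
        # background inconsistent, nothing from c is needed
        if delta and not is_consistent(b):
            return []
        if len(c) == 1:
            return list(c)  # single remaining constraint is essential
        k = len(c) // 2
        c1, c2 = c[:k], c[k:]
        d2 = qx(b + c1, c1, c2)  # minimize second half given first
        d1 = qx(b + d2, d2, c1)  # minimize first half given result
        return d1 + d2

    return qx(background, [], constraints)
```

With a toy consistency check that rejects any set containing both 'a' and 'not a', the call `quickxplain([], ['b', 'a', 'not a'], is_consistent)` isolates the minimal conflict {'a', 'not a'} while skipping the irrelevant constraint 'b'.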
Open Source Code | No | The mutated CSPs can be downloaded at http://ls13-www.cs.tu-dortmund.de/homepage/hp_downloads/jair/csps.zip. All diagnosis algorithms evaluated in this paper were implemented in Java unless noted otherwise. (The provided URL is for datasets, not the methodology's source code, and there is no explicit statement of code release for the algorithms described.)
Open Datasets | Yes | We used electronic circuit benchmarks from the DX Competition 2011 Synthetic Track, faulty descriptions of Constraint Satisfaction Problems (CSPs), as well as problems from the domain of ontology debugging. ... The mutated CSPs can be downloaded at http://ls13-www.cs.tu-dortmund.de/homepage/hp_downloads/jair/csps.zip. ... For this set of experiments, we selected the first five systems of the DX Competition 2011 Synthetic Track (see Table 1) (Kurtoglu & Feldman, 2011). ... In this set of experiments we used a number of CSP instances from the 2008 CP solver competition (Lecoutre, Roussel, & van Dongen, 2008) ... We used the same dataset to evaluate the performance gains when applying our parallelization schemes to the ontology debugging problem. The details of the different tested ontologies are given in Table 10. (Kalyanpur, Parsia, Horridge, & Sirin, 2007)
Dataset Splits | Yes | For each system, the competition specifies 20 scenarios with injected faults resulting in different faulty output values. ... We first generated a random solution using the original CSP formulations. From each solution, we randomly picked about 10% of the variables and stored their value assignments, which then served as test cases. ... The test cases are sets of logical sentences which must be entailed by the ontology (positive) or not entailed by the ontology (negative).
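The test-case generation step quoted above (keep roughly 10% of a solved CSP's variable assignments) can be sketched as below. The function name, the fixed seed, and the dict representation of a solution are illustrative assumptions, not the authors' tooling.

```python
import random

def make_test_cases(solution, fraction=0.1, seed=0):
    """From a solved assignment (variable -> value), randomly keep
    roughly `fraction` of the variables' values as test cases."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    k = max(1, round(len(solution) * fraction))
    picked = rng.sample(sorted(solution), k)  # sorted for determinism
    return {var: solution[var] for var in picked}
```

For a 20-variable solution this stores two variable/value pairs as test cases.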
Hardware Specification | Yes | The hardware we used for the evaluation in this chapter, a laptop with an Intel i7-3632QM CPU, 16GB RAM, running Windows 8, also had four cores with hyperthreading. The results of an evaluation on server hardware with 12 cores are reported later in this Section.
Software Dependencies | No | In the experiments we used Choco (Prud'homme, Fages, & Lorca, 2015) as a constraint solver... For our evaluation we use the Gecode constraint solver (Schulte, Lagerkvist, & Tack, 2016). In particular, we use the parallelization option of Gecode to test its effects on the diagnosis running times. ... We use MiniZinc as a constraint modeling language. ... As an underlying description logic reasoner, we used Pellet (Sirin, Parsia, Grau, Kalyanpur, & Katz, 2007). The manipulation of the knowledge bases during the diagnosis process was accomplished with the OWL-API (Horridge & Bechhofer, 2011). (No explicit version numbers like "Choco 2.1.5" or "Gecode 4.2.0" are provided for the named software components.)
Experiment Setup | Yes | For the parallel approaches, we used a thread pool of size four. ... For this test we compared FP with 4, 8, 10, and 12 threads to the sequential algorithm. ... We limited the search depth to 4 for all experiments to speed up the benchmark process. ... when the goal is to find all minimal diagnoses ... when the goal is to find a few leading diagnoses ... for scenarios in which we search for one single diagnosis.