The Impact of Treewidth on Grounding and Solving of Answer Set Programs
Authors: Bernhard Bliem, Michael Morak, Marius Moldovan, Stefan Woltran
JAIR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we aim to study how the performance of modern answer set programming (ASP) solvers is influenced by the treewidth of the input program and to investigate the consequences of this relationship. We first perform an experimental evaluation that shows that the solving performance is heavily influenced by treewidth... Our first main contribution is to carry out an extensive experimental evaluation of two top-of-the-line ASP solver implementations and investigate how the solving performance behaves when the solvers are presented with hard instances of uniform size and construction, but variable treewidth, using a carefully crafted ASP problem encoding and adequately generated, tree-like instances. Our experiments show that the solving time for programs of the same size and construction indeed increases drastically with the treewidth. This is an interesting result, which shows that similar results for SAT solvers and resolution-width (Atserias, Fichte, & Thurley, 2011) do indeed carry over to the more complex world of ASP. |
| Researcher Affiliation | Academia | Bernhard Bliem (University of Helsinki, Constraint Reasoning and Optimization Group, Helsinki, Finland); Michael Morak (University of Klagenfurt, Semantic Systems Group, Klagenfurt, Austria); Marius Moldovan and Stefan Woltran (TU Wien, Databases and Artificial Intelligence Group, Vienna, Austria) |
| Pseudocode | Yes | Listing 1: Finding valid assignments in capacitated graphs; Listing 2: Grounding of Line 2 in Listing 1 for vertex v7; Figure 7: An encoding of Subgraph Isomorphism in connection-guarded ASP |
| Open Source Code | No | The paper links a benchmark archive ("Full archive: http://dbai.tuwien.ac.at/proj/decodyn/ijcai17-benchmarks.zip") in the context of its generated instances. It does not state that this archive contains source code for the methodology, which consists chiefly of theoretical program classes and MSO transductions alongside an experimental evaluation of existing solvers. There is therefore no concrete access information for the methodology's source code. |
| Open Datasets | Yes | Benchmark Setting. In our tests we used three batches of instances constructed as presented before, each batch for a different grounding size. The size of the ground program was determined by the number of variables reported by clasp. ... Full archive: http://dbai.tuwien.ac.at/proj/decodyn/ijcai17-benchmarks.zip |
| Dataset Splits | Yes | In each batch we generated 20 instances for each treewidth between 2 and 9 in order to account for fluctuations in the running time. |
| Hardware Specification | No | The paper mentions running time measurements for ASP solvers clasp and WASP, but does not provide any specific hardware details like CPU, GPU models, or memory specifications. It only states that the tests were performed using certain software versions. |
| Software Dependencies | Yes | Grounding was done using the grounder gringo 4.5.4 (Gebser, Kaufmann, Kaminski, Ostrowski, Schaub, & Schneider, 2011). We then measured the running time of the ASP solvers clasp 3.1.4 (Gebser et al., 2011) and WASP 2.0 (Alviano et al., 2015). |
| Experiment Setup | Yes | In our tests we used three batches of instances constructed as presented before, each batch for a different grounding size. The size of the ground program was determined by the number of variables reported by clasp. For the first batch the number of variables in the grounding is approximately 3750, for the second it is 10300, and for the third 16700, where each instance is allowed to deviate up to 100 variables. In each batch we generated 20 instances for each treewidth between 2 and 9 in order to account for fluctuations in the running time. |
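The experiment setup above relies on instances of roughly uniform grounding size but controlled, variable treewidth. As an illustrative sketch only (not the authors' actual instance generator, whose details are in the paper and the linked benchmark archive), tree-like graphs with a prescribed treewidth can be produced as random k-trees, since a k-tree has treewidth exactly k; fixing the vertex count n while varying k then yields same-size graphs of different treewidth:

```python
import random


def random_k_tree(n, k, seed=0):
    """Return the edge set of a random k-tree on vertices 0..n-1 (n > k).

    Construction: start from a (k+1)-clique, then repeatedly attach a
    new vertex to all members of a randomly chosen existing k-clique.
    The resulting graph has treewidth exactly k.
    """
    rng = random.Random(seed)
    # Initial (k+1)-clique on vertices 0..k.
    edges = {(i, j) for i in range(k + 1) for j in range(i + 1, k + 1)}
    cliques = [tuple(range(k + 1))]  # (k+1)-cliques built so far
    for v in range(k + 1, n):
        # Pick a k-subset of some existing (k+1)-clique and attach v to it.
        attach = rng.sample(rng.choice(cliques), k)
        edges.update((min(u, v), max(u, v)) for u in attach)
        cliques.append(tuple(sorted(attach)) + (v,))
    return sorted(edges)
```

For instance, every k-tree on n vertices has exactly k(k+1)/2 + (n - k - 1) * k edges, so graph size grows only mildly with k while treewidth is pinned to k, mirroring the paper's goal of comparing solver behavior across "hard instances of uniform size and construction, but variable treewidth."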