Declarative Algorithms and Complexity Results for Assumption-Based Argumentation
Authors: Tuomo Lehtonen, Johannes P. Wallner, Matti Järvisalo
JAIR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show via an extensive empirical evaluation that our approach significantly improves on the empirical performance of current ABA reasoning systems. |
| Researcher Affiliation | Academia | Tuomo Lehtonen EMAIL Helsinki Institute for Information Technology HIIT, Department of Computer Science, University of Helsinki, Finland; Johannes P. Wallner EMAIL Institute of Software Technology, Graz University of Technology, Austria; Matti Järvisalo EMAIL Helsinki Institute for Information Technology HIIT, Department of Computer Science, University of Helsinki, Finland |
| Pseudocode | Yes | Listing 1: Module πcommon; Listing 2: Module πadm; Listing 3: Module πgrd; Listing 4: Module πprefs; Listing 5: Module πstb+; Listing 6: Module πgrd+ subroutine; Algorithm 1: Computing the ideal assumption set (Dunne, 2009, Algorithm 3); Algorithm 2: Computing the <-grounded assumption set |
| Open Source Code | Yes | Our implementation is available at https://bitbucket.org/coreo-group/aspforaba. |
| Open Datasets | Yes | For comparing the direct ASP-based approach to ABAGRAPH and ABA2AF, we employed the 680 ABA frameworks, containing up to 90 sentences, and the associated queries used earlier by Craven and Toni (2016) and Lehtonen et al. (2017) in experiments on ABAGRAPH and ABA2AF (http://robertcraven.org/proarg/experiments.html). |
| Dataset Splits | No | The paper describes how instances were filtered and generated, for example: 'We filtered the trivial instances out for the acceptance problems, leaving 1728 instances for credulous reasoning under admissible and grounded and 4613 for skeptical reasoning under stable semantics. For enumeration under preferred semantics, all of the 680 base frameworks were used.' However, it does not specify explicit training/validation/test splits or reference pre-defined splits, as the evaluation is a solver benchmark rather than a machine-learning setup. |
| Hardware Specification | Yes | All experiments were run on 2.83-GHz Intel Xeon E5440 quad-core machines with 32-GB RAM using a 600-second time limit per instance. |
| Software Dependencies | Yes | For our ASP-based approach as well as ABA2AF, we used CLINGO version 5.2.2 (Gebser et al., 2016) as the ASP solver. We used version 3 of ASPRIN for the ASP approach for preferred semantics. For ABAGRAPH, we used SICStus Prolog version 4.5. |
| Experiment Setup | Yes | All experiments were run on 2.83-GHz Intel Xeon E5440 quad-core machines with 32-GB RAM using a 600-second time limit per instance. For ABA complete and ideal semantics, we generated 20 frameworks for each number of sentences 10, 14, 18, 22, 26, 30, for a total of 120 frameworks. We tested credulous acceptance of 10 arbitrary query sentences per ABA framework under admissible, complete and stable semantics, giving a total of 100 instances per number of sentences. |
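The reasoning task benchmarked above, credulous acceptance under admissible semantics, can be made concrete with a small sketch. The following Python code is purely illustrative and is not the paper's ASP encoding (the paper solves the task declaratively with CLINGO); it naively enumerates assumption subsets of a hypothetical toy flat ABA framework, which is only feasible because the framework is tiny.

```python
from itertools import combinations

# Hypothetical toy flat ABA framework (illustrative, not from the paper):
# assumptions a, b with contrary(a) = p, contrary(b) = q,
# and rules p <- b and q <- a, so a and b attack each other.
assumptions = {"a", "b"}
contrary = {"a": "p", "b": "q"}
rules = [("p", {"b"}), ("q", {"a"})]  # (head, body) pairs

def derive(S):
    """Forward-chain the rules from assumption set S to a fixpoint."""
    derived = set(S)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def attacks(S, a):
    """S attacks assumption a iff S derives the contrary of a."""
    return contrary[a] in derive(S)

def subsets(xs):
    xs = list(xs)
    return (set(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r))

def admissible(S):
    """Conflict-free and defends itself against every attacker."""
    if any(attacks(S, a) for a in S):
        return False
    for B in subsets(assumptions):
        if any(attacks(B, a) for a in S):
            if not any(attacks(S, b) for b in B):
                return False
    return True

def credulously_accepted(sentence):
    """Is the sentence derivable from some admissible assumption set?"""
    return any(sentence in derive(S)
               for S in subsets(assumptions) if admissible(S))
```

Here {a} and {b} are each admissible (each counter-attacks its only attacker), while {a, b} is not conflict-free; hence q, derived from {a}, is credulously accepted under admissible semantics. A declarative ASP encoding avoids this exponential subset enumeration, which is what the paper's empirical evaluation measures.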