Towards Partial Order Reductions for Strategic Ability

Authors: Wojciech Jamroga, Wojciech Penczek, Teofil Sidoruk, Piotr Dembiński, Antoni Mazurkiewicz

JAIR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To evaluate the gains that the technique produces for asynchronous multi-agent systems, we have used three benchmarks: classical Train-Gate-Controller, Pipeline, and Asynchronous Simple Voting. We have implemented the corresponding asynchronous MAS in Promela, the modeling language of the Spin model checker (Holzmann, 1997). Then, we used the partial order reduction functionality of Spin to produce reduced models, and compared their sizes to the full models. The results are presented in Section 7.2.
Researcher Affiliation | Academia | Wojciech Jamroga (EMAIL): Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland; Interdisciplinary Centre for Security, Reliability, and Trust, University of Luxembourg, Luxembourg. Wojciech Penczek (EMAIL): Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland. Teofil Sidoruk (EMAIL): Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland; Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland. Piotr Dembiński (EMAIL): Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland. Antoni Mazurkiewicz (EMAIL): Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland.
Pseudocode | Yes | mcheck1(M, g, ⟨⟨A⟩⟩γ): 1. Guess a joint ir-strategy σA; 2. Prune M according to σA, obtaining model M′; 3. Model-check the CTL formula ¬AG false ∧ Aγ (the set of paths is nonempty and, for all paths, γ) in (M′, g), and return the answer. Since σA is of at most linear size with respect to |M|, and model checking of Aγ can be done in deterministic polynomial time w.r.t. |M|, both with and without fairness assumptions (Baier & Katoen, 2008), we obtain the bound.
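The three steps of mcheck1 can be illustrated with a minimal Python sketch. Assumptions to note: the paper's nondeterministic "guess" of a strategy is replaced by exhaustive enumeration (so this sketch does not reflect the stated polynomial bound), the coalition A is a single agent with memoryless (ir) strategies, and the goal is simplified to a safety objective γ = G p. The model TRANS, the states, and the proposition SAFE are hypothetical, not taken from the paper's benchmarks.

```python
# Hypothetical toy model, not one of the paper's benchmarks.
# Transitions: state -> {(coalition_action, opponent_action): next_state}
from itertools import product

TRANS = {
    "s0": {("a", "x"): "s1", ("a", "y"): "s2", ("b", "x"): "s3", ("b", "y"): "s3"},
    "s1": {("a", "x"): "s1", ("b", "x"): "s1"},
    "s2": {("a", "x"): "s3", ("b", "x"): "s2"},
    "s3": {("a", "x"): "s3", ("b", "x"): "s3"},
}
SAFE = {"s0", "s1", "s2"}  # states satisfying the atomic proposition p

def coalition_actions(state):
    return sorted({ca for (ca, _) in TRANS[state]})

def prune(strategy):
    """Step 2: keep only transitions consistent with the guessed strategy."""
    return {s: {acts: nxt for acts, nxt in succ.items() if acts[0] == strategy[s]}
            for s, succ in TRANS.items()}

def check_AG_p(model, init):
    """Step 3 (for gamma = G p): every state reachable in the pruned model
    satisfies p, and every reachable state has a successor, so the set of
    paths from the initial state is nonempty."""
    seen, stack = set(), [init]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if s not in SAFE or not model[s]:
            return False
        stack.extend(model[s].values())
    return True

def mcheck1(init="s0"):
    """Step 1: 'guess' a joint ir-strategy by brute-force enumeration."""
    states = sorted(TRANS)
    for choice in product(*(coalition_actions(s) for s in states)):
        strategy = dict(zip(states, choice))
        if check_AG_p(prune(strategy), init):
            return True, strategy
    return False, None
```

On this toy model, mcheck1 finds a winning strategy (play "a" in s0, then loop safely in s1 or s2), which mirrors how a successful guess in step 1 makes the CTL check of step 3 succeed on the pruned model.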
Open Source Code | No | The paper states: 'We have implemented the corresponding asynchronous MAS in Promela, the modeling language of the Spin model checker (Holzmann, 1997).' This indicates the authors used an existing tool (Spin) and implemented their models within its language (Promela), but does not explicitly state that the code for their novel contribution (partial order reduction for strategic abilities) is open-sourced, nor does it provide a link.
Open Datasets | Yes | We ran our experiments for three classes of scalable asynchronous multi-agent systems: Train-Gate-Controller, Pipeline, and Asynchronous Simple Voting. Train-Gate-Controller (TGC): the TGC benchmark, inspired by Alur et al. (1993, 1998) and Hoek and Wooldridge (2002). Pipeline: a benchmark adapted from Peled (1993). Finally, we propose a new benchmark, inspired by the simple model of voting and coercion from Jamroga et al. (2017).
Dataset Splits | No | The paper focuses on model checking and partial order reduction for strategic abilities, evaluating the reduction size on different instances of benchmarks (e.g., TGCn, Pipeline with n processes). It does not involve training machine learning models or using typical datasets with explicit training, validation, or test splits. The concept of 'dataset splits' as described in the prompt is not applicable to this type of research.
Hardware Specification | No | The paper describes experimental evaluation using the Spin model checker but does not provide any specific details regarding the hardware (CPU, GPU, memory, etc.) on which these experiments were conducted.
Software Dependencies | No | We have implemented the corresponding asynchronous MAS in Promela, the modeling language of the Spin model checker (Holzmann, 1997).
Experiment Setup | No | The paper describes the theoretical foundation and algorithms for partial order reduction and evaluates its effectiveness on benchmarks. It specifies input formulas (e.g., φ1, φ2, φ3) for verification, but it does not detail typical experimental setup parameters such as hyperparameters, learning rates, batch sizes, or specific training configurations, as these are concepts primarily relevant to machine learning or optimization, not the model checking context presented.