Reasoning with PCP-Nets

Authors: Cristina Cornelio, Judy Goldsmith, Umberto Grandi, Nicholas Mattei, Francesca Rossi, K. Brent Venable

JAIR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results allow us to identify the aggregation method that better represents the given set of CP-nets and the most efficient dominance procedure to be used in the multi-agent context. In Sections 4 and 5 we address optimality and dominance tasks at the level of a single agent's preferences (single network) and provide empirical experiments for our algorithms. In Section 6 we move to the multi-agent setting, showing how to compute optimality and dominance when we have a collection of preferences (networks), again using both theoretical and empirical tools.
Researcher Affiliation | Collaboration | Cristina Cornelio EMAIL Samsung AI, Cambridge, United Kingdom; Judy Goldsmith EMAIL University of Kentucky, Lexington, KY, USA; Umberto Grandi EMAIL Institut de Recherche en Informatique de Toulouse (IRIT), University of Toulouse, France; Nicholas Mattei EMAIL Department of Computer Science, Tulane University, New Orleans, LA, USA; Francesca Rossi EMAIL IBM Research, T.J. Watson Research Center, Yorktown Heights, New York, USA; K. Brent Venable EMAIL IHMC and University of West Florida, Florida, USA
Pseudocode | No | The paper describes algorithms verbally, for example: 'the unique optimal outcome can be found in linear time by sweeping through the CP-net (Boutilier et al., 2004), assigning the most preferred values in the preference tables. We sweep through the CP-net, following the arrows in the dependency graph and assigning at each step the most preferred value in the preference table.' However, it does not present any formal pseudocode or algorithm blocks.
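The sweep-forward procedure quoted in this row can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the CP-net encoding below (a parents map plus preference tables keyed by tuples of parent values) is an assumption made here for clarity.

```python
def optimal_outcome(parents, cpt):
    """Sweep-forward outcomes for an acyclic CP-net (Boutilier et al., 2004).

    parents: feature -> tuple of parent features (dependency graph, acyclic).
    cpt: feature -> dict mapping a tuple of parent values to the most
         preferred value of that feature under those parent values.
    """
    assigned = {}
    remaining = set(parents)
    while remaining:
        # Pick any feature whose parents are all assigned (topological sweep).
        ready = next(f for f in remaining
                     if all(p in assigned for p in parents[f]))
        key = tuple(assigned[p] for p in parents[ready])
        # Assign the most preferred value given the parents' assignment.
        assigned[ready] = cpt[ready][key]
        remaining.remove(ready)
    return assigned

# Toy CP-net over two binary features: A prefers 1; B prefers to match A.
parents = {"A": (), "B": ("A",)}
cpt = {"A": {(): 1}, "B": {(0,): 0, (1,): 1}}
print(optimal_outcome(parents, cpt))  # {'A': 1, 'B': 1}
```

Each feature is visited exactly once and each lookup is constant-time in this encoding, matching the linear-time claim quoted above.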
Open Source Code | No | The paper does not contain an explicit statement about making source code available or a link to a code repository. It mentions future work related to learning PCP-nets, implying code for such tasks is not yet released: 'We already have initial results on learning PCP-nets with only independent features, but since a separable structure is not always compatible with the input data, we intend to define methods to also learn non-separable PCP-nets, which are more expressive.'
Open Datasets | No | For all our experiments we randomly generate collections of CP-nets and PCP-nets. ... Therefore we use an approximation method for random generation of CP-nets: ... To generate a profile of m CP-nets we use the method described above independently.
Dataset Splits | No | The paper generates synthetic data for its experiments but does not describe conventional training, validation, or test splits. Instead, it describes how instances are generated and how many are used for evaluation (e.g., 'We compute the mean of the dominance approximation interval over 100 PCP-nets for each set of parameters. For each PCP-net, we take the mean over 100 outcome pairs.')
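The averaging protocol quoted in this row (a mean over 100 PCP-nets, and for each net a mean over 100 outcome pairs) amounts to a nested mean. A minimal sketch, where `sample_pcp_net`, `sample_outcome_pair`, and `dominance_interval` are hypothetical stand-ins for the paper's generation and dominance routines:

```python
def mean_interval_width(sample_pcp_net, sample_outcome_pair,
                        dominance_interval, nets=100, pairs=100):
    """Nested mean: over `nets` random PCP-nets, then over `pairs`
    outcome pairs per net. All three callables are placeholders."""
    per_net_means = []
    for _ in range(nets):
        net = sample_pcp_net()
        widths = [dominance_interval(net, *sample_outcome_pair(net))
                  for _ in range(pairs)]
        per_net_means.append(sum(widths) / len(widths))   # mean over pairs
    return sum(per_net_means) / len(per_net_means)        # mean over nets
```

The two-level averaging keeps every PCP-net equally weighted regardless of how its outcome pairs vary.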
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running experiments. It mentions variability in runtime ('Some instances took seconds, others took days') but no hardware specifications.
Software Dependencies | No | The paper does not name the software packages or version numbers required to replicate the experiments.
Experiment Setup | No | The paper details parameters for generating synthetic data instances and for evaluating algorithms (e.g., 'In this experiment we vary n ∈ [0, 30] and fix the maximum k to n−1, n/2 and n/4.' and 'the profiles have 20 individual CP-nets and the number of features varies from 1 to 10, and each has at most 2 parents.'), but it does not specify hyperparameters, optimization settings, or other system-level training configurations typically associated with machine learning models.