Value Preferences Estimation and Disambiguation in Hybrid Participatory Systems

Authors: Enrico Liscio, Luciano C. Siebert, Catholijn M. Jonker, Pradeep K. Murukannaiah

JAIR 2025

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed methods on a dataset of a large-scale survey on energy transition. The results show that explicitly addressing inconsistencies between choices and motivations improves the estimation of an individual's value preferences. The disambiguation strategy does not show substantial improvements when compared to similar baselines; however, we discuss how the novelty of the approach can open new research avenues and propose improvements to address the current limitations.
Researcher Affiliation | Academia | Enrico Liscio (EMAIL); Luciano C. Siebert (EMAIL), Delft University of Technology, the Netherlands; Catholijn M. Jonker (EMAIL), Delft University of Technology, the Netherlands, and Leiden University, the Netherlands; Pradeep K. Murukannaiah (EMAIL), Delft University of Technology, the Netherlands
Pseudocode | Yes | Algorithm 1: Method TB; Algorithm 2: Method MC; Algorithm 3: Method MO
Open Source Code | Yes | The code is available at https://github.com/enricoliscio/value-preferences-estimation
Open Datasets | Yes | We use data from a PVE conducted between April and May 2020 involving 1376 participants (Itten & Mouter, 2022).
Dataset Splits | Yes | As is common in AL settings, we warm up the NLP model by initializing the set of labeled participants with 10% of the available participants, and the set of labeled motivations with the motivations provided by those participants. [...] We iterate the procedure for 5 iteration steps and repeat it in a 10-fold cross-validation.
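The split protocol quoted above can be sketched as follows. The model and the acquisition strategy are placeholders (the excerpt does not specify them): random sampling stands in for the paper's selection step, and the per-step query batch size is an arbitrary assumption.

```python
import random

def active_learning_cv(participants, n_folds=10, warmup_frac=0.10,
                       n_iterations=5, query_batch=8):
    """Sketch of the evaluation protocol: for each of 10 folds,
    warm-start with 10% of the training pool, then run 5 AL steps.
    Returns the number of labeled participants per fold."""
    random.seed(0)
    folds = [participants[i::n_folds] for i in range(n_folds)]
    labeled_counts = []
    for k in range(n_folds):
        test = set(folds[k])                      # held-out fold
        pool = [p for p in participants if p not in test]
        random.shuffle(pool)
        n_warm = max(1, int(len(pool) * warmup_frac))
        labeled, unlabeled = pool[:n_warm], pool[n_warm:]
        for _ in range(n_iterations):
            # Placeholder acquisition: random sampling instead of the
            # paper's (unspecified here) selection strategy.
            picked, unlabeled = unlabeled[:query_batch], unlabeled[query_batch:]
            labeled.extend(picked)
            # ... retrain the NLP model on `labeled` here ...
        labeled_counts.append(len(labeled))
    return labeled_counts
```

With the 1376 participants reported for the PVE dataset, each fold warm-starts with roughly 123 labeled participants and adds a fixed query batch at each of the 5 iterations.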
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. It mentions using NLP models but does not specify GPU/CPU models or other hardware.
Software Dependencies | No | The paper mentions using 'RobBERT (Delobelle et al., 2020)', 'RoBERTa model (Liu et al., 2019)', and 'XLNet (Yang et al., 2019)' but does not provide specific version numbers for these or any other ancillary software libraries or programming languages used.
Experiment Setup | Yes | For all models, we used a learning rate of 1e-5, a batch size of 16, and trained for 10 epochs. We used the AdamW optimizer with a warm-up ratio of 0.1 and a weight decay of 0.01.
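The quoted setup gives the warm-up ratio but not the schedule shape. A common reading (and the Hugging Face Transformers default) is linear warm-up followed by linear decay to zero; a minimal sketch under that assumption:

```python
def lr_at_step(step, total_steps, base_lr=1e-5, warmup_ratio=0.1):
    """Learning rate at a given optimizer step, assuming linear
    warm-up over the first `warmup_ratio` of steps and linear decay
    to zero afterwards (an assumption; the paper does not state it)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)          # ramp up
    remaining = total_steps - warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, remaining))  # decay
```

With the reported batch size of 16 and 10 epochs, `total_steps` would be `ceil(n_examples / 16) * 10`; the peak learning rate of 1e-5 is reached after the first 10% of steps.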