Prosociality in Microtransit
Authors: Divya Sundaresan, Akhira Watson, Eleni Bardaka, Crystal Chen Lee, Christopher B. Mayhorn, Munindar P. Singh
JAIR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our contributions are these: (1) empirical support for the viability of prosociality in microtransit (and constraints on it) through interviews with drivers and focus groups of riders; (2) a prototype mobile app demonstrating how our prosocial intervention can be combined with the transportation backend; (3) a reinforcement learning approach to model a rider and determine the best interventions to persuade that rider toward prosociality; and (4) a cognitive model of rider personas to enable evaluation of alternative interventions. ... 7. Experiments and Results |
| Researcher Affiliation | Academia | Divya Sundaresan EMAIL Department of Computer Science NC State University Raleigh, NC, USA ... All authors are affiliated with NC State University, and their email addresses use the @ncsu.edu domain, indicating an academic affiliation. |
| Pseudocode | No | The paper describes methods like reinforcement learning and cognitive modeling using ACT-R, and provides mathematical equations, but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for its methodology, nor does it include a link to a code repository. It mentions a prototype mobile app and a third-party tool (ArcGIS) but does not make its own implementation code available. |
| Open Datasets | No | The paper describes data collected through interviews and focus groups in Wilson, North Carolina, and presents demographic data in Appendix A, Table 7. However, it does not provide concrete access information (such as a link, DOI, or repository) for this collected dataset to be publicly available. |
| Dataset Splits | No | The paper describes experiments with simulated riders and interactions over episodes of 1,000 time steps. It does not refer to a fixed dataset split into training, validation, and test sets in the conventional sense for machine learning models evaluated on pre-existing datasets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. It discusses hyperparameters but not the underlying computing infrastructure. |
| Software Dependencies | No | The paper mentions using a Python library for ACT-R and a Proximal Policy Optimization (PPO) reinforcement learning approach. However, it does not provide specific version numbers for Python, the ACT-R library (pyactr), or any other key software dependencies like machine learning frameworks used for the PPO implementation. |
| Experiment Setup | Yes | Appendix B.1 Hyperparameters: In our experiments, we used the hyperparameters specified in Tables 8 and 9 for our ACT-R agents and bandits respectively. For the CARS spatial tolerance learning agent, we experimented with the (default) PPO hyperparameters in Table 10 as well as with modified learning rate and number of steps (n steps). ... Table 10: Default PPO hyperparameters: Parameter Value: learning rate 0.0003, number of steps 2048, batch size 64, number of epochs 10, verbose 1 |
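For reference, the default PPO hyperparameters quoted from Table 10 can be written out as a plain configuration dict. The key names follow the Stable-Baselines3 `PPO` constructor convention, which is an assumption on our part: the paper (per the Software Dependencies row above) does not name the RL framework or its version.

```python
# PPO hyperparameters as reported in Table 10 of the paper.
# Key names follow Stable-Baselines3's PPO constructor naming,
# which is an assumption -- the paper does not state the framework used.
ppo_defaults = {
    "learning_rate": 0.0003,
    "n_steps": 2048,   # rollout length collected per policy update
    "batch_size": 64,
    "n_epochs": 10,    # optimization passes over each rollout
    "verbose": 1,
}

# Appendix B.1 also mentions runs with a modified learning rate and
# n_steps for the CARS spatial tolerance agent; the modified values
# are not reproduced here because the table lists defaults only.
```

If Stable-Baselines3 were in fact the framework used, these values could be passed directly, e.g. `PPO("MlpPolicy", env, **ppo_defaults)`; absent a stated dependency list, treat this as a sketch rather than the authors' actual setup.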