On Expected Value Strong Controllability
Authors: Niklas T. Lauffer, William B. Lassiter, Jeremy D. Frank
JAIR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of this formulation on benchmark instances derived from the HEATlab benchmark of (Lund et al., 2017) and the MIT ROVERS benchmark of (Santana et al., 2016). We then show how to use this MILP to reschedule during execution, after time has passed and uncertainty is reduced. We describe different fixed-period rescheduling approaches, including time-based and event-based, and report on the most successful strategies compared to the expected value of the fixed schedule produced by the MILP. All of our methods are evaluated on problems with both symmetric and asymmetric (skewed) probability distributions. |
| Researcher Affiliation | Academia | Niklas T. Lauffer EMAIL University of California, Berkeley Berkeley, CA 94720 William B. Lassiter EMAIL Georgia Institute of Technology Atlanta, GA 30332 Jeremy D. Frank EMAIL NASA Ames Research Center Moffett Field, CA 94035 |
| Pseudocode | Yes | Algorithm 1: Simulated Rescheduling Approaches |
| Open Source Code | No | The paper does not provide a concrete link to source code or explicitly state that the code for the methodology is openly available or included in supplementary materials. |
| Open Datasets | Yes | We evaluate the correctness and computational effectiveness of our approach on the PSTNU instances of the ROVERS dataset generated by (Santana et al., 2016). We also evaluate our approach using HEATlab instances generated by (Lund et al., 2017). |
| Dataset Splits | No | The paper describes how derived benchmarks were created by modifying existing instances (e.g., 'reducing the makespan by up to 50%'), and that a 'representative subset (one in every five) of the instances' was used for some figures. For other analyses, 'Each instance is solved and rescheduled 20 times'. However, it does not provide specific training/test/validation dataset splits typically found in machine learning contexts. |
| Hardware Specification | Yes | All benchmarks are run on a Linux laptop with an Intel 4-core i7-8550U CPU with 16 GB of RAM. |
| Software Dependencies | Yes | Our piecewise linear approximation of distribution functions and MILP were implemented in Python, and the MILP was solved using Gurobi 8.1.1. |
| Experiment Setup | Yes | Our piecewise linear approximation of Fij used 50 pieces. We created new benchmarks from these instances by adding preferences to constraints, reducing the makespan, and adding more rejectable constraints to force tradeoffs among the constraints that are satisfied. Each instance is solved and rescheduled 20 times, and the values in the plot represent averages over all instances in the benchmark. |
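The paper does not release code, but the experiment setup above (a 50-piece piecewise linear approximation of the distribution functions Fij, implemented in Python) can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the authors' implementation: it assumes a normal distribution for Fij, evenly spaced breakpoints over a ±4σ range, and plain linear interpolation; the paper's actual breakpoint placement, distribution families, and the encoding of the pieces into the Gurobi MILP are not specified in the quoted material.

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def piecewise_linear_cdf(mu=0.0, sigma=1.0, pieces=50, span=4.0):
    """Sample the CDF at pieces+1 evenly spaced breakpoints over
    [mu - span*sigma, mu + span*sigma], yielding `pieces` linear segments.
    50 pieces matches the setup reported in the paper; the breakpoint
    placement and span are assumptions for illustration."""
    lo, hi = mu - span * sigma, mu + span * sigma
    xs = [lo + i * (hi - lo) / pieces for i in range(pieces + 1)]
    ys = [normal_cdf(x, mu, sigma) for x in xs]
    return xs, ys

def eval_pwl(xs, ys, x):
    """Evaluate the piecewise linear approximation at x,
    clamping to the endpoint values outside the sampled range."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
```

In a MILP such as the one solved with Gurobi 8.1.1 here, each segment of such an approximation is typically encoded with SOS2 or binary selection variables so the objective can reference the (nonlinear) CDF through linear constraints only.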