Chance-constrained Static Schedules for Temporally Probabilistic Plans
Authors: Cheng Fang, Andrew J. Wang, Brian C. Williams
JAIR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform numerical experiments to show the advantages of reasoning over probabilistic uncertainty, by comparing the utility of schedules generated with risk allocation against those generated from reasoning over bounded uncertainty. We also empirically show that solution time is greatly reduced by incorporating conflict-directed risk allocation. (...) The algorithms proposed were evaluated on a set of 138 benchmark problems, inspired by autonomous underwater vehicle (AUV) scenarios. (...) Figure 15 shows the makespans of the networks obtained using each solution method, compared against the total number of constraints. |
| Researcher Affiliation | Academia | Cheng Fang EMAIL Andrew J. Wang EMAIL Brian C. Williams EMAIL MIT Computer Science and Artificial Intelligence Laboratory 32 Vassar Street Cambridge, MA 02139 USA |
| Pseudocode | Yes | Algorithm 1: Convex program encoding for strong controllability of cc-pSTP (...) Algorithm 2: Rubato Feasibility (...) Algorithm 3: Rubato Optimizing during Risk Allocation (...) Algorithm 4: Rubato Feasible Seed |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide links to a code repository. It mentions "supplemental material" in Section 6, but only for "full tables of results", not code. |
| Open Datasets | No | The paper describes generating its own benchmark problems: "The algorithms proposed were evaluated on a set of 138 benchmark problems, inspired by autonomous underwater vehicle (AUV) scenarios. (...) The promising locations were randomly generated from the region within 10km of (33.251, -121.555), in the North Pacific." There is no indication that these generated datasets are publicly available, nor are any existing public datasets used or cited with access information. |
| Dataset Splits | No | The paper describes generating "benchmark problems" or "scenarios" for evaluation, but the nature of these problems (scheduling optimization) does not involve traditional training/test/validation dataset splits as in machine learning tasks. The paper does not mention any splitting methodology for the generated scenarios. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments described in Section 6, "Numerical Results". |
| Software Dependencies | No | The paper mentions "a convex solver, for example SNOPT (Gill et al., 2005)" in Section 4.3.3. While it names a specific solver, it does not provide a version number for SNOPT or list any other software dependencies with versions, as would be required for reproducibility. |
| Experiment Setup | Yes | In each of the scenarios, a number of AUVs must coordinate to explore a series of promising locations. (...) The vehicle traversal durations are modeled as normally distributed random variables with parameters derived from distance traveled and an average vehicle speed uniformly sampled between 10km/h and 20km/h, and each vehicle must spend a minimum amount of time exploring each area. The benchmark set contained 900 randomly generated scenarios. For each scenario, there were between 1 and 12 robots, up to 5 dives for each robot, and up to 4 activities per dive. For each scenario, we required chance-constrained schedules with risk bounds of 5%, 10%, 20% and 40% respectively. We used the makespan of the network as the objective function. |
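The experiment-setup parameters quoted above (1–12 robots, up to 5 dives each, up to 4 activities per dive, speeds sampled uniformly between 10 km/h and 20 km/h, locations within 10 km of a reference point, normally distributed traversal durations) can be sketched as a scenario generator. This is an illustrative reconstruction only: the field names, the disc-sampling of locations, the choice of standard deviation (20% of the mean), and the simplification of measuring distance from the reference point are all assumptions, not the authors' actual benchmark generator.

```python
import math
import random

def generate_scenario(seed=0):
    """Generate one AUV scheduling scenario loosely following the parameters
    quoted from the paper; everything beyond those numbers is assumed."""
    rng = random.Random(seed)
    n_robots = rng.randint(1, 12)          # between 1 and 12 robots
    scenario = []
    for r in range(n_robots):
        speed_kmh = rng.uniform(10.0, 20.0)  # average vehicle speed
        dives = []
        for _ in range(rng.randint(1, 5)):   # up to 5 dives per robot
            activities = []
            for _ in range(rng.randint(1, 4)):  # up to 4 activities per dive
                # Promising location within 10 km of the reference point,
                # drawn uniformly over a disc via polar sampling.
                radius_km = 10.0 * math.sqrt(rng.random())
                theta = rng.uniform(0.0, 2.0 * math.pi)
                # Simplification: distance traveled taken from the reference
                # point rather than between consecutive locations.
                dist_km = radius_km
                mean_h = dist_km / speed_kmh
                # Traversal duration modeled as Normal(mean, sd);
                # the 20% relative sd is an assumption.
                activities.append({"mean_h": mean_h, "sd_h": 0.2 * mean_h})
            dives.append(activities)
        scenario.append({"robot": r, "speed_kmh": speed_kmh, "dives": dives})
    return scenario
```

A full reproduction would additionally enforce the minimum exploration time per area, solve for a chance-constrained schedule at each risk bound (5%, 10%, 20%, 40%), and minimize makespan; none of those steps are shown here.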