Integrating Queueing Theory and Scheduling for Dynamic Scheduling Problems
Authors: D. Terekhov, T. T. Tran, D. G. Down, J.C. Beck
JAIR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We empirically demonstrate that for a dynamic flowshop, the use of combinatorial reasoning has little impact on schedule quality beyond queueing approaches. In contrast, for the more complicated flexible queueing network, a novel algorithm that combines long-term guidance from queueing theory with short-term combinatorial decision making outperforms all other tested approaches." Section 3.3 (Numerical Results): "We present experiments comparing the performance of FCFS, SPTsum, makespan and completionTime models for minimizing the mean flow time over a long time horizon." Section 4.3 (Numerical Results): "We experimentally compare the mean flow time performance of our proposed models." |
| Researcher Affiliation | Academia | Daria Terekhov and Tony T. Tran, Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Ontario, Canada; Douglas G. Down, Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada; J. Christopher Beck, Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Ontario, Canada |
| Pseudocode | No | The paper provides mathematical formulations for MIP models in Section 4.1.4 and 4.1.5, such as constraints and objective functions. However, these are presented as mathematical expressions within paragraphs, not as clearly labeled "Pseudocode" or "Algorithm" blocks with structured steps. |
| Open Source Code | No | The paper does not explicitly state that source code for the described methodology is being released, nor does it provide a link to a code repository. It mentions using third-party tools like "ILOG Scheduler 6.5" and "IBM ILOG CPLEX 12.1" but does not offer its own implementation code. |
| Open Datasets | No | The paper does not use pre-existing publicly available datasets. Instead, it describes generating synthetic data based on specified distributions: "We fixed the mean inter-arrival time, 1/λ, to 10, and varied the load on the system from 0.1 to 0.95 by changing the means of the processing time distributions from 1 to 9.496." and "Job arrivals follow a Poisson process. In order to maintain the relative processing times of a single job on each server, an amount of work for each job, wj, is generated using an exponential distribution with rate 1; it is then augmented linearly by the processing rate for a server, based on the job's class, to create the processing time dmj = wj/µmk." |
| Dataset Splits | No | The paper describes how problem instances are generated and used for simulation, but it does not specify any training/test/validation dataset splits. For example, in Section 3.3, it states: "Each point in the figure represents the mean flow time over 100 problem instances of 55,000 jobs each." In Section 4.3, it says: "At each load, 20 instances are run for 10,000 time units, resulting in a total of 100 simulations per model for each of the two systems." |
| Hardware Specification | Yes | "All experiments are performed on a Dual Core AMD 270 CPU with 1 MB cache, 4GB of main memory, running Red Hat Enterprise Linux 4." On this machine, the completionTime model is able to solve 100% of sub-problems to optimality at loads 0.1 and 0.3, but this percentage decreases to an average of 81.3% for instances at the 0.95 load. Experiments with a time limit of 5 seconds, run on a machine with 4 Intel Core i7-2600 CPUs @ 3.40GHz with 8 MB cache and 9871 MB main memory, running Red Hat Enterprise Linux 6.3, showed very similar performance: the mean flow time at the 0.95 load decreased to 368.985 (from 371.434 for the completionTime model with the 1 second time limit) and the percentage of instances solved to optimality at the 0.95 load increased to 83.8%. |
| Software Dependencies | Yes | The completionTime model was implemented via constraint programming in ILOG Scheduler 6.5 and uses the completion global constraint (Kovács & Beck, 2011). The LP and MIP models use IBM ILOG CPLEX 12.1. |
| Experiment Setup | Yes | "We fixed the mean inter-arrival time, 1/λ, to 10, and varied the load on the system from 0.1 to 0.95 by changing the means of the processing time distributions from 1 to 9.496." In these experiments, the completionTime model was run with a time limit of 1 second per sub-problem in order to ensure reasonable run-times. "Job arrivals follow a Poisson process. In order to maintain the relative processing times of a single job on each server, an amount of work for each job, wj, is generated using an exponential distribution with rate 1; it is then augmented linearly by the processing rate for a server, based on the job's class, to create the processing time dmj = wj/µmk." Preliminary results showed that α = 0.6 provided the best performance for the hybrid method (Tran, 2011). For each test case, five different loads between 0.8 and 0.99 of the maximum theoretical load are simulated. At each load, 20 instances are run for 10,000 time units, resulting in a total of 100 simulations per model for each of the two systems. |
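The instance-generation recipe quoted above (Poisson arrivals with mean inter-arrival time 1/λ = 10, exponentially distributed work, load controlled through the processing-time mean) can be sketched in a few lines. This is a simplified single-machine illustration of the dispatch-rule comparison, not the paper's flowshop or flexible queueing network; the function names, the seed, and the FCFS/SPT harness are our own assumptions, not code from the paper.

```python
import heapq
import random

def generate_instance(n_jobs, mean_interarrival=10.0, mean_proc=8.0, seed=0):
    """Generate (arrival_time, processing_time) pairs.

    Poisson arrivals (exponential inter-arrival times with mean 10) and
    exponential processing times follow the paper's setup; the implied
    load is rho = mean_proc / mean_interarrival.
    """
    rng = random.Random(seed)
    t, jobs = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(1.0 / mean_interarrival)
        jobs.append((t, rng.expovariate(1.0 / mean_proc)))
    return jobs

def simulate(jobs, rule="FCFS"):
    """Non-preemptive single-machine dispatch; returns mean flow time."""
    clock, total_flow, done = 0.0, 0.0, 0
    queue, i, n = [], 0, len(jobs)
    while done < n:
        # Admit every job that has arrived by the current clock.
        while i < n and jobs[i][0] <= clock:
            arr, proc = jobs[i]
            # SPT prioritizes the shortest job; FCFS the earliest arrival.
            key = proc if rule == "SPT" else arr
            heapq.heappush(queue, (key, arr, proc))
            i += 1
        if not queue:
            clock = jobs[i][0]  # machine idles until the next arrival
            continue
        _, arr, proc = heapq.heappop(queue)
        clock += proc
        total_flow += clock - arr  # flow time = completion - arrival
        done += 1
    return total_flow / n

jobs = generate_instance(2000, seed=42)  # load ~0.8
fcfs = simulate(jobs, "FCFS")
spt = simulate(jobs, "SPT")
```

At a load of 0.8 over a long horizon, SPT yields a noticeably lower mean flow time than FCFS, which is the queueing-theoretic baseline the paper's combinatorial models are measured against.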