Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Fast Learning for Renewal Optimization in Online Task Scheduling

Authors: Michael J. Neely

JMLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This section presents simulations of the proposed algorithm under the initial condition θ[0] = θmin and stepsize η[k] = 1/((k+2)Tmin). Fig. 3 compares the proposed algorithm with the greedy strategy for two different values of p. Data is averaged over 5000 independent sample paths.
Researcher Affiliation | Academia | Michael J. Neely (EMAIL), Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90089-2565, USA
Pseudocode | No | The paper describes the iterative algorithm in Section 4 using numbered steps and mathematical equations (24)-(26). It details the logic as: 'On each frame k ∈ {0, 1, 2, . . .} do: Observe S[k] ∈ ΩS and the current θ[k] value. Choose (T[k], R[k]) to solve: Maximize: R[k] − θ[k]T[k] (24), Subject to: (T[k], R[k]) ∈ D(S[k]) (25), breaking ties arbitrarily. Update θ[k] via the iteration: θ[k+1] = [θ[k] + η[k](R[k] − θ[k]T[k])]_{θmin}^{θmax} (26)'. This is a textual description with equations, not a structured pseudocode or algorithm block.
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper describes hypothetical systems for simulation, generating data based on specified distributions and parameters, rather than utilizing or providing access to pre-existing public datasets. For instance, in Section 8.1, it states: 'On each frame k we receive N[k] new potential projects, where N[k] ∈ {0, 1, 2, 3} with P[N[k] = i] = p_i and p0 = 0.1, p1 = 0.9 − p, p2 = p/2, p3 = p/2, where p ∈ [0, 0.9] is a parameter varied in the simulations... The vectors (Tj, Rj) for j ∈ {1, . . . , i} are generated independently with Tj ~ Uniform([1, 10]) and Rj = Aj·Tj where Aj ~ Unif([0, 50]).'
Dataset Splits | No | The paper uses synthetically generated data for simulations, with '5000 independent sample paths' being averaged. It does not use pre-collected datasets that would require explicit training/validation/test splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the simulations or experiments.
Software Dependencies | No | The paper does not provide specific software names with version numbers or library dependencies used for implementing the algorithms and running simulations.
Experiment Setup | Yes | The simulation section (Section 8) provides specific details about the experimental setup for each system. For System 1, it states: 'The proposed algorithm uses [θmin, θmax] = [0, 50] and Tmin = 1.' For System 2: 'The proposed algorithm uses [θmin, θmax] = [1, 2] and Tmin = 1.' For System 3: 'We use [θmin, θmax] = [1, 3], Tmin = 1.' It also specifies the initial condition as 'initial condition θ[0] = θmin and stepsize η[k] = 1/((k+2)Tmin).'
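The iteration (24)-(26) quoted under "Pseudocode" can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: `solve_frame` stands in for an unspecified per-frame oracle that maximizes R − θT over D(S[k]), and all names are illustrative.

```python
def clip(x, lo, hi):
    """Projection [.]_{theta_min}^{theta_max} used in update (26)."""
    return max(lo, min(hi, x))

def run_renewal_opt(solve_frame, sample_state, num_frames,
                    theta_min, theta_max, t_min):
    """Sketch of (24)-(26): on each frame k, observe S[k], pick
    (T[k], R[k]) maximizing R - theta*T over D(S[k]), then update
    theta with stepsize eta[k] = 1/((k+2)*T_min)."""
    theta = theta_min                      # initial condition theta[0] = theta_min
    for k in range(num_frames):
        s = sample_state()                 # observe S[k]
        t, r = solve_frame(s, theta)       # argmax of R - theta*T over D(S[k])
        eta = 1.0 / ((k + 2) * t_min)      # stepsize eta[k]
        theta = clip(theta + eta * (r - theta * t), theta_min, theta_max)
    return theta
```

With a degenerate oracle that always returns the single pair (T, R) = (1, 5), the iterate converges toward the reward rate R/T = 5, which matches the renewal-reward interpretation of θ.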
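The synthetic data generation quoted under "Open Datasets" (Section 8.1 of the paper) can likewise be sketched; the function name and interface here are assumptions for illustration, following only the stated distributions.

```python
import random

def sample_frame(p, rng=random):
    """Draw one frame of the hypothetical system in Section 8.1:
    N[k] in {0,1,2,3} projects with P[N[k]=i] = p_i, each project
    having T_j ~ Uniform([1, 10]) and R_j = A_j * T_j with
    A_j ~ Uniform([0, 50])."""
    probs = [0.1, 0.9 - p, p / 2, p / 2]   # p_0..p_3, sums to 1 for p in [0, 0.9]
    u, acc = rng.random(), 0.0
    for i, pi in enumerate(probs):
        acc += pi
        if u < acc:
            n = i
            break
    else:
        n = len(probs) - 1                 # guard against float rounding
    projects = []
    for _ in range(n):
        t = rng.uniform(1, 10)
        a = rng.uniform(0, 50)
        projects.append((t, a * t))        # (T_j, R_j)
    return projects
```

Averaging a performance metric over 5000 independent calls per frame index would reproduce the "5000 independent sample paths" setup described above.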