Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning Augmented Energy Minimization via Speed Scaling

Authors: Etienne Bamas, Andreas Maggiori, Lars Rohwedder, Ola Svensson

NeurIPS 2020 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we will test the LAS algorithm on both synthetic and real datasets. We will calculate the competitive ratios with respect to the offline optimum.
Researcher Affiliation | Academia | Etienne Bamas (EPFL, Switzerland); Andreas Maggiori (EPFL, Switzerland); Lars Rohwedder (EPFL, Switzerland); Ola Svensson (EPFL, Switzerland)
Pseudocode | Yes | Algorithm 1 LEARNING AUGMENTED SCHEDULING (LAS). Input: T, D, and w^pred initially, and w^real in an online fashion. Output: a feasible schedule (s_i)_{i=0}^{T+D}. Let δ > 0 with ((1+δ)/(1-δ))^α = 1 + ε. Compute the optimal offline schedule for (w^pred, T, (1-δ)D), where the jobs w^pred_i are run at uniform speeds c_i on disjoint intervals [a_i, b_i], using [17]. (A runnable sketch of the δ computation appears after this table.)
Open Source Code | Yes | We note that the code is publicly available at https://github.com/andreasr27/LAS.
Open Datasets | Yes | Real dataset. We provide additional evidence that the LAS algorithm outperforms purely online algorithms by conducting experiments on the login requests to Brightkite [5].
Dataset Splits | No | The paper uses synthetic and real datasets but does not provide train/validation/test splits with specific percentages, counts, or a splitting methodology. For the real dataset, it uses the 'access patterns of the previous day as a prediction for the current day', which is a temporal split for the prediction input rather than a standard train/validation/test split for model evaluation.
Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper mentions that 'the code is publicly available' but does not list specific software dependencies with version numbers (e.g., Python version, library versions like TensorFlow, PyTorch, scikit-learn).
Experiment Setup | Yes | 'We fix α = 3 in all our experiments as this value models the power consumption of modern processors (see Bansal et al. [2]).' For artificial datasets: 'We used m = 20, M = 80, s = 5, T = 220 and D = 20.' For the real dataset: 'The timeline was discretized in chunks of ten minutes and D was set to 20.' The paper also reports performance for different values of ε (e.g., ε = 0.01 and ε = 0.8); see the sketches after this table.
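
For context on the Pseudocode row: Algorithm 1 sets δ via the condition ((1+δ)/(1-δ))^α = 1 + ε, which has a closed-form solution. Below is a minimal sketch of that computation, not code from the authors' repository; the function name and defaults are illustrative.

```python
def delta_for_epsilon(eps: float, alpha: float = 3.0) -> float:
    """Solve ((1 + d) / (1 - d)) ** alpha == 1 + eps for d > 0.

    Letting c = (1 + eps) ** (1 / alpha), the condition rearranges to
    1 + d = c * (1 - d), i.e. d = (c - 1) / (c + 1).
    """
    if eps <= 0:
        raise ValueError("eps must be positive")
    c = (1.0 + eps) ** (1.0 / alpha)
    return (c - 1.0) / (c + 1.0)

# The paper's experiments fix alpha = 3 and report eps = 0.01 and eps = 0.8.
for eps in (0.01, 0.8):
    print(f"eps = {eps}: delta = {delta_for_epsilon(eps):.6f}")
```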
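The competitive ratios mentioned in the Research Type row compare an algorithm's total energy to the offline optimum under the standard speed-scaling power model P(s) = s^α (the model the α = 3 choice refers to). A minimal sketch of that evaluation, with hypothetical helper names and assuming a discretized timeline, might look as follows.

```python
def energy(speeds, alpha: float = 3.0) -> float:
    """Energy of a discretized schedule: each slot t runs at speed s_t for
    one time unit and consumes s_t ** alpha (power model P(s) = s ** alpha)."""
    return sum(s ** alpha for s in speeds)

def competitive_ratio(algo_speeds, opt_speeds, alpha: float = 3.0) -> float:
    """Ratio of the algorithm's energy to the offline optimum's energy."""
    return energy(algo_speeds, alpha) / energy(opt_speeds, alpha)

# Toy example: running twice as fast for half the time costs more energy
# when alpha > 1, which is why smoothing speeds over time saves energy.
print(competitive_ratio([2.0, 0.0], [1.0, 1.0]))  # 8 / 2 = 4.0
```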