Near-Optimal Consistency-Robustness Trade-Offs for Learning-Augmented Online Knapsack Problems

Authors: Mohammadreza Daneshvaramoli, Helia Karisani, Adam Lechowicz, Bo Sun, Cameron N Musco, Mohammad Hajiesmaili

ICML 2025

Reproducibility Variable — Result — Supporting Evidence
Research Type — Experimental. "In this section, we present a case study of our proposed algorithms. We compare against baselines that do not use predictions (i.e., ZCL (Zhou et al., 2008)) and the existing SENTINEL learning-augmented algorithm from prior work (Im et al., 2021), using both synthetic and real datasets. [...] Fig. 2(a) reports the cumulative distribution function (CDF) of empirical competitive ratios for six tested algorithms on 2,000 synthetic instances of OFKP."
Researcher Affiliation — Academia. ¹Manning College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA, USA; ²Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada.
Pseudocode — Yes. Algorithm 1: PP-a, an improved (1 + min{1, ω̂})-competitive algorithm with point prediction; Algorithm 2: IPA, an interval-prediction-based algorithm for OFKP; Algorithm 3: the Fr2Int algorithm; Algorithm 4: ZCL, an online threshold-based algorithm for OKP without prediction; Algorithm 5: PP-a, a basic 2-competitive algorithm for OFKP with trusted prediction.
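To make the baseline concrete, the ZCL-style rule (Algorithm 4 above) can be sketched as follows. This is an illustrative reconstruction, not the paper's pseudocode: it uses the standard exponential threshold Φ(z) = (Ue/L)^z · (L/e) commonly attributed to Zhou et al. (2008) for online knapsack with small item weights, and the function names are mine.

```python
import math

def zcl_threshold(z, L=1.0, U=1000.0):
    """Exponential threshold Phi(z) = (U*e/L)**z * (L/e) on unit value,
    where z is the fraction of knapsack capacity already used."""
    return (U * math.e / L) ** z * (L / math.e)

def zcl_run(items, L=1.0, U=1000.0):
    """Run the threshold rule over an online stream of (value, weight)
    items with total capacity 1: accept an item iff it fits and its
    unit value clears the current threshold. Returns total value."""
    z, total = 0.0, 0.0
    for value, weight in items:
        if weight <= 0 or z + weight > 1.0:
            continue  # skip degenerate or non-fitting items
        if value / weight >= zcl_threshold(z, L, U):
            z += weight
            total += value
    return total
```

Note that the threshold rises from L/e at z = 0 to U at z = 1, so the algorithm is lenient early and demands near-maximal unit value as capacity fills; this yields the classic (1 + ln(U/L))-type competitive ratio that the paper's prediction-augmented algorithms improve upon.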
Open Source Code — Yes. "Our code is available at https://github.com/moreda-a/OKP."
Open Datasets — Yes. "Finally, in Section 6, we evaluate our algorithms on synthetic and real data from Bitcoin and Google workload traces. [...] We use a historical data set of Bitcoin prices (Kottarathil, 2018) from 2017–2019 [...] constructs instances based on Google cluster traces (Reiss et al., 2012)."
Dataset Splits — No. "In the first set, we use synthetically generated data, where the value and weight of items are randomly drawn from a power-law distribution. [...] Each instance is constructed by randomly sampling 10,000 prices from one month of the data, setting all item sizes equal to 0.001. [...] Each instance includes 10,000 items generated as above, each with size 0.001." Specific train/test/validation splits are not provided; only instance-generation details are given.
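The trace-based instance construction quoted above (10,000 sampled prices per instance, every item of size 0.001) can be sketched as follows. This is a hedged reading of the excerpt: treating each sampled price as the item's unit value, and sampling with replacement, are my assumptions about details the excerpt leaves open, and `trace_to_instance` is a name of my choosing.

```python
import random

def trace_to_instance(prices, n=10_000, size=0.001, rng=random):
    """Build an online-knapsack instance from a price trace: sample n
    prices (with replacement) and give every item the same weight
    `size`. Each item is a (value, weight) pair with
    value = sampled price * weight."""
    sampled = [rng.choice(prices) for _ in range(n)]
    return [(p * size, size) for p in sampled]
```

With n = 10,000 and size = 0.001, the total weight of an instance is 10 times the capacity, so any algorithm must be selective about which items to admit.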
Hardware Specification — No. No specific hardware details (GPU/CPU models, memory, etc.) are provided in the paper for running the experiments.
Software Dependencies — No. No specific software dependencies with version numbers (e.g., Python libraries such as PyTorch, TensorFlow, or scikit-learn) are mentioned in the paper.
Experiment Setup — Yes. "To validate the performance of our algorithms, we conduct four sets of experiments. In the first set, we use synthetically generated data, where the value and weight of items are randomly drawn from a power-law distribution. Unless otherwise specified, the lowest unit value is L = 1 and the highest unit value is U = 1000. Weights are drawn from a power law and normalized to the range [0, 1]. [...] For IPA, we report the interval prediction range u − ℓ as a percentage of [L, U] and set it to 15%, 25%, and 40%. The notation MIXλ denotes instances of MIX under different values of the trust parameter λ, which we set according to λ ∈ {0.3, 0.5, 0.9}."
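The synthetic-data setup above can be sketched as follows. This is a hedged reconstruction: the excerpt fixes L = 1 and U = 1000 and says that values and weights follow power laws with weights normalized to [0, 1], but the power-law exponent (here alpha = 2) and the raw weight range before normalization (here [1, 100]) are placeholders of mine, not values stated in the paper.

```python
import random

def powerlaw_sample(xmin, xmax, alpha=2.0, rng=random):
    """Inverse-CDF sample from a bounded power law p(x) ~ x**(-alpha)
    on [xmin, xmax]; requires alpha != 1. The exponent is a placeholder,
    since the excerpt does not state the one used in the paper."""
    u = rng.random()
    a = 1.0 - alpha
    return (xmin**a + u * (xmax**a - xmin**a)) ** (1.0 / a)

def make_instance(n=10_000, L=1.0, U=1000.0, rng=random):
    """Synthetic OKP instance: n items with power-law unit values in
    [L, U] and power-law weights normalized so the largest is 1."""
    unit_values = [powerlaw_sample(L, U, rng=rng) for _ in range(n)]
    weights = [powerlaw_sample(1.0, 100.0, rng=rng) for _ in range(n)]
    wmax = max(weights)
    weights = [w / wmax for w in weights]  # normalize into (0, 1]
    # item value = unit value * weight, so unit values stay in [L, U]
    return [(v * w, w) for v, w in zip(unit_values, weights)]
```

Sampling unit values rather than raw values keeps the L/U bounds meaningful, since threshold-based baselines such as ZCL make accept/reject decisions on unit value.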