Online Bin Packing with Predictions
Authors: Spyros Angelopoulos, Shahin Kamali, Kimia Shadkami
JAIR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform extensive experiments on our algorithms. Specifically, we evaluate them on a variety of publicly available benchmarks, such as the BPPLIB benchmarks maintained by Delorme, Iori, and Martello (2018), but also on distributions studied specifically in the context of offline bin packing, such as the Weibull distribution, as described by Castiñeiras, Cauwer, and O'Sullivan (2012). The results show that our algorithms outperform the known efficient algorithms without any predictions. |
| Researcher Affiliation | Academia | Spyros Angelopoulos EMAIL CNRS and LIP6 Sorbonne University 4 place Jussieu, Paris, France 75252 Shahin Kamali EMAIL Department of Electrical Engineering and Computer Science York University, Toronto, ON, Canada Kimia Shadkami EMAIL Department of Computer Science University of Manitoba, Winnipeg, MB, Canada |
| Pseudocode | Yes | Algorithm 1 describes PROFILEPACKING in pseudocode. Algorithm 1 PROFILEPACKING Input: σ: the input sequence with n items in [1, k]; f: predicted item frequencies (∀x ∈ [1, k], f_x ∈ [0, 1]); m: the parameter indicating the size of the profile. Output: a packing of σ, that is, a set of bins that contain all items in σ |
| Open Source Code | Yes | 1. The code on which the experiments are based is available at https://github.com/shahink84/BinPackingPredictions. |
| Open Datasets | Yes | Specifically, we evaluate them on a variety of publicly available benchmarks, such as the BPPLIB benchmarks maintained by Delorme, Iori, and Martello (2018), but also on distributions studied specifically in the context of offline bin packing, such as the Weibull distribution, as described by Castiñeiras, Cauwer, and O'Sullivan (2012). |
| Dataset Splits | No | The paper does not provide specific traditional dataset split information (e.g., train/test/validation percentages or counts) for model training or for partitioning a static dataset for algorithm evaluation. Instead, it describes the generation of online input sequences for evaluation and the use of prefixes of these online sequences for predictions. |
| Hardware Specification | No | The paper describes the experimental setup in terms of algorithms, parameters, and input generation but does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions various algorithms and heuristics like FIRSTFITDECREASING and FIRSTFIT but does not specify the version numbers of any programming languages, libraries, or solvers used for their implementation. |
| Experiment Setup | Yes | We fix the size of the sequence to n = 10^6. We set the bin capacity to k = 100, and we also scale down each item to the closest integer in [1, k]. ... We evaluate HYBRID(λ) for λ ∈ {0, 0.25, 0.5, 0.75, 1}, based on FIRSTFIT. ... We fix the size of the profile set to m = 5000. ... For a parameter b ∈ ℕ+, we define the predictions f as f_σ[1,b]. In our experiments, we consider 100 different prefix sizes. More precisely, we consider all b of the form b = 100 · 1.05^i, with i ∈ [25, 125]. ... For Weibull benchmarks, the input sequence consists of items generated independently and uniformly at random, and the shape parameter is set to sh = 3, unless otherwise mentioned. ... The distribution of the input sequence changes every 50000 items. Namely, the input sequence is the concatenation of n/50000 subsequences. ... We evaluate ADAPTIVE(w) for 100 values of the sliding window w, equidistant in the range [100, 100000]. |
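The PROFILEPACKING pseudocode quoted in the table only shows the input/output header. A minimal Python sketch of the idea, as we understand it from the excerpt, is below: pack a "profile" of roughly m items drawn per the predicted frequencies with First-Fit-Decreasing, then treat that packing as a template of reserved placeholder slots for the online items. The helper names (`first_fit_decreasing`, `open_template_copy`) and the overflow handling for sizes with zero predicted frequency are illustrative assumptions, not the paper's exact implementation.

```python
from collections import Counter

def first_fit_decreasing(items, k):
    """Offline First-Fit-Decreasing: pack item sizes into bins of capacity k."""
    bins = []
    for x in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + x <= k:
                b.append(x)
                break
        else:
            bins.append([x])
    return bins

def profile_packing(seq, freqs, m, k):
    """Sketch of PROFILEPACKING (assumed reading of the algorithm):
    build a profile of ~m items according to the predicted frequencies,
    pack it with FFD, and use that packing as a template of reserved
    placeholder slots for the arriving items."""
    profile = []
    for size, f in freqs.items():
        profile.extend([size] * max(1, round(f * m)))
    template = first_fit_decreasing(profile, k)

    bins = []    # bins opened from template copies; bins[i] holds placed items
    free = {}    # size -> indices of bins that still have a free slot of that size
    extra = []   # overflow bins for sizes absent from the profile

    def open_template_copy():
        for group in template:
            idx = len(bins)
            bins.append([])
            for s in group:
                free.setdefault(s, []).append(idx)

    open_template_copy()
    for x in seq:
        if not free.get(x):
            open_template_copy()     # all slots of this size used: open a new copy
        if free.get(x):
            bins[free[x].pop()].append(x)
        else:
            # zero predicted frequency for this size: plain First Fit
            # into overflow bins (an assumed fallback, for illustration)
            for b in extra:
                if sum(b) + x <= k:
                    b.append(x)
                    break
            else:
                extra.append([x])
    return [b for b in bins if b] + extra
```

Capacity is respected by construction: each template bin is FFD-feasible, and every placed item consumes exactly one placeholder of its own size.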
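The experiment-setup quotes above (Weibull items with shape 3 scaled into [1, k], prefix lengths b = 100 · 1.05^i, and empirical frequency predictions f_σ[1,b]) can be sketched as follows. The Weibull scale parameter (45), the seeding, and the half-open index range (25 to 124, which yields the 100 prefix sizes the paper mentions) are assumptions, since the excerpt does not fix them.

```python
import random
from collections import Counter

def weibull_items(n, k, shape=3.0, scale=45.0, seed=0):
    """Generate n item sizes from a Weibull distribution, each scaled down to
    the closest integer in [1, k] (bin capacity k = 100 in the paper).
    shape=3 follows the paper; scale=45 and seed are assumptions."""
    rng = random.Random(seed)
    return [min(k, max(1, round(rng.weibullvariate(scale, shape))))
            for _ in range(n)]

def prefix_sizes():
    """The 100 prediction-prefix lengths b = 100 * 1.05**i (index range assumed
    half-open so that exactly 100 values are produced)."""
    return [round(100 * 1.05 ** i) for i in range(25, 125)]

def empirical_freqs(prefix, k):
    """Frequency predictions f_σ[1,b]: the fraction of each size in the prefix."""
    counts = Counter(prefix)
    return {x: counts[x] / len(prefix) for x in range(1, k + 1)}
```

A typical use would be `empirical_freqs(sigma[:b], 100)` for each `b` in `prefix_sizes()`, feeding the resulting frequency vector to the prediction-based algorithm.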