Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint

Authors: Georgios Amanatidis, Federico Fusco, Philip Lazos, Stefano Leonardi, Rebecca Reiffenhäuser

JAIR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
Researcher Affiliation | Academia | Georgios Amanatidis EMAIL Department of Mathematical Sciences, University of Essex, UK. Federico Fusco EMAIL Philip Lazos EMAIL Stefano Leonardi EMAIL Rebecca Reiffenhäuser EMAIL Department of Computer, Control, and Management Engineering "Antonio Ruberti", Sapienza University of Rome, Italy.
Pseudocode | Yes | Sample Greedy(E, v, c, B) ... Adaptive Greedy
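The pseudocode signature Sample Greedy(E, v, c, B) corresponds to a sample-then-greedy scheme: keep each element independently with probability p, then greedily add elements by marginal-value-to-cost density while the knapsack budget B allows. The sketch below is an illustrative reconstruction under that reading, not the authors' released code; the function names and the `rng` parameter are our own.

```python
import random

def sample_greedy(E, v, c, B, p=0.95, rng=random.Random(0)):
    """Hedged sketch of sample-then-greedy for non-monotone submodular
    maximization under a knapsack constraint.

    E : list of ground-set elements
    v : set function, v(list_of_elements) -> float (submodular objective)
    c : dict mapping element -> cost
    B : knapsack budget
    p : sampling probability (the paper draws p uniformly from [0.9, 1])
    """
    # Step 1: subsample the ground set, keeping each element w.p. p.
    remaining = {e for e in E if rng.random() < p}
    S = []
    # Step 2: repeatedly add the feasible element with the best
    # marginal-value-to-cost density until nothing improving fits.
    while True:
        base = v(S)
        used = sum(c[x] for x in S)
        best, best_density = None, 0.0
        for e in remaining:
            if used + c[e] > B:
                continue  # element does not fit in the residual budget
            density = (v(S + [e]) - base) / c[e]
            if density > best_density:
                best, best_density = e, density
        if best is None:
            break
        S.append(best)
        remaining.discard(best)
    return S
```

With p = 1 no element is dropped and the procedure reduces to plain density greedy, which makes the budget-feasibility of the output easy to check on a small modular instance.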
Open Source Code | No | The paper does not provide concrete access to source code. It describes algorithms and experimental results but does not include a repository link or an explicit statement of code release.
Open Datasets | Yes | We evaluate Sample Greedy on an instance based on the latest version of the MovieLens dataset (Harper and Konstan, 2015)... We evaluate Adaptive Greedy on an instance based on the YouTube graph (Yang and Leskovec, 2015)... In Figure 2 we evaluate Adaptive Greedy on an instance based on the Epinions network graph (Richardson et al., 2003)...
Dataset Splits | No | The paper describes how experimental instances were generated (e.g., costs drawn from U(0,1), graph sizes, budget as a fraction of total cost) and how many repetitions were performed, but it does not specify traditional training/validation/test dataset splits typically found in supervised machine learning tasks.
Hardware Specification | No | The paper mentions computation times (e.g., 'total computation time was around 3 hours') but does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications).
Software Dependencies | No | The paper mentions using 'lazy evaluations with ε = 0.01' but does not specify any software names with version numbers (e.g., programming languages, libraries, or solvers).
Experiment Setup | Yes | For all algorithms involved, we use lazy evaluations with ε = 0.01. ... We do not micro-optimize for p but rather choose uniformly at random from [0.9, 1]. ... We use λ = 3, µ = 7 and set the parameters α, β so that the two terms are of comparable size. ... We have set q = 0.2 and the cost of each node is drawn from U[0, 1].