Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization

Authors: Rad Niazadeh, Tim Roughgarden, Joshua R. Wang

JMLR 2020 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We further run experiments to verify the performance of our proposed algorithms in related machine learning applications. Our experiments are implemented in python. The results of our experiments are in Table 2, Table 3, and Table 4, and the corresponding box-and-whisker plots are in Figure 3."
Researcher Affiliation | Collaboration | Rad Niazadeh (EMAIL), Chicago Booth School of Business, University of Chicago, 5807 S Woodlawn Ave, Chicago, IL 60637, USA; Tim Roughgarden (EMAIL), Department of Computer Science, Columbia University, 500 West 120th Street, Room 450, New York, NY 10027, USA; Joshua R. Wang (EMAIL), Google Research, 1600 Amphitheatre Pkwy, Mountain View, CA 94043, USA.
Pseudocode | Yes | Algorithm 1: (Vanilla) Continuous Randomized Bi-Greedy; Algorithm 2: Approximate One-Dimensional Optimization; Algorithm 3: Approximate Annotated Upper-Concave Envelope; Algorithm 4: Binary-Search Continuous Bi-Greedy
Open Source Code | No | No explicit statement about code availability or a repository link is provided. The paper states "Our experiments are implemented in python." but does not offer access to the code.
Open Datasets | No | "We generated synthetic functions of the form F(x) = (1/2) x^T H x + h^T x + c. We generated H ∈ R^{n×n} as a matrix with every entry uniformly distributed in [-1, 0], and then symmetrized H. The choice of the uniform distribution is just for the purpose of exposition. We then generated h ∈ R^n as a vector with every entry uniformly distributed in [0, +1]. Finally, we solved for the value of c to make F(0) + F(1) = 0. ... We generated synthetic functions of the form F(x) = log det(diag(x)(L - I) + I), where L needs to be positive semidefinite."
Dataset Splits | No | The paper describes generating synthetic functions and running 20 instances per experiment ("each experiment consists of 20 such instances (i.e. a 20 sample Monte Carlo experiment)"). However, it does not provide training/test/validation splits, percentages, or sample counts, nor does it refer to predefined splits.
Hardware Specification | No | The paper states "Our experiments are implemented in python." but provides no details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper states "Our experiments are implemented in python." but does not specify the Python version or any other software dependencies, libraries, or solvers with version numbers.
Experiment Setup | No | The paper gives general parameters such as "n = 100 dimensional functions" and "each experiment consists of 20 such instances", and it describes the synthetic data generation. However, it does not provide the specific hyperparameters (e.g., learning rates, batch sizes, number of epochs, optimizer settings) or other detailed configuration steps needed to fully reproduce the experiments.
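The synthetic objectives quoted in the Open Datasets row can be sketched in Python with NumPy. This is a minimal sketch: the helper names are illustrative, and the PSD construction L = B B^T is an assumption, since the paper only requires L to be positive semidefinite without saying how it was generated.

```python
import numpy as np

def make_quadratic_instance(n=100, seed=None):
    """Synthetic F(x) = 1/2 x^T H x + h^T x + c, per the paper's description.
    Helper name and seeding are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    H = rng.uniform(-1.0, 0.0, size=(n, n))
    H = (H + H.T) / 2.0                        # symmetrize H
    h = rng.uniform(0.0, 1.0, size=n)
    # Solve for c so that F(0) + F(1) = 0, where 0 and 1 are the
    # all-zeros and all-ones vectors: F(0) = c, F(1) = 1/2*sum(H) + sum(h) + c.
    c = -0.5 * (0.5 * H.sum() + h.sum())
    return lambda x: 0.5 * x @ H @ x + h @ x + c

def make_logdet_instance(n=100, seed=None):
    """Synthetic F(x) = log det(diag(x)(L - I) + I) with L PSD.
    L = B B^T is one assumed way to satisfy the PSD requirement."""
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((n, n))
    L = B @ B.T                                # positive semidefinite
    I = np.eye(n)
    def F(x):
        _sign, logdet = np.linalg.slogdet(np.diag(x) @ (L - I) + I)
        return logdet
    return F

F = make_quadratic_instance(n=100, seed=0)
print(F(np.zeros(100)) + F(np.ones(100)))      # ~0 by construction
```

One such instance per draw; the paper's "20 sample Monte Carlo experiment" would correspond to generating 20 independent instances and aggregating the results.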