Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline and validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Zeroth-order Stochastic Approximation Algorithms for DR-submodular Optimization
Authors: Yuefang Lian, Xiao Wang, Dachuan Xu, Zhongrui Zhao
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To validate the effectiveness of our proposed algorithms, we conduct experiments on both synthetic and real-world problems. The results demonstrate the superior performance and efficiency of our methods in solving DR-submodular optimization problems. |
| Researcher Affiliation | Academia | Yuefang Lian, Institute of Operations Research and Information Engineering, Beijing University of Technology, Beijing 100124, China; Xiao Wang, Pengcheng Laboratory, Shenzhen 518066, China; Dachuan Xu, Institute of Operations Research and Information Engineering, Beijing University of Technology, Beijing 100124, China; Zhongrui Zhao, College of Science and Engineering, James Cook University, Queensland 4814, Australia |
| Pseudocode | Yes | Algorithm 1: Zeroth-order Stochastic Approximation (ZOSA) Algorithm Framework; Algorithm 2: Mini-batch Zeroth-order Gradient Estimator for Problem (2) |
| Open Source Code | No | The paper does not provide an explicit statement or a link to source code for the methodology described. |
| Open Datasets | Yes | Corporate leaderships network dataset. KONECT, 2017. URL http://konect.cc/networks/brunson_corporate-leadership |
| Dataset Splits | No | The paper describes generating synthetic data and setting parameters for optimization problems but does not specify training/test/validation splits typically used for machine learning models. |
| Hardware Specification | No | The paper states, "All experiments were implemented in PyCharm 2024.1 x64 using Python 3.10.9." This describes software, not specific hardware components such as CPUs or GPUs. |
| Software Dependencies | Yes | All experiments were implemented in PyCharm 2024.1 x64 using Python 3.10.9. |
| Experiment Setup | Yes | In our experiments, we set N = 500, d = 3, m = 2, b = u = 1 and h_t = H_T t u... N = 1000 and d = 20 in our experiments... we set c_i randomly chosen from [2, 10], N = 200, \|S\| = 20, \|T\| = 24... we choose the iterate step size η_k = 1/(k+1) in the experiments. |
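The paper's Algorithm 2 is a mini-batch zeroth-order gradient estimator paired with an iterate step size η_k = 1/(k+1). Its exact smoothing scheme is not reproduced here; the sketch below is only a generic illustration of the two ingredients, using a standard two-point Gaussian-smoothing estimator and projected ascent over a box. The function names (`zo_gradient`, `zo_projected_ascent`), the smoothing radius, batch size, and the box constraint are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-5, batch=20, rng=None):
    """Mini-batch two-point zeroth-order gradient estimate (illustrative).

    Averages (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over `batch` random
    Gaussian directions u ~ N(0, I); the mean approximates the gradient of
    a smoothed version of f.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    g = np.zeros(d)
    for _ in range(batch):
        u = rng.standard_normal(d)
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / batch

def zo_projected_ascent(f, x0, iters=200, lo=0.0, hi=1.0, rng=0):
    """Projected gradient-free ascent on the box [lo, hi]^d.

    Uses the diminishing step size eta_k = 1/(k+1) quoted in the paper's
    experiment setup; everything else is a simplifying assumption.
    """
    x = np.asarray(x0, dtype=float).copy()
    for k in range(iters):
        eta = 1.0 / (k + 1)
        x = np.clip(x + eta * zo_gradient(f, x, rng=rng + k), lo, hi)
    return x
```

On a smooth concave test function such as `f(x) = -||x - 0.3||^2`, this loop drives the iterates toward the maximizer using only function evaluations, which is the defining feature of the zeroth-order setting the paper studies.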