Practical Parallel Algorithms for Non-Monotone Submodular Maximization

Authors: Shuang Cui, Kai Han, Jing Tang, Xueying Li, Aakas Zhiyuli, Hanxiao Li

JAIR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, the effectiveness of our algorithms is demonstrated by extensive experiments on real-world applications."
Researcher Affiliation | Collaboration | Shuang Cui (EMAIL), School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu, China; Kai Han (Corresponding Author, EMAIL), School of Computer Science and Technology, Soochow University, Suzhou, Jiangsu, China; Jing Tang (EMAIL), The Hong Kong University of Science and Technology (Guangzhou) and The Hong Kong University of Science and Technology, China; Xueying Li (EMAIL); Aakas Zhiyuli (EMAIL), Alibaba Group, Hangzhou, Zhejiang, China; Hanxiao Li (EMAIL), Qingdao University of Science and Technology, Qingdao, Shandong, China
Pseudocode | Yes | Algorithm 1: RandBatch(ρ, I, M, p, ϵ, f(·), c(·)); Algorithm 2: GetSEQ(A, I); Algorithm 3: ParSKP(α, ϵ, B, f(·), c(·)); Algorithm 4: Probe(ρ, N1, N2, ϵ, B, f(·), c(·)); Algorithm 5: ParSSP(p, ϵ, f(·))
Open Source Code | No | The paper does not contain an explicit statement or link providing access to the source code for the described methodology.
Open Datasets | Yes | Revenue Maximization: "...we use the top 5,000 communities of the YouTube network (Leskovec & Krevl, 2014) to construct the network G..."; Image Summarization: "...we randomly select 1,000 images from the CIFAR-10 dataset (Krizhevsky & Hinton, 2009) to construct N..."; Movie Recommendation: "...We use the MovieLens dataset (Badanidiyuru et al., 2020; Haba et al., 2020) which contains 1,793 movies from three genres: Adventure, Animation, and Fantasy."
Dataset Splits | No | The paper describes the selection of data subsets for experiments (e.g., "top 5,000 communities of the YouTube network", "randomly select 1,000 images from the CIFAR-10 dataset", "1,793 movies from three genres") but does not specify any training, validation, or test dataset splits.
Hardware Specification | Yes | "All experiments are run on a Linux server with Intel Xeon Gold 6126 @ 2.60GHz CPU and 256GB memory."
Software Dependencies | No | The paper does not provide specific version numbers for any software libraries, frameworks, or solvers used in the experiments.
Experiment Setup | Yes | "For all the algorithms tested, the accuracy parameter ϵ is set to 0.1. Each randomized algorithm is executed independently 10 times, and the average result is reported. For the ENE algorithm, we use 5,000 samples to simulate an oracle for F(·) or ∇F(·) (i.e., the multilinear extension of f(·) and its gradient)."
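The sampled oracle mentioned above is a standard Monte Carlo construction: the multilinear extension is F(x) = E[f(R(x))], where the random set R(x) contains each element e independently with probability x[e], so averaging f over sampled sets estimates F. A minimal sketch of such an oracle (the function name, the dictionary representation of x, and the modular example f are illustrative, not taken from the paper):

```python
import random

def estimate_multilinear_extension(f, x, num_samples=5000):
    """Monte Carlo estimate of F(x) = E[f(R(x))], where R(x) includes
    each element e independently with probability x[e].

    f: set function mapping a Python set to a float.
    x: dict mapping each ground-set element to its inclusion probability.
    """
    total = 0.0
    for _ in range(num_samples):
        # Draw a random set R(x) by independent coin flips.
        sample = {e for e, p in x.items() if random.random() < p}
        total += f(sample)
    return total / num_samples

# Sanity check with a modular (cardinality) function: f(S) = |S|,
# for which F(x) is exactly the sum of the probabilities in x.
f = lambda S: len(S)
x = {"a": 0.5, "b": 1.0, "c": 0.0}
print(estimate_multilinear_extension(f, x))  # ≈ 1.5
```

With 5,000 samples the standard error of this estimate shrinks as O(1/√5000), which is presumably why the report's setup fixes that sample count for the ENE baseline.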