Faster Global Minimum Cut with Predictions

Authors: Helia Niaparast, Benjamin Moseley, Karan Singh

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct three sets of experiments ranging from synthetic to real datasets. The first involves synthetic settings where we can explicitly control the fidelity of predictions to study the performance of the algorithm quantitatively. The second is a setting where Karger's algorithm is used to repeatedly find minimum cuts on a family of instances that organically arise from trying to solve traveling salesperson (TSP) instances. We conclude with experimental comparisons on real data.
Researcher Affiliation | Academia | Tepper School of Business, Carnegie Mellon University, Pittsburgh, USA. Correspondence to: Helia Niaparast <EMAIL>.
Pseudocode | Yes | Algorithm 1: Boosted Karger's Algorithm; Algorithm 2: FPZ(G, n); Algorithm 3: BOOSTEDFPZ(G, n, p)
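The algorithm names above are from the paper. For context, one trial of Karger's classic contraction algorithm (the building block that Boosted Karger's repeats) can be sketched as below; this is an illustrative sketch, not the authors' implementation, and `karger_cut_value` and the barbell example graph are our own names:

```python
import random

def karger_cut_value(edges, n):
    """One trial of Karger's contraction algorithm.

    edges: list of (u, v) pairs over vertices 0..n-1 (graph assumed connected).
    Returns the number of edges crossing the 2-way cut this trial produces.
    """
    parent = list(range(n))

    def find(x):
        # Union-find with path halving tracks the contracted super-vertices.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Contracting edges in a uniformly random order is equivalent to
    # repeatedly contracting a uniformly random remaining edge.
    order = edges[:]
    random.shuffle(order)
    components = n
    for u, v in order:
        if components == 2:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

# Barbell graph: two triangles joined by a single bridge; minimum cut = 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
random.seed(0)
best = min(karger_cut_value(edges, 6) for _ in range(200))
print(best)  # with overwhelming probability, the bridge cut of size 1
```

Each trial succeeds only with some probability, which is why both the standard and boosted variants measure performance in number of trials until the minimum cut is found.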
Open Source Code | Yes | The Python implementations of the experiments are available at https://github.com/helia-niaparast/global-minimum-cut-with-predictions.
Open Datasets | Yes | Finally, we compare the performance of the Boosted Karger's algorithm and the standard variant on three real datasets from Rossi & Ahmed (2015). For each dataset, the predictions are obtained by first randomly sampling half of the edges and then performing k parallel runs of Karger's algorithm on the sampled edges. The predicted edge set is formed by the union of the edges of the k cuts found by Karger's algorithm. As a heuristic, we pick k to be close to the minimum degree of the graph. This process is repeated 100 times in Figure 2B. We observe that for all three datasets the Boosted Karger's algorithm requires discernibly fewer trials to find the minimum cut. Rossi & Ahmed (2015). The network data repository with interactive graph analytics and visualization. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015. URL http://networkrepository.com.
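The prediction heuristic quoted above (sample half the edges, run k contraction trials on the sample, predict the union of the cuts found) can be sketched as follows. This is our own illustrative reading of that description, with hypothetical names (`contraction_cut_edges`, `make_predictions`) and a toy graph, not the authors' code:

```python
import random

def contraction_cut_edges(edges, n, rng):
    """One Karger contraction trial; returns the edges crossing the cut found.

    If the (sub)graph is disconnected, contraction simply stops early and the
    edges crossing the resulting partition are returned.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    order = edges[:]
    rng.shuffle(order)  # random contraction order = Karger's algorithm
    components = n
    for u, v in order:
        if components == 2:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return [(u, v) for u, v in edges if find(u) != find(v)]

def make_predictions(edges, n, k, rng):
    """Heuristic from the paper's description: sample half of the edges,
    run k contraction trials on the sample, and predict the union of the
    k cuts found."""
    sampled = rng.sample(edges, len(edges) // 2)
    predicted = set()
    for _ in range(k):
        predicted.update(contraction_cut_edges(sampled, n, rng))
    return predicted

# Toy barbell graph; minimum degree is 2, so k = 2 follows the paper's heuristic.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
rng = random.Random(1)
pred = make_predictions(edges, 6, k=2, rng=rng)
```

The predicted set is by construction a subset of the sampled edges, so it is cheap to compute and can be fed to the boosted algorithm as side information.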
Dataset Splits | No | The paper describes generating synthetic graphs and using repetitions for experiments. For real datasets, it mentions 'randomly sampling half of the edges' for prediction generation, but does not provide traditional training/validation/test splits for the datasets themselves, nor any specific split percentages or counts.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run its experiments.
Software Dependencies | No | The paper mentions 'Python implementations' for their experiments but does not provide specific version numbers for Python or any libraries used (e.g., PyTorch, NumPy, NetworkX versions).
Experiment Setup | Yes | We build G with n = 600, k = 100, ℓ = 10. We note the number of trials that Karger's algorithm needs on G to find the minimum cut. This is our baseline. Next, we fix a value of ρ ∈ {0, 10, 100}, and for each value of η ∈ {0, 0.05, 0.1, . . . , 1}, we measure the number of trials Boosted Karger's needs to find the minimum cut with input (G, Ĉ_{η,ρ}) with (B, t) = (n, 2). In Figure 1, this process is repeated 30 times. [...] We set (B, t) = (log n, 2).
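The baseline measurement quoted above ("the number of trials that Karger's algorithm needs on G to find the minimum cut") amounts to a repeat-until-success loop. The paper's graph construction with n = 600, k = 100, ℓ = 10 and the boosted variant's (B, t) parameters are specific to the paper, so the sketch below only illustrates the trial-counting protocol on a toy graph with a known minimum cut value; all names are our own:

```python
import random

def karger_cut_value(edges, n, rng):
    """One Karger contraction trial; returns the size of the cut found."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    order = edges[:]
    rng.shuffle(order)  # random contraction order = Karger's algorithm
    components = n
    for u, v in order:
        if components == 2:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

def trials_until_min_cut(edges, n, opt_value, rng, max_trials=10_000):
    """Count contraction trials until a cut of the known optimum value appears."""
    for t in range(1, max_trials + 1):
        if karger_cut_value(edges, n, rng) == opt_value:
            return t
    return None  # not found within the trial budget

# Toy barbell graph with known minimum cut value 1 (the bridge edge).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
rng = random.Random(0)
t = trials_until_min_cut(edges, 6, opt_value=1, rng=rng)
```

Averaging this trial count over repetitions (30 in Figure 1, per the quote) gives the per-instance cost that the baseline and boosted variants are compared on.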