Approximation algorithms for combinatorial optimization with predictions

Authors: Antonios Antoniadis, Marek Elias, Adam Polak, Moritz Venzin

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results. The full version of the paper (Antoniadis et al., 2024) contains an experimental evaluation of our refined Steiner Tree algorithm on graphs from the PACE 2018 Challenge (Bonnet and Sikora, 2018), using as a benchmark the winning Steiner Tree solver from that challenge (Ruiz et al., 2018). These experiments allow us to better understand the impact of the hyperparameter α on the performance of our algorithm. They also demonstrate that (for a sufficiently concentrated input distribution) we can find near-optimal solutions in time in which conventional algorithms can achieve only rough approximations.
Researcher Affiliation | Academia | Antonios Antoniadis, University of Twente, Enschede, Netherlands, EMAIL; Marek Eliáš, Adam Polak & Moritz Venzin, Bocconi University, Milan, Italy, EMAIL
Pseudocode | Yes | Algorithm 1: Steiner tree with predictions
Open Source Code | Yes | Our source code is available at github.com/adampolak/steiner-tree-with-predictions.
Open Datasets | Yes | The full version of the paper (Antoniadis et al., 2024) contains an experimental evaluation of our refined Steiner Tree algorithm on graphs from the PACE 2018 Challenge (Bonnet and Sikora, 2018).
Dataset Splits | No | The paper mentions using graphs from the PACE 2018 Challenge but does not specify any training/test/validation splits, nor does it provide details on how the dataset was partitioned for the experiments.
Hardware Specification | No | The paper describes an experimental evaluation but does not provide specific details regarding the hardware (e.g., GPU/CPU models, memory) used for these experiments.
Software Dependencies | No | The paper mentions using a "winning Steiner Tree solver" as a benchmark but does not provide specific version numbers for any software dependencies or libraries used for their own algorithm's implementation.
Experiment Setup | Yes | Our refined algorithm is guided by a hyperparameter α in order to detect and avoid false positives with high weight. It is based on a 2-approximation algorithm called the MST heuristic (Kou et al., 1981; Mehlhorn, 1988). The contribution of false positives to our algorithm's performance guarantee depends only on their number, and not on their weights, and it is capped by the cost of individual connections made by the MST heuristic (without predictions). The final approximation guarantee follows from a careful analysis of how the output of our algorithm converges to the prediction as its hyperparameter α increases. We also show that our analysis of this algorithm is tight. The algorithm receives as input a graph G = (V, E), a set T ⊆ V of k terminals, a weight function w : E → ℝ≥0, and a set X̂ ⊆ E of predicted edges. First, it scales down the weights of edges in X̂ by dividing them by the parameter α. Then, it uses the algorithm of Mehlhorn (1988) to compute a minimum spanning tree MST_α of the metric closure of the terminals with respect to the scaled edge weights.
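The quoted setup can be illustrated with a short sketch. This is not the authors' refined algorithm (it omits their α-capped handling of false positives) and it uses a simplified Kou-style MST heuristic — MST of the terminals' metric closure followed by path expansion — rather than Mehlhorn's faster variant; function and variable names are my own. Predicted edges have their weights divided by α before the heuristic runs, and the returned cost is measured with the true, unscaled weights.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, src):
    """Shortest-path distances and predecessors from src in an adjacency map."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_with_predictions(edges, terminals, predicted, alpha):
    """Prediction-guided MST heuristic (illustrative sketch only).

    edges: list of (u, v, weight) in a connected graph
    terminals: list of terminal vertices
    predicted: set of frozenset({u, v}) edges believed to be in the solution
    alpha: scaling hyperparameter, alpha >= 1
    Returns (tree_edges, true_cost).
    """
    # Step 1: scale predicted edges down by alpha.
    adj = defaultdict(list)
    for u, v, w in edges:
        ws = w / alpha if frozenset((u, v)) in predicted else w
        adj[u].append((v, ws))
        adj[v].append((u, ws))
    # Step 2: metric closure of the terminals under scaled weights.
    info = {t: dijkstra(adj, t) for t in terminals}
    # Step 3: Prim's MST over terminals w.r.t. scaled closure distances.
    in_tree, closure_edges = {terminals[0]}, []
    while len(in_tree) < len(terminals):
        _, t, s = min((info[t][0][s], t, s)
                      for t in in_tree for s in terminals if s not in in_tree)
        in_tree.add(s)
        closure_edges.append((t, s))
    # Step 4: expand each closure edge into its shortest path; dedupe edges.
    tree_edges = set()
    for t, s in closure_edges:
        prev, x = info[t][1], s
        while x != t:
            tree_edges.add(frozenset((prev[x], x)))
            x = prev[x]
    true_w = {frozenset((u, v)): w for u, v, w in edges}
    return tree_edges, sum(true_w[e] for e in tree_edges)
```

On a square graph with terminals a and c, the two-edge path a–d–c is chosen only when its edges are predicted and α is large enough to make the scaled detour cheaper than the direct path — which is how a confident prediction steers the heuristic toward the predicted solution.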