Improving the Efficiency of Dynamic Programming on Tree Decompositions via Machine Learning

Authors: Michael Abseher, Nysret Musliu, Stefan Woltran

JAIR 2017

Reproducibility Variable: Research Type
Result: Experimental
LLM Response: "Moreover, we report on extensive experiments in different problem domains which show a significant speedup when choosing the tree decomposition according to this concept over simply using an arbitrary one of the same width. ... We create the appropriate training sets by conducting extensive experiments on different problem domains, instances and tree decompositions. Moreover, we run our experiments on a state-of-the-art system that applies DP algorithms on tree decompositions, namely D-FLAT (Abseher, Bliem, Charwat, Dusberger, Hecher, & Woltran, 2014)."

Reproducibility Variable: Researcher Affiliation
Result: Academia
LLM Response: Michael Abseher EMAIL, Nysret Musliu EMAIL, Stefan Woltran EMAIL; Institute of Information Systems 184/2, TU Wien, Favoritenstraße 9-11, 1040 Vienna, Austria

Reproducibility Variable: Pseudocode
Result: No
LLM Response: The paper describes the dynamic programming approach and provides an example with tables (Figure 2), but it does not contain any structured pseudocode or algorithm blocks for its proposed machine learning method or feature extraction.

Reproducibility Variable: Open Source Code
Result: Yes
LLM Response: "The full benchmark setup including instances, programs, configurations, problem encodings and all results can, together with a description on how to reproduce the results of the paper, be downloaded under the following link: www.dbai.tuwien.ac.at/research/project/dflat/features_2016_03.zip"

Reproducibility Variable: Open Datasets
Result: No
LLM Response: "For these random instances, we used three graph sizes and three different edge probabilities per problem to construct the corresponding training set. ... The graphs chosen for these experiments are shown in Table 2 and represent the metro systems of some cities around the world." (Table 2 lists: Tokyo (JPN), Osaka (JPN), Singapore (SGP), Santiago (CHL), Vienna (AUT).) While the paper describes the generation of random instances and mentions real-world metro system graphs, it does not provide concrete access information (link, DOI, specific repository, or formal citation) for these datasets.

Reproducibility Variable: Dataset Splits
Result: Yes
LLM Response: "For each of the ten iterations, we select randomly 20 problem instances (leading to 800 tree decompositions) for the training set and the remainder of the pool is put into the evaluation set. The analysis then proceeds as described in the paragraph dedicated to the evaluation set. This process is repeated ten times to rule out bias as good as possible."
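The repeated random split quoted above (20 instances per iteration for training, the rest for evaluation, repeated ten times) can be sketched in Python. This is an illustrative sketch only: the function name, fixed seed, and data representation are assumptions, not the authors' code.

```python
import random


def repeated_random_splits(instances, n_train=20, iterations=10, seed=0):
    """Illustrative sketch of the paper's protocol: in each of ten
    iterations, draw 20 instances at random for the training set and
    put the remainder of the pool into the evaluation set. Repeating
    the draw reduces selection bias, as the paper notes."""
    rng = random.Random(seed)  # fixed seed only so the demo is repeatable
    splits = []
    for _ in range(iterations):
        train = set(rng.sample(instances, n_train))
        evaluation = [i for i in instances if i not in train]
        splits.append((sorted(train), evaluation))
    return splits
```

Note that the split is done at the level of problem instances, not individual tree decompositions: each selected instance contributes all of its decompositions to the same side (the paper reports 800 tree decompositions arising from the 20 training instances), which keeps decompositions of one instance from leaking across the train/evaluation boundary.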
Reproducibility Variable: Hardware Specification
Result: Yes
LLM Response: "All our experiments were performed on a single core of an Intel Xeon E5-2637@3.5GHz processor running Debian GNU/Linux 8.3 and each test run was limited to a runtime of at most six hours and 64 GB of main memory."

Reproducibility Variable: Software Dependencies
Result: Yes
LLM Response: "The machine learning tasks were carried out with WEKA 3.6.13 (Hall, Frank, Holmes, Pfahringer, Reutemann, & Witten, 2009). The full benchmark setup including instances, programs, configurations, problem encodings and all results can, together with a description on how to reproduce the results of the paper, be downloaded under the following link: www.dbai.tuwien.ac.at/research/project/dflat/features_2016_03.zip"

Reproducibility Variable: Experiment Setup
Result: No
LLM Response: "The initial evaluation which was used to find the exact configuration for the regression algorithms considers all parameters provided by WEKA and was done on a separate benchmark set consisting of 500 tree decompositions (50 instances of 3-Colorability with 10 decompositions for each instance). For each parameter available in WEKA we experimented with different values and used 10-fold cross validation to determine the performance of each configuration. The fixed parameters that can be found in the report by Abseher et al. (2016) were used for all problem domains investigated in this paper." While the paper describes the hyperparameter tuning that was performed, it defers the specific fixed parameters to an external technical report rather than providing them in the main text.