Approximate Counting of Linear Extensions in Practice

Authors: Topi Talvitie, Mikko Koivisto

JAIR 2024

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | We report on experimental results in Section 6. We conclude in Section 7 by discussing the lessons learned and directions for future research. In this section, we present empirical results that compare the performance of the algorithms and their implementations (summarized in Section 6.1) on various problem instances (described in Section 6.2). We first compare the generic algorithm designs ARMC, Telescope product, and TPA (Section 6.3) and then the performance of different optimizations of TPA (Section 6.4).
Researcher Affiliation | Academia | Topi Talvitie (EMAIL), Mikko Koivisto (EMAIL), Department of Computer Science, University of Helsinki, Finland
Pseudocode | Yes | Algorithm 1: Monotone coupling from the past; Algorithm 2: The ARMC scheme; Algorithm 3: The Telescope Tree scheme
Open Source Code | Yes | Our code is available at https://github.com/ttalvitie/linext.
Open Datasets | Yes | In addition, we generated random posets of varying sizes from the four largest benchmark Bayesian network structures (DAGs) Diabetes, Link, Pigs, and Munin available in the Bayesian Network Repository (https://www.cs.huji.ac.il/~galel/Repository/);
Dataset Splits | No | For every poset type and size 8 ≤ n ≤ 1024, in roughly 26% geometric increments, we generated five independent benchmark posets. We ran every algorithm variant for every poset with a time limit of 6 hours.
Hardware Specification | Yes | The experiments were run on identical high-end desktop computers, each with a 4-core 3.6 GHz Intel Xeon W-2123 CPU and an NVIDIA GeForce RTX 2080 GPU.
Software Dependencies | No | We implemented each algorithm variant in C++ (for GPUTPA we used CUDA C++); the implementations are available in https://github.com/ttalvitie/linext. Each implementation is multithreaded to take advantage of all the cores of the CPU. Apart from the vectorized implementations AVX2TPA, AVX512TPA, and the GPU implementation GPUTPA, none of the implementations use CPU vector extensions or GPUs, allowing fair comparison of their running times. In the CFTP, we use the PCG32 random number generator, which can skip backwards, to avoid storing generated random numbers.
Experiment Setup | Yes | In the experiments, we set the user parameters as follows: k0 = 20, k1 = 5 and α = 0.1, β = 10; we observed these values to perform well on average. As a practical optimization, we also keep the previous relaxation found in step 4 if it has fewer linear extensions than the new relaxation. We restart 50 times to get slightly better relaxations. In the factorization phase, we use a fixed number of samples per comparison; in preliminary experiments, 300 was found to give a good tradeoff in most cases.