Towards Real-Time Approximate Counting

Authors: Yash Pote, Kuldeep S. Meel, Jiong Yang

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In an evaluation over 2,247 instances, ApproxMC7 solved 271 more instances and achieved a 2× speedup against ApproxMC. [...] Section 4 provides an empirical evaluation of ApproxMC7 against ApproxMC, and finally, we conclude in Section 5.
Researcher Affiliation | Academia | ¹National University of Singapore, ²University of Toronto, ³Georgia Institute of Technology
Pseudocode | Yes | Algorithm 1: ApproxMC(φ, ε, δ) [...] Algorithm 2: ApproxMC7(φ, ε, δ) [...] Algorithm 3: ComputeIter(ε, δ) [...] Algorithm 4: ApproxMC7Core(φ, ε)
Open Source Code | Yes | The resulting tool ApproxMC7 is available open-source at https://github.com/meelgroup/approxmc
Open Datasets | Yes | We evaluated the runtime performance of ApproxMC7 and ApproxMC6 over a comprehensive set of 2,247 instances (Yang, Pote, and Meel 2024) [...] Yang, J.; Pote, Y.; and Meel, K. S. 2024. Benchmark used for AAAI25 paper: Towards Real-Time Approximate Counting. https://doi.org/10.5281/zenodo.14533501.
Dataset Splits | No | The paper uses a set of 2,247 instances for evaluation and discusses using the exact model counter Ganak for comparison on a subset of 698 instances. However, it does not describe specific training/test/validation splits, as the work concerns model counting rather than training predictive models.
Hardware Specification | Yes | We conducted our experiments on a high-performance compute cluster, with each node consisting of AMD EPYC Milan processors featuring 2×64 real cores and 512 GB of RAM.
Software Dependencies | No | The paper mentions using 'an efficient pre-processor Arjun (Soos and Meel 2022)' but does not provide a specific version number for Arjun or any other software components used in the experiments.
Experiment Setup | Yes | In our experiments, we set δ = 0.2 and ε = 13 and used an efficient pre-processor Arjun (Soos and Meel 2022) to simplify the function. [...] We ran each job on a single core with a 100-second time limit and 4GB memory.
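The pseudocode row names algorithms in the ApproxMC family, which are parameterized by a tolerance ε and confidence δ. As background for readers unfamiliar with that family, the classic hashing-based approximate counting idea can be sketched as follows. This is a generic toy illustration over an explicitly enumerated solution set, not the paper's ApproxMC7 implementation; the function names (`approx_count`, `random_xor`) and the `threshold`/`iters` parameter values are illustrative assumptions.

```python
import random
import statistics

def random_xor(n, rng):
    """Draw a random XOR (parity) constraint over n Boolean variables:
    a random subset of variables (bitmask) and a random right-hand side."""
    mask = rng.getrandbits(n)
    rhs = rng.getrandbits(1)
    return mask, rhs

def satisfies(assignment, constraint):
    """Check one assignment (an n-bit integer) against one XOR constraint."""
    mask, rhs = constraint
    return bin(assignment & mask).count("1") % 2 == rhs

def approx_count(solutions, n, threshold=20, iters=9, seed=0):
    """Hashing-based approximate count of a set of n-bit solutions.
    Each round adds random XOR constraints until at most `threshold`
    solutions survive, estimates |solutions| as survivors * 2^m, and
    the final answer is the median over `iters` independent rounds."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(iters):
        cell, m = set(solutions), 0
        while len(cell) > threshold:
            c = random_xor(n, rng)
            cell = {a for a in cell if satisfies(a, c)}
            m += 1
        estimates.append(len(cell) * 2 ** m)
    return statistics.median(estimates)

# Toy example: the "solutions" are all even 8-bit numbers (true count = 128).
sols = [x for x in range(256) if x % 2 == 0]
print(approx_count(sols, n=8))  # within a constant factor of 128
```

Real counters in this family query a SAT solver for solutions inside each XOR-constrained cell instead of filtering an explicit set, and choose the threshold and iteration count from (ε, δ) to obtain the PAC-style guarantee.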