The Gradient of Algebraic Model Counting

Authors: Jaron Maene, Luc De Raedt

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, our algebraic backpropagation exhibits considerable speed-ups compared to existing approaches. In our experiments, our implementation outperforms PyTorch and JAX, the de facto standards for neurosymbolic learning, by several orders of magnitude. Table 2 contains the results. PyTorch and JAX perform poorly, as these frameworks are not optimized for very large computational graphs. JAX does not run within the 16GB of RAM, even on comparatively small circuits (ca. 100MB). Even though real-world circuits are far from the worst-case quadratic complexity, the dynamic programming considerably outperforms the naive variants of Kompyle. The semiring optimizations yield smaller but still considerable speed-ups, depending on the semiring.
Researcher Affiliation | Academia | Jaron Maene (1), Luc De Raedt (1, 2). (1) KU Leuven, Belgium; (2) Örebro University, Sweden. EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: Algebraic Backpropagation. Algorithm 2: Algebraic backpropagation through a product, exploiting cancellation and ordering.
Open Source Code | Yes | Code: https://github.com/ML-KULeuven/amc-grad
Open Datasets | Yes | As a benchmark, we take 100 formulas of the 2021 Model Counting Competition (Fichte, Hecher, and Hamiti 2021) and compile them to d-DNNF circuits using the d4 knowledge compiler (Lagniez and Marquis 2017).
Dataset Splits | No | As a benchmark, we take 100 formulas of the 2021 Model Counting Competition (Fichte, Hecher, and Hamiti 2021) and compile them to d-DNNF circuits using the d4 knowledge compiler (Lagniez and Marquis 2017). We randomly generate weights for the formulas, with on average 1% of the weights being set to zero. The paper uses 100 instances for evaluation but does not specify any training/validation/test splits.
Hardware Specification | Yes | All experiments were performed on the CPU of a 2022 M2 MacBook Pro.
Software Dependencies | No | We implemented the algebraic backpropagation in a Rust library called Kompyle, and empirically demonstrate the runtime performance of the algebraic backpropagation algorithm on several semirings. We include PyTorch (Paszke et al. 2019) and JAX (Frostig, Johnson, and Leary 2018) as a baseline for the regular gradient. Specific version numbers for Rust, Kompyle, PyTorch, or JAX used in the experiments are not provided in the text.
Experiment Setup | Yes | We randomly generate weights for the formulas, with on average 1% of the weights being set to zero. We include PyTorch (Paszke et al. 2019) and JAX (Frostig, Johnson, and Leary 2018) as a baseline for the regular gradient, as they are frequently used for neurosymbolic learning in practice. Furthermore, we include ablations for Kompyle. Firstly, a naive variant, as was used by Du et al. (2023). Secondly, a variant which also exploits cancellation, as was proposed by Shih et al. (2019). Thirdly, a variant which applies the dynamic programming described in Algorithm 1. Finally, the full version of Kompyle with all optimizations (using ordering and cancellation).
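To make the setting concrete, the following is a minimal sketch (not the paper's Kompyle implementation, which is in Rust) of what "backpropagation through a d-DNNF circuit" means: a forward pass computes the weighted model count bottom-up, and a backward dynamic-programming pass accumulates the partial derivative of the count with respect to each literal weight. The `Node` class, circuit layout, and weight values are illustrative assumptions, not taken from the paper.

```python
# Sketch of forward weighted model counting and gradient backpropagation
# over a small d-DNNF circuit. Illustrative only; not the paper's code.

class Node:
    def __init__(self, kind, children=(), lit=None):
        self.kind = kind              # "leaf", "sum" (deterministic OR), "prod" (decomposable AND)
        self.children = list(children)
        self.lit = lit                # literal name for leaves
        self.value = 0.0              # forward value
        self.grad = 0.0               # accumulated partial derivative

def forward(order, weights):
    # order lists children before parents (topological order); root is last.
    for n in order:
        if n.kind == "leaf":
            n.value = weights[n.lit]
        elif n.kind == "sum":
            n.value = sum(c.value for c in n.children)
        else:  # product
            v = 1.0
            for c in n.children:
                v *= c.value
            n.value = v
    return order[-1].value

def backward(order):
    for n in order:
        n.grad = 0.0
    order[-1].grad = 1.0              # seed the root with d(root)/d(root) = 1
    for n in reversed(order):         # visit parents before children
        if n.kind == "sum":
            for c in n.children:
                c.grad += n.grad      # sum node passes its gradient through
        elif n.kind == "prod":
            for c in n.children:
                sibling_prod = 1.0    # product of the other children's values
                for s in n.children:
                    if s is not c:
                        sibling_prod *= s.value
                c.grad += n.grad * sibling_prod

# Tiny circuit for (x1 AND x2) OR (NOT x1 AND x3), with one weight per literal.
w = {"x1": 0.6, "x2": 0.3, "-x1": 0.4, "x3": 0.9}
l1, l2, l3, l4 = (Node("leaf", lit=k) for k in ("x1", "x2", "-x1", "x3"))
p1 = Node("prod", [l1, l2])
p2 = Node("prod", [l3, l4])
root = Node("sum", [p1, p2])
order = [l1, l2, l3, l4, p1, p2, root]

wmc = forward(order, w)               # 0.6*0.3 + 0.4*0.9 = 0.54
backward(order)                       # e.g. d(wmc)/d(w["x1"]) = l1.grad = 0.3
```

Note that the inner sibling-product loop makes each product node quadratic in its number of children, which is the naive variant ablated in the experiments; the paper's Algorithm 2 avoids this blow-up by exploiting ordering and cancellation (e.g. handling zero-valued children, of which about 1% occur in the benchmark weights, as a special case).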