Approximate Lifted Model Construction

Authors: Malte Luttermann, Jan Speller, Marcel Gehrke, Tanya Braun, Ralf Möller, Mattis Hartwig

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test the practicality of the ε-ACP algorithm in a series of experiments. ε-ACP is not only required to make ACP applicable in practice but also allows for more compression (and thus faster inference) if we are willing to trade the exactness of query results for additional speedup. We therefore report the run time gain and the resulting approximation error to better understand the trade-off between the exactness and the compactness of the lifted representation obtained by ε-ACP. For our experiments, we generate a variety of FGs with different graph structures and graph sizes (i.e., numbers of randvars and factors). More specifically, we generate FGs containing between 2k + 1 and 2k + k log2(k) + 1 Boolean randvars as well as between 2k and k + k log2(k) + 1 factors, where k ∈ {2, 4, 8, 16, 32, 64, 128} is the domain size. The domain size k controls the number of objects in the models and thus the size of the FGs. We provide all data set generators along with our source code in the supplementary material. In every FG, we modify a proportion x ∈ {0.1, 0.3, 0.5, 0.7, 0.9, 1.0} of the factors such that their potential tables differ by at most a factor of (1 ± ε) from their original potential tables, where ε ∈ {0.001, 0.01, 0.1}. For each setting, we pose multiple queries to each FG. We report the average run time of lifted variable elimination (the state-of-the-art lifted inference algorithm) on the output of ACP and ε-ACP, respectively, over all settings for each choice of k in the left plot of Fig. 3, and show the distribution of P_{M'}(r | e) / P_M(r | e) over all queries for each choice of k in the right plot of Fig. 3. The left plot in Fig. 3 makes evident that ε-ACP yields a speedup of up to a factor of 100 compared to ACP. The question is at what cost ε-ACP achieves this speedup. The right plot in Fig. 3 demonstrates that the price ε-ACP pays for the speedup is close to zero: most of the quotients are nearly equal to one (i.e., most query results hardly differ from their original values).
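The ε-perturbation described in the quoted setup (modifying a proportion x of factors so that each potential-table entry differs from its original by at most a factor of (1 ± ε)) can be sketched as follows. This is an illustrative reconstruction, not the authors' generator: the function name `perturb_potentials`, the list-of-lists representation of potential tables, and the uniform choice of multiplier are all assumptions.

```python
import random

def perturb_potentials(factors, x, eps, rng=None):
    """Illustrative sketch: perturb a proportion x of the factors so that
    every entry of their potential tables differs from the original by at
    most a factor of (1 +/- eps).

    factors: list of potential tables, each a flat list of potentials.
    x:       proportion of factors to modify (e.g. 0.1, ..., 1.0).
    eps:     maximum relative deviation (e.g. 0.001, 0.01, 0.1).
    """
    rng = rng or random.Random(0)
    # Work on a copy so the original factor graph stays intact.
    result = [list(table) for table in factors]
    n_modify = round(x * len(result))
    for i in rng.sample(range(len(result)), n_modify):
        # Scale each potential by a multiplier in [1 - eps, 1 + eps].
        result[i] = [p * rng.uniform(1 - eps, 1 + eps) for p in result[i]]
    return result
```

With this sketch, every perturbed entry q satisfies |q / p − 1| ≤ ε relative to its original value p, matching the "at most a factor of (1 ± ε)" condition in the quoted experiment description.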
Researcher Affiliation | Academia | 1 German Research Center for Artificial Intelligence (DFKI), Lübeck, Germany; 2 Data Science Group, University of Münster, Germany; 3 Institute for Humanities-Centered Artificial Intelligence, University of Hamburg, Germany. EMAIL, EMAIL, EMAIL
Pseudocode | Yes | Algorithm 1: ε-Advanced Colour Passing
Open Source Code | Yes | We provide all data set generators along with our source code in the supplementary material.
Open Datasets | Yes | In addition to the generated FGs, we learn an FG from the MIMIC-IV dataset [Johnson et al., 2023] and apply ε-ACP with ε = 0.1 to it.
Dataset Splits | No | The paper mentions generating FGs with varying parameters and considering a subset of 4000 patients from the MIMIC-IV dataset, but it does not specify any training/test/validation splits for these datasets.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or types of computing resources used for the experiments.
Software Dependencies | No | The paper mentions using lifted variable elimination as the inference algorithm but does not specify any software libraries or their version numbers used in the implementation.
Experiment Setup | Yes | In every FG, we modify a proportion x ∈ {0.1, 0.3, 0.5, 0.7, 0.9, 1.0} of the factors such that their potential tables differ by at most a factor of (1 ± ε) from their original potential tables, where ε ∈ {0.001, 0.01, 0.1}.