Lifted Relational Neural Networks: Efficient Learning of Latent Relational Structures
Authors: Gustav Sourek, Vojtech Aschenbrenner, Filip Zelezny, Steven Schockaert, Ondrej Kuzelka
JAIR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we describe experiments performed on 78 datasets about organic molecules: the Mutagenesis dataset (Lodhi & Muggleton, 2005), four datasets from the predictive toxicology challenge, and 73 NCI datasets (Ralaivola, Swamidass, Saigo, & Baldi, 2005). We compare the performance of LRNNs with the state-of-the-art relational learners kFOIL (Landwehr, Passerini, De Raedt, & Frasconi, 2006) and nFOIL (Landwehr, Kersting, & De Raedt, 2007), which respectively combine relational rule learning with support vector machines and with naive Bayes learning. |
| Researcher Affiliation | Academia | Gustav Šourek EMAIL Faculty of Electrical Engineering Czech Technical University in Prague Prague, Czech Republic Vojtěch Aschenbrenner EMAIL Faculty of Mathematics and Physics Charles University Prague, Czech Republic Filip Železný EMAIL Faculty of Electrical Engineering Czech Technical University in Prague Prague, Czech Republic Steven Schockaert EMAIL School of Computer Science & Informatics Cardiff University Cardiff, United Kingdom Ondřej Kuželka EMAIL Department of Computer Science KU Leuven Leuven, Belgium |
| Pseudocode | No | The paper describes the weight learning algorithm textually in Section 3.4 'Weight Learning' but does not provide a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | In this section, we describe experiments performed on 78 datasets about organic molecules: the Mutagenesis dataset (Lodhi & Muggleton, 2005), four datasets from the predictive toxicology challenge, and 73 NCI datasets (Ralaivola, Swamidass, Saigo, & Baldi, 2005). |
| Dataset Splits | Yes | Figure 5: Prediction errors of LRNNs, kFOIL, nFOIL, MLN-boost and RDN-boost measured by cross-validation on 78 datasets about organic molecules. |
| Hardware Specification | No | The time for training an LRNN on a standard commodity machine with one CPU was in the order of a few hours for the larger NCI-GI datasets, and in the order of a few minutes for the smaller datasets such as Mutagenesis. |
| Software Dependencies | No | The paper mentions various methods and frameworks (e.g., backpropagation, stochastic gradient descent, MLNs, ProbLog, CILP++, kFOIL, nFOIL, MLN-boost, RDN-boost, Aleph), but does not provide specific version numbers for the software dependencies used in their implementation of LRNNs. |
| Experiment Setup | Yes | For all the reported experiments, we set the learning rate to 0.3 and training epochs to 3000. |
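Since the paper does not release code, the reported setup cannot be reproduced exactly; the sketch below is only an illustrative stochastic-gradient-descent loop wired with the two hyperparameters the paper does state (learning rate 0.3, 3000 training epochs). The toy logistic model and the `train` function are assumptions for illustration, not the paper's LRNN weight-learning procedure.

```python
import math
import random

LEARNING_RATE = 0.3   # value reported in the paper
EPOCHS = 3000         # value reported in the paper

def train(examples):
    """Fit a toy logistic unit by SGD.

    examples: list of (features, label) pairs with label in {0, 1}.
    This is a stand-in model; the paper learns weights of a lifted
    relational neural network instead.
    """
    random.seed(0)
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(EPOCHS):
        random.shuffle(examples)
        for x, y in examples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            g = p - y                        # gradient of the log-loss
            w = [wi - LEARNING_RATE * g * xi for wi, xi in zip(w, x)]
            b -= LEARNING_RATE * g
    return w, b

# Toy usage: learn the (linearly separable) OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
```

With 3000 epochs this simple separable problem converges comfortably; for the actual LRNN experiments the paper additionally relies on restarts and structure learning that are not reflected here.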