Factor Graph-based Interpretable Neural Networks

Authors: Yicong Li, Kuanjiu Zhou, Shuo Yu, Qiang Zhang, Renqiang Luo, Xiaodong Li, Feng Xia

ICLR 2025

Reproducibility Variable Result LLM Response
Research Type — Experimental. "Extensive experiments are conducted on three datasets and experimental results illustrate the superior performance of AGAIN compared to state-of-the-art baselines."
Researcher Affiliation — Academia. "Dalian University of Technology; Jilin University; RMIT University"
Pseudocode — Yes. "The overall algorithm for the interactive intervention switch is summarized in Algorithm 1."
Open Source Code — Yes. "Source codes are available at https://github.com/yushuowiki/AGAIN."
Open Datasets — Yes. "We evaluated AGAIN on two real-world datasets, CUB and MIMIC-III EWS, and one synthetic dataset, Synthetic-MNIST. ... Synthetic-MNIST dataset is a composite dataset derived from the original MNIST dataset." Dataset links: CUB (Caltech-UCSD Birds-200-2011): http://www.vision.caltech.edu/visipedia/CUB-200.html; MIMIC-III: https://physionet.org/content/mimiciii/1.4/; MNIST: http://yann.lecun.com/exdb/mnist/
Dataset Splits — No. The paper mentions using a "test set" in Section 5.2 and a "training set" in Appendix C.5, but does not provide specific percentages or counts for training/validation/test splits.
Hardware Specification — Yes. "All data processing and experiments are executed on a server with two Xeon E5 processors, two RTX 4000 GPUs and 64 GB memory."
Software Dependencies — Yes. "AGAIN is implemented in PyTorch 1.1.0 based on Python 3.7.13. We construct G by instantiating it as a Markov logic network in Pracmln 1.2.4."
Experiment Setup — Yes. "We train the concept predictor (real-world datasets) for 500 epochs, the concept predictor (Synthetic-MNIST) for 30 epochs, and the category predictor for 15 epochs. We use the SGD optimizer with a learning rate of 0.01. To mitigate overfitting, a weight decay of 0.00004 was configured. In the experiment, is set to 0.9."
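The reported optimizer configuration (plain SGD, learning rate 0.01, weight decay 0.00004) can be sketched as a single update step. This is a minimal stdlib-only illustration of the standard SGD-with-L2-weight-decay rule, not the authors' code; the function name and parameter lists here are hypothetical.

```python
# Sketch of one SGD update with the hyperparameters reported in the paper.
# Standard convention (as in e.g. PyTorch's SGD): the weight-decay term is
# folded into the gradient, g_eff = g + wd * w, then w <- w - lr * g_eff.

LR = 0.01            # learning rate reported in the paper
WEIGHT_DECAY = 4e-5  # 0.00004, as reported

def sgd_step(weights, grads, lr=LR, weight_decay=WEIGHT_DECAY):
    """Apply one plain-SGD update with L2 weight decay to a flat parameter list."""
    return [w - lr * (g + weight_decay * w) for w, g in zip(weights, grads)]

# Example: a single update on two scalar parameters with toy gradients.
w = sgd_step([1.0, -2.0], [0.5, 0.1])
```

With weight decay this small (4e-5), the decay term barely perturbs each step; its effect is a slow exponential shrinkage of the weights over the 500-epoch training run.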