Efficient Rectification of Neuro-Symbolic Reasoning Inconsistencies by Abductive Reflection
Authors: Wen-Chao Hu, Wang-Zhou Dai, Yuan Jiang, Zhi-Hua Zhou
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that ABL-Refl outperforms state-of-the-art NeSy methods, achieving excellent accuracy with fewer training resources and enhanced efficiency. We validate the effectiveness of ABL-Refl in solving Sudoku NeSy benchmarks in both symbolic and visual forms. |
| Researcher Affiliation | Academia | Wen-Chao Hu (1,2), Wang-Zhou Dai (1,3), Yuan Jiang (1,2), Zhi-Hua Zhou (1,2). 1: National Key Laboratory for Novel Software Technology, Nanjing University, China; 2: School of Artificial Intelligence, Nanjing University, China; 3: School of Intelligence Science and Technology, Nanjing University, China |
| Pseudocode | No | The paper describes the method using architectural diagrams (Figure 1 and Figure 2) and textual explanations, but no explicit pseudocode or algorithm blocks are present. |
| Open Source Code | No | Abductive Learning (ABL) (Zhou 2019; Zhou and Huang 2022) attempts to integrate machine learning and logical reasoning in a balanced and mutually supporting way. It features an easy-to-use open-source toolkit (Huang et al. 2024) with many practical applications (Huang et al. 2020; Cai et al. 2021; Wang et al. 2021; Gao et al. 2024). This refers to a toolkit for the general ABL framework, not explicitly for the ABL-Refl method proposed in this paper. There is no explicit statement about releasing the code for ABL-Refl. |
| Open Datasets | Yes | We use datasets from a publicly available Kaggle site (Vopani 2019). We use the dataset provided in SATNet (Wang et al. 2019) and use 9K Sudoku boards for training and 1K for testing. We use several datasets from the TUDatasets (Morris et al. 2020). |
| Dataset Splits | Yes | training time (for a total of 100 epochs using 20K training data), inference time and accuracy (on 1K test data) on solving Sudoku. We use 9K Sudoku boards for training and 1K for testing. We use 80% of the data for training and 20% for testing. |
| Hardware Specification | Yes | All experiments are performed on a server with Intel Xeon Gold 6226R CPU and Tesla A100 GPU. |
| Software Dependencies | Yes | We express KB in the form of propositional logic and utilize the MiniSat solver (Sörensson 2010), an open-source SAT solver, as the symbolic solver to leverage KB and perform abduction. The reference Sörensson (2010) specifies 'MiniSat 2.2 and minisat++ 1.1'. |
| Experiment Setup | Yes | In our experiments, we simply set hyperparameters α and β in Eq. (3) to 1, since adjusting them does not have a noticeable impact on the results. For the hyperparameter C in Eq. (2), we set it to 0.8, and have provided discussions in Appendix C, demonstrating that setting it to a value within a broad moderate range (e.g., 0.6-0.9) would always be a recommended choice. For the neural network f, we use a simple graph neural network (GNN): the body block f1 consists of one embedding layer and eight iterations of message-passing layers, resulting in a 128-dimensional embedding for each number, and then connects to both a linear output layer f2 to obtain the intuitive output ŷ and a linear reflection layer R to obtain the reflection vector r. We use the cross-entropy loss as L_labeled. |
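The quoted setup describes the network shape but the paper ships no code, so the following is a minimal numpy sketch of that architecture only: an embedding layer, eight message-passing iterations over a Sudoku adjacency graph producing 128-dimensional cell embeddings, then a linear output head (f2) and a linear reflection head (R). All weight names, the mean-aggregation scheme, and the choice to tie weights across iterations are assumptions not stated in the paper; training, the abduction step, and the loss are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CELLS, N_CLASSES, DIM, N_ITERS = 81, 9, 128, 8  # 9x9 Sudoku; dims per paper

# Embedding table: one vector per symbol (0 = blank, 1..9 = digits)
embed = rng.normal(scale=0.1, size=(N_CLASSES + 1, DIM))

# Message-passing weights, tied across the 8 iterations (an assumption;
# the paper does not say whether weights are shared)
W_msg = rng.normal(scale=0.1, size=(DIM, DIM))
W_upd = rng.normal(scale=0.1, size=(2 * DIM, DIM))

# Heads: linear output layer f2 (class logits) and linear reflection layer R
W_out = rng.normal(scale=0.1, size=(DIM, N_CLASSES))
W_refl = rng.normal(scale=0.1, size=(DIM, 1))

def sudoku_adjacency():
    """Cells are adjacent if they share a row, column, or 3x3 box."""
    A = np.zeros((N_CELLS, N_CELLS))
    for i in range(N_CELLS):
        for j in range(N_CELLS):
            if i == j:
                continue
            ri, ci, rj, cj = i // 9, i % 9, j // 9, j % 9
            if ri == rj or ci == cj or (ri // 3 == rj // 3 and ci // 3 == cj // 3):
                A[i, j] = 1.0
    return A / A.sum(axis=1, keepdims=True)  # row-normalize: mean aggregation

def forward(board):
    """board: length-81 int array with values in 0..9 (0 = blank cell)."""
    A = sudoku_adjacency()
    h = embed[board]                          # body block f1: embedding layer
    for _ in range(N_ITERS):                  # 8 message-passing iterations
        msg = A @ np.tanh(h @ W_msg)          # aggregate neighbor messages
        h = np.tanh(np.concatenate([h, msg], axis=1) @ W_upd)
    y_hat = h @ W_out                         # intuitive output logits (f2)
    r = 1 / (1 + np.exp(-(h @ W_refl)))       # reflection vector r in (0, 1)
    return y_hat, r.squeeze(-1)

board = rng.integers(0, 10, size=N_CELLS)
y_hat, r = forward(board)
print(y_hat.shape, r.shape)  # (81, 9) (81,)
```

Per the paper's framing, r flags which cells the symbolic solver should revise, so the sketch squashes the reflection head through a sigmoid to give a per-cell score in (0, 1); the exact activation used in ABL-Refl is not specified in the quoted text.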