Human Cognition-Inspired Hierarchical Fuzzy Learning Machine

Authors: Junbiao Cui, Qin Yue, Jianqing Liang, Jiye Liang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Systematic experiments verify that the proposed method achieves significant gains in interpretability and generalization performance. "Extensive experiments verify the effectiveness of the proposed method in improving interpretability and generalization performance."
Researcher Affiliation | Academia | 1Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, School of Computer and Information Technology, Shanxi University, Taiyuan 030006, Shanxi, China. Correspondence to: Jianqing Liang <EMAIL>, Jiye Liang <EMAIL>.
Pseudocode | Yes | Algorithm 1: Generate Class Tree Structure from WordNet, denoted as WordNet2ClassTree(). Algorithm 2: Generate Hierarchical Structure from Class Tree Structure, denoted as ClassTree2HieStru(). Algorithm 3: Generate FER from Class Hierarchical Structure, denoted as ClassHieStru2FER(). Algorithm 4: Human Cognition-Inspired Hierarchical Fuzzy Learning Machine, denoted as HC-HFLM.
Open Source Code | No | The paper does not provide an explicit statement about the release of its own source code, nor does it provide a direct link to a code repository for the methodology described. The provided link (https://pytorch.org/) refers to the PyTorch library used, not the authors' implementation.
Open Datasets | Yes | "We show the working mechanism of the HC-HFLM on the dataset MNIST (LeCun et al., 1998)." "This section verifies the generalization performance of the HC-HFLM on 6 public datasets." "For the ImageNet1K dataset, we adopt the common data partition..." "The remaining 5 datasets are evaluated using 5-fold cross-validation..."
Dataset Splits | Yes | "We adopt the handwritten digit dataset MNIST (LeCun et al., 1998), consisting of 60,000 training samples and 10,000 test samples. For the ImageNet1K dataset, we adopt the common data partition, which contains 1,281,167 training samples and 50,000 test samples... The remaining 5 datasets are evaluated using 5-fold cross-validation for each method, and the mean accuracy is recorded."
Hardware Specification | Yes | "We conduct all experiments on an NVIDIA A100-PCIE-40GB GPU."
Software Dependencies | No | "We implement the code based on PyTorch and conduct all experiments on an NVIDIA A100-PCIE-40GB GPU. For the first four methods, we adopt the implementations by the Scikit-learn library. The remaining methods are implemented based on PyTorch." While PyTorch and Scikit-learn are mentioned, specific version numbers for these software dependencies are not provided.
Experiment Setup | Yes | Feature extraction network h : R^(28×28) → R^10_+ adopts a 5-layer convolutional neural network. FSR network g : R^10_+ × R^10_+ → [0, 1] adopts cosine similarity. We adopt case 2 in formula (6) to obtain the final FSR T. The fuzziness parameters are α = 0.7, β = 0.9. No regularization term was used in the experiment. We adopt Adam (Kingma & Ba, 2015) to solve formulas (1) and (9), where the learning rate is set as 10^-3. Details for the CEC, FLM, and HC-HFLM settings are provided in Appendix D.2, including optimizer, batch size, number of epochs, initial learning rates, and learning rate decay schedules.
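The cosine-similarity FSR described in the setup row can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the function name `cosine_fsr` and the use of per-class prototype vectors are assumptions; the only grounded detail is that g takes non-negative feature vectors and returns a membership degree in [0, 1] via cosine similarity.

```python
import math


def cosine_fsr(feature, prototypes):
    """Hypothetical sketch of an FSR via cosine similarity.

    Maps a non-negative feature vector (mimicking h's output in R^10_+)
    to one membership degree in [0, 1] per class prototype. For
    non-negative vectors, cosine similarity already lies in [0, 1],
    matching the signature g : R^10_+ x R^10_+ -> [0, 1].
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        # Guard against zero vectors, which have undefined direction.
        return dot / (na * nb) if na > 0 and nb > 0 else 0.0

    return [cos(feature, p) for p in prototypes]
```

Under this reading, a feature vector aligned with a class prototype yields membership 1.0 for that class, and orthogonal (non-overlapping) non-negative features yield 0.0.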