A Robust Backpropagation-Free Framework for Images

Authors: Timothy Zee, Alex Ororbia, Ankur Mali, Ifeoma Nwogu

TMLR 2023

Reproducibility assessment: each variable below lists the extracted result followed by the supporting LLM response.
Research Type: Experimental
LLM Response: "The efficacy of EKDAA is demonstrated by performing visual-recognition tasks on the Fashion MNIST, CIFAR-10, and SVHN benchmarks, along with demonstrating its ability to extract visual features from natural color images. Furthermore, in order to demonstrate its non-reliance on gradient computations, results are presented for an EKDAA-trained CNN that employs a non-differentiable activation function."
Researcher Affiliation: Academia
LLM Response: Timothy Zee, Department of Computer Science, Rochester Institute of Technology; Alexander G. Ororbia, Department of Computer Science and Department of Psychology, Rochester Institute of Technology; Ankur Mali, Department of Computer Science and Engineering, University of South Florida; Ifeoma Nwogu, Department of Computer Science, University at Buffalo, SUNY.
Pseudocode: Yes
LLM Response: "Algorithm 1: EKDAA for a CNN with max-pooling and a fully-connected maximum-entropy output. Algorithm 2: EKDAA for fully-connected and maximum-entropy output layers."
Open Source Code: Yes
LLM Response: "Our library implementation can be found at: https://github.com/tzee/EKDAA-Release."
Open Datasets: Yes
LLM Response: "To understand learning capacity for model fitting and generalization under EKDAA, we design and train several models and test them against three standard datasets, Fashion MNIST (Xiao et al., 2017) (FMNIST), CIFAR-10 (Krizhevsky et al., 2014), and SVHN (Netzer et al., 2011). In addition, for analysis we use the publicly available Fashion MNIST, SVHN, and CIFAR-10 datasets, all of which have licenses permitting unlimited use and modification."
Dataset Splits: No
LLM Response: The paper mentions using the Fashion MNIST, CIFAR-10, and SVHN datasets and presents 'Train Acc' and 'Test Acc' columns in Table 2, implying train/test splits were used. However, it does not explicitly state split percentages or sample counts, nor does it reference the datasets' predefined splits.
Hardware Specification: Yes
LLM Response: "Furthermore, all models were trained on a single Tesla P4 GPU with 8GB of GPU RAM and ran on Linux Ubuntu 18.04.5 LTS, using TensorFlow 2.1.0. ... Specifically, we used one Ubuntu 18.04 server with an 8GB Tesla P4, an Intel Xeon CPU E5-2650, and 256GB of RAM."
Software Dependencies: Yes
LLM Response: "Furthermore, all models were trained on a single Tesla P4 GPU with 8GB of GPU RAM and ran on Linux Ubuntu 18.04.5 LTS, using TensorFlow 2.1.0. We build our codebase on top of TensorFlow 2.0 for fundamental functionality."
Experiment Setup: Yes
LLM Response: "Each model was tuned for optimal performance and several hyper-parameters were adjusted, i.e., batch size, learning rate, filter size, number of filters per layer, number of fully-connected nodes per layer, weight initialization, optimizer choice, and dropout rate (details can be found in the Appendix). The final meta-parameter setting for EKDAA was a learning rate of 0.5e-3, a 0.9 momentum rate, tanh activation, and a dropout rate of 0.1 for filters and 0.3 for fully-connected layers (complete model specifications are in the Appendix)." Appendix A (Meta-parameter Tuning): "We report our grid search ranges for each model's meta-parameters in Tables 2 (EKDAA), 7 (DFA), 6 (FA), 9 (RDFA), and 8 (SDFA), respectively. Furthermore, in the Best column, we report the final values selected/used for the models reported in the main paper."
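Read concretely, a 0.5e-3 learning rate with a 0.9 momentum rate implies the standard heavy-ball SGD update applied to whatever error signal the learning rule produces. A plain-NumPy sketch of that outer update with the reported values (illustrative only, not the authors' implementation; EKDAA supplies its own error-kernel signal in place of a backprop gradient):

```python
import numpy as np

# Reported final EKDAA meta-parameters from the paper.
LEARNING_RATE = 0.5e-3
MOMENTUM = 0.9

def momentum_step(w, error_signal, velocity):
    """One heavy-ball update: v <- mu*v - lr*signal; w <- w + v."""
    velocity = MOMENTUM * velocity - LEARNING_RATE * error_signal
    return w + velocity, velocity

# One step on a dummy 3-parameter weight vector with a unit error signal:
w = np.zeros(3)
v = np.zeros(3)
signal = np.ones(3)
w, v = momentum_step(w, signal, v)  # each entry moves by -0.5e-3
```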