Fixed points of nonnegative neural networks
Authors: Tomasz J. Piotrowski, Renato L. G. Cavalcante, Mateusz Gabor
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 6, "we illustrate the main theoretical results via numerical simulations, where we evaluate the reconstruction performance of various nonnegative autoencoders." Further details, such as "The networks were trained for 30 epochs using the ADAM optimization algorithm with a learning rate of 0.005. The batch size was set to 64, and, as a loss function, the mean squared error was chosen. To enforce nonnegativity of the weights and biases, the negative values were clipped to zero after each iteration of the ADAM algorithm," indicate empirical studies and data analysis. |
| Researcher Affiliation | Academia | Tomasz J. Piotrowski, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University, Grudziądzka 5/7, 87-100 Toruń, Poland; Renato L. G. Cavalcante, Fraunhofer Heinrich Hertz Institute, Einsteinufer 37, 10587 Berlin, Germany; Mateusz Gabor, Faculty of Electronics, Photonics, and Microsystems, Wrocław University of Science and Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. Procedures are described in narrative text. |
| Open Source Code | Yes | The source code is available at the following link: https://github.com/mateuszgabor/nn_networks. |
| Open Datasets | Yes | The paper states: "The experiments were performed on both the entire MNIST dataset and a subset containing only the digit zero, which we refer to as the ZERO dataset." MNIST is a standard, publicly available benchmark. |
| Dataset Splits | No | The paper states: "The experiments were performed on both the entire MNIST dataset and a subset containing only the digit zero, which we refer to as the ZERO dataset." It mentions using the full dataset or a subset but does not provide specific training/test/validation splits (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. It mentions training neural networks but no CPU, GPU, or other accelerator models are specified. |
| Software Dependencies | No | The paper mentions the "ADAM optimization algorithm" but does not specify any programming languages, libraries, or frameworks with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The networks were trained for 30 epochs using the ADAM optimization algorithm with a learning rate of 0.005. The batch size was set to 64, and, as a loss function, the mean squared error was chosen. To enforce nonnegativity of the weights and biases, the negative values were clipped to zero after each iteration of the ADAM algorithm. |
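The training procedure quoted above (gradient-based minimization of the mean squared reconstruction error, with negative weights clipped to zero after each update to keep the network nonnegative) can be sketched as follows. This is a minimal illustrative stand-in, not the paper's code: it uses a single tied-weight ReLU autoencoder layer, random data in place of MNIST, and plain gradient descent in place of ADAM; only the learning rate (0.005), the MSE loss, and the clip-to-zero projection are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonnegative data standing in for MNIST pixels (values in [0, 1]).
X = rng.random((64, 16))           # one batch of 64 samples, 16 features

# One hidden layer with tied weights: x_hat = relu(x W) W^T.
# (Hypothetical architecture; the paper evaluates several deeper ones.)
W = np.abs(rng.normal(scale=0.1, size=(16, 8)))

def forward(X, W):
    H = np.maximum(X @ W, 0.0)     # ReLU encoder
    return H, H @ W.T              # linear decoder with tied weights

lr = 0.005                         # learning rate from the paper
losses = []
for step in range(200):
    H, X_hat = forward(X, W)
    R = X_hat - X                  # reconstruction residual
    losses.append(np.mean(R ** 2)) # mean squared error, as in the paper

    # Gradient of the MSE w.r.t. W (encoder path + decoder path);
    # plain gradient descent stands in for ADAM here.
    mask = (X @ W) > 0             # ReLU derivative
    G = X.T @ ((R @ W) * mask) + R.T @ H
    G *= 2.0 / X.size

    W -= lr * G
    W = np.maximum(W, 0.0)         # clip negatives to zero after each step

# After training: reconstruction error has decreased and W is nonnegative.
```

The projection step `np.maximum(W, 0.0)` is the operational core of the quoted setup: it is applied after every optimizer update, so the trained network is guaranteed to have nonnegative weights regardless of the gradient direction.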