Universal Approximation in Dropout Neural Networks
Authors: Oxana A. Manita, Mark A. Peletier, Jacobus W. Portegies, Jaron Sanders, Albert Senen-Cerda
JMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We prove two universal approximation theorems for a range of dropout neural networks. These are feed-forward neural networks in which each edge is given a random {0, 1}-valued filter, and which have two modes of operation: in the first, each edge output is multiplied by its random filter, resulting in a random output; in the second, each edge output is multiplied by the expectation of its filter, leading to a deterministic output. (A sketch of these two modes follows the table.) |
| Researcher Affiliation | Academia | Oxana A. Manita, Mark A. Peletier, Jacobus W. Portegies, Jaron Sanders, Albert Senen-Cerda; Department of Mathematics & Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It focuses on theoretical proofs and mathematical derivations. |
| Open Source Code | No | The paper makes no statement about releasing source code and provides no links to code repositories. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments using datasets. It focuses on proving universal approximation theorems for neural networks. |
| Dataset Splits | No | The paper is theoretical and does not use datasets for experiments; therefore, it specifies no dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware. No hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any experiments that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and focuses on mathematical proofs and derivations, not empirical experiments. Therefore, no experimental setup details, hyperparameters, or training configurations are provided. |
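
The abstract's two modes of operation can be made concrete with a short sketch. Below is a minimal NumPy illustration, not taken from the paper: the `dropout_layer` function, the shapes, and the `tanh` activation are assumptions chosen for demonstration; only the idea of per-edge Bernoulli filters versus their expectation comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_layer(x, W, p, mode="random"):
    """One feed-forward layer in which every edge carries a Bernoulli(p) filter.

    mode="random":        each edge output is multiplied by a sampled {0,1}
                          filter (random output, the first mode in the abstract).
    mode="deterministic": each edge output is multiplied by the filter's
                          expectation p (deterministic output, the second mode).
    """
    if mode == "random":
        mask = rng.binomial(1, p, size=W.shape)  # one independent filter per edge
        return np.tanh(x @ (W * mask))
    if mode == "deterministic":
        return np.tanh(x @ (W * p))              # expectation of each filter
    raise ValueError(f"unknown mode: {mode}")

# Usage: the random mode is stochastic across calls; the deterministic mode
# matches the standard weight-scaling rule used at inference time.
x = rng.normal(size=(1, 4))
W = rng.normal(size=(4, 3))
print(dropout_layer(x, W, p=0.8, mode="random"))
print(dropout_layer(x, W, p=0.8, mode="deterministic"))
```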