Distributed Sequence Memory of Multidimensional Inputs in Recurrent Networks

Authors: Adam S. Charles, Dong Yin, Christopher J. Rozell

JMLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To empirically verify that these theoretical STM scaling laws are representative of the empirical behavior, we generated a number of random networks and evaluated the recovery of (sparse or low-rank) input sequences in the presence of noise. ... In Figure 2 we show the relative mean-squared error of the input recovery as a function of the sparsity-to-network size ratio K/M and the network size-to-input ratio M/(NL). Each pixel value represents the average recovery relative mean-squared error (rMSE), calculated as rMSE = ||ŝ − s||₂² / ||s||₂², over 20 randomly generated trials with a noise level of σ² = 0.01." (Section 4, Simulation)
Researcher Affiliation | Academia | Adam S. Charles (EMAIL), Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Dong Yin (EMAIL), Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA 94720-1776, USA; Christopher J. Rozell (EMAIL), School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0250, USA
Pseudocode | No | The paper includes mathematical equations and proofs (e.g., in the Appendix), but does not present any structured pseudocode or algorithm blocks. It focuses on theoretical derivations and simulations described in paragraph text.
Open Source Code | No | The paper does not contain any explicit statement about providing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | No | "For each simulation we generate an M × M random orthogonal connectivity matrix W and an M × L random Gaussian feed-forward matrix Z. In both cases we fixed the number of inputs to L = 40 and the number of time-steps to N = 100 while varying the network size M and underlying dimensionality of the input (i.e., the sparsity level or the input matrix rank). For the sparse input simulations, inputs were chosen with a uniformly random support pattern with random Gaussian values on the support. For low-rank simulations, the right singular vectors were chosen to be Gaussian random vectors, and the left singular vectors were chosen at random from a number of different basis sets."
Dataset Splits | No | The paper conducts simulations by generating synthetic data for each trial, rather than using a pre-existing dataset with specified training, validation, and test splits. The evaluation involves "20 randomly generated trials" for calculating rMSE, which does not constitute dataset splitting in the conventional sense.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the simulations or experiments. It only mentions generating "a number of random networks" for simulation.
Software Dependencies | No | The paper describes generating random networks and performing simulations, but does not list any specific software or library names with version numbers used for the implementation or analysis.
Experiment Setup | Yes | "For each simulation we generate an M × M random orthogonal connectivity matrix W and an M × L random Gaussian feed-forward matrix Z. In both cases we fixed the number of inputs to L = 40 and the number of time-steps to N = 100 while varying the network size M and underlying dimensionality of the input (i.e., the sparsity level or the input matrix rank). ... Each pixel value represents the average recovery relative mean-squared error (rMSE), calculated as rMSE = ||ŝ − s||₂² / ||s||₂², over 20 randomly generated trials with a noise level of σ² = 0.01."
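Although no code accompanies the paper, the quoted setup is concrete enough to sketch. The NumPy fragment below is a hypothetical reconstruction, not the authors' implementation: variable names, the seed, the illustrative M and K values, and the use of QR to draw a random orthogonal W are all assumptions, and the sparse-recovery solver itself (e.g., an ℓ1-minimization step) is omitted. It generates the random network matrices and a sparse input sequence, and defines the rMSE metric used in Figure 2.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is an arbitrary choice

L, N = 40, 100   # number of inputs and time steps, fixed in the paper
M = 200          # network size (varied in the paper; this value is illustrative)
K = 10           # sparsity level (varied in the paper; illustrative)
sigma2 = 0.01    # noise variance reported for the Figure 2 trials

# M x M random orthogonal connectivity matrix W, here drawn via QR
# factorization of a Gaussian matrix (one standard construction).
W, _ = np.linalg.qr(rng.standard_normal((M, M)))

# M x L random Gaussian feed-forward matrix Z.
Z = rng.standard_normal((M, L))

# Sparse input sequence: uniformly random support, Gaussian values on it.
S = np.zeros((N, L))
idx = rng.choice(N * L, size=K, replace=False)
S.flat[idx] = rng.standard_normal(K)

def rmse(s_hat, s):
    """Relative mean-squared error: ||s_hat - s||_2^2 / ||s||_2^2."""
    return np.sum((s_hat - s) ** 2) / np.sum(s ** 2)
```

In the paper's protocol, a recovery of S from the noisy network state would be computed and `rmse` averaged over 20 such random trials per (K/M, M/NL) pixel.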