Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

LINOCS: Lookahead Inference of Networked Operators for Continuous Stability

Authors: Noga Mudrik, Eva Yezerets, Yenho Chen, Christopher John Rozell, Adam Shabti Charles

TMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We demonstrate LINOCS' ability to recover the ground-truth dynamical operators underlying synthetic time-series data for multiple dynamical systems models (including linear, piecewise-linear, time-changing linear systems decomposition, and regularized linear time-varying systems), as well as its capability to produce meaningful operators with robust reconstructions through various real-world examples. Our contributions in this paper notably include: We demonstrate that applying LINOCS improves the ability to recognize ground-truth operators. We show LINOCS' efficacy across a diverse range of dynamical systems, including linear, periodically linear, linear time-varying (LTV), and decomposed linear. Finally, we demonstrate LINOCS' ability to work on real-world brain recordings, resulting in better long-term reconstruction compared to baselines.
Researcher Affiliation Academia Noga Mudrik (EMAIL), Biomedical Engineering, Kavli NDI, CIS, The Johns Hopkins University, Baltimore, MD 21218. Eva Yezerets (EMAIL), Biomedical Engineering, CIS, The Johns Hopkins University, Baltimore, MD 21218. Yenho Chen (EMAIL), Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332. Christopher J. Rozell (EMAIL), School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332. Adam S. Charles (EMAIL), Biomedical Engineering, Kavli NDI, CIS, The Johns Hopkins University, Baltimore, MD 21218.
Pseudocode Yes
Algorithm 1 (Linear-LINOCS). Inputs: observations X̃ and maximum order K. Build K multi-step reconstructions to get the set {ψk} as described in Appendix A.5. Infer transition A via Eq. (8) as described in Sec. A.5. Infer offset b via Eq. (16).
Algorithm 2 (SLDS-LINOCS). Inputs: observations X̃, maximum order K ∈ R+, number of iterations, initial parameters. Initialize {f_j}, j = 1…J, from a uniform distribution. For each iteration, until a defined number of iterations, do …
Algorithm 3 (dLDS-LINOCS). Inputs: observations X̃, maximum order K ∈ R+. Initialize C ∈ R^(J×T) and {f_j}, j = 1…J, from a uniform distribution; normalize each f_j to unit spectral norm (f_j ← f_j / max λ(f_j) for j = 1, …, J). While not converged, do …
Algorithm 4 (LTV-LINOCS). Inputs: observations X̃, maximum order K ∈ R+, number of iterations M. Initialize {A_t}, t = 1…T, from a uniform distribution. For each iteration m = 1 … M, do …
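The Linear-LINOCS row above builds K multi-step reconstructions and then infers A and b via the paper's Eq. (8) and Eq. (16), which are not reproduced here. As a loose illustration of the lookahead idea only, not the paper's actual solver, one can minimize an exponentially weighted sum of k-step-ahead prediction errors directly; the system size, horizon K, weights, and optimizer below are all assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Ground-truth stable 2x2 linear system: x_{t+1} = A x_t + noise
A_true = np.array([[0.90, -0.10],
                   [0.05,  0.95]])
T, N = 300, 2
X = np.zeros((T, N))
X[0] = rng.standard_normal(N)
for t in range(T - 1):
    X[t + 1] = A_true @ X[t] + 0.05 * rng.standard_normal(N)

K = 5                           # lookahead horizon (hypothetical choice)
w = 0.8 ** np.arange(1, K + 1)  # exponential lookahead weights (hypothetical)

def lookahead_loss(a_flat):
    """Weighted sum of k-step-ahead prediction errors for k = 1..K."""
    A = a_flat.reshape(N, N)
    loss = 0.0
    for k in range(1, K + 1):
        Ak = np.linalg.matrix_power(A, k)
        pred = X[: T - k] @ Ak.T        # k-step-ahead prediction from each x_t
        loss += w[k - 1] * np.sum((X[k:] - pred) ** 2)
    return loss

# Initialize from the ordinary one-step least-squares estimate, then refine
A0 = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
A_hat = minimize(lookahead_loss, A0.ravel(), method="BFGS").x.reshape(N, N)
```

This sketch omits the offset b and the closed-form machinery of Appendix A.5; it only shows why weighting multi-step errors penalizes operators whose long-horizon rollouts drift, which is the stability property the method targets.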
Open Source Code Yes The code is shared on GitHub and available at https://github.com/NogaMudrik/LINOCS, with tutorial notebooks available there (for the Linear-LINOCS example) and at https://github.com/NogaMudrik/LINOCS/blob/main/run_dLDS_Lorenz_example.ipynb for the dLDS-LINOCS example.
Open Datasets Yes The human neural recordings data we used to exemplify LINOCS is available at (Kyzar et al., 2024). We loaded the data from the DANDI Archive in NWB (Neurodata Without Borders) format (Rübel et al., 2022), and used a single session from it. This session includes recordings of a 63-year-old male subject (Subject 10 in the data) recorded in June 2023 while performing the Sternberg task.
Dataset Splits No The paper describes synthetic data generation with parameters like 'T = 500 time points' and 'noise η ∼ N(0, σ²) = N(0, 0.3²)' but does not specify explicit training, validation, or test splits. For real-world neural data, it states using 'a single session from it' and 'the initial 850 samples' but does not detail how these samples were partitioned into reproducible dataset splits for training and evaluation in the conventional machine learning sense.
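The quoted generation parameters (T = 500 time points, noise η ∼ N(0, 0.3²)) are enough to sketch what such a synthetic trajectory looks like; the latent dimensionality and transition matrix below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

T, N = 500, 3        # T = 500 time points as quoted; N is a hypothetical dimension
sigma = 0.3          # noise std: eta ~ N(0, 0.3^2) as quoted
A = np.array([[0.95, -0.10, 0.00],
              [0.10,  0.95, 0.00],
              [0.00,  0.00, 0.90]])  # hypothetical stable transition matrix

X = np.zeros((T, N))
X[0] = rng.standard_normal(N)
for t in range(T - 1):
    eta = rng.normal(0.0, sigma, size=N)   # additive Gaussian process noise
    X[t + 1] = A @ X[t] + eta
print(X.shape)  # (500, 3)
```

Since no splits are reported, any train/validation/test partition of such a series (e.g., a chronological split) would itself be a reproduction choice rather than something specified by the paper.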
Hardware Specification No The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, or memory specifications) used to run the experiments.
Software Dependencies No The paper mentions using 'SciPy's implementation, by Crouse (2016)' for the linear sum assignment problem and references code repositories for baselines. However, it does not provide specific version numbers for key software components such as Python, main machine learning libraries (e.g., PyTorch, TensorFlow), or other numerical computing packages used for the LINOCS implementation itself.
Experiment Setup Yes Section A.3, titled 'Hyperparameters used in experiments', contains tables (Table 1, Table 2, Table 3, Table 4, Table 5, Table 6) that list specific hyperparameter settings for various experiments, including maximum order (K, Kb), weights styles (exponential), regularization weights (λc, λ), number of iterations (Niterations, max_iters), initializations, and other specific parameters for linear, SLDS, dLDS, and LTV systems. For example, Table 4 for 'LINOCS in linear experiment' details K=80, weights_style=exponential, σw, etc.