Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
ReservoirComputing.jl: An Efficient and Modular Library for Reservoir Computing Models
Authors: Francesco Martinuzzi, Chris Rackauckas, Anas Abdelrehim, Miguel D. Mahecha, Karin Mora
JMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Figure 2 illustrates the superior computational performance of the library. The speed of the Julia code is calculated without considering the JIT compilation. ... ReservoirComputing.jl shows 1.5 times higher computational speed in the worst case scenario and 14.3 times higher in the best case scenario, both compared with the most performant times for reservoir size for the CPU. For GPU calculations the library ranges from being 1.4 to 3.0 times faster. |
| Researcher Affiliation | Collaboration | Francesco Martinuzzi 1, 2, 3 ... 1 Center for Scalable Data Analytics and Artificial Intelligence, Leipzig University, Germany ... 3 Julia Computing; Chris Rackauckas 3, 4; Anas Abdelrehim 3; Miguel D. Mahecha 1, 2, 5; Karin Mora 1, 5 ... 4 Massachusetts Institute of Technology |
| Pseudocode | No | For brevity no code snippets are shown in this paper, but the documentation provides a large number of examples. |
| Open Source Code | Yes | We introduce ReservoirComputing.jl, an open source Julia library for reservoir computing models. ... The code and documentation are hosted on GitHub under an MIT license https://github.com/SciML/ReservoirComputing.jl. |
| Open Datasets | No | The performance test task is a next step prediction of the Mackey-Glass system (Glass and Mackey, 2010) with time delay τ = 17. The paper does not provide concrete access information for a public dataset. |
| Dataset Splits | Yes | Training and prediction lengths are both equal to 4999. |
| Hardware Specification | Yes | All the simulations were run on a Dell XPS 9510 fitted with an Intel Core i7-11800H CPU, an NVIDIA GeForce RTX 3050 Ti GPU and 16 GB of RAM. |
| Software Dependencies | No | Multiple training methods to obtain the output layer ψ can be obtained from open source libraries such as MLJLinearModels.jl (Blaom and Vollmer, 2020), GaussianProcesses.jl (Fairbrother et al., 2021) and LIBSVM.jl (Kornblith and Pastell, 2021), a Julia port of LIBSVM (Chang and Lin, 2011). These are mentioned, but specific version numbers used by the authors for these ancillary libraries are not provided. The versions listed refer to compared libraries or their own library version, not external dependencies. |
| Experiment Setup | Yes | The performance test task is a next step prediction of the Mackey-Glass system (Glass and Mackey, 2010) with time delay τ = 17. The dense reservoir matrix and the dense input matrix are generated with uniform distribution sampled from [-1, 1]. The spectral radius of the reservoir matrix is scaled by 1.25. The ridge regression parameter is set to 10⁻⁸. Training and prediction lengths are both equal to 4999. The time reported is the sum of training and prediction. For central processing unit (CPU) computations the precision is set to float64, for the graphics processing unit (GPU) computations it is float32. ... The point timings for each reservoir size are obtained by averaging over 100 runs with different random initializations. |
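The experiment-setup row above is concrete enough to sketch. The following is a minimal from-scratch echo state network in NumPy that mirrors the reported configuration (uniform [-1, 1] reservoir and input matrices, spectral radius scaled to 1.25, ridge parameter 10⁻⁸, training length 4999, next-step Mackey-Glass target). It is not the ReservoirComputing.jl API; the reservoir size, Euler integration step, and initial-condition noise below are illustrative assumptions, and for brevity only the one-step-ahead training fit is evaluated rather than the paper's 4999-step prediction run.

```python
import numpy as np

def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, p=10, dt=1.0, seed=0):
    """Integrate the Mackey-Glass delay equation with a simple Euler scheme."""
    rng = np.random.default_rng(seed)
    hist = int(tau / dt)
    x = np.full(n + hist, 1.2)
    x[:hist] += 0.01 * rng.standard_normal(hist)  # perturbed constant history
    for t in range(hist, n + hist - 1):
        x_tau = x[t - hist]
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau**p) - gamma * x[t])
    return x[hist:]

rng = np.random.default_rng(42)
series = mackey_glass(10_000)
train_len = 4999                      # training length from the paper

# Dense reservoir and input matrices, uniform on [-1, 1] (as in the paper).
n_res = 300                           # reservoir size: illustrative; the paper sweeps many
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 1.25 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius scaled to 1.25
W_in = rng.uniform(-1, 1, n_res)

# Drive the reservoir with the training input and collect states.
states = np.zeros((train_len, n_res))
s = np.zeros(n_res)
for t in range(train_len):
    s = np.tanh(W @ s + W_in * series[t])
    states[t] = s

# Ridge-regression readout with regularization 1e-8 (as in the paper),
# trained on the next-step target.
targets = series[1 : train_len + 1]
A = states.T @ states + 1e-8 * np.eye(n_res)
w_out = np.linalg.solve(A, states.T @ targets)

pred = states @ w_out                 # one-step-ahead fit on the training span
print(f"train RMSE: {np.sqrt(np.mean((pred - targets) ** 2)):.4f}")
```

The closed-form ridge solve is the usual ESN readout: only the output weights are trained, which is why the paper can benchmark a single training-plus-prediction pass per random initialization.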