Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Avalanche: A PyTorch Library for Deep Continual Learning

Authors: Antonio Carta, Lorenzo Pellegrini, Andrea Cossu, Hamed Hemati, Vincenzo Lomonaco

JMLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Avalanche provides a large set of predefined benchmarks and training algorithms; it is modular and easy to extend while supporting a wide range of continual learning scenarios. Avalanche is thoroughly tested with a battery of unit tests. Each pull request is tested on a subset of the unit tests by the continuous integration pipeline on GitHub. A subset of continual-learning-baselines (link) is executed at a regular cadence to ensure that Avalanche baselines remain in line with expected results from the literature.
Researcher Affiliation Academia Antonio Carta EMAIL University of Pisa; Lorenzo Pellegrini EMAIL University of Bologna; Andrea Cossu EMAIL Scuola Normale Superiore; Hamed Hemati EMAIL University of St. Gallen; Vincenzo Lomonaco EMAIL University of Pisa
Pseudocode No The paper describes the architecture and functionalities of the Avalanche library using prose and block diagrams (Figure 1, Figure 2) but does not include any explicit pseudocode or algorithm blocks.
Open Source Code Yes Avalanche is an open-source library maintained by the ContinualAI non-profit organization that extends PyTorch by providing first-class support for dynamic architectures, streams of datasets, and incremental training and evaluation methods. Official Avalanche website: https://avalanche.continualai.org. https://github.com/ContinualAI/avalanche
Open Datasets Yes Benchmarks in Avalanche provide the data needed to train and evaluate CL models. Benchmarks are a collection of streams (e.g., a train and test stream for Split MNIST (Lomonaco et al., 2021)). Supported scenarios include class-incremental and domain-incremental settings, with benchmarks such as Split and Permuted MNIST, CORe50, Stream51, and Endless CL, plus benchmark generators.
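To make the "stream of experiences" idea concrete, here is a minimal stdlib-only sketch of a class-incremental split in the style of Split MNIST (classes partitioned across experiences). This is an illustration of the concept, not Avalanche's actual API; the function name and data layout are hypothetical.

```python
# Hypothetical sketch: partition labeled samples into a stream of
# "experiences" by class, mirroring class-incremental benchmarks
# such as Split MNIST. Not Avalanche's real API.

def class_incremental_stream(samples, n_experiences):
    """Group (x, label) pairs into experiences by contiguous label ranges."""
    labels = sorted({y for _, y in samples})
    per_exp = len(labels) // n_experiences
    stream = []
    for i in range(n_experiences):
        exp_labels = set(labels[i * per_exp:(i + 1) * per_exp])
        stream.append([(x, y) for x, y in samples if y in exp_labels])
    return stream

# Example: 10 classes split into 5 experiences of 2 classes each,
# as in the classic Split MNIST protocol.
data = [(f"img{i}", i % 10) for i in range(100)]
stream = class_incremental_stream(data, n_experiences=5)
```

In Avalanche itself, such streams are built from standard benchmark definitions (or via its benchmark generators) rather than constructed by hand.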
Dataset Splits No The paper describes the functionality of the Avalanche library for handling datasets and streams, mentioning 'a train and test stream for Split MNIST', but it does not specify concrete dataset split percentages or sample counts for any experiments conducted within this paper. It refers to 'standard benchmark definitions' without detailing them.
Hardware Specification No The paper describes a software library for continual learning and its features, including tracking system metrics like 'memory occupation and CPU usage'. However, it does not provide any specific details about the hardware (e.g., GPU/CPU models, memory amounts) used to develop or test the library's functionality.
Software Dependencies No Avalanche is a library built on top of PyTorch (Paszke et al., 2019). While PyTorch is clearly identified as a core dependency, a specific version number for PyTorch or any other ancillary software is not provided in the paper.
Experiment Setup No The paper focuses on describing the design and functionalities of the Avalanche library for continual learning, mentioning 'standard training algorithms' and 'training strategies'. However, it does not provide concrete experimental setup details such as specific hyperparameter values, model initialization, or training schedules for any experiments conducted within the paper.