Supervised Learning with Evolving Tasks and Performance Guarantees
Authors: Verónica Álvarez, Santiago Mazuelas, Jose A. Lozano
JMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark datasets show the performance improvement of the proposed methodology in multiple scenarios and the reliability of the presented performance guarantees. |
| Researcher Affiliation | Academia | Verónica Álvarez EMAIL Basque Center for Applied Mathematics (BCAM), Bilbao 48009, Spain; Santiago Mazuelas EMAIL Basque Center for Applied Mathematics (BCAM), IKERBASQUE-Basque Foundation for Science, Bilbao 48009, Spain; Jose A. Lozano EMAIL Intelligent Systems Group, University of the Basque Country UPV/EHU, San Sebastián, Spain, and Basque Center for Applied Mathematics (BCAM), Bilbao 48009, Spain |
| Pseudocode | Yes | Algorithm 1 MDA. Input: D1, D2, ..., Dk. Output: µ_k^k, R(U_k^k) if \|D_k\| > 0, and µ_k^{k-1}, R(U_k^{k-1}) if \|D_k\| = 0. for j = 1, 2, ..., k-1 do: obtain forward mean and MSE vectors τ_j^j, s_j^j as in (8)-(9) ... Algorithm 5 details the implementation of the optimization step of the proposed methodology. |
| Open Source Code | Yes | The methods presented can be implemented using the MRCpy library (Bondugula et al., 2021), and the specific code used in the experimental results is provided at https://github.com/MachineLearningBCAM/Supervised-learning-evolving-task-JMLR-2025. |
| Open Datasets | Yes | We utilize 13 public datasets that have often been used as benchmarks for sequences of tasks (see Table 4 in Appendix I). The datasets used in Section 8 are publicly available: Ginosar et al. (2015); Zhang et al. (2017); Peng et al. (2019); Lin et al. (2021); and http://yann.lecun.com/exdb/mnist/. |
| Dataset Splits | Yes | The samples in each task are randomly split into 100 samples for testing and the rest for training. In each repetition, the samples used for training are randomly sampled from the pool of training samples for each task. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. It mentions using 'a feature mapping defined by multiple features over instances together with one-hot encoding of labels as described in (2)' and 'the last layer of the ResNet18 pre-trained network' but does not specify the hardware on which these operations were performed. |
| Software Dependencies | No | The paper mentions that the methods can be implemented using 'MRCpy library (Bondugula et al., 2021)' but does not provide specific version numbers for this library or any other key software components, programming languages, or frameworks used in the experiments (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | The confidence vector λ in equation (6) is obtained with λ0 = 0.7, vector σ2 j in equation (6) is given by the variance of nj samples, vector dj in equation (13) is estimated using W = 2, variance dj of the noise process wj in (25) is estimated using the recursive approach presented in Akhlaghi et al. (2017); and the proposed methodology applied to CL in Section 6 is implemented using b = 3 backward steps. |
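The dataset-split protocol quoted above (100 held-out test samples per task, with training samples re-drawn from the remaining pool in each repetition) can be sketched as follows. This is a minimal illustration of that protocol, not the authors' code; the function name, argument names, and use of NumPy are assumptions.

```python
import numpy as np

def split_task(X, y, n_test=100, n_train=None, seed=None):
    """Hypothetical sketch of the per-task split described in the paper:
    hold out n_test samples for testing, and draw the training samples
    for one repetition at random from the remaining pool."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # shuffle all sample indices
    test_idx = idx[:n_test]                # fixed-size test set per task
    pool_idx = idx[n_test:]                # remaining samples form the pool
    if n_train is not None:
        # per-repetition subsample of the training pool (no replacement)
        pool_idx = rng.choice(pool_idx, size=n_train, replace=False)
    return (X[pool_idx], y[pool_idx]), (X[test_idx], y[test_idx])

# Example: a task with 500 samples, 100 for test, 50 training per repetition
X = np.arange(500).reshape(-1, 1)
y = np.arange(500) % 2
(train_X, train_y), (test_X, test_y) = split_task(X, y, n_test=100,
                                                  n_train=50, seed=0)
```

Re-running `split_task` with a different seed models a new repetition: the training subsample changes while the protocol (100 test samples, training drawn from the pool) stays fixed.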