Lightweight Learner for Shared Knowledge Lifelong Learning
Authors: Yunhao Ge, Yuecheng Li, Di Wu, Ao Xu, Adam M. Jones, Amanda Sofie Rios, Iordanis Fostiropoulos, Shixian Wen, Po-Hsuan Huang, Zachary William Murdock, Gozde Sahin, Shuo Ni, Kiran Lekkala, Sumedh Anand Sontakke, Laurent Itti
TMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On a new, very challenging SKILL-102 dataset with 102 image classification tasks (5,033 classes in total, 2,041,225 training, 243,464 validation, and 243,464 test images), we achieve much higher (and SOTA) accuracy over 8 LL baselines, while also achieving near perfect parallelization. |
| Researcher Affiliation | Collaboration | (1) Thomas Lord Department of Computer Science, University of Southern California; (2) Neuroscience Graduate Program, University of Southern California; (3) Intel Labs; (4) Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences; (5) Dornsife Department of Psychology, University of Southern California |
| Pseudocode | No | The paper describes the algorithms and methods in detail using natural language and mathematical equations (e.g., equations 1-5), and Figure 3 provides a diagram of the algorithm design and overall pipeline. However, there is no explicit section or block labeled 'Pseudocode' or 'Algorithm' with structured, code-like steps. |
| Open Source Code | Yes | Code and data can be found at https://github.com/gyhandy/Shared-Knowledge-Lifelong-Learning |
| Open Datasets | Yes | On a new, very challenging SKILL-102 dataset with 102 image classification tasks (5,033 classes in total, 2,041,225 training, 243,464 validation, and 243,464 test images), we achieve much higher (and SOTA) accuracy over 8 LL baselines, while also achieving near perfect parallelization. Code and data can be found at https://github.com/gyhandy/Shared-Knowledge-Lifelong-Learning |
| Dataset Splits | Yes | On a new, very challenging SKILL-102 dataset with 102 image classification tasks (5,033 classes in total, 2,041,225 training, 243,464 validation, and 243,464 test images) |
| Hardware Specification | Yes | Agents are implemented in PyTorch and run on desktop-grade GPUs (e.g., NVIDIA 3090, NVIDIA 1080). |
| Software Dependencies | No | The paper states 'Agents are implemented in PyTorch' but does not provide a specific version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Pretrained backbone: We use the Xception (Chollet, 2017) pretrained on ImageNet (Deng et al., 2009)... We use k = 25 clusters for every task (ablation studies in Appendix)... In our experiments, we use m = 5 images/class for every task... Finally, we trained the concatenated vector with Adam optimizer, 0.001 learning rate, and 100 epochs. |
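To make the reported training configuration concrete, here is a minimal sketch of a per-task linear head trained with the stated hyperparameters (Adam optimizer, 0.001 learning rate, 100 epochs). This is not the authors' code: the random 2048-d features stand in for outputs of the frozen ImageNet-pretrained Xception backbone, the 10-class task and Adam implementation are illustrative assumptions, and only the hyperparameter values come from the paper.

```python
import numpy as np

# Hyperparameters as reported in the Experiment Setup row.
LR, EPOCHS = 1e-3, 100
# Standard Adam moment constants (assumed; not stated in the paper).
BETA1, BETA2, EPS = 0.9, 0.999, 1e-8

rng = np.random.default_rng(0)
# Hypothetical stand-in: a frozen Xception backbone would normally
# produce these 2048-d features; here we use random vectors.
X = rng.standard_normal((256, 2048))
y = rng.integers(0, 10, 256)          # synthetic 10-class task labels

W = np.zeros((2048, 10))              # per-task linear classification head
m, v = np.zeros_like(W), np.zeros_like(W)

def loss_and_grad(W):
    """Softmax cross-entropy loss and its gradient w.r.t. W."""
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    loss = -np.log(p[np.arange(len(y)), y]).mean()
    p[np.arange(len(y)), y] -= 1.0
    return loss, X.T @ p / len(y)

for t in range(1, EPOCHS + 1):
    loss, g = loss_and_grad(W)
    # Adam update with bias-corrected first and second moments.
    m = BETA1 * m + (1 - BETA1) * g
    v = BETA2 * v + (1 - BETA2) * g ** 2
    W -= LR * (m / (1 - BETA1 ** t)) / (np.sqrt(v / (1 - BETA2 ** t)) + EPS)

print(f"final loss: {loss:.3f}")
```

In the paper's actual pipeline this head would be trained per task on backbone features, with the k = 25 clusters and m = 5 images/class figuring in the shared-knowledge components rather than in this loss loop.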