Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Task-Specific Preconditioner for Cross-Domain Few-Shot Learning
Authors: Suhyun Kang, Jungwon Park, Wonseok Lee, Wonjong Rhee
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations on the Meta-Dataset show that TSP achieves state-of-the-art performance across diverse experimental scenarios. |
| Researcher Affiliation | Collaboration | 1 Samsung Research, Seoul, South Korea; 2 Department of Intelligence and Information, Seoul National University, Seoul, South Korea; 3 IPAI, Seoul National University, Seoul, South Korea |
| Pseudocode | Yes | The algorithm for the training and testing procedures is provided in Appendix B. |
| Open Source Code | No | The paper notes that baseline methods such as TSA and TA2-Net are publicly available as open source, but it provides no explicit statement or link for the code of the proposed method (TSP). |
| Open Datasets | Yes | In the experiments, we use Meta-Dataset (Triantafillou et al. 2019), which is the standard benchmark for evaluating the performance of CDFSL. |
| Dataset Splits | Yes | In all experiments, we follow the standard protocol described in (Triantafillou et al. 2019). |
| Hardware Specification | No | The paper mentions using ResNet-18 as the backbone of the feature extractor but does not provide specific details about the hardware (e.g., GPU models, CPU types) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components or libraries used (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For the Dataset Classifier Loss, the weighting factor λ is set to 0.1, as it performs best compared to other values (Appendix D.1). Details of the Meta-Dataset, hyper-parameters, and additional implementation are available in Appendix E. |