Learnware Specification via Dual Alignment

Authors: Wei Chen, Jun-Xiang Mao, Xiaozheng Wang, Min-Ling Zhang

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To demonstrate the superior specification quality of the proposed DALI approach in the learnware paradigm, we conduct comparative experiments under different (homogeneous or heterogeneous) label space settings and mixed task settings. Furthermore, we perform additional tests to verify the privacy protection and provide a visualization of the specification. Additionally, an ablation study is carried out to analyze each alignment component of the DALI approach.
Researcher Affiliation | Collaboration | (1) School of Computer Science and Engineering, Southeast University, Nanjing, China; (2) Key Lab. of Computer Network and Information Integration (Southeast University), MOE, China; (3) Information Technology and Data Management Department of China Mobile Communications Group Zhejiang Co., Ltd. Correspondence to: Min-Ling Zhang <EMAIL>.
Pseudocode | Yes | The overall procedure of the DALI approach is outlined in Algorithm 1 of Appendix A. ... The detailed procedures for the submitting and deploying stages based on the DALI approach are outlined in Algorithm 2 and Algorithm 3.
Open Source Code | No | The paper does not contain any explicit statement about releasing code for the methodology described, nor does it provide a link to a code repository.
Open Datasets | Yes | Dataset. We evaluate the learnware paradigm by extracting and reconstructing datasets from DomainNet (Peng et al., 2019) and NICO (He et al., 2021). These two image datasets are commonly used to assess the effectiveness of model reuse.
Dataset Splits | No | The paper describes how developer tasks and user requirements are generated from the datasets (e.g., "11 domains × 2 label spaces (A and B) = 22 developer tasks... The data not included in the 22 tasks is treated as the 22 user requirements."), but it does not specify explicit train/test/validation splits with percentages or sample counts for the underlying datasets themselves.
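The quoted task construction can be sketched as a small enumeration. The domain names below are placeholders (the actual DomainNet domain names and the exact held-out data assignment are not given in the quote); only the counts follow from the text:

```python
# Sketch of the developer-task construction described in the quote:
# 11 domains, each paired with 2 label spaces (A and B), yields
# 11 * 2 = 22 developer tasks; the data excluded from those tasks
# forms the 22 matching user requirements.

domains = [f"domain_{i}" for i in range(11)]   # 11 domains (hypothetical names)
label_spaces = ["A", "B"]                      # 2 label spaces

developer_tasks = [(d, s) for d in domains for s in label_spaces]
assert len(developer_tasks) == 22

# One user requirement is paired with each developer task.
user_requirements = [f"req_{d}_{s}" for d, s in developer_tasks]
assert len(user_requirements) == 22
```

This only reproduces the arithmetic of the setup; how samples are split between a task and its paired requirement is not specified in the paper excerpt.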
Hardware Specification | No | The paper mentions that experiments were performed but does not specify any hardware details such as GPU models, CPU types, or memory.
Software Dependencies | No | Implementation details. In this experiment, we use RKME, RKME-W (Guo et al., 2023), and LANE specification methods as baselines, with parameters set to their optimal choices. To ensure the fairness of the comparative experiments, the global feature extractor and the well-established models included in the learnware dock system are derived from DenseNet201 (Huang et al., 2017) and ResNet18 (He et al., 2016). The random neural network ψ in the DALI approach is set to ConvNetBN (Rawat & Wang, 2017). The detailed implementation can be found in Appendix E.
Experiment Setup | Yes | The parameters involved in the LANE method and the proposed DALI approach are identical, including a batch size set to 64, a ConvNetBN (Rawat & Wang, 2017) network architecture, an activation function set to ReLU, and a normalization layer set to GroupNorm.
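As an illustration only — the paper releases no code, so the exact ConvNetBN variant is unknown — a minimal convolutional block matching the quoted settings (ReLU activation, GroupNorm normalization, batch size 64) might look like this hypothetical PyTorch sketch; the channel and group counts are assumptions:

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One conv block with the quoted settings: ReLU + GroupNorm.
    A hypothetical sketch, not the authors' actual ConvNetBN."""
    def __init__(self, in_ch: int, out_ch: int, groups: int = 8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.GroupNorm(groups, out_ch)  # normalization layer: GroupNorm
        self.act = nn.ReLU()                      # activation function: ReLU

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.conv(x)))

# Batch size 64, as quoted in the experiment setup; input size is assumed.
x = torch.randn(64, 3, 32, 32)
y = ConvBlock(3, 16)(x)
```

The group count must divide the output channel count for `nn.GroupNorm`; 8 groups over 16 channels is one valid choice.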