On the Generalization of Feature Incremental Learning

Authors: Chao Xu, Xijia Tang, Lijun Zhang, Chenping Hou

IJCAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Finally, the comprehensive experimental and theoretical results mutually validate each other, underscoring the reliability of our conclusions. … Comprehensive experimental results corroborate the theoretical findings, enhancing their reliability and demonstrating the feasibility of applying these theoretical insights to model design. … In this section, soft margin SVM [Cortes and Vapnik, 1995] and logistic regression (LR) [Berger et al., 1996] are applied as demonstrations, aiming to form mutual verification through experiments and theories.
Researcher Affiliation Academia Chao Xu1, Xijia Tang1, Lijun Zhang2 and Chenping Hou1. 1College of Science, National University of Defense Technology, Changsha, 410073, China. 2Nanjing University, Nanjing, China. Email: {zljzju}@gmail.com
Pseudocode No The paper describes mathematical formulations and strategies but does not include any explicitly labeled pseudocode blocks or algorithms in a structured, step-by-step format typical of pseudocode.
Open Source Code No The paper does not contain any explicit statements about releasing source code or provide links to a code repository.
Open Datasets Yes We adopt 8 datasets from the UCI Repository (http://archive.ics.uci.edu/ml) and the LIBSVM Library (http://www.csie.ntu.edu.tw/~cjlin/libsvm) to carry out the experiments.
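Datasets distributed through the LIBSVM Library use the standard sparse text format, one example per line: a label followed by index:value pairs. As a point of reference for how such files are read, here is a minimal pure-Python parser sketch; the function name is our own illustration, not code from the paper.

```python
def parse_libsvm_line(line):
    """Parse one line of LIBSVM sparse format: '<label> <index>:<value> ...'.

    Returns the label as a float and the features as a {index: value} dict;
    indices absent from the line are implicitly zero.
    """
    parts = line.split()
    label = float(parts[0])
    features = {}
    for token in parts[1:]:
        idx, val = token.split(":")
        features[int(idx)] = float(val)
    return label, features
```

For example, `parse_libsvm_line("+1 3:0.5 7:1.2")` yields label `1.0` with features `{3: 0.5, 7: 1.2}`.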
Dataset Splits Yes As for the parameters selection of the algorithms, we conduct K-fold cross-validation on the training set.
Hardware Specification No The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies No The paper mentions applying soft margin SVM [Cortes and Vapnik, 1995] and logistic regression (LR) [Berger et al., 1996] but does not specify the versions of any software libraries, frameworks, or programming languages used.
Experiment Setup Yes As for the parameters selection of the algorithms, we conduct K-fold cross-validation on the training set. Specifically, we use the grid search method to obtain the optimal parameter combination, and the search range of each parameter is {10^-3, 10^-2, 10^-1, 10^0, 10^1, 10^2, 10^3}.
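The described setup (grid search over a logarithmic range, scored by K-fold cross-validation on the training set) can be sketched in pure Python. This is an illustrative skeleton under our own assumptions, not the authors' code: the two-parameter (C, gamma) grid and the score_fn callback, which would train e.g. a soft-margin SVM or LR on the training folds and return validation accuracy, are hypothetical stand-ins.

```python
import itertools

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k (near-)equal contiguous folds."""
    folds = []
    fold_size, rem = divmod(n, k)
    start = 0
    for i in range(k):
        end = start + fold_size + (1 if i < rem else 0)
        folds.append(list(range(start, end)))
        start = end
    return folds

# Search range from the paper: each parameter ranges over 10^-3 .. 10^3.
GRID = [10.0 ** p for p in range(-3, 4)]

def grid_search_cv(train, score_fn, k=5):
    """Return the (C, gamma) pair with the best mean K-fold validation score.

    score_fn(train_idx, val_idx, C, gamma) is a user-supplied callback that
    fits a model on the training indices and scores it on the validation
    indices (higher is better).
    """
    folds = k_fold_indices(len(train), k)
    best, best_score = None, float("-inf")
    for C, gamma in itertools.product(GRID, GRID):
        scores = []
        for i, val_idx in enumerate(folds):
            # All remaining folds form the training split for this round.
            train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
            scores.append(score_fn(train_idx, val_idx, C, gamma))
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best, best_score = (C, gamma), mean
    return best, best_score
```

In practice one would plug in an actual learner for score_fn; the skeleton only fixes the search protocol (7 values per parameter, mean validation score across K folds, argmax over the grid).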