On the Learning with Augmented Class via Forests
Authors: Fan Xu, Wuyang Chen, Wei Gao
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Theoretically, we present the convergence analysis for our augmented Gini impurity, and we finally conduct experiments to evaluate our approaches. We conduct experiments on 15 benchmark datasets and 5 image datasets, and the details are summarized in Table 1. |
| Researcher Affiliation | Academia | Fan Xu, Wuyang Chen and Wei Gao, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; School of Artificial Intelligence, Nanjing University, Nanjing, China. All authors are affiliated with Nanjing University, and their email addresses use the .edu.cn domain, indicating an academic affiliation. |
| Pseudocode | Yes | Algorithm 1: Our LACForest approach; Algorithm 2: Deep Neural LACForest |
| Open Source Code | Yes | The code is available at https://github.com/nju-xuf/LACForest. |
| Open Datasets | Yes | We conduct experiments on 15 benchmark datasets and 5 image datasets, and the details are summarized in Table 1. Most datasets have been well-studied in previous works on learning with augmented class. Table 1 lists well-known public datasets such as 'mnist', 'fmnist', 'kuzushiji', 'svhn', 'cifar10', etc. |
| Dataset Splits | Yes | For each dataset, we randomly select half of classes as the augmented class with the rest as known classes, following [Zhang et al., 2020]. We then randomly sample 500 examples of known classes as labeled data Sl, and 1000 instances as unlabeled data Su and 100 instances as testing data. We take θ = 0.5 in Eqn. (1), and more experimental settings could be found in [Xu et al., 2025]. For our deep neural LACForest, we randomly select four classes as the augmented class and take the rest as known classes, and set θ = 0.4 similarly to [Shu et al., 2023]. |
| Hardware Specification | No | The paper mentions using a three-layer convolutional neural network and VGG16 as backbones for image datasets, but it does not provide any specific details about the CPU, GPU models, memory, or other hardware used for training or inference. |
| Software Dependencies | No | The paper does not provide specific version numbers for any programming languages, libraries, or frameworks used in the experiments. |
| Experiment Setup | Yes | We take θ = 0.5 in Eqn. (1), and more experimental settings can be found in [Xu et al., 2025]. We randomly select four classes as the augmented class and take the rest as known classes, and set θ = 0.4 similarly to [Shu et al., 2023]. Algorithm 1 and Algorithm 2 list hyperparameters such as m, τ, γ, l, T, and λce. Figure 5 shows that our approach is insensitive to parameter λce and generally works well for λce ∈ [0.2, 2]. Figure 6 shows the influence of the depth of neural trees, and our method gives stable results when the tree depth l ≥ 5. |
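The split protocol quoted in the Dataset Splits row (half of the classes held out as the augmented class, then 500 labeled known-class examples, 1000 unlabeled instances, and 100 test instances) can be sketched as follows. This is a hedged reconstruction of the described protocol, not the authors' code; the function name `lac_split` and the exact sampling order are assumptions.

```python
import numpy as np

def lac_split(X, y, n_labeled=500, n_unlabeled=1000, n_test=100, seed=0):
    """Sketch of the split protocol described in the review table:
    half of the classes become the augmented (unseen) class, the rest
    are known classes; labeled data S_l is drawn from known classes only,
    while unlabeled data S_u and the test set may contain any class."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    rng.shuffle(classes)
    known = classes[: len(classes) // 2]  # remaining half = augmented class

    # S_l: labeled examples drawn only from known classes
    known_idx = np.flatnonzero(np.isin(y, known))
    rng.shuffle(known_idx)
    labeled = known_idx[:n_labeled]

    # S_u and test: sampled from everything not already labeled
    rest = np.setdiff1d(np.arange(len(y)), labeled)
    rng.shuffle(rest)
    unlabeled = rest[:n_unlabeled]
    test = rest[n_unlabeled:n_unlabeled + n_test]
    return labeled, unlabeled, test, known
```

Drawing `S_u` and the test set from the remaining pool means both can contain augmented-class instances, which matches the open-set evaluation setting the paper describes.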