A Hybrid Weighted Nearest Neighbour Classifier for Semi-Supervised Learning
Authors: Stephen M. S. Lee, Mehdi Soleymani
JMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulation studies and real data examples are presented to support our theoretical findings and illustrate the empirical performance of the hybrid classifiers constructed using uniform weights. We also explore the effects of pseudo-labelling by hypothesized class probabilities as a supplement to our main findings. |
| Researcher Affiliation | Academia | Stephen M. S. Lee EMAIL Department of Statistics and Actuarial Science The University of Hong Kong Pokfulam Road, Hong Kong |
| Pseudocode | No | The paper describes the steps of the procedure in paragraph form (e.g., 'To construct the hybrid classifier, we first train a weighted nearest neighbour classifier...', 'Step 1. Select randomly a subsample...'), but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links to source code repositories, nor does it explicitly state that code will be released or is available in supplementary materials for the methodology described. |
| Open Datasets | Yes | Six real data sets are selected from the UCI repository (Lichman, 2013) to benchmark the revised hybrid weighted nearest neighbour classifier T^U_{n,m,ς,k} against the k-nearest neighbour and optimally weighted nearest neighbour classifiers. |
| Dataset Splits | Yes | The data points in each set are all labelled, scaled and then randomly assigned to a learning set and a test set with probabilities 0.3 and 0.7, respectively. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, or cloud computing instances) used for conducting the experiments or simulations. |
| Software Dependencies | No | The paper does not mention any specific software or library dependencies with their version numbers that were used in the implementation or experimentation. |
| Experiment Setup | Yes | The mixing coefficient ς is set to be 1 + (k₁m)/(k₂n), an optimal choice for the two-dimensional case as derived in Section 4.3. The tuning parameters (k₁, k₂) are selected from a grid of pilot values, which may differ between different methods. |
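The Dataset Splits and Experiment Setup rows above can be sketched in code. The following NumPy snippet is only illustrative: it implements a plain uniformly weighted k-nearest-neighbour vote on toy two-class data, with the random 0.3/0.7 learning/test assignment the paper describes. It is not the authors' hybrid classifier T^U_{n,m,ς,k}, and the data, `wnn_classify` helper, and parameter choices are assumptions made for the example.

```python
import numpy as np

def wnn_classify(X_train, y_train, x, k, weights=None):
    """Weighted k-NN vote at point x; uniform weights by default.

    A simplified stand-in for the weighted nearest neighbour
    classifiers benchmarked in the paper, not the hybrid method itself.
    """
    d = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    idx = np.argsort(d)[:k]                   # indices of k nearest points
    w = np.ones(k) / k if weights is None else weights
    # Predict class 1 when the weighted vote for class 1 exceeds 1/2
    return int(w @ (y_train[idx] == 1) > 0.5)

rng = np.random.default_rng(0)

# Toy two-class data: class means separated along the first coordinate
n = 400
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 2)) + np.column_stack([2.0 * y, np.zeros(n)])

# Random learning/test split with probabilities 0.3 and 0.7, as in the paper
mask = rng.random(n) < 0.3
X_learn, y_learn = X[mask], y[mask]
X_test, y_test = X[~mask], y[~mask]

preds = np.array([wnn_classify(X_learn, y_learn, x, k=7) for x in X_test])
accuracy = (preds == y_test).mean()
```

In the paper's setup the neighbourhood sizes (k₁, k₂) would additionally be tuned over a grid of pilot values; here a single fixed k = 7 keeps the sketch short.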