Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Near-Neighbor Methods in Random Preference Completion
Authors: Ao Liu, Qiong Wu, Zhenming Liu, Lirong Xia
AAAI 2019, pp. 4336-4343 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic data verify our theoretical findings, and demonstrate that our algorithm is robust in high-dimensional spaces. Experiments on Netflix data show that our anchor-based algorithm is superior to the KT-kNN algorithm and a standard collaborative filter (using the cosine similarities to determine neighbors). |
| Researcher Affiliation | Academia | Ao Liu, Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY 12180, USA (EMAIL); Qiong Wu and Zhenming Liu, Department of Computer Science, College of William and Mary, Williamsburg, VA 23187, USA (EMAIL, EMAIL); Lirong Xia, Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY 12180, USA (EMAIL) |
| Pseudocode | Yes | Algorithm 1: KT-kNN (shown to produce incorrect results) and Algorithm 2: Anchor-kNN. |
| Open Source Code | No | The paper does not contain any statement or link indicating the availability of its source code. |
| Open Datasets | Yes | We examine the performance of Anchor-kNN using the standard Netflix dataset (?; ?). |
| Dataset Splits | No | The paper mentions choosing an "optimal k (chosen by using cross-validations for Ground-Truth-kNN)" but does not specify the splits for training, validation, or testing for its own experiments. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper mentions using an "optimal k (chosen by using cross-validations for Ground-Truth-kNN)" and refers to "different k ∈ [101, 1601]" and "k = 751" for Ground-Truth-kNN in its experiments. However, it does not provide explicit hyperparameters such as learning rates, batch sizes, or other training configurations. |
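For context on the baseline discussed under Research Type, a cosine-similarity kNN collaborative filter predicts a missing rating by averaging the item's ratings among the most similar users. The sketch below is a generic illustration, not the paper's implementation; the function name, the NaN convention for missing entries, and the mean-aggregation rule are assumptions.

```python
import numpy as np

def cosine_knn_predict(ratings, user, item, k=5):
    """Predict a missing rating for (user, item) by averaging the item's
    ratings among the k users most similar to `user` by cosine similarity.
    `ratings` is an (n_users, n_items) array with np.nan for missing entries.
    """
    filled = np.nan_to_num(ratings)            # treat missing entries as 0
    target = filled[user]
    norms = np.linalg.norm(filled, axis=1) * np.linalg.norm(target)
    norms[norms == 0] = 1.0                    # avoid division by zero
    sims = filled @ target / norms
    sims[user] = -np.inf                       # exclude the user themself
    sims[np.isnan(ratings[:, item])] = -np.inf # neighbors must have rated the item
    neighbors = np.argsort(sims)[::-1][:k]
    neighbors = neighbors[np.isfinite(sims[neighbors])]
    if neighbors.size == 0:
        return float("nan")
    return float(np.mean(ratings[neighbors, item]))
```

This is the standard baseline construction; the paper's contribution (Anchor-kNN) replaces the raw cosine step with an anchor-based neighbor search, which is not reproduced here.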
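The Dataset Splits and Experiment Setup rows both note that k was chosen by cross-validation without the splits being specified. A generic selection loop of the kind implied might look like the sketch below; the function names, the fold construction, and the mean-absolute-error criterion are assumptions, since the paper does not describe its procedure.

```python
import numpy as np

def choose_k_by_cv(ratings, candidate_ks, predict, n_folds=5, seed=0):
    """Pick the k with the lowest mean absolute error on held-out ratings.
    `predict(train, user, item, k)` returns a predicted rating (may be NaN).
    """
    rng = np.random.default_rng(seed)
    observed = np.argwhere(~np.isnan(ratings))   # (user, item) pairs with ratings
    rng.shuffle(observed)
    folds = np.array_split(observed, n_folds)
    errors = {k: [] for k in candidate_ks}
    for fold in folds:
        train = ratings.copy()
        train[fold[:, 0], fold[:, 1]] = np.nan   # hold out this fold
        truth = ratings[fold[:, 0], fold[:, 1]]
        for k in candidate_ks:
            preds = [predict(train, u, i, k) for u, i in fold]
            errors[k].extend(abs(p - t) for p, t in zip(preds, truth)
                             if not np.isnan(p))
    return min(candidate_ks, key=lambda k: np.mean(errors[k]))
```

Reporting this loop's fold count and candidate range (the paper mentions k ∈ [101, 1601]) is exactly the information the "No" classifications above flag as missing.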