Joint Feature and Differentiable $ k $-NN Graph Learning using Dirichlet Energy
Authors: Lei Xu, Lei Chen, Rong Wang, Feiping Nie, Xuelong Li
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the effectiveness of our model with extensive experiments on both synthetic and real-world datasets. |
| Researcher Affiliation | Academia | Lei Xu, School of Computer Science & School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, P.R. China; Lei Chen, School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210003, P.R. China; Rong Wang, School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, P.R. China; Feiping Nie, School of Artificial Intelligence, Optics and Electronics (iOPEN) & School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, P.R. China; Xuelong Li, School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, P.R. China |
| Pseudocode | Yes | Algorithm 1 UFS |
| Open Source Code | No | The paper uses or references code for competing methods and for a differentiable top-k selector component, but does not provide an explicit statement or link to the source code for its own proposed methodology. For example: "the implementation of differentiable top-k selector is based on the code provided by [26] in https://papers.nips.cc/paper_files/paper/2020/hash/ec24a54d62ce57ba93a531b460fa8d18-Abstract.html" |
| Open Datasets | Yes | Table 1 exhibits the details of these datasets, which include many high-dimensional datasets to test the performance of our method. |
| Dataset Splits | Yes | We partition each dataset into training data and testing data using an 8:2 ratio and identify useful features using training data. |
| Hardware Specification | Yes | All experiments are conducted on a server equipped with an RTX 3090 GPU and an Intel Xeon Gold 6240 (18C36T) @ 2.6GHz x 2 (36 cores in total) CPU. |
| Software Dependencies | No | The paper mentions software frameworks like PyTorch and scikit-learn but does not provide specific version numbers for these dependencies, which are necessary for a reproducible setup. For example: "Our method is implemented using the PyTorch framework [48]." and "scikit-learn library [47]". |
| Experiment Setup | Yes | We train our method using the Adam optimizer for 1000 epochs on all datasets, with the learning rate searched from $\{10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 10^{0}, 10^{1}\}$. We search the parameter γ in $\{10^{-3}, 10^{-2}, 10^{-1}\}$ and the parameter k in {1, 2, 3, 4, 5}. (A sketch of this setup follows the table.) |
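
The Dataset Splits and Experiment Setup rows describe the evaluation protocol in prose only. Below is a minimal PyTorch-style sketch of the quoted 8:2 partition and (learning rate, γ, k) grid search. The `build_ufs_model` factory and the assumption that calling the model returns a scalar training loss are hypothetical placeholders, since the authors' UFS implementation is not released.

```python
import itertools

import torch
from sklearn.model_selection import train_test_split

# 8:2 train/test partition, as stated in the Dataset Splits row.
# X is assumed to be an (n_samples, n_features) array for one dataset.
def split_dataset(X, y):
    return train_test_split(X, y, test_size=0.2, random_state=0)

# Hyperparameter grids quoted in the Experiment Setup row.
LEARNING_RATES = [1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1]
GAMMAS = [1e-3, 1e-2, 1e-1]
KS = [1, 2, 3, 4, 5]
EPOCHS = 1000

def train(model, X_train, lr, epochs=EPOCHS):
    """Adam training loop for the stated number of epochs.

    Assumes `model(X_train)` returns the scalar training loss
    (the Dirichlet-energy-based objective); the real UFS interface may differ.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = model(X_train)
        loss.backward()
        optimizer.step()
    return model

def grid_search(X_train, build_ufs_model):
    """Search over the (lr, gamma, k) grids defined above.

    `build_ufs_model(gamma, k)` is a hypothetical model factory,
    not part of any released artifact of the paper.
    """
    trained = {}
    for lr, gamma, k in itertools.product(LEARNING_RATES, GAMMAS, KS):
        model = build_ufs_model(gamma=gamma, k=k)
        trained[(lr, gamma, k)] = train(model, X_train, lr)
    return trained
```

The sketch only mirrors the search ranges and training budget quoted from the paper; model construction, feature selection from the learned scores, and downstream evaluation on the held-out 20% are left out because the paper's text in this table does not specify them.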