TopicNet: Semantic Graph-Guided Topic Discovery
Authors: Zhibin Duan, Yishi Xu, Bo Chen, Dongsheng Wang, Chaojie Wang, Mingyuan Zhou
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on widely used benchmarks show that TopicNet outperforms related deep topic models on discovering deeper interpretable topics and mining better document representations. |
| Researcher Affiliation | Academia | Zhibin Duan, Yishi Xu, Bo Chen, Dongsheng Wang, Chaojie Wang: National Laboratory of Radar Signal Processing, Xidian University, Xi'an, China (EMAIL, EMAIL). Mingyuan Zhou: McCombs School of Business, The University of Texas at Austin (EMAIL). |
| Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | Our code is available at https://github.com/BoChenGroup/TopicNet. |
| Open Datasets | Yes | Our experiments are conducted on four widely-used benchmark datasets, including 20Newsgroups (20NG), Reuters Corpus Volume I (RCV1), Wikipedia (Wiki), and a subset of the Reuters-21578 dataset (R8), varying in scale and document length. |
| Dataset Splits | No | The paper specifies a train/test split ('randomly select 80% of the word tokens from each document to form a training matrix X, holding out the remaining 20% to form a testing matrix Y') but does not explicitly mention a validation set or split for hyperparameter tuning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers, such as libraries or frameworks used in the implementation. |
| Experiment Setup | Yes | Note that, in our experiments, the hyper-parameters are set as m = 10.0 and β = 1.0. ... For a 15-layer model, the topic size from bottom to top is set as K = [256, 224, 192, 160, 128, 112, 96, 80, 64, 56, 48, 40, 32, 16, 8], and the detailed description can be found in the Appendix. |
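The train/test protocol quoted above (randomly assigning 80% of each document's word tokens to a training matrix X and the remaining 20% to a testing matrix Y) can be sketched as follows. This is an illustrative reconstruction, not code from the paper's repository; the function name, NumPy representation, and seeding are assumptions.

```python
import numpy as np

def split_word_tokens(doc_word_counts, train_frac=0.8, seed=0):
    """Split each document's word tokens into a training matrix X and a
    testing matrix Y, holding out (1 - train_frac) of the tokens.

    doc_word_counts: (num_docs, vocab_size) bag-of-words count matrix.
    Returns (X, Y) with X + Y == doc_word_counts row by row.
    (Illustrative sketch of the paper's described protocol.)
    """
    rng = np.random.default_rng(seed)
    X = np.zeros_like(doc_word_counts)
    Y = np.zeros_like(doc_word_counts)
    for d, counts in enumerate(doc_word_counts):
        # Expand the bag-of-words row into individual token occurrences,
        # e.g. count 3 for word v becomes three copies of index v.
        tokens = np.repeat(np.arange(len(counts)), counts)
        rng.shuffle(tokens)
        n_train = int(round(train_frac * len(tokens)))
        # First 80% of the shuffled tokens go to X, the rest to Y.
        for v in tokens[:n_train]:
            X[d, v] += 1
        for v in tokens[n_train:]:
            Y[d, v] += 1
    return X, Y
```

Under this protocol a model is fit on X and evaluated (e.g. via held-out perplexity) on Y, so the split is per token rather than per document.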