Transfer Learning for Latent Variable Network Models
Authors: Akhil Jalan, Arya Mazumdar, Soumendu Sundar Mukherjee, Purnamrita Sarkar
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we empirically demonstrate our algorithm's use on real-world and simulated network estimation problems. |
| Researcher Affiliation | Academia | Akhil Jalan, Department of Computer Science, UT Austin, EMAIL; Arya Mazumdar, Halıcıoğlu Data Science Institute & Dept. of CSE, UC San Diego, EMAIL; Soumendu Sundar Mukherjee, Statistics and Mathematics Unit (SMU), Indian Statistical Institute, Kolkata, EMAIL; Purnamrita Sarkar, Department of Statistics and Data Sciences, UT Austin, EMAIL |
| Pseudocode | Yes | Algorithm 1: Q̂-Estimation for Latent Variable Models |
| Open Source Code | Yes | We submit our code as a supplementary zip file in accordance with the NeurIPS code and data submission guidelines. |
| Open Datasets | Yes | Metabolic Networks. We access metabolic models from King et al. (2016) at http://bigg.ucsd.edu. (...) EMAIL-EU. We use the email-EU-core-temporal dataset at https://snap.stanford.edu/data/email-Eu-core-temporal.html, as introduced in Paranjape et al. (2017). |
| Dataset Splits | No | The paper describes using source and target data for estimation, but does not explicitly mention or specify training, validation, and test dataset splits or cross-validation procedures for model evaluation. |
| Hardware Specification | No | As described in Appendix C, 'We run all experiments on a personal Linux machine with 378GB of CPU/RAM.' This description does not include specific CPU or GPU models, or other detailed hardware specifications. |
| Software Dependencies | No | The paper does not explicitly list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or specific libraries). |
| Experiment Setup | Yes | Hyperparameters. We do not tune any hyperparameters. For Algorithm 1 we use the quantile cutoff h_n = q_n(Q̂) in all experiments. |
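To illustrate the kind of quantile cutoff mentioned in the experiment-setup row, the sketch below thresholds an estimated matrix at a quantile of its entries. This is a hypothetical illustration, not the authors' Algorithm 1: the names `Q_hat`, `q`, and `h_n`, the quantile level, and the thresholding rule are all our own assumptions.

```python
import numpy as np

# Hypothetical sketch of a quantile-based cutoff h_n, as one might use to
# threshold an estimated connection-probability matrix. The quantile level
# q = 0.9 is an assumption; the paper specifies its own cutoff.
rng = np.random.default_rng(0)
Q_hat = rng.random((50, 50))  # stand-in for an estimated matrix

q = 0.9
h_n = np.quantile(Q_hat, q)  # scalar cutoff at the q-th quantile of entries

# Zero out entries below the cutoff, keeping the top (1 - q) fraction.
Q_trunc = np.where(Q_hat >= h_n, Q_hat, 0.0)
print(f"cutoff h_n = {h_n:.3f}, kept fraction = {(Q_trunc > 0).mean():.3f}")
```

Because the entries are i.i.d. draws without ties, the kept fraction is essentially `1 - q`.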