A Random Matrix Perspective on Random Tensors
Authors: José Henrique de M. Goulart, Romain Couillet, Pierre Comon
JMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper relies on tools from random matrix theory to characterize maximum likelihood (ML) estimation performance. It states: "For d = 3, the solution to this equation matches the existing results. We conjecture that the same holds for any order d, based on numerical evidence for d ∈ {4, 5}." Also, Figure 1 illustrates a "Tensor power iteration method applied to find a local maximum of the ML problem (2), with d = 3 and N = 500." These instances of numerical evidence and simulation to illustrate theoretical concepts classify it as experimental. |
| Researcher Affiliation | Academia | All authors are affiliated with universities and public research institutions: 'Université de Toulouse, Toulouse INP, IRIT', 'Université Grenoble Alpes, Inria, CNRS, LIG', and 'Université Grenoble Alpes, CNRS, GIPSA-lab'. Their email domains also correspond to these academic institutions (e.g., 'irit.fr', 'univ-grenoble-alpes.fr', 'gipsa-lab.grenoble-inp.fr'). |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. The methodologies are described through mathematical derivations and theoretical frameworks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor does it include links to code repositories or mention code in supplementary materials. |
| Open Datasets | No | The paper focuses on a theoretical model, specifically a 'symmetric dth-order rank-one model with Gaussian noise' and a 'spiked rank-one tensor model'. It does not use or provide access information for any publicly available or open datasets. |
| Dataset Splits | No | The paper does not conduct experiments on real-world datasets, but rather uses theoretical models and numerical simulations. Therefore, no dataset splits for training, validation, or testing are provided. |
| Hardware Specification | No | The paper discusses theoretical analysis and numerical evidence for its models (e.g., 'numerical evidence for d ∈ {4, 5}' and 'd = 3 and N = 500' for a tensor power iteration illustration). However, it does not specify any hardware details like GPU models, CPU types, or cloud resources used for these computations. |
| Software Dependencies | No | Appendix D mentions a 'Maple solution of fixed-point equation for d = 3' and provides Maple code snippets. However, it does not specify the Maple version or any other software dependencies with version numbers used for the main analysis or simulations; it only shows how one specific equation can be solved with a symbolic math tool. |
| Experiment Setup | No | The paper provides details for its numerical evidence, such as 'd = 3 and N = 500' for the tensor power iteration method in Figure 1, and refers to the theoretical parameter 'lambda' (λ). However, it does not include specific experimental setup details like hyperparameter values (e.g., learning rates, batch sizes), optimizer settings, or other training configurations typically found in empirical machine learning papers. |
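For context on the Figure 1 setting cited in the table, the tensor power iteration on a symmetric order-3 spiked rank-one model can be sketched as below. This is our own minimal NumPy sketch, not the paper's code: the values of `N` and `beta`, the warm start near the spike, and the iteration count are illustrative assumptions (the paper uses d = 3 and N = 500).

```python
import numpy as np

# Illustrative sketch (not the paper's code) of a symmetric order-3
# rank-one spiked tensor model and the tensor power iteration used to
# find a local maximum of the ML problem. N and beta are assumptions.
rng = np.random.default_rng(0)
N, beta = 100, 5.0

# Planted unit-norm spike x and a symmetrized Gaussian noise tensor W.
x = rng.standard_normal(N)
x /= np.linalg.norm(x)
W = rng.standard_normal((N, N, N))
W = sum(W.transpose(p) for p in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2),
         (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6

# Observation T = beta * (x ⊗ x ⊗ x) + W / sqrt(N).
T = beta * np.einsum('i,j,k->ijk', x, x, x) + W / np.sqrt(N)

# Power iteration u <- T(u, u, ·) / ||T(u, u, ·)||, warm-started near
# the spike (a purely random start can miss the planted maximum at
# moderate beta).
g = rng.standard_normal(N)
u = x + 0.3 * g / np.linalg.norm(g)
u /= np.linalg.norm(u)
for _ in range(50):
    v = np.einsum('ijk,j,k->i', T, u, u)
    u = v / np.linalg.norm(v)

alignment = abs(u @ x)  # overlap between the iterate and the spike
```

After convergence, `alignment` is close to 1 when the signal strength `beta` is well above the algorithmic threshold, which is the kind of local-maximum behavior Figure 1 of the paper illustrates.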