Sharp analysis of power iteration for tensor PCA

Authors: Yuchen Wu, Kangjie Zhou

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive numerical experiments verify our theoretical results. Keywords: Spiked model, tensor PCA, power iteration, approximate message passing, non-convex optimization.
Researcher Affiliation | Academia | Yuchen Wu (EMAIL), Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, PA 19104-6303, USA; Kangjie Zhou (EMAIL), Department of Statistics, Stanford University, Stanford, CA 94305-2004, USA.
Pseudocode | No | The paper defines the tensor power iteration algorithm mathematically and analyzes its dynamics, but does not present it in a structured pseudocode or algorithm block. For example, 'Tensor power iteration initialized at v⁰ is defined recursively as follows: ṽ^{t+1} = T[(v^t)^{⊗(k−1)}] = λ_n ⟨v, v^t⟩^{k−1} v + W[(v^t)^{⊗(k−1)}], v^{t+1} = ṽ^{t+1}/‖ṽ^{t+1}‖₂ (2)' is a mathematical definition.
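Although the paper gives no pseudocode, the quoted recursion is straightforward to implement. Below is a minimal NumPy sketch for k = 3 under the spiked-tensor model T = λ_n v^{⊗3} + W described in the review; it uses plain iid Gaussian noise for W (the paper may work with a symmetrized noise tensor) and an illustrative signal strength λ_n = 2·n^{(k−1)/2}, so it is a sketch of the recursion, not the paper's exact experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100, 3
lam = 2.0 * n ** ((k - 1) / 2)  # λ_n = c · n^{(k-1)/2}, illustrative c = 2

# Planted unit-norm spike v.
v = rng.standard_normal(n)
v /= np.linalg.norm(v)

# Spiked tensor T = λ_n v⊗v⊗v + W (W: iid Gaussian noise, a simplification).
W = rng.standard_normal((n, n, n))
T = lam * np.einsum('i,j,l->ijl', v, v, v) + W

# Tensor power iteration: v~^{t+1} = T[(v^t)^{⊗(k-1)}], then normalize.
vt = rng.standard_normal(n)
vt /= np.linalg.norm(vt)
alphas = []
for t in range(4):
    vt = np.einsum('ijl,j,l->i', T, vt, vt)  # contract T against v^t twice
    vt /= np.linalg.norm(vt)
    alphas.append(abs(vt @ v))  # overlap α_t with the planted spike
print(alphas)
```

The `einsum` contraction implements T[(v^t)^{⊗2}]_i = Σ_{j,l} T_{ijl} v^t_j v^t_l directly; for large n one would exploit symmetry rather than materialize the full n³ tensor.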
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to code repositories.
Open Datasets | No | The numerical experiments use synthetically generated tensor data from a model, not a publicly available dataset. For example: 'generate the tensor data according to Eq. (1).' and 'For each tensor realization, we run tensor power iteration from a random initialization...'
Dataset Splits | No | The paper uses synthetically generated data for its numerical experiments, repeating the generation '1000 times independently' for each configuration. This approach does not involve predefined training/validation/test splits of a fixed dataset.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for the numerical experiments, such as CPU or GPU models, or cloud computing specifications.
Software Dependencies | No | The paper does not mention any specific software or library versions used for implementation or experiments (e.g., Python, PyTorch, TensorFlow, or specific numerical libraries with versions).
Experiment Setup | Yes | 'To set the stage, we choose n = 200, k = 3, λ_n = n^{(k−1)/2}, and generate the tensor data according to Eq. (1). We then run tensor power iteration with random initialization and compare the marginal distributions of α_t and X_t, for all t ∈ {1, 2, 3, 4}. We repeat this procedure 1000 times independently, and collect the realized values of α_t to form the corresponding empirical distributions. ... For this part we let λ_n = n^{(k−1)/2}, k = 3, and use different values of n. For each n ∈ {25, 50, 100, 200, 400, 800}, we repeat this procedure independently 1000 times and compute the empirical convergence probability. ... T_stop := inf{t ∈ ℕ₊ : ⟨v^{t−2}, v^{t−3}⟩ ≥ 1/2}.'
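The reported protocol (generate a spiked tensor, run power iteration, repeat independently, estimate the convergence probability) can be mirrored in a scaled-down simulation. The sketch below is not the paper's code: it assumes λ_n = c·n^{(k−1)/2} with an illustrative c = 2, iid Gaussian noise, 20 replications instead of 1000, a small n, and a successive-overlap stopping rule in the spirit of the quoted T_stop; all names (`run_once`, `p_hat`) are hypothetical.

```python
import numpy as np

def run_once(n=50, k=3, c=2.0, t_max=20, rng=None):
    """One synthetic replication: spiked tensor (k = 3) + power iteration.

    Returns the final overlap |<v^t, v>| with the planted spike.
    """
    rng = rng if rng is not None else np.random.default_rng()
    lam = c * n ** ((k - 1) / 2)  # λ_n = c · n^{(k-1)/2}
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    T = lam * np.einsum('i,j,l->ijl', v, v, v) + rng.standard_normal((n,) * k)
    vt = rng.standard_normal(n)  # random initialization
    vt /= np.linalg.norm(vt)
    for _ in range(t_max):
        new = np.einsum('ijl,j,l->i', T, vt, vt)
        new /= np.linalg.norm(new)
        aligned = vt @ new >= 0.5  # stop once successive iterates align
        vt = new
        if aligned:
            break
    return abs(vt @ v)

# Empirical convergence probability over independent replications
# (20 here instead of the paper's 1000, to keep the sketch fast).
rng = np.random.default_rng(1)
overlaps = [run_once(rng=rng) for _ in range(20)]
p_hat = float(np.mean([a >= 0.5 for a in overlaps]))
print('empirical convergence probability:', p_hat)
```

Sweeping `n` over a grid such as {25, 50, 100, 200} and plotting `p_hat` against n would reproduce the shape of the paper's convergence-probability experiment at a smaller scale.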