A Random Matrix Approach to Low-Multilinear-Rank Tensor Approximation

Authors: Hugo Lebeau, Florent Chatelain, Romain Couillet

JMLR 2025

Reproducibility Variable Result LLM Response
Research Type Experimental Figure 2 plots, for an order-3 tensor, as a function of the signal-to-noise ratio (SNR) ω = ‖P‖²_F/σ_N, the alignments between the singular subspaces of the signal P spanned by the X^(ℓ) and the dominant singular subspaces of the observation T spanned by the Û^(ℓ). Solid curves are the alignments predicted by Theorem 9 while dotted curves are empirical alignments computed on a 100 × 200 × 300 tensor with signal rank (3, 4, 5). If the SNR ω is too small, there is no alignment, meaning that truncated MLSVD fails to recover P: the signal is masked by the noise. When it exceeds a critical value (see Theorem 9 and Section 3.2 for details), a phase transition phenomenon occurs: the alignment starts to grow, i.e., truncated MLSVD now partially recovers P, and converges to 1 as ω → +∞.
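The alignment experiment described in this caption can be sketched numerically. The snippet below is a minimal illustration, not the paper's code: the noise scaling (i.i.d. N(0, 1) entries divided by √N) and the choice σ_N = √N in the SNR ω = ‖P‖²_F/σ_N are assumptions made here for demonstration, and should be checked against the paper's spiked model.

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding: axis `mode` becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(42)
dims = (100, 200, 300)   # (n1, n2, n3) from the experimental setting
ranks = (3, 4, 5)        # signal multilinear rank (r1, r2, r3)
N = sum(dims)

# Low-multilinear-rank signal P = G x_1 X1 x_2 X2 x_3 X3, orthonormal X^(l).
X = [np.linalg.qr(rng.standard_normal((n, r)))[0] for n, r in zip(dims, ranks)]
G = rng.standard_normal(ranks)
P = G
for l in range(3):
    P = np.moveaxis(np.tensordot(P, X[l], axes=(l, 1)), -1, l)

# Rescale P so that omega = ||P||_F^2 / sigma_N takes a chosen value
# (sigma_N = sqrt(N) is an assumed normalization, for illustration only).
omega, sigma_N = 10.0, np.sqrt(N)
P *= np.sqrt(omega * sigma_N) / np.linalg.norm(P)

# Observation: signal plus i.i.d. N(0, 1) noise, scaled by 1/sqrt(N) (assumed).
T = P + rng.standard_normal(dims) / np.sqrt(N)

# Truncated MLSVD: dominant left singular subspace of each unfolding, and its
# alignment with the signal subspace, ||Uhat^T X||_F^2 / r, which lies in [0, 1].
aligns = []
for l in range(3):
    U_hat = np.linalg.svd(unfold(T, l), full_matrices=False)[0][:, :ranks[l]]
    aligns.append(np.linalg.norm(U_hat.T @ X[l]) ** 2 / ranks[l])
    print(f"mode {l + 1}: alignment = {aligns[-1]:.3f}")
```

Sweeping `omega` over a grid and plotting `aligns` per mode reproduces the qualitative phase-transition picture of Figure 2: no alignment below the critical SNR, growth toward 1 above it.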
Researcher Affiliation Academia Hugo Lebeau (EMAIL), Université Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, Grenoble, 38000, France; Florent Chatelain (EMAIL), Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, 38000, France; Romain Couillet (EMAIL), Université Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, Grenoble, 38000, France
Pseudocode Yes Algorithm 1: Higher-Order Orthogonal Iteration (De Lathauwer et al., 2000a)
    for ℓ = 1, …, d do: U^(ℓ)_0 ← r_ℓ dominant left singular vectors of T^(ℓ)
    repeat
        for ℓ = 1, …, d do: U^(ℓ)_{t+1} ← r_ℓ dominant left singular vectors of (T ×_{ℓ′≠ℓ} U^(ℓ′)⊤_t)^(ℓ)
    until convergence at t = T
    Ĝ_HOOI ← T(U^(1)_T, …, U^(d)_T)
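The HOOI pseudocode above can be turned into a short runnable sketch. This is our own minimal NumPy implementation under the standard conventions (mode-ℓ unfolding, mode products), not code from the paper; a fixed iteration count stands in for the unspecified convergence test.

```python
import numpy as np

def mode_prod(T, M, mode):
    """Mode-`mode` product T x_mode M, where M has shape (J, n_mode)."""
    return np.moveaxis(np.tensordot(T, M, axes=(mode, 1)), -1, mode)

def unfold(T, mode):
    """Mode-`mode` unfolding: axis `mode` becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hooi(T, ranks, n_iter=20):
    """Algorithm 1 (HOOI): alternately refit each factor on the projected tensor."""
    d = T.ndim
    # Initialization: truncated MLSVD (dominant left singular vectors per mode).
    U = [np.linalg.svd(unfold(T, l), full_matrices=False)[0][:, :ranks[l]]
         for l in range(d)]
    for _ in range(n_iter):
        for l in range(d):
            # Project every mode except l onto its current subspace ...
            S = T
            for m in range(d):
                if m != l:
                    S = mode_prod(S, U[m].T, m)
            # ... then refresh U[l] with the r_l dominant left singular vectors.
            U[l] = np.linalg.svd(unfold(S, l),
                                 full_matrices=False)[0][:, :ranks[l]]
    # Core tensor G_HOOI = T x_1 U1^T ... x_d Ud^T.
    G = T
    for l in range(d):
        G = mode_prod(G, U[l].T, l)
    return G, U
```

On a tensor of exact multilinear rank (r1, …, rd), the MLSVD initialization is already optimal and the reconstruction G ×_1 U^(1) ⋯ ×_d U^(d) matches T up to machine precision; on noisy data the alternating updates refine the truncated-MLSVD factors.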
Open Source Code No The paper does not contain any explicit statement about releasing source code or provide links to a code repository.
Open Datasets No The paper analyzes a 'general spiked tensor model' and its experiments are based on simulations using generated data. For example, 'Experimental setting: d = 3, (n1, n2, n3) = (100, 200, 300), N = n1 + n2 + n3 and (r1, r2, r3) = (3, 4, 5)'. It does not refer to any external or publicly available datasets.
Dataset Splits No The paper's experiments are based on a 'general spiked tensor model' with 'additive Gaussian noise tensor N whose entries are independent N(0, 1) random variables'. The data is generated based on specified parameters (e.g., tensor dimensions), not split from a pre-existing dataset. Therefore, traditional training/test/validation splits are not applicable or provided.
Hardware Specification No The paper does not contain any specific details about the hardware (e.g., GPU, CPU models, memory) used to run the simulations or experiments.
Software Dependencies No The paper mentions the 'MATLAB toolbox Tensorlab (Vervliet et al., 2016)' as an example of a tool for CPD, but it does not state that this software (or any specific version of it) was used for its own implementation or experiments, nor does it list any other software dependencies with version numbers.
Experiment Setup Yes Experimental setting: d = 3, (n1, n2, n3) = (100, 200, 300), N = n1 + n2 + n3 and (r1, r2, r3) = (3, 4, 5). Figure 3: ... Experimental setting: d = 3, (n1, n2, n3) = (300, 500, 700), N = n1 + n2 + n3, (r1, r2, r3) = (3, 4, 5) and ‖P‖²_F/σ_N = 15. Figure 4: ... Experimental setting: d = 3, (n1 = 6), N = n1 + n2 + n3 and (r1, r2, r3) = (3, 4, 5).