Faster Randomized Methods for Orthogonality Constrained Problems

Authors: Boris Shustin, Haim Avron

JMLR 2024

Reproducibility assessment. Each entry lists the variable, the result, and the supporting LLM response:
Research Type: Experimental. "We evaluate the effect of preconditioning on the computational costs and asymptotic convergence and demonstrate empirically the utility of our approach." ... "Finally, we demonstrate numerically our randomized preconditioning approach in Section 6." ... "In the following section, we present our numerical experiments illustrating our randomized preconditioning approach." ... "We report experiments with our proposed preconditioned Riemannian optimization algorithms."
Researcher Affiliation: Academia. "Boris Shustin EMAIL; Haim Avron EMAIL; Department of Applied Mathematics, Tel Aviv University, Tel Aviv, 69978, Israel"
Pseudocode: Yes. "Algorithm 1 Sketched Riemannian Iterative CCA with warm-start." "Algorithm 2 Sketched Riemannian Iterative FDA with warm-start."
Open Source Code: No. The paper mentions using third-party libraries such as manopt and pymanopt for implementation: "Many of the Riemannian algorithms and common manifolds are implemented in manopt, which is a matlab library (Boumal et al., 2014). There is also a python parallel for manopt called pymanopt (Townsend et al., 2016). The experiments reported in Section 6 use the manopt library." However, it does not explicitly state that the authors' own code for the methods described in this paper is publicly available.
Open Datasets: Yes. "We use in our experiments three popular data sets: MNIST (Figures 1 and 2), MEDIAMILL (Figure 3) and COVTYPE (Figure 4)." ... "data sets were downloaded from libsvm's website: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/."
Dataset Splits: No. The paper names the datasets used (MNIST, MEDIAMILL, COVTYPE) and their roles (e.g., "MNIST is used for testing CCA and FDA"), but it does not specify how they were split into training, validation, or test sets (e.g., percentages, sample counts, or explicit references to standard splits).
Hardware Specification: No. The paper mentions using MATLAB for implementation but gives no details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments. It only states that "wall clock time is not an appropriate metric for performance" and describes computational cost in terms of operations rather than execution time on specific hardware.
Software Dependencies: No. "We use MATLAB for our implementations, relying on the manopt library (Boumal et al., 2014) for Riemannian optimization." ... "There is also a python parallel for manopt called pymanopt (Townsend et al., 2016)." While software names are mentioned, specific version numbers for MATLAB, manopt, or pymanopt are not provided.
Experiment Setup: Yes. "The experiments we present here are with p = 3 and N = diag(3, 2.75, 2)." ... "We use manopt's default stopping criteria: the optimization process terminates if the norm of the Riemannian gradient drops below 10^-6. We cap the number of iterations by 1000." ... "We use a small regularization of 10^-6 multiplied by the average eigenvalue of the Gram matrices of X and Y correspondingly."
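To make the quoted setup concrete, the following is a minimal NumPy sketch, not the paper's MATLAB/manopt implementation. It assumes a generic Brockett-type cost trace(X^T A X N) on the Stiefel manifold (the paper's actual CCA/FDA objectives differ), and it reproduces the quoted stopping rule (Riemannian gradient norm below 10^-6, at most 1000 iterations) together with a helper for the quoted regularization scale (10^-6 times the average eigenvalue of a Gram matrix). The function names and the test matrix are illustrative, not from the paper.

```python
import numpy as np

def gram_regularization(X, eps=1e-6):
    """Regularization scale: eps times the average eigenvalue of the
    Gram matrix X^T X, i.e. eps * trace(X^T X) / d = eps * ||X||_F^2 / d
    for an n x d data matrix X."""
    d = X.shape[1]
    return eps * np.linalg.norm(X, "fro") ** 2 / d

def riemannian_gd_stiefel(A, N, p, step=0.01, tol=1e-6, max_iter=1000, seed=0):
    """Riemannian gradient descent on the Stiefel manifold St(n, p),
    minimizing the Brockett-type cost f(X) = trace(X^T A X N) subject to
    X^T X = I.  Stops when the Riemannian gradient norm drops below `tol`,
    capped at `max_iter` iterations (the quoted criteria)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((n, p)))    # feasible start
    gnorm = np.inf
    for _ in range(max_iter):
        egrad = 2.0 * A @ X @ N                         # Euclidean gradient
        XtG = X.T @ egrad
        rgrad = egrad - X @ (XtG + XtG.T) / 2.0         # tangent projection
        gnorm = np.linalg.norm(rgrad)
        if gnorm < tol:
            break
        Q, R = np.linalg.qr(X - step * rgrad)           # QR retraction
        X = Q * np.sign(np.sign(np.diag(R)) + 0.5)      # resolve sign ambiguity
    return X, gnorm

# Example with the quoted parameters p = 3 and N = diag(3, 2.75, 2),
# on a random symmetric positive semidefinite test matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T / 20
X, gnorm = riemannian_gd_stiefel(A, np.diag([3.0, 2.75, 2.0]), p=3)
print(np.allclose(X.T @ X, np.eye(3), atol=1e-8))       # iterate stays feasible
```

The QR retraction keeps every iterate exactly (to machine precision) on the Stiefel manifold, which is why feasibility can be checked after any number of steps regardless of whether the gradient tolerance was reached.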