In-depth Analysis of Low-rank Matrix Factorisation in a Federated Setting
Authors: Constantin Philippenko, Kevin Scaman, Laurent Massoulié
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We complete our analysis with experiments on both synthetic and real data. |
| Researcher Affiliation | Academia | Inria Paris, Département d'informatique de l'ENS, PSL Research University |
| Pseudocode | Yes | Algorithm 1: Distributed Randomized Power Iteration; Algorithm 2: GD w.r.t. U with a power init. |
| Open Source Code | Yes | Code: https://github.com/philipco/matrix_factorization |
| Open Datasets | Yes | Real datasets. We consider three real datasets: mnist (LeCun, Cortes, and Burges 2010), celeba-200k (Liu et al. 2015) and w8a (Chang and Lin 2011). |
| Dataset Splits | No | The paper describes how datasets are partitioned across clients and reports 'training dataset size' in Table 1, but it gives no train/validation/test split (percentages or counts) for model evaluation. For example, it states 'For w8a, the dataset is split randomly across clients' without further detail on evaluation splits. |
| Hardware Specification | Yes | Experiments have been run on a 13th Gen Intel Core i7 processor with 14 cores. |
| Software Dependencies | No | The paper mentions the 'Truncated SVD class of Scikit-learn (Pedregosa et al. 2011)' and the 'svd_lowrank function of PyTorch (Paszke et al. 2019)' as tools used for comparison, but it does not specify version numbers for the software dependencies of its own described methodology. |
| Experiment Setup | Yes | Input: number of iterations α ∈ ℕ, step-size γ. Output: (U^i)_{i=1..N}. Run Algorithm 1 to compute V = (SᵀS)^α Sᵀ Φ. For each client i in {1, …, N}, without any communication: sample a random matrix U^i_0 in ℝ^{n_i × r}; for t ∈ {1, …, T}, compute ∇_U F(U^i_{t−1}, V) = (U^i_{t−1} Vᵀ − S_i) V and set U^i_t = U^i_{t−1} − γ ∇_U F(U^i_{t−1}, V). ... Table 1: Settings of the experiments. ... latent dimension r = 20 for each dataset. ... We run a single gradient descent after sampling m = 20 random matrices Φ, keeping the one yielding the best condition number κ(V). ... We run experiments with/without a momentum β_k = k/(k + 3), with k the iteration index. |
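The pseudocode quoted above can be sketched in a few lines of NumPy. This is a hypothetical minimal sketch, not the authors' implementation: a shared right factor V comes from a randomized power iteration on the stacked data S (Algorithm 1), then each client runs gradient descent on its own U_i with no further communication (Algorithm 2). All shapes, the synthetic low-rank data, the final QR re-orthonormalization of V (a standard stabilization step, in place of the paper's Φ-sampling via κ(V)), and the step-size choice are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, r, alpha, T = 3, 50, 5, 4, 50   # clients, features, rank, power iters, GD steps

# Synthetic approximately rank-r data, one block S_i per client (assumption).
B = rng.standard_normal((r, d))
S_clients = [rng.standard_normal((20, r)) @ B
             + 0.01 * rng.standard_normal((20, d)) for _ in range(N)]
S = np.vstack(S_clients)              # stacked data across clients

# Algorithm 1 (sketch): V = (S^T S)^alpha S^T Phi for a random Gaussian Phi.
Phi = rng.standard_normal((S.shape[0], r))
V = S.T @ Phi
for _ in range(alpha):
    V = S.T @ (S @ V)
# Re-orthonormalize V for numerical conditioning (our assumption; the paper
# instead samples m matrices Phi and keeps the best-conditioned V).
V, _ = np.linalg.qr(V)

# Algorithm 2 (sketch): each client minimizes F(U_i, V) = 0.5*||U_i V^T - S_i||_F^2
# locally, with no communication.
gamma = 1.0 / np.linalg.norm(V.T @ V, 2)   # assumed step-size: 1 / smoothness in U
U_clients = []
for S_i in S_clients:
    U = rng.standard_normal((S_i.shape[0], r))
    for _ in range(T):
        grad = (U @ V.T - S_i) @ V         # gradient of F w.r.t. U
        U = U - gamma * grad
    U_clients.append(U)

# The stacked product U V^T approximates the best rank-r factorization of S.
recon_err = np.linalg.norm(np.vstack(U_clients) @ V.T - S) / np.linalg.norm(S)
```

With orthonormalized V the local problems are well conditioned, so the per-client gradient descent converges quickly and the relative reconstruction error stays near the noise level of the synthetic data.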