TensorLy: Tensor Learning in Python

Authors: Jean Kossaifi, Yannis Panagakis, Anima Anandkumar, Maja Pantic

JMLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We generated random third order tensors of size 500 × 500 × 500 (125 million elements). We then compared the decomposition speed for a rank 50 CANDECOMP/PARAFAC (CP) and rank (50, 50, 50) Tucker decomposition with TensorLy on CPU (NumPy backend) and TensorLy on GPU (MXNet, PyTorch, TensorFlow and CuPy backends), and Scikit-Tensor (Sktensor), Fig. 2. In all cases we fixed the number of iterations to 100 to allow for a fair comparison. The experiment was repeated 10 times, with the main bar representing the average CPU time and the tip on the bar the standard deviation of the runs.
Researcher Affiliation | Collaboration | Jean Kossaifi (1), Yannis Panagakis (1, 2), Anima Anandkumar (3, 4), Maja Pantic (1); 1 Imperial College London, 2 Middlesex University, 3 NVIDIA, 4 California Institute of Technology
Pseudocode | No | The paper describes the functionalities and implementation of the TensorLy library, but it does not include any structured pseudocode or algorithm blocks. Figure 1 provides a high-level overview of operations and methods, but it is not pseudocode.
Open Source Code | Yes | TensorLy is available at https://github.com/tensorly/tensorly
Open Datasets | No | We generated random third order tensors of size 500 × 500 × 500 (125 million elements).
Dataset Splits | No | The paper describes generating random tensors for performance benchmarking and does not involve typical machine learning experiments requiring training, validation, or test splits. Therefore, no dataset split information is provided.
Hardware Specification | Yes | Experiments were done on an Amazon Web Services p3 instance, with an NVIDIA Volta V100 GPU and 8 Intel Xeon E5 (Broadwell) processors.
Software Dependencies | No | The paper mentions several software components, including NumPy, SciPy, MXNet, PyTorch, CuPy, and TensorFlow. It cites their foundational papers but does not provide specific version numbers for these components as used in the experiments (e.g., 'NumPy 1.x' or 'PyTorch 1.x').
Experiment Setup | Yes | We generated random third order tensors of size 500 × 500 × 500 (125 million elements). We then compared the decomposition speed for a rank 50 CANDECOMP/PARAFAC (CP) and rank (50, 50, 50) Tucker decomposition... In all cases we fixed the number of iterations to 100 to allow for a fair comparison.