Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Tensor Regression Networks
Authors: Jean Kossaifi, Zachary C. Lipton, Arinbjorn Kolbeinsson, Aran Khanna, Tommaso Furlanello, Anima Anandkumar
JMLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on ImageNet show that, applied to VGG and ResNet architectures, TCLs and TRLs reduce the number of parameters compared to fully connected layers by more than 65% while maintaining or increasing accuracy. In particular, we demonstrate significant performance improvements over comparable architectures on three tasks associated with the UK Biobank dataset. ... Section 6. Experiments |
| Researcher Affiliation | Collaboration | Jean Kossaifi EMAIL NVIDIA & Imperial College London; Zachary C. Lipton EMAIL Carnegie Mellon University; Aran Khanna EMAIL Amazon AI; Anima Anandkumar EMAIL NVIDIA & California Institute of Technology |
| Pseudocode | No | The paper describes mathematical formulations and theoretical concepts but does not include any explicitly labeled pseudocode or algorithm blocks. Algorithm steps are described in prose within the text. |
| Open Source Code | No | The paper states: 'We implemented all models using the MXNet library (Chen et al., 2015) as well as the PyTorch library (Paszke et al., 2017). For all tensor methods, we used the TensorLy library (Kossaifi et al., 2019).' This refers to third-party libraries used, but there is no explicit statement or link indicating that the authors' own source code for the methodology described in the paper is openly available. |
| Open Datasets | Yes | The paper uses the 'ImageNet-1K dataset', also referred to as 'The ILSVRC dataset (Deng et al., 2009) (ImageNet)'. It also uses the 'UK Biobank MRI dataset (Sudlow et al., 2015)'. |
| Dataset Splits | Yes | For ImageNet: 'The ILSVRC dataset (Deng et al., 2009) (ImageNet) is composed of 1.2 million images for training and 50,000 for validation, all labeled for 1,000 classes.' For UK Biobank: 'We split the data into a training set containing 11,500 scans, a validation set of 3,800 scans and 3,800 scans for a held-out test set.' |
| Hardware Specification | Yes | The models were trained with data parallelism across multiple GPUs on Amazon Web Services, with 4 NVIDIA K80 GPUs. ... Training was done on a Tesla P100 GPU. |
| Software Dependencies | No | The paper mentions using the 'MXNet library', 'PyTorch library' and 'TensorLy library'. However, it does not specify the version numbers for these software dependencies, which is required for a reproducible description. |
| Experiment Setup | No | The paper mentions using 'the same data augmentation procedure as in the original Residual Networks (ResNets) paper', adding 'a batch normalization layer ... before and after the TCL/TRL', and applying 'ℓ2 normalization ... to the factors of the Tucker decomposition'. However, it does not provide specific hyperparameter values such as learning rates, batch sizes, number of epochs, or optimizer configurations, which are crucial for a reproducible experimental setup. |
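To make the parameter-reduction claim above concrete: a tensor contraction layer (TCL) replaces the flatten-plus-fully-connected step by contracting each non-batch mode of the activation tensor with a small factor matrix. The sketch below is a hypothetical NumPy illustration of that idea, not the authors' implementation (which used MXNet, PyTorch, and TensorLy); the shapes `(512, 7, 7)` and ranks `(64, 5, 5)` are assumed for illustration.

```python
import numpy as np

def mode_dot(tensor, matrix, mode):
    # n-mode product: contract dimension `mode` of `tensor`
    # with the second axis of `matrix` (shape: rank x dim)
    moved = np.moveaxis(tensor, mode, 0)
    contracted = np.tensordot(matrix, moved, axes=([1], [0]))
    return np.moveaxis(contracted, 0, mode)

def tensor_contraction_layer(x, factors):
    # x: (batch, d1, ..., dN); factors[i]: (r_i, d_i)
    # Contracts every non-batch mode, preserving the multilinear
    # structure instead of flattening to a single long vector.
    for i, factor in enumerate(factors):
        x = mode_dot(x, factor, mode=i + 1)
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 512, 7, 7))   # e.g. pre-logit CNN activations
factors = [rng.standard_normal((64, 512)),  # mode-1 factor
           rng.standard_normal((5, 7)),     # mode-2 factor
           rng.standard_normal((5, 7))]     # mode-3 factor
out = tensor_contraction_layer(x, factors)
print(out.shape)  # (2, 64, 5, 5)
```

The factor matrices here hold 64·512 + 5·7 + 5·7 ≈ 33k weights, versus the 512·7·7 = 25,088 inputs a flattened fully connected layer would multiply against every output unit, which is the kind of saving the >65% parameter-reduction result refers to.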