Factor Augmented Tensor-on-Tensor Neural Networks

Authors: Guanhao Zhou, Yuefeng Han, Xiufan Yu

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type Experimental In this section, we investigate the finite-sample performance of our proposed FATTNN methods via simulation studies and real-world applications. Specifically, we implement the proposed FATTNN with the TIPUP algorithm to estimate the factor structures. For benchmark methods, we consider one traditional statistical model, the multiway tensor-on-tensor regression (Lock 2018) (denoted by Multiway), and four state-of-the-art deep learning approaches: the temporal convolutional network (Bai, Kolter, and Koltun 2018) of Y regressed on X (denoted by TCN), the long short-term memory network (Hochreiter and Schmidhuber 1997) (denoted by LSTM), the convolutional tensor-train LSTM (Su et al. 2020) (denoted by Conv-TT-LSTM), and the tensor regression layer (Kossaifi et al. 2020) (denoted by TRL). Evaluation Metrics. We evaluate the performance of the various methods with emphasis on two aspects: prediction accuracy and computational efficiency. To evaluate prediction accuracy, we compute the Mean Squared Error (MSE) over the testing data, i.e., $\mathrm{MSE} = (n_{\mathrm{test}}\, p_1 \cdots p_q)^{-1} \sum_{i \in \mathcal{D}_{\mathrm{test}}} \|\mathcal{Y}_i^{(\mathrm{obs})} - \mathcal{Y}_i^{(\mathrm{pred})}\|_F^2$, where $\mathcal{D}_{\mathrm{test}}$ denotes the testing set, $n_{\mathrm{test}}$ is the number of samples in $\mathcal{D}_{\mathrm{test}}$, and $(\mathcal{Y}_i^{(\mathrm{obs})}, \mathcal{Y}_i^{(\mathrm{pred})})$ are the observed and predicted values of the $i$-th tensor response. In addition, we record the computational time of different methods to evaluate computational efficiency.
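The MSE metric quoted above averages the squared Frobenius norm of each residual tensor over all entries of the test set. A minimal NumPy sketch of that computation (array names and shapes are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np

def tensor_mse(y_obs, y_pred):
    """MSE = (n_test * p_1 * ... * p_q)^(-1) * sum_i ||Y_i_obs - Y_i_pred||_F^2,
    assuming test samples are stacked along axis 0."""
    diff = y_obs - y_pred
    # Summing squared entries over all samples and dividing by the total
    # entry count (n_test * p_1 * ... * p_q) gives the normalized metric.
    return np.sum(diff ** 2) / diff.size

rng = np.random.default_rng(0)
y_obs = rng.normal(size=(30, 4, 5))   # n_test = 30, responses of size 4 x 5
y_pred = y_obs + 0.1                  # every entry off by 0.1
print(tensor_mse(y_obs, y_pred))      # ~0.01
```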
Researcher Affiliation Academia Guanhao Zhou, Yuefeng Han, Xiufan Yu, Department of Applied and Computational Mathematics and Statistics, University of Notre Dame, gzhou4, yuefeng.han, EMAIL
Pseudocode Yes Algorithm 1: Factor Augmented Tensor-on-Tensor Neural Network (FATTNN)
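Algorithm 1's factor-augmentation step compresses each predictor tensor onto low-dimensional factor loadings before the neural network stage. The paper estimates those loadings with TIPUP; the sketch below substitutes a plain HOSVD (an SVD of each mode unfolding) as a stand-in estimator, so it illustrates the pipeline shape only, not the authors' estimator:

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding of tensor T into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def estimate_loadings(X, ranks):
    """One loading matrix per non-sample mode (samples on axis 0).
    HOSVD stand-in: leading left singular vectors of each unfolding."""
    return [np.linalg.svd(unfold(X, m + 1), full_matrices=False)[0][:, :r]
            for m, r in enumerate(ranks)]

def compress(X, loadings):
    """Project every sample tensor onto the estimated factor spaces."""
    Z = X
    for m, U in enumerate(loadings):
        Zm = np.moveaxis(Z, m + 1, 0)                  # bring mode m+1 first
        Z = np.moveaxis(np.tensordot(U.T, Zm, axes=1), 0, m + 1)
    return Z

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8, 6))        # 50 samples of 8 x 6 predictor tensors
U = estimate_loadings(X, ranks=(3, 2))
Z = compress(X, U)
print(Z.shape)                         # (50, 3, 2): compressed factor tensors
```

The compressed tensors `Z` would then be fed to the downstream network, which is the dimension-reduction benefit the factor augmentation targets.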
Open Source Code Yes Our code is available in the supplementary material.
Open Datasets Yes (1) The United Nations Food and Agriculture Organization (FAO) Crops and Livestock Products Data. The database provides agricultural statistics (including crop, livestock, and forestry sub-sectors) collected from countries and territories since 1961. It is publicly available at https://www.fao.org. (2) New York City (NYC) Taxi Trip Data. The data contains 24-hour taxi pick-up and drop-off information of 69 areas in New York City for all the business days in 2014 and 2015. It is publicly available at https://www.nyc.gov. (3) Functional Magnetic Resonance Imaging (FMRI) Data. We consider the Haxby dataset (Haxby et al. 2001), a well-known public dataset for research in brain imaging and cognitive neuroscience, which can be retrieved from the Nilearn Python library.
Dataset Splits Yes In each setting, we use 70% of the data for training the models and 30% for model evaluation. For the prediction tasks using the New York Taxi and FMRI image datasets, we split the data into training (70%) and testing (30%) sets. For the prediction tasks using the FAO dataset, we use 80% of the data for training and 20% for testing due to its relatively small sample size.
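A minimal sketch of such a fractional split, assuming samples are indexed along axis 0. Note the paper does not state whether the split is random or chronological; a chronological cut is shown here as one plausible choice for time-indexed data:

```python
import numpy as np

def train_test_split_tensor(data, train_frac=0.7):
    """Chronological split along axis 0 (assumption: samples are ordered).
    round() guards against float artifacts like 100 * 0.7 != 70."""
    n_train = int(round(len(data) * train_frac))
    return data[:n_train], data[n_train:]

data = np.arange(100).reshape(100, 1)
train, test = train_test_split_tensor(data, 0.7)   # 70/30 split
print(len(train), len(test))                       # 70 30
```

For the FAO dataset, the same helper with `train_frac=0.8` reproduces the 80/20 split.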
Hardware Specification No Experiments were run on the research computing cluster provided by the University Center for Research Computing. The paper does not provide specific hardware details such as GPU/CPU models or memory amounts.
Software Dependencies No The paper mentions using the 'Nilearn Python library' for the FMRI data but does not specify its version or any other software dependencies with version numbers.
Experiment Setup No The paper provides details on dataset splits (70/30 or 80/20), data transformations (log transformation), and the inclusion of lag-1 response as a covariate. However, it does not specify concrete hyperparameters for the neural networks used (e.g., learning rates, batch sizes, specific optimizer configurations, number of epochs, or detailed architectures of the TCN and other deep learning models).