Low-Rank Tensor Transitions (LoRT) for Transferable Tensor Regression
Authors: Andong Wang, Yuning Qiu, Zhong Jin, Guoxu Zhou, Qibin Zhao
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Theoretical analysis and experiments on tensor regression tasks, including compressed sensing and completion, validate the robustness and versatility of the proposed methods. These findings indicate the potential of LoRT as a robust method for tensor regression in settings with limited data and complex distributional structures. ... We evaluate the proposed methods, LoRT and D-LoRT, on both synthetic and real-world datasets in the context of transferable tensor regression. |
| Researcher Affiliation | Academia | 1 RIKEN AIP, Tokyo, Japan. 2 Department of Computer, China University of Petroleum Beijing at Karamay, Karamay, China. 3 School of Automation, Guangdong University of Technology, Guangzhou, China. 4 Key Laboratory of Intelligent Detection and the Internet of Things in Manufacturing, Ministry of Education, Guangdong University of Technology, Guangzhou, China. Correspondence to: Qibin Zhao <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 PGD for Joint Low-Rank Estimation (LoRT Step 1) ... Algorithm 2 PGD for Target-Specific Refinement (LoRT Step 2) ... Algorithm 3 PGD for Task-Only Tensor Regression |
| Open Source Code | Yes | A simulated implementation is available at https://github.com/pingzaiwang/LoRT, including an example for simulating distributed computation. |
| Open Datasets | Yes | In real-world experiments, we investigate the impact of transfer learning on tensor completion tasks. We evaluate LoRT and D-LoRT using YUV RGB video datasets (akiyo, bridge, grandma, and hall), where each video frame is represented as a 128 × 128 × 3 tensor. ... Available at http://trace.eas.asu.edu/yuv/ ... In particular, we incorporate two additional video sequences, Apply Eye Make-up and Blowing Candles, from the UCF-101 benchmark, as used in Wang & Zhao (2024). |
| Dataset Splits | Yes | In the synthetic experiments, we investigate the impact of transfer learning on tensor compressed sensing under Gaussian design tensor regression models. Following the setup in Section 3, we generate low-tubal-rank target task parameters W^(0) using N_T = 200 target samples and N_S = 2000 source samples per task. ... The sampling rates for the source tensors and the target tensor are set to 80% and 5%, respectively. ... The SR values for the target task were selected from the set {5%, 8%, 10%, 12%, 15%, 18%, 20%}, representing a range from sparse to relatively dense sampling of the target task. ... Each source task contains 3000 measurements, and the target task is measured with only 1500 observations, reflecting a highly undersampled regime. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) were provided in the paper for running experiments. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) were provided in the paper. |
| Experiment Setup | Yes | In the synthetic experiments, we investigate the impact of transfer learning on tensor compressed sensing under Gaussian design tensor regression models. ... we generate low-tubal-rank target task parameters W^(0) using N_T = 200 target samples and N_S = 2000 source samples per task. We vary the number of source tasks (K), model shift magnitude (h_k), and covariate shift level (σ_S) to evaluate the robustness of the proposed methods. ... The target parameter tensor W^(0) ∈ R^{d1×d2×d3} with tubal rank r is generated as W^(0) = P ∗_M Q, where P ∈ R^{d1×r×d3} and Q ∈ R^{r×d2×d3} are i.i.d. samples from N(0, 1). We consider a high-dimensional tensor regression problem with dimensions d1 = d2 = 20, d3 = 3, and a low-rank level r = 2. We generate N_T = 200 independent target samples (y_i^(0), X_i^(0)) using y_i^(0) = ⟨X_i^(0), W^(0)⟩ + ε_i^(0), where vec(X_i^(0)) ~ N(0, I) and ε_i^(0) ~ N(0, 0.1). ... The source sample size is set to N_S = 2000, with the number of source tasks K varying from 1 to 9. The parameter h_k is chosen from values ranging between 10 and 200. To simulate model and covariate shifts, the source tasks are configured as follows: Model Shift: Model shifts are simulated by setting W^(k) = W^(0) + E^(k) for k ∈ [K], where E^(k) = P_k ∗_M Q_k, with P_k ∈ R^{d1×r×d3} and Q_k ∈ R^{r×d2×d3} sampled i.i.d. from N(0, 1). If ‖E^(k)‖ > h_k, then E^(k) is rescaled as E^(k) = h_k · E^(k)/‖E^(k)‖. Covariate Shift: To assess robustness to covariate shifts, a heterogeneous design is used where vec(X_i^(k)) ~ N(0, σ_S^2 I) for all k ∈ [K]. The value of σ_S is selected from the set {0.3, 0.6, 0.9, 1.2, 1.5, 1.8}. |
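The synthetic data-generation protocol quoted in the Experiment Setup row (low-tubal-rank target parameter, Gaussian designs, norm-capped model shifts) can be sketched in NumPy. This is our illustrative reconstruction, not the authors' code: we use the standard FFT-based t-product, whereas the paper's ∗_M product may use a different invertible transform, and the noise scale 0.1 is read here as a standard deviation.

```python
import numpy as np

def t_product(A, B):
    """FFT-based t-product of A (d1 x r x d3) with B (r x d2 x d3).

    Matching frontal slices are multiplied in the Fourier domain
    along the third (tube) axis, then transformed back.
    """
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ikt,kjt->ijt', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

rng = np.random.default_rng(0)
d1 = d2 = 20; d3 = 3; r = 2          # dimensions and tubal rank from the paper
N_T = 200                            # target sample size

# Target parameter W^(0): tubal rank <= r by construction.
W0 = t_product(rng.standard_normal((d1, r, d3)),
               rng.standard_normal((r, d2, d3)))

# Target samples: y_i = <X_i, W0> + eps_i, vec(X_i) ~ N(0, I),
# eps_i ~ N(0, 0.1) (0.1 taken as the noise scale; an assumption).
X = rng.standard_normal((N_T, d1, d2, d3))
y = np.einsum('nijk,ijk->n', X, W0) + rng.normal(0.0, 0.1, N_T)

# Model shift for source task k: W^(k) = W^(0) + E^(k),
# with the Frobenius norm of E^(k) capped at h_k.
h_k = 10.0
E = t_product(rng.standard_normal((d1, r, d3)),
              rng.standard_normal((r, d2, d3)))
if np.linalg.norm(E) > h_k:
    E = h_k * E / np.linalg.norm(E)
Wk = W0 + E
```

The rank cap can be verified directly: every frontal slice of W0 in the Fourier domain is a product of a d1×r and an r×d2 matrix, so its rank is at most r.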
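The PGD routines named in the Pseudocode row (Algorithms 1-3) are not reproduced in this report. A generic projected-gradient sketch for the task-only low-tubal-rank regression setting (the role played by Algorithm 3) might look as follows; the truncated t-SVD projection, step size, and iteration count are our assumptions, not details from the paper.

```python
import numpy as np

def tubal_rank_project(W, r):
    """Project W onto tubal-rank-r tensors via a truncated t-SVD:
    SVD each frontal slice in the Fourier domain and keep the top-r terms."""
    Wf = np.fft.fft(W, axis=2)
    for t in range(W.shape[2]):
        U, s, Vh = np.linalg.svd(Wf[:, :, t], full_matrices=False)
        Wf[:, :, t] = (U[:, :r] * s[:r]) @ Vh[:r, :]
    return np.real(np.fft.ifft(Wf, axis=2))

def pgd_tensor_regression(X, y, r, step=0.3, iters=300):
    """Projected gradient descent for min_W (1/2N)||y - <X, W>||^2
    subject to tubal_rank(W) <= r.

    X: (N, d1, d2, d3) design tensors; y: (N,) responses.
    step=0.3 is a heuristic for standardized Gaussian designs.
    """
    N = X.shape[0]
    W = np.zeros(X.shape[1:])
    for _ in range(iters):
        resid = np.einsum('nijk,ijk->n', X, W) - y       # residuals
        grad = np.einsum('n,nijk->ijk', resid, X) / N    # least-squares gradient
        W = tubal_rank_project(W - step * grad, r)       # gradient step + projection
    return W
```

On a well-conditioned noiseless instance with enough measurements relative to the low-rank degrees of freedom, this iteration recovers the planted parameter to high accuracy.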