Regularized Estimation of High-dimensional Factor-Augmented Vector Autoregressive (FAVAR) Models
Authors: Jiahe Lin, George Michailidis
JMLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The performance of the proposed estimators is evaluated on synthetic data. Further, the model is applied to commodity prices and reveals interesting and interpretable relationships between the prices and the factors extracted from a set of global macroeconomic indicators. In Section 4, we introduce an empirical implementation procedure for obtaining the estimates and present its performance evaluation based on synthetic data. An application of the model to the interlinkages of commodity prices and the influence of world macroeconomic indicators on them is presented in Section 5. |
| Researcher Affiliation | Academia | Jiahe Lin EMAIL Department of Statistics University of Michigan Ann Arbor, MI 48109, USA George Michailidis EMAIL Department of Statistics and the Informatics Institute University of Florida Gainesville, FL 32611, USA |
| Pseudocode | Yes | Algorithm 1: Computational procedure for estimating A, Γ and Λ. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | The commodity price data (Xt) are retrieved from the International Monetary Fund, comprising 16 commodity prices in the following categories: Metal, Energy (oil) and Agricultural. The set of economic indicators (Yt) contains core macroeconomic variables and stock market composite indices from major economic entities including China, EU, Japan, UK and US, with a total number of 54 indicators. Data source: International Monetary Fund. Data source: FRED (St. Louis Fed), ECB Statistical Data Warehouse, UK Office for National Statistics, Bank of England, National Bureau of Statistics of China, YAHOO!. |
| Dataset Splits | Yes | For all time series considered, we use monthly data spanning the January 2001 to December 2016 period. Further, based on previous empirical findings in the literature related to the global financial crisis of 2008, we break the analysis into the following three subperiods (Stock and Watson, 2017): pre-crisis (2001–2006), crisis (2007–2010) and post-crisis (2011–2016), each having sample size (available time points) 72, 48, and 72, respectively. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Lasso regression' and information criteria like 'Panel Information Criterion (PIC)' and 'Bayesian Information Criterion (BIC)' but does not specify any software names with version numbers. |
| Experiment Setup | Yes | Simulation setup. Throughout, we assume ΣXw, ΣFX and Σe are all diagonal matrices, and the sample size is fixed at 200, unless otherwise specified. ... For the calibration equation, the density level of the sparse coefficient matrix Γ ∈ ℝ^{q×p₂} is fixed at 5/p₂ for each regression; ... Table 1 lists the simulation settings and their parameter setup. Algorithm 1: Computational procedure for estimating A, Γ and Λ. Input: time series data {x_i}_{i=1}^n and {y_i}_{i=1}^n, (λ_Γ, r), and λ_A. ... We select the optimal pair based on the Panel Information Criterion (PIC) proposed in Ando and Bai (2018), which searches for (λ_Γ, r) over a lattice... Analogously, the implementation of Stage II requires λ_A as input, and we select λ_A over a grid of values that minimizes the Bayesian Information Criterion (BIC). |
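The Stage II tuning described above (choosing the lasso penalty λ_A by minimizing a BIC over a grid) can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the coordinate-descent lasso, the specific BIC form `n·log(RSS/n) + log(n)·df`, and all function names below are illustrative assumptions for a single regression equation, not the paper's multivariate procedure.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used in coordinate-descent lasso."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent lasso for one regression equation.

    Minimizes (1/(2n))||y - X b||^2 + lam * ||b||_1.
    """
    n, p = X.shape
    beta = np.zeros(p)
    col_norm2 = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j.
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, n * lam) / col_norm2[j]
    return beta

def bic(X, y, beta):
    """A common lasso BIC: n*log(RSS/n) + log(n)*df (df = active-set size)."""
    n = len(y)
    rss = np.sum((y - X @ beta) ** 2)
    df = np.count_nonzero(beta)
    return n * np.log(rss / n) + np.log(n) * df

def select_lambda(X, y, grid):
    """Fit the lasso for each lambda on the grid; return the BIC minimizer."""
    fits = [(bic(X, y, lasso_cd(X, y, lam)), lam) for lam in grid]
    return min(fits)[1]
```

In the paper's setting this selection would be run per equation of the VAR transition matrix A (and analogously PIC handles the pair (λ_Γ, r) in Stage I); the sketch shows only the grid-search-plus-BIC pattern for one response.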