Locally Adaptive Factor Processes for Multivariate Time Series

Authors: Daniele Durante, Bruno Scarpa, David B. Dunson

JMLR 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The performance is assessed in simulations and illustrated in a financial application. In Section 4 we compare our model to BCR and to some of the most widely used models for multivariate stochastic volatility, through simulation studies. Finally, in Section 5 an application to National Stock Market Indices across countries is examined."
Researcher Affiliation | Academia | Daniele Durante (EMAIL), Bruno Scarpa (EMAIL), Department of Statistical Sciences, University of Padua, Padua 35121, Italy; David B. Dunson (EMAIL), Department of Statistical Science, Duke University, Durham, NC 27708-0251, USA
Pseudocode | Yes | "Appendix A. Posterior Computation. For a fixed truncation level L and a latent factor dimension K, the detailed steps of the Gibbs sampler for posterior computation are: 1. Define the vector of the latent states and the error terms in the state space equation..." (a runnable skeleton of such a sampler loop is sketched after the table)
Open Source Code | No | The paper contains no explicit statement about providing open-source code and no link to a code repository; code is not mentioned as being available in supplementary materials or upon request.
Open Datasets | Yes | "In this application we focus our attention on the multivariate weekly time series of the main 33 (i.e., p = 33) National Stock Market Indices from 12/07/2004 to 25/06/2012. Figure 5 shows the main features in terms of stationarity, mean patterns and volatility of two selected National Stock Market Indices downloaded from http://finance.yahoo.com/." (a sketch of how comparable series can be downloaded today follows the table)
Dataset Splits | Yes | "To analyze the performance of the online updating algorithm in the LAF model, we simulate 50 new observations {y_i}, i = 101, ..., 150, with t_i ∈ T^o = {101, ..., 150}, considering the same Θ and Σ_0 used in the generating mechanism for the first simulated data set, and taking the 50 subsequent observations of the bumps functions for the dictionary elements {ξ(t_i)}, i = 101, ..., 150; finally, the additional latent mean dictionary elements {ψ(t_i)}, i = 101, ..., 150, are simulated as before, maintaining the continuity with the previously simulated functions {ψ(t_i)}, i = 1, ..., 100. ... We apply the online updating algorithm presented in Subsection 3.3 to the new set of weekly observations {y_i}, i = 416, ..., 422, from 02/07/2012 to 13/08/2012, conditioning on posterior estimates of the Gibbs sampler based on the observations {y_i}, i = 1, ..., 415, available up to 25/06/2012." (the index arithmetic behind these splits is spelled out after the table)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running its experiments or simulations.
Software Dependencies | No | The paper refers to statistical models and algorithms such as GARCH(1,1), PC-GARCH, GO-GARCH, and DCC-GARCH, but it does not specify the software libraries or packages used to implement them, nor does it provide any version numbers. (one possible modern stack for the GARCH(1,1) benchmark is sketched after the table)
Experiment Setup | Yes | "Posterior computation for LAF is performed using truncation levels L = K = 2 (at higher settings we found that the shrinkage prior on Θ results in posterior samples of the elements in the additional columns concentrated around 0). We place a Ga(1, 0.1) prior on the precision parameters σ_j^{-2} and choose a_1 = a_2 = 2. As regards the nGP prior for each dictionary element ξ_lk(t), l = 1, ..., L and k = 1, ..., K, we choose diffuse but proper priors for the initial values by setting σ²_11 = 1000, and place an Inv-Ga(2, 10^8) prior on each σ²_{A_lk} in order to allow less smooth behavior, according to a previous graphical analysis of Σ(t_i) estimated via EWMA. Similarly, we set σ²_{ψ_k} = 100 in the prior for the initial values of the latent state equations resulting from the nGP prior for ψ_k(t), and consider a_ψ = a_B = b_ψ = b_B = 0.005 to balance the rough behavior induced on the nonparametric mean functions by the settings of the nGP prior on ξ_lk(t), as suggested by a previous graphical analysis. Note also that for posterior computation we first scale the predictor space to (0, 1], leading to δ_i = 1/100 for i = 1, ..., 100. For inference in BCR we consider the same hyperparameter settings for the Θ and Σ_0 priors, as well as the same truncation levels K and L, while the length scale of the GP prior for ξ_lk(t) and ψ_k(t) has been set to 10 using the data-driven heuristic outlined in Fox and Dunson (2011). In both cases we run 50,000 Gibbs iterations, discarding the first 20,000 as burn-in and thinning the chain every 5 samples." (these settings are collected into a single configuration sketch after the table)
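
On the pseudocode row: Appendix A's outline maps onto a standard blocked Gibbs loop. The sketch below is a minimal, runnable skeleton under our own naming, not the authors' code; the update_* functions are stubs standing in for the full-conditional draws the appendix describes (state-space draws for the nGP latent processes, conjugate updates for the loadings and precisions).

    import numpy as np

    rng = np.random.default_rng(0)

    def gibbs_laf_skeleton(y, L=2, K=2, n_iter=1_000, burn_in=500, thin=5):
        """Skeleton of the blocked Gibbs sampler outlined in Appendix A."""
        T, p = y.shape
        state = {
            "Theta": rng.standard_normal((p, L)),  # factor loadings
            "xi": rng.standard_normal((T, L, K)),  # covariance dictionary elements
            "psi": rng.standard_normal((T, K)),    # latent mean dictionary elements
            "sigma2": np.ones(p),                  # idiosyncratic variances
        }
        draws = []
        for it in range(n_iter):
            state["xi"] = update_xi(y, state)          # nGP state-space draw
            state["psi"] = update_psi(y, state)        # latent mean draw
            state["Theta"] = update_theta(y, state)    # shrinkage-prior update
            state["sigma2"] = update_sigma2(y, state)  # precision update
            if it >= burn_in and (it - burn_in) % thin == 0:
                draws.append({k: np.copy(v) for k, v in state.items()})
        return draws

    # Stubs that keep the example runnable; replace with the actual
    # full-conditional draws from Appendix A.
    def update_xi(y, s):     return s["xi"]
    def update_psi(y, s):    return s["psi"]
    def update_theta(y, s):  return s["Theta"]
    def update_sigma2(y, s): return s["sigma2"]

    draws = gibbs_laf_skeleton(rng.standard_normal((100, 5)))
    print(len(draws))  # (1000 - 500) / 5 = 100 retained draws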
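On the open-datasets row: the paper only states that the series were downloaded from http://finance.yahoo.com/. As one way to assemble a comparable data set today, the sketch below uses the yfinance package and three illustrative tickers; both are our assumptions, not tools named in the paper.

    import numpy as np
    import yfinance as yf

    # Three of the 33 national indices, chosen for illustration.
    tickers = ["^GSPC", "^FTSE", "^N225"]
    prices = yf.download(tickers, start="2004-07-12", end="2012-06-25",
                         interval="1wk")["Close"]
    log_returns = np.log(prices).diff().dropna()  # weekly log returns
    print(log_returns.shape)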
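On the dataset-splits row: the quoted boundaries reduce to simple index bookkeeping, sketched here with array names of our own choosing.

    import numpy as np

    # Simulation study: fit on i = 1..100, online-update on i = 101..150.
    sim_fit    = np.arange(1, 101)
    sim_online = np.arange(101, 151)  # t_i in T^o = {101, ..., 150}

    # Application: Gibbs run on the 415 weeks up to 25/06/2012, then
    # online updates for the 7 weeks 02/07/2012 to 13/08/2012.
    app_fit    = np.arange(1, 416)
    app_online = np.arange(416, 423)  # {y_i}, i = 416..422

    assert sim_online.size == 50 and app_online.size == 7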
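On the software-dependencies row: since no implementation stack is named, any replication must choose its own. One possibility for the GARCH(1,1) benchmark is the Python arch package (the multivariate DCC/GO/PC variants would need other tooling, e.g. R's rmgarch); this choice is our assumption, sketched below on placeholder data.

    import numpy as np
    from arch import arch_model

    # Placeholder series; a replication would use the weekly log returns
    # (in percent) of one national index.
    rng = np.random.default_rng(1)
    returns = 100 * rng.standard_normal(400)

    res = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
    print(res.params)                   # mu, omega, alpha[1], beta[1]
    forecast = res.forecast(horizon=1)  # one-step-ahead variance forecast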
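On the experiment-setup row: collected in one place, the quoted settings look as follows. The dictionary layout and key names are our shorthand; the values are the paper's.

    # Hyperparameters quoted in the experiment-setup row.
    laf_setup = {
        "L": 2, "K": 2,                       # truncation levels
        "precision_prior": ("Ga", 1, 0.1),    # on each sigma_j^{-2}
        "a1": 2, "a2": 2,                     # shrinkage prior on Theta
        "sigma2_init_xi": 1000,               # diffuse initial-value variance
        "sigma2_A_prior": ("InvGa", 2, 1e8),  # on each sigma^2_{A_lk}
        "sigma2_init_psi": 100,
        "a_psi": 0.005, "b_psi": 0.005, "a_B": 0.005, "b_B": 0.005,
        "delta": 1 / 100,                     # predictor spacing on (0, 1]
        "bcr_gp_length_scale": 10,            # BCR comparison (Fox and Dunson, 2011)
        "n_iter": 50_000, "burn_in": 20_000, "thin": 5,
    }

    kept = (laf_setup["n_iter"] - laf_setup["burn_in"]) // laf_setup["thin"]
    print(kept)  # 6,000 posterior draws retained per run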