Sparse and low-rank multivariate Hawkes processes

Authors: Emmanuel Bacry, Martin Bompaire, Stéphane Gaïffas, Jean-François Muzy

JMLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct experiments on synthetic datasets to evaluate the performance of our method, based on the proposed data-driven weighting of the penalizations, compared to unweighted penalizations (Zhou et al., 2013).
Researcher Affiliation | Collaboration | Emmanuel Bacry (CEREMADE, CNRS UMR 7534, Université Paris-Dauphine, Paris, France); Martin Bompaire (Criteo, Paris, France); Stéphane Gaïffas (LPSM, CNRS UMR 8001, Université Paris-Diderot, Paris, France; DMA, CNRS UMR 8553, École Normale Supérieure, Paris, France); Jean-François Muzy (Laboratoire Sciences Pour l'Environnement, CNRS UMR 6134, Université de Corse, Corte, France)
Pseudocode | No | The paper describes methods like "standard batch proximal gradient descent algorithms", "Fista (Beck and Teboulle, 2009)", and "GFB (generalized forward-backward, see Pino et al. (1999))", but it does not contain a structured pseudocode or algorithm block.
Open Source Code | Yes | All experiments are done using our tick library for Python3, see Bacry et al. (2018); its GitHub page is https://github.com/X-DataInitiative/tick and documentation is available at https://x-datainitiative.github.io/tick/.
Open Datasets | No | We generate Hawkes processes using Ogata's thinning algorithm (Ogata, 1981) with d = 30 nodes.
Dataset Splits | No | For each simulated dataset, we increase the length of the time interval T = 5000, 7000, 10000, 15000, 20000, and fit the procedures each time. An overall averaging of the results is computed over 100 separate simulations.
Hardware Specification | No | The paper discusses computational complexity and time (e.g., "computations for least-squares can be orders of magnitude faster"), but does not provide any specific details about the hardware used to run the experiments, such as CPU or GPU models.
Software Dependencies | No | All experiments are done using our tick library for Python3, see Bacry et al. (2018)...
Experiment Setup | Yes | We generate Hawkes processes using Ogata's thinning algorithm (Ogata, 1981) with d = 30 nodes. Baseline intensities µ_j are constant on blocks; we use K = 3 basis kernels h_{j,j',k}(t) = α_k e^{-α_k t} with α_1 = 0.5, α_2 = 2 and α_3 = 5. ... The tensor A is rescaled so that the operator norm of the matrix Σ_{k=1}^{3} A_{·,·,k} is equal to 0.8. ... For each simulated dataset, we increase the length of the time interval T = 5000, 7000, 10000, 15000, 20000, and fit the procedures each time. An overall averaging of the results is computed over 100 separate simulations. ... We use first-order optimization algorithms, based on proximal gradient descent. Namely, we use Fista (Beck and Teboulle, 2009) for problems with a single penalization on A ... and GFB (generalized forward-backward, see Pino et al. (1999)) ... We choose a fixed gradient step equal to 1/L, where L is the Lipschitz constant of the loss. ... We limit our algorithms to 25,000 iterations and stop when the relative decrease of the objective is less than 10^{-10} for Fista and 10^{-7} for GFB. ... The data-driven weights used in our procedures are the ones derived from our analysis, see (21) and (23), where we simply put x = log T.
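The simulation scheme quoted above, Ogata's thinning algorithm for a multivariate Hawkes process, can be sketched as follows. This is a minimal NumPy illustration, not the paper's tick-based implementation: the function name, the use of a single shared exponential decay, and all parameter values in the usage note are assumptions for the example.

```python
import numpy as np

def simulate_hawkes_thinning(mu, alpha, beta, T, rng=None):
    """Ogata's thinning for a multivariate Hawkes process with exponential
    kernels phi_ij(t) = alpha[i, j] * beta * exp(-beta * t) (so alpha is the
    branching-ratio matrix). mu: baselines, shape (d,); T: end time.
    Returns a list of d sorted arrays of event times (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    d = len(mu)
    events = [[] for _ in range(d)]
    state = np.zeros(d)   # excitation carried by past events, valid at time `last`
    t, last = 0.0, 0.0
    while True:
        # Between events the intensity only decays, so the intensity at `last`
        # dominates the process on (last, next candidate]: a valid thinning bound.
        lam_bar = np.sum(mu + state)
        t += rng.exponential(1.0 / lam_bar)   # next candidate point
        if t > T:
            break
        state *= np.exp(-beta * (t - last))   # decay excitation to time t
        last = t
        lam = mu + state                      # true intensities at t
        u = rng.uniform(0.0, lam_bar)
        if u < lam.sum():                     # accept, and attribute to a node
            i = np.searchsorted(np.cumsum(lam), u)
            events[i].append(t)
            state += beta * alpha[:, i]       # node i excites every node
    return [np.asarray(e) for e in events]
```

For a stable process the spectral radius of `alpha` must be below 1, mirroring the paper's rescaling of the kernel tensor to operator norm 0.8.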
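The optimization setup quoted above (Fista with a fixed step 1/L and a stopping rule on the relative decrease of the objective) can likewise be sketched. This minimal version solves a lasso problem as a stand-in for the paper's penalized least-squares objective; the function name is hypothetical, and the default tolerances only mirror the quoted settings.

```python
import numpy as np

def fista_lasso(X, y, lam, max_iter=25_000, rel_tol=1e-10):
    """FISTA (Beck and Teboulle, 2009) for
    min_w 0.5 * ||X w - y||^2 + lam * ||w||_1,
    with fixed step 1/L and a relative-decrease stop rule (sketch)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2   # Lipschitz constant of the smooth gradient
    obj = lambda w: 0.5 * np.sum((X @ w - y) ** 2) + lam * np.abs(w).sum()
    w = np.zeros(p)
    z = w.copy()                    # extrapolation point
    t = 1.0
    prev = obj(w)
    for _ in range(max_iter):
        grad = X.T @ (X @ z - y)
        v = z - grad / L
        # proximal operator of lam/L * ||.||_1: soft-thresholding
        w_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)
        w, t = w_new, t_new
        cur = obj(w)
        if prev > 0 and abs(prev - cur) / prev < rel_tol:
            break
        prev = cur
    return w
```

In the paper's setting the same scheme is applied to the Hawkes least-squares loss with (weighted) ℓ1 and trace-norm penalizations, the latter handled via GFB when several penalizations are combined.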