Fast and Flexible Temporal Point Processes with Triangular Maps

Authors: Oleksandr Shchur, Nicholas Gao, Marin Biloš, Stephan Günnemann

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "6 Experiments"; "Table 1: Average test set NLL on synthetic and real-world datasets (lower is better)."; "Table 2: MMD between the hold-out test set and the generated samples (lower is better)."
Researcher Affiliation | Academia | "Oleksandr Shchur, Nicholas Gao, Marin Biloš, Stephan Günnemann — Technical University of Munich, Germany"
Pseudocode | No | The paper describes algorithms and methods but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Code and datasets are available under www.daml.in.tum.de/triangular-tpp"
Open Datasets | Yes | "We use 6 synthetic datasets from Omi et al. [10]: Hawkes1&2 [7], self-correcting (SC) [16], inhomogeneous Poisson (IPP), renewal (RP) and modulated renewal (MRP) processes. ... We also consider 7 real-world datasets: PUBG (online gaming), Reddit-Comments, Reddit-Submissions (online discussions), Taxi (customer pickups), Twitter (tweets) and Yelp1&2 (check-in times). See Appendix D for more details."
Dataset Splits | Yes | "We partitioned the sequences in each dataset into train/validation/test sequences (60%/20%/20%). We trained the models by minimizing the NLL of the train set using Adam [57]. We tuned the following hyperparameters: ... We used the validation set for hyperparameter tuning, early stopping and model development."
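The quoted 60%/20%/20% sequence-level split can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name `split_sequences` and the fixed seed are assumptions for the example.

```python
import random


def split_sequences(sequences, seed=0):
    """Partition event sequences into 60%/20%/20% train/val/test subsets,
    mirroring the sequence-level split described in the review above."""
    rng = random.Random(seed)
    idx = list(range(len(sequences)))
    rng.shuffle(idx)  # shuffle indices so the split is random but reproducible
    n_train = int(0.6 * len(idx))
    n_val = int(0.2 * len(idx))
    train = [sequences[i] for i in idx[:n_train]]
    val = [sequences[i] for i in idx[n_train:n_train + n_val]]
    test = [sequences[i] for i in idx[n_train + n_val:]]
    return train, val, test


# Example: 100 dummy event sequences (each a list of arrival times)
train, val, test = split_sequences([[0.1, 0.5, 0.9]] * 100)
print(len(train), len(val), len(test))  # → 60 20 20
```

Splitting at the sequence level (rather than within sequences) keeps each held-out sequence fully unseen during training.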
Hardware Specification | Yes | "We used a machine with an Intel Xeon E5-2630 v4 @ 2.20 GHz CPU, 256 GB RAM and an Nvidia GTX 1080 Ti GPU."
Software Dependencies | No | The paper mentions "PyTorch [53]" for the implementation but does not specify its version number, nor does it list versions for other software dependencies.
Experiment Setup | Yes | "We tuned the following hyperparameters: L2 regularization {0, 10⁻⁵, 10⁻⁴, 10⁻³}, number of spline knots {10, 20, 50}, learning rate {10⁻³, 10⁻²}, hidden size {32, 64} for RNN, number of blocks {2, 4} and block size {8, 16} for TriTPP."
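The quoted ranges imply an exhaustive grid search. A minimal sketch of enumerating that grid for the TriTPP-specific hyperparameters, assuming a plain Cartesian product over the quoted values (the dictionary keys are hypothetical names, not identifiers from the paper's code):

```python
from itertools import product

# Hyperparameter ranges quoted in the review above (TriTPP variant).
grid = {
    "l2_reg": [0, 1e-5, 1e-4, 1e-3],   # L2 regularization
    "n_knots": [10, 20, 50],           # number of spline knots
    "lr": [1e-3, 1e-2],                # learning rate
    "n_blocks": [2, 4],                # number of blocks
    "block_size": [8, 16],             # block size
}

# Cartesian product over all value lists -> one dict per configuration.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # → 96 (4 * 3 * 2 * 2 * 2)
```

Each configuration would then be trained with Adam and ranked by validation NLL, consistent with the tuning protocol quoted under Dataset Splits.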