EV-GAN: Simulation of extreme events with ReLU neural networks

Authors: Michaël Allouche, Stéphane Girard, Emmanuel Gobet

JMLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The above results are illustrated on simulated data and real financial data. It appears that our approach outperforms the classical GAN in a wide range of situations, including high-dimensional and dependent data.
Researcher Affiliation | Academia | Michaël Allouche (EMAIL), Centre de Mathématiques Appliquées (CMAP), CNRS, Ecole Polytechnique, Institut Polytechnique de Paris, Route de Saclay, 91128 Palaiseau Cedex, France; Stéphane Girard (EMAIL), Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France; Emmanuel Gobet (EMAIL), Centre de Mathématiques Appliquées (CMAP), CNRS, Ecole Polytechnique, Institut Polytechnique de Paris, Route de Saclay, 91128 Palaiseau Cedex, France
Pseudocode | No | The paper describes methods and constructions but does not present any explicit pseudocode blocks or algorithm sections.
Open Source Code | No | The paper states: "All the code was implemented in Python 3.8.2 and using the library PyTorch 1.7.1 for the GANs training." but does not provide a direct link to a source code repository or an explicit statement of code release for the methodology described.
Open Datasets | Yes | Our approach is tested on closing prices of daily financial stock market indices taken from https://stooq.com/db/h/ on October 1st, 2020. This database includes 61 world indices from their first day of quotation.
Dataset Splits | No | The paper mentions that for simulated data "n = 10,000 i.i.d data {X1, . . . , Xn} are simulated from the resulting bivariate model", and for real data it describes processing steps such as "positive returns were discarded" and the use of specific indices, but it does not explicitly provide training/validation/test split details (percentages, counts, cross-validation, etc.) for model evaluation or reproduction.
Hardware Specification | Yes | The numerical experiments presented in the next two sections have been conducted on the Cholesky computing cluster from Ecole Polytechnique (http://meso-ipp.gitlab.labos.polytechnique.fr/user_doc/). It is composed of 2 nodes, each with 2 Intel Xeon Gold 6230 CPUs @ 2.1 GHz (20 cores) and 4 Nvidia Tesla V100 graphics cards.
Software Dependencies | Yes | All the code was implemented in Python 3.8.2, using the library PyTorch 1.7.1 for the GANs training.
Experiment Setup | Yes | The neural network training is done by alternating generator and discriminator steps. The ranges of hyperparameters explored to find the best model for each data configuration are reported in Table 5. Note that, in order to respect the architecture (6), the generator is restricted to a one-hidden-layer NN. Additionally, the optimizer Adam (Kingma and Ba, 2014) is used with default parameters β1 = 0.9 and β2 = 0.999 for all tests, performed over 1,000 iterations. No additional normalization techniques are used. Every 5 iterations, two metrics (see Section 3.2) are computed and, for each metric, the NN parameters associated with the best results among the 200 checkpoints are selected.
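The checkpoint-selection scheme described in the Experiment Setup row (1,000 iterations, metrics computed every 5 iterations, and the best parameters per metric kept among the resulting 200 checkpoints) can be sketched as below. This is a minimal illustration, not the authors' code: `train_step`, `compute_metrics`, and the scalar "parameters" are hypothetical placeholders standing in for the PyTorch training loop and the two metrics of Section 3.2.

```python
# Hypothetical sketch of the per-metric checkpoint selection described in the
# paper's experiment setup. The real training uses PyTorch with Adam
# (beta1 = 0.9, beta2 = 0.999); here a scalar stands in for the NN parameters.

N_ITERATIONS = 1_000   # total training iterations
EVAL_EVERY = 5         # metrics are computed every 5 iterations -> 200 checkpoints

def train_step(params):
    """Placeholder for one alternating generator/discriminator update."""
    return params + 1  # stand-in for an Adam optimizer step

def compute_metrics(params):
    """Placeholder for the two evaluation metrics of Section 3.2 (lower = better)."""
    return {"metric_1": abs(params - 600), "metric_2": abs(params - 900)}

def select_best(params=0):
    best = {}  # metric name -> (best value, checkpoint params, iteration)
    n_checkpoints = 0
    for it in range(1, N_ITERATIONS + 1):
        params = train_step(params)
        if it % EVAL_EVERY == 0:
            n_checkpoints += 1
            for name, value in compute_metrics(params).items():
                # keep, for each metric, the checkpoint with the best value
                if name not in best or value < best[name][0]:
                    best[name] = (value, params, it)
    return best, n_checkpoints

best, n = select_best()
print(n)  # -> 200, matching the 200 checkpoints in the paper's setup
```

Note that selection is done independently per metric, so the procedure can return a different checkpoint for each of the two metrics.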