Deep Optimal Stopping
Authors: Sebastian Becker, Patrick Cheridito, Arnulf Jentzen
JMLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test the approach on three problems: the pricing of a Bermudan max-call option, the pricing of a callable multi barrier reverse convertible and the problem of optimally stopping a fractional Brownian motion. In all three cases it produces very accurate results in high-dimensional situations with short computing times. |
| Researcher Affiliation | Collaboration | Sebastian Becker (EMAIL), Zenai AG, 8045 Zurich, Switzerland; Patrick Cheridito (EMAIL), Risk Lab, Department of Mathematics, ETH Zurich, 8092 Zurich, Switzerland; Arnulf Jentzen (EMAIL), SAM, Department of Mathematics, ETH Zurich, 8092 Zurich, Switzerland |
| Pseudocode | No | The paper describes methods and equations, but does not provide a clearly labeled pseudocode or algorithm block. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | No | The paper describes experiments based on financial models (Bermudan max-call options, callable multi barrier reverse convertibles) and a fractional Brownian motion, which are simulated rather than relying on pre-existing public datasets. The text states: "But our approach works for any asset dynamics as long as it can efficiently be simulated." |
| Dataset Splits | No | The paper describes generating 'batches of 8,192 paths' for training and using 'KL = 4,096,000 trial paths' to estimate lower bounds and 'KU = 1,024 paths' with 'J = 16,384' continuation paths for upper bounds. These are simulation parameters and sample sizes for Monte Carlo estimation and training, not descriptions of pre-defined train/test/validation splits of a fixed dataset. |
| Hardware Specification | Yes | All computations were performed in single precision (float32) on a NVIDIA GeForce GTX 1080 GPU with 1974 MHz core clock and 8 GB GDDR5X memory with 1809.5 MHz clock rate. The underlying system consisted of an Intel Core i7-6800K 3.4 GHz CPU with 64 GB DDR4-2133 memory running Tensorflow 1.11 on Ubuntu 16.04. |
| Software Dependencies | Yes | The underlying system consisted of an Intel Core i7-6800K 3.4 GHz CPU with 64 GB DDR4-2133 memory running Tensorflow 1.11 on Ubuntu 16.04. |
| Experiment Setup | Yes | We conducted 3,000 + d training steps, in each of which we generated a batch of 8,192 paths of (X_n)_{n=0}^N. ... we employed mini-batch gradient ascent with Xavier initialization (Glorot and Bengio, 2010), batch normalization (Ioffe and Szegedy, 2015) and Adam updating (Kingma and Ba, 2015). |
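The experiment-setup row above (3,000 + d training steps, batches of 8,192 simulated paths, Xavier initialization, Adam-based gradient ascent) can be sketched in a minimal form. This is a hypothetical single-exercise-date stand-in, not the paper's implementation: the logistic decision rule, the toy max-call payoff `g(X) = max_i X_i - K`, and the strike `K` are all illustrative assumptions, and batch normalization is omitted for brevity. Only the step count, batch size, Xavier initialization, and Adam-ascent updating are taken from the setup quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sizes from the quoted setup: 3,000 + d training steps, batches of
# 8,192 simulated paths. Everything else below is an illustrative
# stand-in for the paper's deep stopping-decision networks.
d = 2
steps = 3000 + d
batch = 8192
K = 0.5  # hypothetical strike for a toy max-call payoff

def xavier_init(fan_in, fan_out):
    # Xavier/Glorot uniform initialization (Glorot and Bengio, 2010)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Soft stopping decision f(x) = sigmoid(theta . [x, 1]); the objective
# E[f(X) g(X)] is a one-date analogue of maximizing the expected
# stopped payoff.
theta = xavier_init(d + 1, 1).ravel()

# Adam state (Kingma and Ba, 2015); the '+' in the parameter update
# performs gradient *ascent*, matching the paper's mini-batch ascent.
m = np.zeros_like(theta)
v = np.zeros_like(theta)
lr, b1, b2, eps = 1e-3, 0.9, 0.999, 1e-8

def expected_payoff(theta, F, g):
    return float(np.mean(sigmoid(F @ theta) * g))

# Fixed evaluation batch to monitor training progress.
X_eval = rng.standard_normal((4096, d))
F_eval = np.hstack([X_eval, np.ones((4096, 1))])
g_eval = X_eval.max(axis=1) - K
before = expected_payoff(theta, F_eval, g_eval)

for t in range(1, steps + 1):
    X = rng.standard_normal((batch, d))       # fresh batch of simulated paths
    F = np.hstack([X, np.ones((batch, 1))])   # features with a bias column
    g = X.max(axis=1) - K                     # payoff realized on each path
    s = sigmoid(F @ theta)
    grad = F.T @ (s * (1.0 - s) * g) / batch  # gradient of E[f(X) g(X)]
    m = b1 * m + (1.0 - b1) * grad
    v = b2 * v + (1.0 - b2) * grad ** 2
    theta = theta + lr * (m / (1.0 - b1 ** t)) / (np.sqrt(v / (1.0 - b2 ** t)) + eps)

after = expected_payoff(theta, F_eval, g_eval)
```

After training, `after` should exceed `before`: the decision rule learns to stop (f close to 1) on paths with positive payoff and to continue on the rest, which is the same ascent dynamic the paper applies per exercise date.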
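The dataset-splits row notes that the paper estimates its lower bound from KL = 4,096,000 independent trial paths rather than from a held-out split. A hedged sketch of that Monte Carlo step, assuming hypothetical stopped-payoff samples in place of the trained stopping rule's actual output:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample size quoted in the checklist; the stopped-payoff draws below
# are hypothetical placeholders, since producing real ones requires the
# trained stopping rule and the asset-price model.
KL = 4_096_000
stopped_payoffs = np.maximum(rng.standard_normal(KL) + 0.1, 0.0)

L_hat = stopped_payoffs.mean()                   # Monte Carlo point estimate
se = stopped_payoffs.std(ddof=1) / np.sqrt(KL)   # standard error of the mean
ci_low = L_hat - 1.96 * se                       # ~95% confidence interval
ci_high = L_hat + 1.96 * se
```

Because any stopping rule is suboptimal, the estimated mean is (up to Monte Carlo error) a lower bound on the true value, which is why such a large number of trial paths is used to shrink the confidence interval.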