Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Deep Backward and Galerkin Methods for the Finite State Master Equation

Authors: Asaf Cohen, Mathieu Laurière, Ethan Zell

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conclude the paper with numerical experiments on benchmark problems from the literature up to dimension 15, and a comparison with solutions computed by a classical method for fixed initial distributions."
Researcher Affiliation | Academia | Asaf Cohen (EMAIL), Department of Mathematics, University of Michigan, Ann Arbor, MI 48104, USA; Mathieu Laurière (EMAIL), Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, NYU-ECNU Institute of Mathematical Sciences, New York University Shanghai, Shanghai, 200000, China; Ethan Zell (EMAIL), Department of Mathematics, University of Michigan, Ann Arbor, MI 48104, USA
Pseudocode | Yes |

Algorithm 1 (DBME)
1: Input: a vector of initial parameters θ := (θ_i)_{i=0}^{N−1}.
2: Output: a grid of neural networks (U_i)_{i=0,...,N} approximating the solution to (1) on π.
5: Initialize U_i via θ_i.
6: for i from N − 1 down to 0 do
7:   Recalling (22), compute
       θ̂_i ∈ argmin_{θ_i ∈ Θ} L_i(θ_i),
       L_i(θ_i) := max_{(x,κ) ∈ [d]×P([d])} | Û_{i+1}(x, M_i^{θ_i}(κ)) − U_i(x, κ; θ_i) + (Δt_i) H(x, κ, Δ_x U_i(·, κ; θ_i)) |
9:   Û_i(·, ·) := U_i(·, ·; θ̂_i) ∧ T
10: end for

Algorithm 2 (DGME)
1: Input: an initial vector θ.
2: Output: a trained vector θ̂ such that U(·, ·, ·; θ̂) approximately solves (1).
3: Compute
     θ̂ ∈ argmin_{θ ∈ R^δ} L(θ),
     L(θ) := max_{(t,x,η) ∈ [0,T)×[d]×P([d])} { | ∂_t U(t, x, η; θ) + H(x, η, Δ_x U(t, ·, η; θ)) − Σ_{y∈[d]} η_y D_y^η U(t, x, η; θ) · γ*(y, Δ_y U(t, ·, η; θ)) | + |U(T, x, η; θ) − g(x, η)| }
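To make the backward structure of Algorithm 1 concrete, here is a minimal toy sketch of a DBME-style backward pass. It is not the authors' implementation: the neural networks are replaced by lookup tables on a fixed grid of distributions, and the Hamiltonian H, transition map M, and terminal cost g are simple hypothetical stand-ins chosen only to make the code runnable.

```python
import numpy as np

d = 3                  # number of states in [d]
N = 10                 # number of time steps
T = 1.0
dt = T / N             # uniform grid, so each Delta t_i = dt

# Hypothetical fixed sample of distributions kappa in the simplex P([d])
rng = np.random.default_rng(0)
kappas = rng.dirichlet(np.ones(d), size=20)          # shape (20, d)

def H(x, kappa, dU):
    # Hypothetical Hamiltonian, quadratic in the finite differences dU
    return -0.5 * np.sum(dU ** 2) + kappa[x]

def M(kappa):
    # Hypothetical one-step push-forward of the population distribution
    P = np.full((d, d), 1.0 / d)
    return kappa @ P

def g(x, kappa):
    # Hypothetical terminal cost
    return kappa[x]

# U[i][k, x] approximates U_i(x, kappa_k); set the terminal condition at i = N
U = {N: np.array([[g(x, k) for x in range(d)] for k in kappas])}

for i in range(N - 1, -1, -1):                       # backward in time
    Ui = np.empty((len(kappas), d))
    for kidx, kappa in enumerate(kappas):
        # Evaluate U_{i+1} at the pushed-forward distribution M(kappa);
        # a nearest-neighbour lookup stands in for a trained network
        nn = np.argmin(np.linalg.norm(kappas - M(kappa), axis=1))
        for x in range(d):
            dU = U[i + 1][nn] - U[i + 1][nn, x]      # finite differences in x
            Ui[kidx, x] = U[i + 1][nn, x] + dt * H(x, kappa, dU)
    U[i] = Ui

print(U[0].shape)   # (20, 3): values of U_0 on the kappa grid
```

In the actual DBME each step 7 solves the displayed minimization over network parameters θ_i; the table update above corresponds to the case where that inner optimization is solved exactly on the grid.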
Open Source Code | Yes | "Code for the DGME and DBME algorithms, data for models used, and code used to create the visualizations in this section can be found on GitHub at https://github.com/ethanzell/DGME-and-DBME-Algorithms."
Open Datasets | No | "This paper proposes and analyzes two neural network methods to solve the master equation for finite-state mean field games (MFGs). Solving MFGs provides approximate Nash equilibria for stochastic, differential games with finite but large populations of agents. Numerical experiments on benchmark problems from the literature."
Dataset Splits | No | The paper describes experiments on "benchmark problems from the literature" and does not mention specific dataset splits such as training, validation, or test sets.
Hardware Specification | Yes | "Both programs were run on the Great Lakes computing cluster, a high-performance computing cluster available for University of Michigan research. All algorithms were run on the cluster's standard nodes, each of which has thirty-six cores."
Software Dependencies | Yes | "In this section, both the DBME and DGME algorithms were implemented in Python using TensorFlow 2."
Experiment Setup | Yes | "For all networks featured, we used four layers of sixty nodes each, with sigmoid activation function, excluding the input and output layers. The output layers used ELU. ... In practice, we found that training U_{N−1} for more epochs than the other networks resulted in better performance for networks closer to the initial time."
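The architecture quoted above (four hidden layers of sixty units with sigmoid activations and an ELU output layer) can be sketched as a plain NumPy forward pass. The weight initialization, input dimension (here 1 + d for time and the distribution), and scalar output are assumptions made only for illustration; the paper's implementation uses TensorFlow 2.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elu(z, alpha=1.0):
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def init_mlp(in_dim, hidden=60, depth=4, out_dim=1, seed=0):
    # Hypothetical scaled-Gaussian initialization for each layer
    rng = np.random.default_rng(seed)
    dims = [in_dim] + [hidden] * depth + [out_dim]
    return [(rng.normal(0.0, dims[i] ** -0.5, (dims[i], dims[i + 1])),
             np.zeros(dims[i + 1])) for i in range(len(dims) - 1)]

def forward(params, x):
    for W, b in params[:-1]:
        x = sigmoid(x @ W + b)       # sigmoid on the four hidden layers
    W, b = params[-1]
    return elu(x @ W + b)            # ELU on the output layer

d = 3
params = init_mlp(in_dim=1 + d)      # assumed input: time t and distribution
out = forward(params, np.array([[0.5, 0.2, 0.3, 0.5]]))
print(out.shape)                     # (1, 1)
```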