Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Infinite-dimensional optimization and Bayesian nonparametric learning of stochastic differential equations
Authors: Arnab Ganguly, Riten Mitra, Jinpu Zhou
JMLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical examples are discussed in Section 4. Finally, some concluding remarks can be found in Section 5. Keywords: Reproducing kernel Hilbert spaces (RKHS), infinite-dimensional optimization, representer theorem, nonparametric learning, stochastic differential equations, diffusion processes, Bayesian methods |
| Researcher Affiliation | Academia | Arnab Ganguly EMAIL Department of Mathematics Louisiana State University Baton Rouge, LA 70820, USA Riten Mitra EMAIL Department of Bioinformatics and Biostatistics University of Louisville Louisville, KY 40202, USA Jinpu Zhou EMAIL Department of Mathematics Louisiana State University Baton Rouge, LA 70820, USA |
| Pseudocode | Yes | Algorithm 1: Gibbs algorithm for high frequency data. Algorithm 2: Gibbs algorithm for high frequency data with Horseshoe prior. |
| Open Source Code | No | The paper does not contain any direct statements about the availability of source code or links to a code repository for the methodology described. |
| Open Datasets | No | Our data points come from the above SDE with σ = 1, and we use Algorithm 1 and Algorithm 2 to estimate the entire drift function b and the diffusion parameter σ. We consider two cases, σ = 1 and σ = 0.5. Case σ = 1: We first consider (discrete) observations from (4.1) with true σ = 1, and use Algorithm 1 and Algorithm 2 to estimate the drift function b and the diffusion parameter σ. Case σ = 0.5: We also consider data points from (4.1) with σ = 0.5 over the interval [0, 40] (with Δ = 0.05). Given a set of discrete observations from a stochastic version of this differential equation driven by additive Brownian noise σ_{3×3} B, with σ = 0.1I, over the time range [0, 40], generated by taking Δ = 0.04 and the conservation constant X_E(0) + X_ES(0) = 2, we use Algorithm 1 and Algorithm 2 to estimate the entire drift function b and the (constant) diffusion matrix σ. |
| Dataset Splits | No | From a discrete path from each of the SDE models, we use our algorithms to generate samples of β from the posterior distribution. The (posterior) mean of these β-samples gives the estimated function b̂ via equation (3.6), which is plotted against the true b. We next use multiple samples from the posterior distribution to calculate empirical 95% Bayesian credible bands around b. The paper describes generating data from SDE models and then learning from these paths, rather than using traditional train/test/validation splits for existing datasets. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models or processor types. |
| Software Dependencies | No | The paper mentions using 'Gibbs algorithms' and 'MCMC algorithms' but does not specify any particular software libraries or their version numbers that were used for implementation. |
| Experiment Setup | Yes | For this we use a scaled t(ν = 2, c = 1, µ = 0) prior on the weights βk (that is, βk ∼ N(0, λk²), λk² ∼ IG(1, 2)) in Algorithm 1 (with inverse-gamma replacing inverse-Wishart), and we use the parameters αi = α0 = a = a0 = 1/2, b = b0 = 1 (that is, the classical HS prior) for Algorithm 2. For both algorithms we use an IG(1, 2) prior on the diffusion parameter σ². We use an F(ν1 = 1, ν2 = 0.3, c = 1) distribution on λk² and the usual F(ν1 = 1, ν2 = 1, c = 1) distribution on τ², that is, the following hyperparameter values: αi = 0.5, α0 = a = a0 = 1/2, b = b0 = 1. For Algorithm 1, we use the hyperparameter values ν = 5, U = 8I, and an IW(1 + dim, V = 2I_{3×3}) prior (where the dimension dim = 3) on σσᵀ. For Algorithm 2 we use the (multidimensional version of the) classical HS prior, and the same inverse-Wishart prior on σσᵀ. |
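As a rough illustration of the prior setup quoted in the Experiment Setup row: the scaled t prior on the weights is expressed there as a normal–inverse-gamma scale mixture, βk ∼ N(0, λk²) with λk² ∼ IG(1, 2). The sketch below samples from that mixture; it is not the authors' code, and the function name, NumPy implementation, and default hyperparameters (a = 1, b = 2, matching the quoted IG(1, 2)) are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_beta_prior(K, a=1.0, b=2.0, rng=rng):
    """Hypothetical sketch: draw K weights from the scale-mixture prior
    beta_k ~ N(0, lam2_k), lam2_k ~ IG(a, b).

    An IG(a, b) draw is obtained as 1 / Gamma(shape=a, scale=1/b),
    since X ~ Gamma(a, 1/b) implies 1/X ~ IG(a, b)."""
    lam2 = 1.0 / rng.gamma(shape=a, scale=1.0 / b, size=K)  # local variances
    beta = rng.normal(loc=0.0, scale=np.sqrt(lam2))          # conditionally Gaussian weights
    return beta, lam2

beta, lam2 = sample_beta_prior(K=10)
```

Marginalizing over λk² makes each βk heavy-tailed (a scaled Student-t), which is why the paper can quote the same prior either as "scaled t" or as the normal–inverse-gamma hierarchy used inside the Gibbs sampler.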