Estimation and Optimization of Composite Outcomes

Authors: Daniel J. Luckett, Eric B. Laber, Siyeon Kim, Michael R. Kosorok

JMLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We derive inference procedures for the proposed estimators under mild conditions and demonstrate their finite sample performance through a suite of simulation experiments and an illustrative application to data from a study of bipolar depression."
Researcher Affiliation | Collaboration | Daniel J. Luckett (Genospace, Boston, MA 02108, USA); Eric B. Laber (Department of Statistical Science, Duke University, Durham, NC 27708, USA); Siyeon Kim (Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27607, USA); Michael R. Kosorok (Departments of Biostatistics and Statistics & Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA)
Pseudocode | Yes | Algorithm 1: pseudo-likelihood estimation of a fixed utility function; Algorithm 2: pseudo-likelihood estimation of a patient-dependent utility function.
Open Source Code | No | The paper mentions using the `metrop` function in the R package `mcmc` but does not provide a specific link or statement about releasing its own code for the methodology described.
Open Datasets | Yes | "The Systematic Treatment Enhancement Program for Bipolar Disorder (STEP-BD) was a landmark study of the effects of antidepressants in patients with bipolar disorder (Sachs et al., 2007). ... We apply the proposed method to the observational data from the STEP-BD standard care pathway... We also gratefully acknowledge the National Institute of Mental Health for providing access to the STEP-BD data set."
Dataset Splits | Yes | "To evaluate each estimated policy, we used five-fold cross-validation of the inverse probability weighted estimator (IPWE) of the value for each outcome; i.e., for each fold, we used the training portion to estimate the optimal policy and propensity score, and we used the testing portion to compute the IPWE of the value; taking the average of the IPWE value estimates across folds yields the reported values."
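The cross-validated IPWE procedure quoted above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `fit_policy` and `fit_propensity` are hypothetical stand-ins for the paper's policy and propensity-score estimators, each returning a fitted callable.

```python
import numpy as np

def ipwe_value(actions, outcomes, policy_actions, propensity):
    """Inverse probability weighted estimate of a policy's value:
    mean of 1{A = pi(X)} * Y / P(A | X) over the evaluation sample."""
    match = (actions == policy_actions).astype(float)
    return float(np.mean(match * outcomes / propensity))

def cv_ipwe(X, A, Y, fit_policy, fit_propensity, n_folds=5, seed=0):
    """Five-fold cross-validated IPWE: fit on the training folds,
    evaluate on the held-out fold, average across folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(Y)), n_folds)
    vals = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        policy = fit_policy(X[train], A[train], Y[train])   # estimated optimal policy
        prop = fit_propensity(X[train], A[train])           # estimated propensity model
        vals.append(ipwe_value(A[test], Y[test],
                               policy(X[test]), prop(X[test], A[test])))
    return float(np.mean(vals))
```

Because the policy and propensity score are fit only on the training folds, the averaged IPWE is an out-of-sample estimate of each policy's value.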
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running experiments. It focuses on the methodology and simulation results.
Software Dependencies | Yes | "As before, the pseudo-likelihood given in (3) is non-smooth in θ and standard gradient-based optimization methods cannot be used. It is again straightforward to compute the profile pseudo-likelihood estimator β̂n(θ) = arg max_{β ∈ ℝ^p} L̂n(θ, β) for any θ ∈ ℝ^p. However, because it is computationally infeasible to compute β̂n(θ) for all θ on a grid if θ is of moderate dimension, we generate a random walk through the parameter space using the Metropolis algorithm as implemented in the `metrop` function in the R package `mcmc` (Geyer and Johnson, 2017) and compute the profile pseudo-likelihood for each θ on the random walk."
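The random-walk step described in that quote can be illustrated with a hand-rolled Metropolis sampler in Python (the paper itself calls `mcmc::metrop` in R). Here `log_profile_lik` is a placeholder for the profile pseudo-likelihood θ ↦ log L̂n(θ, β̂n(θ)), which the paper computes but which is not reproduced here.

```python
import numpy as np

def metropolis_profile(log_profile_lik, theta0, sigma, n_steps=1000, seed=0):
    """Random-walk Metropolis over theta, recording the profile
    (pseudo-)likelihood at every visited point."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll = log_profile_lik(theta)
    path, lls, accepted = [theta.copy()], [ll], 0
    for _ in range(n_steps):
        prop = theta + rng.normal(scale=sigma, size=theta.shape)  # Gaussian proposal
        ll_prop = log_profile_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:  # accept w.p. min(1, ratio)
            theta, ll = prop, ll_prop
            accepted += 1
        path.append(theta.copy())
        lls.append(ll)
    return np.array(path), np.array(lls), accepted / n_steps
```

One natural way to use the output, consistent with the quoted passage, is to take the visited θ with the largest recorded profile pseudo-likelihood as the estimate, so the random walk serves as an adaptive search grid.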
Experiment Setup | Yes | "Each replication is based on a simulated Markov chain of length 10,000 as described in Section 2.2. ... Standard practice is to choose the variance of the proposal distribution, σ², so that the acceptance proportion is between 0.25 and 0.5 (Geyer and Johnson, 2017)."
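The quoted tuning rule, choosing σ² so the acceptance proportion lands in [0.25, 0.5], can be sketched with a pilot-run loop. The multiplicative adjustment factors below are an illustrative heuristic and an assumption of this sketch, not a procedure stated in the paper; `log_lik` again stands in for the profile pseudo-likelihood.

```python
import numpy as np

def acceptance_rate(log_lik, theta0, sigma, n_steps=2000, seed=0):
    """Fraction of accepted proposals in a pilot random-walk Metropolis run."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll, acc = log_lik(theta), 0
    for _ in range(n_steps):
        prop = theta + rng.normal(scale=sigma, size=theta.shape)
        ll_prop = log_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll, acc = prop, ll_prop, acc + 1
    return acc / n_steps

def tune_sigma(log_lik, theta0, sigma=1.0, lo=0.25, hi=0.5, max_iter=20):
    """Rescale sigma until the pilot acceptance rate falls in [lo, hi]."""
    for _ in range(max_iter):
        rate = acceptance_rate(log_lik, theta0, sigma)
        if rate < lo:
            sigma *= 0.7   # too few acceptances: shrink the step size
        elif rate > hi:
            sigma *= 1.4   # too many acceptances: enlarge the step size
        else:
            break
    return sigma
```

Acceptance rate decreases as σ grows, so the multiplicative search settles into the target band after a few pilot chains.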